**arXiv:** 2304.08000

**Authors:** Houshan Fu, Chunming Tang, Suijie Wang

**Published:** 2023-04-17T05:49:03Z

**Link:** http://arxiv.org/abs/2304.08000v1
# Adjoints of Matroids

###### Abstract

We show that an adjoint of a loopless matroid is connected if and only if the matroid itself is connected. Our first goal is to study the adjoints of modular matroids. We prove that a modular matroid has only one adjoint (up to isomorphism), which can be given by its opposite lattice, and proceed to present some alternative characterizations of modular matroids associated with adjoints and opposite lattices. The other purpose is to investigate the adjoint sequence \(ad^{0}M,adM,ad^{2}M,\ldots\) of a connected matroid \(M\). We classify such adjoint sequences into three types: finite, cyclic and convergent. For the first two types, the adjoint sequences eventually stabilize at finite projective geometries, except for free matroids. For the last type, the infinite non-repeating adjoint sequences converge to infinite projective geometries.

**Keywords:** Matroid, adjoint, modular matroid, projective geometry.

MSC classes: 05B35.

## 1 Introduction

To explore the incidence relations among flats of a geometric lattice, Cheung [9] introduced the adjoint of a geometric lattice in 1974. Given a geometric lattice \(\mathcal{L}\), its opposite lattice \(\mathcal{L}^{op}\), obtained from \(\mathcal{L}\) by reversing the order relation, is not necessarily a geometric lattice. If \(\mathcal{L}^{op}\) can be embedded into a geometric lattice \(\mathcal{L}^{\Delta}\) and this embedding sends the atoms of \(\mathcal{L}^{op}\) bijectively onto the atoms of \(\mathcal{L}^{\Delta}\), then \(\mathcal{L}^{\Delta}\) is called an adjoint of \(\mathcal{L}\). Given a matroid \(M\), the flats of \(M\) form a geometric lattice \(\mathcal{L}(M)\). A matroid \(N\) is said to be an adjoint of \(M\) if \(\mathcal{L}(N)\) is an adjoint of \(\mathcal{L}(M)\); this notion was extended to oriented matroids by Bachem and Kern [3] in 1986. In general, a matroid of rank at most three always has an adjoint.
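To make the low-rank case concrete, here is a small worked example of our own (not from the original): for the rank-2 uniform matroid \(U_{2,m}\) with \(m\geq 3\), the adjoint can be read directly off the opposite lattice.

```latex
% Lattice of flats of U_{2,m} (m >= 3): a rank-2 geometric lattice
%   \emptyset \;<\; \{1\},\{2\},\ldots,\{m\} \;<\; E,
% whose m atoms are exactly its m coatoms.  Reversing the order gives
%   \mathcal{L}(U_{2,m})^{op} \cong \mathcal{L}(U_{2,m}),
% which is again geometric, so the matroid it determines is an adjoint:
%   ad\,U_{2,m} \cong U_{2,m}.
```

This is consistent with the claim that matroids of rank at most three always admit an adjoint, and with the rank-2 case treated in the proof of Theorem 4.8 below.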
Nevertheless, when the rank of a matroid is greater than three, it may fail to admit an adjoint; for example, Cheung [9] showed that the Vámos matroid does not have an adjoint. Even when a matroid has an adjoint, the adjoint is not necessarily unique. The existence and uniqueness of adjoints has always been an important topic in the field. For example, Alfter and Hochstättler [1] presented a class of rank-four matroids with adjoints, including a non-linear example; Alfter, Kern and Wanka [2] found that the non-Desargues matroid admits an adjoint while its dual matroid fails to have one. For more results related to this topic, see [5, 8, 16, 17]. In general, the extension lattice of a matroid is a very important tool for addressing the existence and uniqueness of adjoints; see [4] for a further discussion of extension lattices. [2] and [9] provided an alternative characterization of adjoints: a matroid \(N\) is an adjoint of a matroid \(M\) if and only if the geometric lattice \(\mathcal{L}(N)\) of \(N\) can be embedded into the extension lattice \(\mathcal{E}(M)\) of \(M\), taking the atoms of \(\mathcal{L}(N)\) bijectively onto the atoms of \(\mathcal{E}(M)\). From lattice theory, the well-known point-hyperplane duality of projective geometry tells us that the extension lattice of a connected modular matroid may be regarded as its dual geometry. On the other hand, the opposite lattice of a connected modular matroid determines a projective geometry (matroid), which is also an adjoint of it. A natural question is then whether the dual geometry obtained from the hyperplanes of a connected modular matroid is isomorphic to the projective geometry given by its opposite lattice. If this holds, the next question arises: for a fixed modular matroid, is the matroid determined by its opposite lattice the unique adjoint up to isomorphism?
This poses a further question: apart from the modular matroids, which matroids have only one adjoint up to isomorphism? In this paper, we answer the first two questions by proving that the opposite lattice of a connected modular matroid is isomorphic to its extension lattice. The last question appears difficult and remains open. For representable matroids, there are additional descriptions of adjoints. When \(M\) is a vector matroid, Bixby and Coullard [7] constructed an adjoint \(\sigma M\) of \(M\) in two equivalent ways, one from the cocircuits of \(M\) and the other from the hyperplanes of \(M\), called the type I adjoint. This implies that a representable matroid always has an adjoint. Subsequently, Hochstättler and Kromberg [16] made a detailed study of the type I adjoint. In 2015, Jurrius and Pellikaan [18] obtained the type I adjoint from the derived code given by the cocircuits. Most recently, Kung [19] gave a profound and full-scale introduction to the cocircuit matroid, which is exactly the type I adjoint. Another related notion is the derived matroid. Initially, Rota proposed the program of finding the dependencies on the circuits of a matroid at the Bowdoin College Summer 1971 NSF Conference on Combinatorics. In 1980, Longyear [20] developed Rota's idea by introducing the derived matroid for binary matroids. Recently, Oxley and Wang [23] gave a more general definition of the derived matroid, via circuits, for arbitrary representable matroids, which is referred to as the Oxley-Wang derived matroid in [14]. Note that there is a one-to-one correspondence between the hyperplanes of a matroid \(M\) and the circuits of its dual matroid \(M^{*}\). For a fixed \(\mathbb{F}\)-represented matroid \(M\), this leads to the following duality relation between the type I adjoint \(\sigma M\) of \(M\) and the Oxley-Wang derived matroid \(\delta_{OW}M^{*}\) of \(M^{*}\):

\[\sigma M\cong\delta_{OW}M^{*}.\tag{1.1}\]

A geometric interpretation of the above relation can be found in [14]. One very significant piece of work in [23] is that Oxley and Wang classified all Oxley-Wang derived sequences into three types: finite, cyclic, and divergent. Motivated by this, we attempt to describe general adjoint sequences, which may not be easy. As a byproduct, we further classify all type I adjoint sequences associated with an \(\mathbb{F}\)-represented matroid, which is implicit in the work of Kung [19]. Earlier related results can be found in [8, 18]. Additionally, Freij-Hollanti, Jurrius and Kuznetsova [14] constructed the combinatorial derived matroid for arbitrary matroids and provided some open questions and further ideas for research.

Our paper is organized as follows. We start by introducing the preliminaries on adjoints in Section 2. Section 3 is devoted to showing that when a matroid has no loops, an adjoint of it is connected if and only if the matroid itself is connected. We further verify that the existence of an adjoint of a matroid is completely determined by the existence of an adjoint of each connected component. In Section 4, we focus on investigating the adjoints of modular matroids. We show that the opposite lattice of a modular matroid can be identified with its extension lattice. This implies that a modular matroid has a unique adjoint up to isomorphism, and also yields some alternative characterizations of modular matroids associated with adjoints, opposite lattices and so on. Section 5 is concentrated on classifying adjoint sequences. Roughly speaking, an adjoint sequence of a connected matroid will usually end with a projective geometry; see Theorem 5.1, Theorem 5.4 and Theorem 5.9 for more detailed information. As a byproduct of Section 5, Section 6 characterizes the type I adjoint sequences and gives a detailed proof of the duality relation (1.1).
In Section 7, we provide another description of the adjoint via cocircuits, which leads to further ideas for research associated with the combinatorial derived matroids of [14].

## 2 Preliminaries on adjoints

The matroid terminology and notation follow Oxley's book [22]. Throughout this paper, we always assume that matroids are non-empty and finite, unless otherwise mentioned. We shall present some necessary notation and omit detailed explanations.

**Notation 2.1**.: Let \(M\) be a matroid. Unless explicitly mentioned otherwise, \(E(M)\), \(\mathcal{I}(M)\), \(\mathcal{H}(M)\), \(\mathcal{C}(M)\) and \(\mathcal{L}(M)\) denote the ground set of \(M\), the set of all independent sets of \(M\), the set of all hyperplanes of \(M\), the set of all circuits of \(M\) and the set of all flats of \(M\), in turn. \(r_{M}(\cdot)\) is the rank function of \(M\). The notations \(\wedge_{\mathcal{L}}\), \(\vee_{\mathcal{L}}\) and \(\leq_{\mathcal{L}}\) denote the meet, join and partial order in the lattice \(\mathcal{L}\), respectively. The notation \(\sqcup\) denotes the disjoint union of sets.

The original definition of the adjoint of a matroid is as follows.

**Definition 2.2**.: Let \(M\) be a matroid. A matroid \(adM\) is an _adjoint_ of \(M\) if \(r(adM)=r(M)\) and there is an injective, order-reversing map \(\phi:\mathcal{L}(M)\to\mathcal{L}(adM)\) sending the coatoms of \(\mathcal{L}(M)\) bijectively onto the atoms of \(\mathcal{L}(adM)\).

If an adjoint \(adM\) of a matroid \(M\) exists, we call the map \(\phi:\mathcal{L}(M)\to\mathcal{L}(adM)\) in Definition 2.2 an _adjoint map_ of \(M\). The adjoint map \(\phi\) shows how the embedding works. Given any flat \(X\) of \(M\), the adjoint map \(\phi\) sends \(X\) to

\[\phi(X)=\bigvee_{H\in\mathcal{H}(M),\,X\subseteq H}\phi(H)=\bigcup_{H\in\mathcal{H}(M),\,X\subseteq H}\phi(H).\tag{2.1}\]

Associated with the adjoint map \(\phi\), the following collects some further fundamental properties of an adjoint, which will be used later.

**Proposition 2.3** ([7, 17]).: _Let \(M\) be a matroid and \(adM\) an adjoint of \(M\). For any flats \(X,Y\in\mathcal{L}(M)\), the adjoint map \(\phi:\mathcal{L}(M)\to\mathcal{L}(adM)\) has the following properties:_

* (i) _If_ \(X\) _covers_ \(Y\) _in_ \(\mathcal{L}(M)\)_, then_ \(\phi(Y)\) _covers_ \(\phi(X)\) _in_ \(\mathcal{L}(adM)\)_._
* (ii) \(r_{adM}\big(\phi(X)\big)=r(M)-r_{M}(X)\)_._
* (iii) \(\phi(X)\wedge_{\mathcal{L}(adM)}\phi(Y)=\phi(X\vee_{\mathcal{L}(M)}Y)\)_._
* (iv) \(r_{adM}\big(\phi(X)\big)+r_{adM}\big(\phi(Y)\big)=r_{adM}\big(\phi(X)\wedge_{\mathcal{L}(adM)}\phi(Y)\big)+r_{adM}\big(\phi(X)\vee_{\mathcal{L}(adM)}\phi(Y)\big)\)_._
* (v) _If_ \(H_{1},H_{2},\ldots,H_{m}\in\mathcal{H}(M)\) _satisfy_ \[H_{1}\cap\cdots\cap H_{m}\subsetneqq H_{1}\cap\cdots\cap H_{m-1}\subsetneqq\cdots\subsetneqq H_{1}\cap H_{2}\subsetneqq H_{1},\] _then_ \(\{\phi(H_{1}),\phi(H_{2}),\ldots,\phi(H_{m})\}\) _is an independent set of_ \(adM\)_._

It is clear that each adjoint is simple. Moreover, the loops and parallel elements of a matroid have no effect on its adjoints. Thus, we have the following fact.

**Fact 2.4**.: Let \(adM\) be an adjoint of a matroid \(M\). If \(e\) is a loop or a parallel element, then \(adM\) is also an adjoint of \(M\setminus e\).

At the end of this section, let us quickly recall the type I adjoint of a representable matroid. For a field \(\mathbb{F}\), let \((M,A):=M[A]\) be an \(\mathbb{F}\)-represented matroid of rank \(r\) on the ground set \(E(M)=\{e_{1},e_{2},\ldots,e_{m}\}\), where the columns of the matrix \(A\in\mathbb{F}^{r\times m}\) are labelled, in order, by \(e_{1},e_{2},\ldots,e_{m}\). Each hyperplane \(H\) of \(M\) naturally determines a hyperplane \(\operatorname{span}\{A_{e_{i}}:e_{i}\in H\}\) of \(\mathbb{F}^{r}\) spanned by the columns of \(A\) labelled by the elements of \(H\).
For \(H\in\mathcal{H}(M)\), let \(\boldsymbol{h}_{H}\) be a normal vector of \(\operatorname{span}\{A_{e_{i}}:e_{i}\in H\}\) in \(\mathbb{F}^{r}\). The _type I adjoint_ \(\sigma M\) of \(M\) is defined as

\[\sigma M:=M\big[\boldsymbol{h}_{H}\mid H\in\mathcal{H}(M)\big].\tag{2.2}\]

## 3 Connectivity

We are now ready to study the connectivity of adjoints. To this end, we first introduce another characterization of an adjoint of a matroid \(M\) in terms of the sets \(H[e]\), where each such set consists of all hyperplanes of \(M\) containing a fixed non-loop element \(e\) of \(M\).

**Proposition 3.1**.: _Let \(M\) and \(adM\) be two matroids of the same rank \(r\). Then \(adM\) is an adjoint of \(M\) if and only if its ground set can be regarded as \(E(adM):=\mathcal{H}(M)\) such that the sets \(H[e]:=\{H\in\mathcal{H}(M)\mid e\in H\}\) are hyperplanes of \(adM\) for all non-loop elements \(e\in E(M)\)._

Proof.: By Fact 2.4, we may assume that \(M\) is simple. For the necessity, if \(adM\) is an adjoint of \(M\), then equation (2.1) implies that the adjoint map \(\phi:\mathcal{L}(M)\to\mathcal{L}(adM)\) sends each \(e\in E(M)\) to \(\phi(e)=\bigcup_{H\in\mathcal{H}(M),\,e\in H}\phi(H)\in\mathcal{L}(adM)\). By part (ii) of Proposition 2.3, we immediately have \(r_{adM}(\phi(e))=r-1\). Namely, every \(\phi(e)\) is a hyperplane of \(adM\). In this case, setting \(E(adM):=\mathcal{H}(M)\) and \(\phi(H)=H\) for \(H\in\mathcal{H}(M)\), we obtain that \(H[e]=\phi(e)\) is a hyperplane of \(adM\). For the sufficiency, we define a map \(\psi:\mathcal{L}(M)\to\mathcal{L}(adM)\) sending each \(e\in E(M)\) to \(\psi(e)=H[e]\) and each flat \(X\) of \(M\) to \(\psi(X)=\bigcap_{e\in X}H[e]\). Clearly \(\bigcap_{e\in X}H[e]\in\mathcal{L}(adM)\), since each \(H[e]\) is a hyperplane of \(adM\). So \(\psi\) is well defined.
Note from \(E(adM):=\mathcal{H}(M)\) that \(\psi\) automatically becomes a bijection between the coatoms of \(\mathcal{L}(M)\) and the atoms of \(\mathcal{L}(adM)\). To prove that \(adM\) is an adjoint of \(M\), it suffices to verify that \(\psi\) is an order-reversing and injective map. Let \(X,Y\in\mathcal{L}(M)\) be flats. If \(X\subseteq Y\) in \(\mathcal{L}(M)\), then

\[\psi(Y)=\bigcap_{e\in Y}H[e]\subseteq\bigcap_{e\in X}H[e]=\psi(X).\]

Namely, \(\psi\) is an order-reversing map. Recall from the definition of \(H[e]\) that

\[\psi(X)=\bigcap_{e\in X}H[e]=\bigcap_{e\in X}\{H\in\mathcal{H}(M)\mid e\in H\}=\{H\in\mathcal{H}(M)\mid X\subseteq H\}.\]

Likewise, \(\psi(Y)=\{H\in\mathcal{H}(M)\mid Y\subseteq H\}\). Since every flat of \(M\) is the intersection of the hyperplanes containing it, this implies that if \(X\neq Y\), then \(\psi(X)\neq\psi(Y)\). Hence, \(\psi\) is injective, which completes the proof.

Given a basis \(B\) of a matroid \(M\) and an element \(e\) of \(B\), the unique hyperplane \(H(e;B)\) of \(M\) satisfying \(B\setminus e\subseteq H(e;B)\) is called the _fundamental hyperplane_ of \(e\) with respect to \(B\). Next we use the equivalent characterization of adjoints in Proposition 3.1 to show that the fundamental hyperplanes of a matroid with respect to a fixed basis form a basis of any adjoint of it.

**Lemma 3.2**.: _Let \(M\) be a matroid of rank \(r\) and \(adM\) an adjoint of \(M\) with ground set \(E(adM)=\mathcal{H}(M)\). If \(B\) is a basis of \(M\), then \(\{H(e;B)\mid e\in B\}\) is a basis of \(adM\)._

Proof.: Without loss of generality, let \(B=\big\{e_{i}\mid i\in[r]\big\}\) be a basis of \(M\). Note from the definition of the fundamental hyperplane that for any \(k\in[r]\), every fundamental hyperplane \(H(e_{i};B)\) with \(i\neq k\) contains \(e_{k}\), while \(H(e_{k};B)\) does not.
This indicates

\[H(e_{1};B)\cap\cdots\cap H(e_{r};B)\subsetneqq H(e_{1};B)\cap\cdots\cap H(e_{r-1};B)\subsetneqq\cdots\subsetneqq H(e_{1};B)\cap H(e_{2};B)\subsetneqq H(e_{1};B).\]

It follows from property (v) in Proposition 2.3 that \(\big\{H(e_{i};B)\mid i\in[r]\big\}\) is independent. Since the rank of \(adM\) equals \(r\), the set \(\big\{H(e_{i};B)\mid i\in[r]\big\}\) is a basis of \(adM\).

The following result states that if an adjoint \(adM\) is disconnected, then the original matroid \(M\) is also disconnected.

**Theorem 3.3**.: _Let \(M\) be a loopless matroid and \(adM\) an adjoint of \(M\). If \(adM=N_{1}\oplus N_{2}\), then there are submatroids \(M_{1}\) and \(M_{2}\) of \(M\) such that \(M=M_{1}\oplus M_{2}\) and \(N_{i}=adM_{i}\) for \(i=1,2\)._

Proof.: By Fact 2.4, we need only consider the case that \(M\) is simple. By Proposition 3.1, we may assume that \(E(adM)=E(N_{1})\sqcup E(N_{2})=\mathcal{H}(M)\) and that the adjoint map \(\phi:\mathcal{L}(M)\to\mathcal{L}(adM)\) sends each element \(e\in E(M)\) to \(\phi(e)=H[e]\). Then \(adM=N_{1}\oplus N_{2}\) implies that each cocircuit \(C^{*}\) of \(adM\) is contained in one of \(E(N_{1})\) and \(E(N_{2})\). It follows that \(E(N_{1})\) and \(E(N_{2})\) automatically induce a partition of \(\mathcal{H}(adM)\) such that \(H\in\mathcal{H}_{E(N_{i})}\) if and only if \(H\in\mathcal{H}(adM)\) and \(E(N_{i})\subseteq H\), for \(i=1,2\). Using the injectivity of \(\phi\), let \(E_{i}=\big\{e\in E(M)\mid H[e]\in\mathcal{H}_{E(N_{i})}\big\}\) and \(M_{i}=M\setminus E_{i}\) for each \(i\). Then \(E(M)=E_{1}\sqcup E_{2}\), \(E(M_{1})=E_{2}\) and \(E(M_{2})=E_{1}\). We claim that \(E_{1}\) and \(E_{2}\) are nonempty. Suppose \(E_{1}=\emptyset\), that is, \(E_{2}=E(M)\). Then \(H[e]\in\mathcal{H}_{E(N_{2})}\) for all \(e\in E(M)\). Recall from the definition of \(\mathcal{H}_{E(N_{2})}\) that \(E(N_{2})\subseteq H[e]\) for each \(e\in E(M)\).
This implies that \(E(M)\subseteq H\) for each hyperplane \(H\in E(N_{2})\) of \(M\), a contradiction. Hence \(E_{1}\neq\emptyset\), and likewise \(E_{2}\neq\emptyset\). To prove \(M=M_{1}\oplus M_{2}\), it is equivalent to show \(r_{M}(E_{1})+r_{M}(E_{2})=r(M)\). Let \(r(M)=r\) and let \(B=\{e_{1},e_{2},\ldots,e_{r}\}\) be a basis of \(M\). Using the same argument as in the proof of \(E_{1}\neq\emptyset\), we also have \(B\cap E_{i}\neq\emptyset\) for \(i=1,2\). Suppose \(B_{1}=B\cap E_{1}=\{e_{1},\ldots,e_{k}\}\) and \(B_{2}=B\cap E_{2}=\{e_{k+1},\ldots,e_{r}\}\) for some positive integer \(k<r\). Recalling the definitions of \(E_{1}\) and \(E_{2}\), we have \(B_{1}\subseteq H_{1}\) and \(B_{2}\subseteq H_{2}\) for all hyperplanes \(H_{1}\in E(N_{1})\) and \(H_{2}\in E(N_{2})\) of \(M\). It then follows from the definition of the fundamental hyperplane that \(H(e_{i};B)\in E(N_{2})\) for all \(e_{i}\in B_{1}\) and \(H(e_{i};B)\in E(N_{1})\) for all \(e_{i}\in B_{2}\). According to Lemma 3.2 and \(adM=N_{1}\oplus N_{2}\), we have \(r_{adM}(E(N_{1}))=|B_{2}|=r-k\) and \(r_{adM}(E(N_{2}))=|B_{1}|=k\). We assert that \(r_{M}(E_{1})=k\) and \(r_{M}(E_{2})=r-k\). Otherwise, we may assume \(r_{M}(E_{1})=j\neq k\). Let \(B_{1}^{\prime}\) be a basis of \(E_{1}\) and \(B^{\prime}\) a basis of \(E(M)\) containing \(B_{1}^{\prime}\). Repeating the same argument as above, we get \(r_{adM}(E(N_{2}))=|B_{1}^{\prime}|=j\neq k\), a contradiction. Thus \(r_{M}(E_{1})+r_{M}(E_{2})=r\). Moreover, note from \(M=M_{1}\oplus M_{2}\) that the intervals \([\emptyset,E_{2}]\) and \([E_{1},E(M)]\) of \(\mathcal{L}(M)\) are isomorphic under the map \(\phi_{1}:[\emptyset,E_{2}]\to[E_{1},E(M)]\) sending \(X\) to \(X\sqcup E_{1}\). Then \(\phi\circ\phi_{1}:\mathcal{L}(M_{1})\to\mathcal{L}(adM)\) is injective, order-reversing, and sends the coatoms of \(\mathcal{L}(M_{1})\) bijectively onto \(E(N_{1})\). Thus \(N_{1}=adM_{1}\). Likewise, \(N_{2}=adM_{2}\).
This completes the proof.

On the other hand, Bixby and Coullard [7] proved the converse of Theorem 3.3: an adjoint of a direct sum of two matroids is the direct sum of adjoints of these matroids.

**Lemma 3.4** ([7], Lemma 4.2).: _Let \(M\) be a loopless matroid and \(adM\) an adjoint of \(M\). If \(M=M_{1}\oplus M_{2}\), then there are submatroids \(N_{1},N_{2}\) of \(adM\) such that \(adM=N_{1}\oplus N_{2}\), where \(N_{i}=adM_{i}\) for \(i=1,2\)._

The following result is a straightforward consequence of Theorem 3.3 and Lemma 3.4.

**Corollary 3.5**.: _Let \(M\) be a loopless matroid and \(adM\) an adjoint of \(M\). Then \(M\) is connected if and only if \(adM\) is connected._

The following explains a close connection between the existence of an adjoint of a matroid and the existence of an adjoint of each of its connected components.

**Corollary 3.6**.: _Let \(M\) be a loopless matroid, written as a direct sum of its connected components \(M_{1},\dots,M_{n}\). Then \(M\) has an adjoint if and only if each connected component \(M_{i}\) has an adjoint._

Proof.: Theorem 3.3 has verified the necessity. For the sufficiency, let \(adM_{i}\) be an adjoint of \(M_{i}\) and \(\phi_{i}:\mathcal{L}(M_{i})\to\mathcal{L}(adM_{i})\) the adjoint map for each \(i\in[n]\), and set \(adM=adM_{1}\oplus\dots\oplus adM_{n}\). Define a map \(\phi:\mathcal{L}(M)\to\mathcal{L}(adM)\) by \(\phi(X)=\bigsqcup_{i=1}^{n}\phi_{i}(X_{i})\) for any flat \(X=\bigsqcup_{i=1}^{n}X_{i}\) with \(X_{i}\in\mathcal{L}(M_{i})\). It is clear that \(r(M)=\sum_{i=1}^{n}r(M_{i})=\sum_{i=1}^{n}r(adM_{i})=r(adM)\). Moreover, the adjoint maps \(\phi_{i}\) guarantee that \(\phi\) is injective, order-reversing, and sends the coatoms of \(\mathcal{L}(M)\) bijectively onto \(E(adM)\). So \(adM\) is an adjoint of \(M\).

## 4 Modular matroids

In this section, we focus on modular matroids and their adjoints. Intuitively, the construction of an adjoint is closely related to modular matroids.
Passing from a matroid to an adjoint produces many more modular pairs, and even many more modular flats. More precisely, for two distinct flats \(X\) and \(Y\) of a matroid \(M\), the ranks satisfy the submodular inequality

\[r_{M}(X\vee_{\mathcal{L}(M)}Y)+r_{M}(X\wedge_{\mathcal{L}(M)}Y)\leq r_{M}(X)+r_{M}(Y).\]

Equality does not always hold; when it does, we call \((X,Y)\) a _modular pair_ of flats. If all pairs of flats of \(M\) are modular, \(M\) is said to be a _modular matroid_. From this viewpoint, an adjoint \(adM\) of a matroid \(M\) is constructed by adding elements to the geometric lattice \(\mathcal{L}(M)\) so that \(r_{adM}(\phi(X)\vee_{\mathcal{L}(adM)}\phi(Y))+r_{adM}(\phi(X)\wedge_{\mathcal{L}(adM)}\phi(Y))=r_{adM}(\phi(X))+r_{adM}(\phi(Y))\) for all flats \(X,Y\in\mathcal{L}(M)\).

First note that if \(M\) is a modular matroid, then the opposite lattice \(\mathcal{L}(M)^{op}\) is a geometric lattice. Immediately, \(\mathcal{L}(M)^{op}\) determines a matroid \(adM\) that is exactly an adjoint of \(M\). Conversely, if a matroid \(M\) has an adjoint \(adM\) such that \(\mathcal{L}(adM)\cong\mathcal{L}(M)^{op}\), then \(\mathcal{L}(M)^{op}\) is a geometric lattice, which further implies that \(M\) is modular. An immediate consequence of the preceding arguments is stated as follows.

**Proposition 4.1**.: _Let \(M\) be a simple matroid. Then \(M\) is modular if and only if \(M\) has an adjoint \(adM\) such that \(\mathcal{L}(adM)\cong\mathcal{L}(M)^{op}\)._

Proposition 4.1 states that a modular matroid always has an adjoint given by its opposite lattice. It is natural to ask whether, for a fixed modular matroid \(M\), the adjoint given by the opposite lattice \(\mathcal{L}(M)^{op}\) is the unique adjoint of \(M\) up to isomorphism. To answer this question, we need to introduce the linear subclasses and the extension lattice of a matroid.
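Before turning to those notions, note that the modular-pair condition above is easy to check by brute force on small matroids. The following sketch (our illustration, not from the paper) takes a rank function, computes all flats via the closure operator, and tests whether every pair of flats is modular; for instance, the rank-2 uniform matroid \(U_{2,3}\) is modular, while \(U_{3,4}\) is not (two disjoint 2-point lines violate the equality).

```python
from itertools import combinations

def closure(E, rank, S):
    """Closure of S: all elements whose addition does not raise the rank."""
    return frozenset(e for e in E if rank(S | {e}) == rank(S))

def flats(E, rank):
    """All flats (closed sets), obtained as closures of subsets of E."""
    return {closure(E, rank, frozenset(c))
            for k in range(len(E) + 1) for c in combinations(E, k)}

def is_modular(E, rank):
    """Check the modular-pair equality for every pair of flats X, Y:
    r(X v Y) + r(X ^ Y) == r(X) + r(Y), where the meet is X & Y and
    the join is the closure of X | Y."""
    F = flats(E, rank)
    return all(rank(closure(E, rank, X | Y)) + rank(X & Y) == rank(X) + rank(Y)
               for X in F for Y in F)

# Uniform matroid U_{r,n}: the rank of S is min(|S|, r).
uniform = lambda r: (lambda S: min(len(S), r))

print(is_modular(frozenset(range(3)), uniform(2)))  # U_{2,3}: True
print(is_modular(frozenset(range(4)), uniform(3)))  # U_{3,4}: False
```

The failing pair in \(U_{3,4}\) is \(X=\{0,1\}\), \(Y=\{2,3\}\): their join is the whole ground set of rank \(3\) and their meet is \(\emptyset\), so \(3+0\neq 2+2\).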
In 1965, Crapo [10] used the linear subclasses of a matroid to characterize all of its single-element extensions. A _linear subclass_ \(\mathcal{H}\) of a matroid \(M\) is a subset of its hyperplanes with the following property: if \(H_{1}\) and \(H_{2}\) are members of \(\mathcal{H}\) such that \(r_{M}(H_{1}\cap H_{2})=r(M)-2\), and \(H_{3}\) is a hyperplane containing \(H_{1}\cap H_{2}\), then \(H_{3}\in\mathcal{H}\). The linear subclasses of \(M\) form a lattice ordered by inclusion, called the _extension lattice_ of \(M\) and denoted by \(\mathcal{E}(M)\). In general, posets \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) are isomorphic, denoted \(\mathcal{P}_{1}\cong\mathcal{P}_{2}\), if and only if there is a bijection \(\theta:\mathcal{P}_{1}\to\mathcal{P}_{2}\) such that for any members \(X,Y\) of \(\mathcal{P}_{1}\),

\[X\leq_{\mathcal{P}_{1}}Y\text{ if and only if }\theta(X)\leq_{\mathcal{P}_{2}}\theta(Y).\]

When \(M\) is modular, the following result shows that the opposite lattice \(\mathcal{L}(M)^{op}\) of \(M\) is isomorphic to its extension lattice \(\mathcal{E}(M)\). Let us first recall the intersection property of a modular geometric lattice, which will be a crucial tool for proving surjectivity in the proof of Lemma 4.2. A modular geometric lattice \(\mathcal{L}\) satisfies the _intersection property_: for any two distinct atoms \(X,Y\) of \(\mathcal{L}\) and an element \(Z\in\mathcal{L}\), if \(X\leq_{\mathcal{L}}Y\vee_{\mathcal{L}}Z\), then there exists an atom \(W\) of \(\mathcal{L}\) such that \(W\leq_{\mathcal{L}}(X\vee_{\mathcal{L}}Y)\wedge_{\mathcal{L}}Z\). We now turn to the following key lemma.

**Lemma 4.2**.: _Let \(M\) be a simple matroid of rank \(r\).
If \(M\) is modular, then the opposite lattice \(\mathcal{L}(M)^{op}\) of \(M\) is isomorphic to its extension lattice \(\mathcal{E}(M)\)._

Proof.: Define a map \(\lambda:\mathcal{L}(M)^{op}\to\mathcal{E}(M)\) sending each member \(X\) of \(\mathcal{L}(M)^{op}\) to \(\lambda(X)=\mathcal{H}_{X}(M)\), where \(\mathcal{H}_{X}(M)=\{H\in\mathcal{H}(M)\mid X\subseteq H\}\). We begin by proving that \(\lambda\) is well defined. In the two cases \(r_{M}(X)=r\) and \(r_{M}(X)=r-1\), \(\mathcal{H}_{X}(M)\) is obviously a linear subclass of \(M\). Note that \(\mathcal{L}(M)^{op}\) is a modular geometric lattice since \(M\) is modular. For \(r_{M}(X)\leq r-2\), we have \(r_{M}(H_{1}\cap H_{2})=r_{M}(H_{1})+r_{M}(H_{2})-r_{M}(H_{1}\vee_{\mathcal{L}(M)}H_{2})=r-2\) for any distinct members \(H_{1},H_{2}\in\mathcal{H}_{X}(M)\). Furthermore, if \(H_{1}\cap H_{2}\subseteq H_{3}\) for a hyperplane \(H_{3}\), then \(X\subseteq H_{1}\cap H_{2}\subseteq H_{3}\), so \(H_{3}\in\mathcal{H}_{X}(M)\), and \(\mathcal{H}_{X}(M)\) is a linear subclass in this case as well. Hence \(\lambda\) is well defined. To prove \(\mathcal{L}(M)^{op}\cong\mathcal{E}(M)\), we show that \(\lambda\) is order-preserving, injective and surjective, in turn. If \(X\leq_{\mathcal{L}(M)^{op}}Y\) in \(\mathcal{L}(M)^{op}\), then \(Y\subseteq X\). This means that every hyperplane of \(M\) containing \(X\) must contain \(Y\); namely, \(\mathcal{H}_{X}(M)\subseteq\mathcal{H}_{Y}(M)\) in \(\mathcal{E}(M)\). So \(\lambda\) is order-preserving. If \(X\neq Y\), then obviously \(\mathcal{H}_{X}(M)\neq\mathcal{H}_{Y}(M)\); that is, \(\lambda\) is injective. Now we are ready to prove the surjectivity of \(\lambda\). Given a linear subclass \(\mathcal{H}\subseteq\mathcal{H}(M)\) of \(M\), let \(X_{\mathcal{H}}=\bigcap_{H\in\mathcal{H}}H\). Obviously, \(X_{\mathcal{H}}\in\mathcal{L}(M)^{op}\). It remains to verify that \(\lambda(X_{\mathcal{H}})=\mathcal{H}_{X_{\mathcal{H}}}=\mathcal{H}\).
The two cases \(r_{M}(X_{\mathcal{H}})=r,r-1\) are trivial, so let \(r_{M}(X_{\mathcal{H}})=k\) for some \(0\leq k\leq r-2\). From the definition of \(\mathcal{H}_{X_{\mathcal{H}}}\), we have \(\mathcal{H}\subseteq\mathcal{H}_{X_{\mathcal{H}}}\). Next we show \(\mathcal{H}\supseteq\mathcal{H}_{X_{\mathcal{H}}}\) in this case. Suppose there exists \(H_{0}\in\mathcal{H}_{X_{\mathcal{H}}}\setminus\mathcal{H}\) such that \(H\cap H^{\prime}\) is not contained in \(H_{0}\) for any \(H,H^{\prime}\in\mathcal{H}\). Then, since \(X_{\mathcal{H}}\subseteq H_{0}\), we can choose a sequence of hyperplanes \(H_{1},H_{2},\ldots,H_{r-k}\) of \(M\) such that

\[X_{\mathcal{H}}=\bigcap_{j=1}^{r-k}H_{j}\lessdot\bigcap_{j=1}^{r-k-1}H_{j}\lessdot\cdots\lessdot\bigcap_{j=1}^{i_{0}}H_{j}\lessdot\bigcap_{j=1}^{i_{0}-1}H_{j}\lessdot\cdots\lessdot H_{1}\cap H_{2}\lessdot H_{1}\]

and \(i_{0}\) is the minimal positive integer for which

\[\bigcap_{j=1}^{i_{0}-1}H_{j}\nsubseteq H_{0}\qquad\text{ and }\qquad\bigcap_{j=1}^{i_{0}}H_{j}\subseteq H_{0}.\tag{4.1}\]

Obviously, \(i_{0}\geq 3\) and \(\bigcap_{j=1}^{i_{0}}H_{j}\subseteq\bigcap_{j=1}^{i_{0}-1}H_{j}\cap H_{0}\). By the modularity of \(M\), the lattice \(\mathcal{L}(M)^{op}\) is a modular geometric lattice and \(\mathcal{H}(M)\) is the set of its atoms, so \(\mathcal{L}(M)^{op}\) has the intersection property. From this property and (4.1), there is a hyperplane \(H_{0i_{0}}\) of \(M\) such that

\[H_{i_{0}}\cap H_{0}\subseteq H_{0i_{0}}\qquad\text{ and }\qquad\bigcap_{j=1}^{i_{0}-1}H_{j}\subseteq H_{0i_{0}}.\]

If \(H_{0i_{0}}\in\mathcal{H}\), then we get \(H_{i_{0}}\cap H_{0i_{0}}\subseteq H_{0}\), so that \(H_{0}\in\mathcal{H}\), a contradiction. Hence \(H_{0i_{0}}\in\mathcal{H}_{X_{\mathcal{H}}}\setminus\mathcal{H}\), since \(X_{\mathcal{H}}\subseteq H_{i_{0}}\cap H_{0}\subseteq H_{0i_{0}}\). Moreover, the minimality of \(i_{0}\) guarantees \(H_{0i_{0}}\neq H_{0}\).
We can therefore replace \(H_{0}\) with \(H_{0i_{0}}\). Using the same argument as for \(H_{0}\), we arrive at a positive integer \(i_{1}<i_{0}\) and a hyperplane \(H_{i_{0}i_{1}}\neq H_{0},H_{0i_{0}}\) having the same properties as \(i_{0}\) and \(H_{0i_{0}}\), respectively. Iterating this step, we obtain infinitely many distinct integers \(i_{0},i_{1},\ldots\) lying in the interval \([3,i_{0}]\), which contradicts the fact that the interval \([3,i_{0}]\) contains only finitely many integers. The proof is complete.

Lemma 4.2 indicates that a modular matroid has only one adjoint up to isomorphism, which is given by its opposite lattice. This property makes modular matroids a key ingredient in the later classification of adjoint sequences.

**Theorem 4.3**.: _Let \(M\) be a simple matroid. If \(M\) is modular, then \(M\) has only one adjoint \(adM\) up to isomorphism and \(\mathcal{L}(adM)\cong\mathcal{L}(M)^{op}\)._

Proof.: Recall from Proposition 4.1 that the modular matroid \(M\) always has an adjoint. Let \(adM\) be an adjoint of \(M\). Recall from [9] that \(\mathcal{L}(adM)\) can be viewed as a sublattice of the extension lattice \(\mathcal{E}(M)\), and the opposite lattice \(\mathcal{L}(M)^{op}\) of \(M\) can always be regarded as a sublattice of \(\mathcal{L}(adM)\). When \(M\) is modular, \(\mathcal{L}(M)^{op}\cong\mathcal{E}(M)\) by Lemma 4.2, which directly yields \(\mathcal{L}(adM)\cong\mathcal{L}(M)^{op}\). This further implies that \(M\) has only one adjoint up to isomorphism.

From classical projective geometry [6], we know that connected modular matroids can be identified with projective geometries, except for free matroids. Let \(P\) and \(L\) be disjoint sets of points and lines, respectively, and let \(\iota\) be an incidence relation between the points and the lines.
If the triple \((P,L,\iota)\) satisfies the following incidence axioms:

* Every two distinct points \(a\) and \(b\) are on exactly one line \(ab\);
* Every line contains at least three points;
* If \(a,b,c\) and \(d\) are four distinct points, no three of which are collinear, and if the line \(ab\) intersects the line \(cd\), then the line \(ac\) intersects the line \(bd\);

then the triple \((P,L,\iota)\) is called a _projective geometry_. In particular, the simple matroid associated with the vector space \(\mathbb{F}^{r}\) is a projective geometry, denoted by \(PG(r-1,\mathbb{F})\). When \(\mathbb{F}\) is the finite field \(GF(q)\) with \(q\) elements, it is customary to denote this projective geometry by \(PG(r-1,q)\). A subspace of a projective geometry \((P,L,\iota)\) is a subset \(P^{\prime}\subseteq P\) such that if \(a\) and \(b\) are distinct points of \(P^{\prime}\), then all points on the line \(ab\) are in \(P^{\prime}\). Let \(\mathcal{L}(P)\) be the poset of all subspaces of the projective geometry \((P,L,\iota)\), ordered by inclusion. From classical lattice theory, we know that \(\mathcal{L}(P)\) is a modular geometric lattice. Let \((P,L,\iota)\) be a projective geometry of rank \(r\). The _dual geometry_ \((P^{*},L^{*},\iota^{*})\) of \((P,L,\iota)\) is defined by regarding the coatoms of \(\mathcal{L}(P)\) as the points of \((P^{*},L^{*},\iota^{*})\); two distinct points \(a,b\in P^{*}\) span a line in \(L^{*}\) if and only if \(r_{\mathcal{L}(P)}(a\cap b)=r-2\). In 1967, Birkhoff [6] showed that the modular matroids can be characterized by finite projective geometries.

**Proposition 4.4** ([6, 22]).: _Let \(M\) be a simple matroid.
Then \(M\) is modular if and only if every connected component of \(M\) is either the free matroid \(U_{1,1}\) or a finite projective geometry._ From this perspective, Lemma 4.2 implies that if \(M\) is a connected modular matroid, then the extension lattice \(\mathcal{E}(M)\) coincides with the lattice consisting of all the subspaces of the dual geometry of \(M\). More specifically, hyperplanes of \(M\) can be identified with points, and members of \(\mathcal{E}(M)\) of rank two can be regarded as lines in this case. Immediately, these points, lines and the inclusion relations between them determine a projective geometry, which is precisely the dual geometry of \(M\). Then the following proposition is a direct consequence of Proposition 4.1; see also [13, 11.2.3 Proposition]. **Proposition 4.5**.: _Let \((P,L,\iota)\) be a finite projective geometry. Then the lattice of its dual geometry is isomorphic to its opposite lattice. Moreover, the dual geometry of \((P,L,\iota)\) is a finite projective geometry._ Next we will describe the link between modular matroids and their adjoints. For this purpose, we require the classical Coordinatization Theorem, which states that a projective geometry of rank at least four can be constructed as the projective geometry associated to a vector space over a field; it is also called the Veblen-Young Theorem [24]. **Theorem 4.6** ([24], Coordinatization Theorem).: _Every projective geometry of rank \(r\geq 4\) is isomorphic to \(PG(r-1,\mathbb{F})\) for some field \(\mathbb{F}\). In particular, every finite projective geometry of rank \(r\geq 4\) is isomorphic to \(PG(r-1,q)\) for some finite field \(GF(q)\) with \(q\) elements._ **Remark 4.7**.: Let \(M\) be a simple connected matroid of rank \(r\). 
Proposition 4.4 and Theorem 4.6 indicate that if \(M\) is a modular matroid of rank \(r\geq 4\), then \(M\) is isomorphic to the projective geometry \(PG(r-1,q)\), and the unique adjoint \(adM\) is always isomorphic to the type I adjoint \(\sigma PG(r-1,q)\) of \(PG(r-1,q)\). **Theorem 4.8**.: _Let \(M\) be a simple matroid with no rank \(3\) connected components. Then \(M\) is modular if and only if \(M\) has an adjoint \(adM\) such that \(adM\cong M\)._ Proof.: For the necessity, note first that \(adM\cong M\) implies \(\mathcal{L}(adM)\cong\mathcal{L}(M)^{op}\). This means that \(\mathcal{L}(M)^{op}\) is a geometric lattice, and so \(M\) is modular. For the sufficiency, by Lemma 3.4 and Corollary 3.6, we may assume that \(M\) is connected with rank \(r\). For \(r=1\) and \(2\), we have \(M\cong U_{1,1}\) and \(M\cong U_{2,m}\) for some positive integer \(m\geq 3\) from Proposition 4.4, and both cases are trivial. For \(r>3\), according to Remark 4.7, we need only consider the case that \(M\) is the finite projective geometry \(PG(r-1,q)\) for some finite field \(GF(q)\) with \(q\) elements, and \(adM\) is identified with the type I adjoint \(\sigma PG(r-1,q)\) of \(PG(r-1,q)\). From elementary linear algebra, there is a natural one-to-one correspondence between all \((r-1)\)-dimensional subspaces and all \(1\)-dimensional subspaces of the vector space \(GF(q)^{r}\) such that each \((r-1)\)-dimensional subspace corresponds to its \(1\)-dimensional normal complement space in \(GF(q)^{r}\). This yields the bijection \[\Psi:\mathcal{H}\big{(}PG(r-1,q)\big{)}\to E\big{(}PG(r-1,q)\big{)},\quad H\mapsto\Psi(H)=\boldsymbol{h},\] such that \(\operatorname{span}(H)\oplus\operatorname{span}(\boldsymbol{h})=GF(q)^{r}\) for \(H\in\mathcal{H}\big{(}PG(r-1,q)\big{)}\). Combining with the definition of the type I adjoint in (2.2), we obtain \(PG(r-1,q)\cong\sigma PG(r-1,q)\) immediately. We complete the proof. 
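The bijection \(\Psi\) can be sampled concretely in the smallest case \(PG(2,2)\), the Fano plane. The following Python sketch (our own illustration; the variable names are not from the text) matches each hyperplane, i.e. line, with the nonzero linear functional that vanishes on it, and checks the counts \(|E(M)|=|\mathcal{H}(M)|=7\) together with the incidence axiom that any two points lie on exactly one line.

```python
from itertools import product, combinations

q = 2  # work over GF(2); PG(2,2) is the Fano plane
pts = [v for v in product(range(q), repeat=3) if any(v)]   # 7 projective points
dot = lambda u, v: sum(a * b for a, b in zip(u, v)) % q

# Psi^{-1}: each nonzero functional h defines the hyperplane (line) it vanishes on
lines = {h: frozenset(p for p in pts if dot(h, p) == 0) for h in pts}

assert len(pts) == 7 and len(set(lines.values())) == 7     # |E(M)| = |H(M)|
assert all(len(L) == 3 for L in lines.values())            # every line has 3 points
# incidence axiom: any two distinct points lie on exactly one common line
for a, b in combinations(pts, 2):
    assert sum(1 for L in lines.values() if a in L and b in L) == 1
```

Since \(h\mapsto\ker(h)\) is injective here, the seven lines are matched bijectively with the seven points, reflecting the conclusion \(PG(2,2)\cong\sigma PG(2,2)\).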
Let \(M\) be a simple matroid and suppose \(adM\) is an adjoint of \(M\). We conclude the preceding arguments by pointing out the following relations: \[adM\cong M\Leftrightarrow\mathcal{L}(M)\cong\mathcal{L}(M)^{op}\Rightarrow M\text{ is modular}\Leftrightarrow\mathcal{L}(adM)\cong\mathcal{L}(M)^{op}.\] When \(M\) is modular, \(\mathcal{L}(M)\) may fail to be isomorphic to \(\mathcal{L}(M)^{op}\); as far as we know, the main reason is that some projective planes are not self-dual. At the end of this section, we further point out a close connection between the modularity of a matroid \(M\) and the size of an adjoint \(adM\). Let \(M\) be a simple matroid. Recall from [15, Theorem 2] that \(M\) is modular if and only if \(|\mathcal{H}(M)|=|E(M)|\). Combining with Proposition 4.1, we obtain directly the following result. **Proposition 4.9**.: _Let \(M\) be a simple matroid. Then \(M\) is modular if and only if \(M\) has an adjoint \(adM\) such that \(|E(M)|=|E(adM)|\)._ ## 5 Adjoint sequences Inspired by Oxley and Wang's work in [23], we will study the classification of adjoint sequences for arbitrary matroids. Modular matroids will be the key ingredient in characterizing adjoint sequences. Recall the arguments at the beginning of Section 4: intuitively, an adjoint of a matroid \(M\) is closer to a modular matroid than \(M\) itself. Thus a natural question arises: if a connected matroid \(M\) has an adjoint sequence \(ad^{0}M=M,adM,ad^{2}M,\ldots\), is such an adjoint sequence eventually convergent to a projective geometry? It is worth noting that [8, Exercise 7.17] seems to foreshadow this phenomenon. Let \(M\) be a matroid and \(ad^{0}M=M\). Inductively, for any positive integer \(k\), the \(k\)_th adjoint_ \(ad^{k}M\) of \(M\) is an adjoint of \(ad^{k-1}M\), provided that \(ad^{k-1}M\) exists and has an adjoint. _An adjoint sequence_ of \(M\) is the sequence \(ad^{0}M,adM,ad^{2}M,\ldots\). 
Such an adjoint sequence may stop after finitely many steps with a matroid that fails to admit an adjoint. Based on the fact that a matroid with rank smaller than three always admits an adjoint, we can easily obtain the following result and omit its proof. **Theorem 5.1**.: _Let \(M\) be a simple connected matroid of rank \(r\leq 2\) and size \(m\). Then for all integers \(k\geq 0\), we have that \(ad^{k}M\cong U_{1,1}\) for \(r=1\), and \(ad^{k}M\cong U_{2,m}\) for \(r=2\)._ In order to investigate adjoint sequences of matroids with rank greater than two, we need the next key result. **Lemma 5.2**.: _Let \(M\) be a simple matroid. If \(M\) has a \(2\)nd adjoint \(ad^{2}M\), then \(M\) is a submatroid of \(ad^{2}M\) up to isomorphism._ Proof.: Let \(\phi_{1}\) and \(\phi_{2}\) be the adjoint maps of \(M\) and \(adM\), respectively. According to the definition of the adjoint map in Section 2, the maps \(\phi_{1}\) and \(\phi_{2}\) induce an order-preserving injective map \(\phi_{2}\circ\phi_{1}\) from \(\mathcal{L}(M)\) to \(\mathcal{L}(ad^{2}M)\) sending each element \(e\in E(M)\) to \(\phi_{2}\circ\phi_{1}(e)\in E(ad^{2}M)\). Taking an independent set \(\{e_{1},e_{2},\ldots,e_{k}\}\) of \(M\), we know that every \(\phi_{1}(e_{i})\) is a hyperplane of \(adM\) and \(r_{adM}\bigl{(}\bigcap_{j=1}^{i}\phi_{1}(e_{j})\bigr{)}=r(M)-i\) for all \(1\leq i\leq k\) from properties (ii) and (iii) in Proposition 2.3. Immediately, we have \(\bigcap_{j=1}^{i+1}\phi_{1}(e_{j})\subsetneq\bigcap_{j=1}^{i}\phi_{1}(e_{j})\) for each \(1\leq i\leq k-1\). From property (v) in Proposition 2.3, we conclude that \(\{\phi_{2}\circ\phi_{1}(e_{1}),\phi_{2}\circ\phi_{1}(e_{2}),\ldots,\phi_{2}\circ\phi_{1}(e_{k})\}\) is an independent set of \(ad^{2}M\). So \(M\) is indeed isomorphic to a submatroid of \(ad^{2}M\). 
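For represented matroids, the embedding of Lemma 5.2 can be checked concretely with type I adjoints. The following Python sketch is our own toy example (not from the text): it takes \(M=U_{3,4}\) represented over \(\mathbb{Q}\), builds \(\sigma M\) from the normals of the hyperplane spans via cross products, repeats the step, and verifies that the points of \(M\) reappear, up to scalar, among the points of \(\sigma^{2}M\).

```python
from itertools import combinations
from fractions import Fraction

def cross(u, v):
    # cross product in F^3: a normal vector of the plane spanned by u and v
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def normalize(v):
    # rescale so the first nonzero coordinate is 1: one representative per projective point
    p = next(x for x in v if x)
    return tuple(Fraction(x) / Fraction(p) for x in v)

M = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]   # U_{3,4} represented over Q
# type I adjoint: hyperplanes of U_{3,4} are the 2-subsets; take a normal of each span
sigmaM = {normalize(cross(a, b)) for a, b in combinations(M, 2)}
# second type I adjoint: normals of the planes spanned by pairs of points of sigma M
sigma2M = {normalize(cross(a, b)) for a, b in combinations(sigmaM, 2)}

assert len(sigmaM) == 6
assert {normalize(e) for e in M} <= sigma2M        # M embeds into sigma^2 M (Lemma 5.2)
```

Here \(\sigma^{2}M\) has seven points, so the containment of \(M\) is strict; iterating \(\sigma\) only enlarges the matroid within its ambient projective plane.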
The following result shows that if a \(k\)th adjoint \(ad^{k}M\) is isomorphic to the original matroid \(M\), then \(M\) is isomorphic to an adjoint \(adM\) or a \(2\)nd adjoint \(ad^{2}M\). **Lemma 5.3**.: _Let \(M\) be a simple matroid. If \(M\) has a \(k\)th adjoint \(ad^{k}M\) for some \(k\geq 3\) such that \(M\cong ad^{k}M\), then \(M\) is isomorphic to \(adM\) or \(ad^{2}M\)._ Proof.: Suppose \(k\) is an even number. According to Lemma 5.2, we know that \(ad^{2}M\) is a submatroid of \(ad^{k}M\). Note that \(|E(M)|\leq|E(ad^{2}M)|\leq|E(ad^{k}M)|\). It follows from \(M\cong ad^{k}M\) that \(M\cong ad^{2}M\). When \(k\) is an odd number, we can prove \(M\cong adM\) by the same argument as in the above case. Lemma 5.2 states that \(ad^{i}M\) can be embedded naturally into \(ad^{i+2}M\) as a submatroid. This leads to an interesting phenomenon: there are two non-decreasing adjoint sequences of matroids, all having the same rank as the original matroid, one beginning with the original matroid and the other starting with its adjoint. Moreover, Lemma 5.3 further implies that if there exist distinct non-negative integers \(i<j\) with \(ad^{i}M\cong ad^{j}M\), then the adjoint sequence is cyclic starting from \(ad^{i}M\), that is, \(ad^{i+2k}M\cong ad^{i}M\) and \(ad^{i+2k+1}M\cong ad^{i+1}M\) for all possible integers \(k\geq 0\). The following result indicates that the cyclic adjoint sequences eventually stabilize at the finite projective geometries. **Theorem 5.4**.: _Let \(M\) be a simple connected matroid of rank \(r\geq 3\). 
If \(M\) has an adjoint sequence \(ad^{0}M,adM,ad^{2}M,\ldots\) such that \(ad^{i}M\cong ad^{j}M\) for some non-negative integers \(i<j\), then_ * _if_ \(r=3\)_, the_ \(k\)_th adjoint_ \(ad^{k}M\) _always exists for all_ \(k\geq i\)_, and we have that_ * _if_ \(j-i\) _is odd, then_ \(ad^{k}M\) _is isomorphic to the same finite projective plane for all_ \(k\geq i\)_;_ * _if_ \(j-i\) _is even, then_ \(ad^{i+2k}M\) _(_\(ad^{i+2k+1}M\)_) is isomorphic to the same finite projective plane for all_ \(k\geq 0\) _(resp.);_ * _if_ \(r\geq 4\)_, the_ \(k\)_th adjoint_ \(ad^{k}M\) _always exists and is isomorphic to the same finite projective geometry_ \(PG(r-1,q)\) _for all_ \(k\geq i\)_._ Proof.: For \(r=3\), if \(j-i\) is odd, we get \(ad^{i}M\cong ad^{i+1}M\) by Lemma 5.3. This means \(\mathcal{L}(ad^{i+1}M)\cong\mathcal{L}(ad^{i}M)^{op}\) and \(\mathcal{L}(ad^{i}M)^{op}\) is a geometric lattice. Immediately, we have that \(ad^{i}M\) is modular. On the other hand, the uniqueness of the adjoint of a modular matroid in Theorem 4.3 further implies that \(ad^{i+1}M\) is the unique adjoint of \(ad^{i}M\) up to isomorphism. It follows from \(ad^{i}M\cong ad^{i+1}M\) that \(ad^{k}M\) always exists and \(ad^{k}M\cong ad^{i}M\) for all \(k>i\). Then \(ad^{k}M\) is isomorphic to a finite projective plane for all \(k\geq i\) via Proposition 4.4. If \(j-i\) is even, we obtain \(ad^{i}M\cong ad^{i+2}M\) from Lemma 5.3, which indicates \(\mathcal{L}(ad^{i+1}M)\cong\mathcal{L}(ad^{i}M)^{op}\). Using the same arguments as in the above case, we can also arrive at \(ad^{i+2k}M\cong ad^{i}M\) and \(ad^{i+2k+1}M\cong ad^{i+1}M\) for all \(k\geq 0\). Then we can verify that \(ad^{i+2k}M\) (\(ad^{i+2k+1}M\)) is isomorphic to a finite projective plane by Proposition 4.4 (resp.). For \(r\geq 4\), as an application of Lemma 5.3, we have that \(ad^{i}M\) is isomorphic to \(ad^{i+1}M\) or \(ad^{i+2}M\). This implies \(\mathcal{L}(ad^{i+1}M)\cong\mathcal{L}(ad^{i}M)^{op}\) in both cases. 
Likewise, \(ad^{i}M\) is modular. Combining with Theorem 4.8, we arrive at that \(ad^{k}M\) always exists and \(ad^{k}M\cong ad^{i}M\) for all \(k>i\) via the uniqueness of the adjoint of a modular matroid. So for all \(k\geq i\), \(ad^{k}M\) is isomorphic to a finite projective geometry \(PG(r-1,q)\) from Remark 4.7. The proof is completed. To classify the infinite non-repeating adjoint sequences, we first introduce the direct limit of a directed system associated to matroids. Let \(M=(E,\mathcal{I})\) and \(M^{\prime}=(E^{\prime},\mathcal{I}^{\prime})\) be two matroids. An injection \(\iota:M\hookrightarrow M^{\prime}\) is called an _embedding_ if the image \(\iota(M):=\big{(}\iota(E),\iota(\mathcal{I})\big{)}\) of \(\iota\) is a submatroid of \(M^{\prime}\). Let \(\mathcal{M}\) be the category of all matroids, including finite and infinite matroids. For more information on infinite matroids, see [21, 22]. Let \(\{M_{i}\mid i\in\mathbb{N}\}\) be a family of matroids in \(\mathcal{M}\). For each pair \(i,j\in\mathbb{N}\) with \(i\leq j\), assume given an embedding map \(f_{ij}:M_{i}\hookrightarrow M_{j}\) such that, whenever \(i\leq j\leq k\) in \(\mathbb{N}\), we have \[f_{jk}\circ f_{ij}=f_{ik}\quad\text{ and }\quad f_{ii}=\operatorname{id},\] where \(\operatorname{id}\) denotes the identity mapping. Such a triple \(\big{(}\mathbb{N},\{M_{i}\},\{f_{ij}\}\big{)}\) is called a _directed system_ in \(\mathcal{M}\). **Definition 5.5**.: Let \(\mathcal{M}\) be the category of all matroids and \(\big{(}\mathbb{N},\{M_{i}\},\{f_{ij}\}\big{)}\) a directed system in \(\mathcal{M}\). An element \(M\in\mathcal{M}\) is called a _direct limit_ of this system if there exists an embedding map \(f_{i}:M_{i}\hookrightarrow M\) for each \(i\in\mathbb{N}\) with the following properties: 1. \(f_{i}=f_{j}\circ f_{ij}\) for any integers \(i\leq j\) in \(\mathbb{N}\). 2. 
Given an element \(N\in\mathcal{M}\) and embeddings \(g_{i}:M_{i}\hookrightarrow N\) such that \(g_{i}=g_{j}\circ f_{ij}\) for any integers \(i\leq j\) in \(\mathbb{N}\), there exists a unique embedding \(g:M\hookrightarrow N\) such that \(g_{i}=g\circ f_{i}\). For such a direct limit, we write \(M=\varinjlim_{\vec{i}}M_{i}\). Let \(M\) be a simple connected matroid. Suppose \(M\) has an infinite adjoint sequence \(ad^{0}M,adM,ad^{2}M,\ldots\). Let \(\phi_{k}^{2}\) be an adjoint map from \(ad^{2k}M\) to \(ad^{2k+1}M\), and \(\phi_{k}^{1}\) an adjoint map from \(ad^{2k+1}M\) to \(ad^{2(k+1)}M\) for all integers \(k\in\mathbb{N}\). Recall from Lemma 5.2 that for any integers \(i\leq j\) in \(\mathbb{N}\), \(ad^{2i}M\) is a submatroid of \(ad^{2j}M\), which yields a natural embedding map \[\phi_{ij}:ad^{2i}M\hookrightarrow ad^{2j}M\quad\text{ such that }\quad\phi_{ij}=\phi_{j-1}^{1}\circ\phi_{j-1}^{2}\cdots\phi_{i}^{1}\circ\phi_{i}^{2}.\] It is clear that for any integers \(i\leq j\leq k\) in \(\mathbb{N}\), we have \(\phi_{ik}=\phi_{jk}\circ\phi_{ij}\) and \(\phi_{ii}=\operatorname{id}\). Therefore, the triple \(\big{(}\mathbb{N},\{ad^{2i}M\},\{\phi_{ij}\}\big{)}\) is a directed system in \(\mathcal{M}\). Next we shall construct a matroid \(\bar{M}\) (possibly infinite) associated to the even adjoint sequence \(ad^{0}M,ad^{2}M,\ldots\), which turns out to be a direct limit of the directed system \(\left(\mathbb{N},\{ad^{2i}M\},\{\phi_{ij}\}\right)\). Let \(E_{\infty}=\bigcup_{i=0}^{\infty}E(ad^{2i}M)\). Define an equivalence relation \(\sim\) on \(E_{\infty}\) such that for any members \(e,f\in E_{\infty}\), \(e\sim f\) if and only if there exist integers \(i\leq j\) in \(\mathbb{N}\) such that \(e\in E(ad^{2i}M)\), \(f\in E(ad^{2j}M)\) and \(\phi_{ij}(e)=f\). Given an element \(e\in E_{\infty}\), let \(\bar{e}\) be the equivalence class of elements of \(E_{\infty}\) containing \(e\), that is, \(\bar{e}:=\{f\ |\ f\sim e,f\in E_{\infty}\}\). 
Let \(\bar{E}\) be the set of all equivalence classes of \(E_{\infty}\), i.e., \(\bar{E}:=E_{\infty}/\sim=\left\{\bar{e}\ |\ e\in E_{\infty}\right\}\). Let \(\bar{\mathcal{I}}\) be the collection of subsets of \(\bar{E}\) defined as follows: for any subset \(I=\{\bar{e}_{1},\bar{e}_{2},\ldots,\bar{e}_{j}\}\) of \(\bar{E}\), assume \(e_{k}\in ad^{2i_{k}}M\) for each \(k=1,2,\ldots,j\) with \(i_{1}\leq i_{2}\leq\cdots\leq i_{j}\); then \(I\in\bar{\mathcal{I}}\) if and only if \(\left\{\phi_{i_{1}i_{j}}(e_{1}),\phi_{i_{2}i_{j}}(e_{2}),\ldots,\phi_{i_{j}i_{j}}(e_{j})\right\}\) is an independent set of \(ad^{2i_{j}}M\). In this case, obviously, the set \(\left\{\phi_{i_{1}l}(e_{1}),\phi_{i_{2}l}(e_{2}),\ldots,\phi_{i_{j}l}(e_{j})\right\}\) is independent in \(ad^{2l}M\) whenever \(l\geq i_{j}\). **Lemma 5.6**.: _Let \(M\) be a simple connected matroid of rank \(r\). If \(M\) has an infinite adjoint sequence \(ad^{0}M,adM,ad^{2}M,\ldots\), then \(\bar{M}=(\bar{E},\bar{\mathcal{I}})\) is a simple connected matroid of rank \(r\)._ Proof.: We shall first prove that \(\bar{M}=(\bar{E},\bar{\mathcal{I}})\) is a matroid. Obviously, \(\emptyset\in\bar{\mathcal{I}}\). For any member \(I\) of \(\bar{\mathcal{I}}\), the construction of \(\bar{\mathcal{I}}\) guarantees that all subsets of \(I\) are also members of \(\bar{\mathcal{I}}\). Given two elements \(I_{1}=\{\bar{e}_{1},\bar{e}_{2},\ldots,\bar{e}_{k}\}\) and \(I_{2}=\{\bar{f}_{1},\bar{f}_{2},\ldots,\bar{f}_{l}\}\) of \(\bar{\mathcal{I}}\) with \(k<l\), we may assume that \(e_{m}\in ad^{2i_{m}}M\) (\(m=1,2,\ldots,k\)) with \(i_{1}\leq i_{2}\leq\cdots\leq i_{k}\), \(f_{n}\in ad^{2j_{n}}M\) (\(n=1,2,\ldots,l\)) with \(j_{1}\leq j_{2}\leq\cdots\leq j_{l}\), and \(i_{k}\leq j_{l}\). 
According to the definition of \(\bar{\mathcal{I}}\), we have that \(I^{\prime}_{1}=\left\{\phi_{i_{1}j_{l}}(e_{1}),\phi_{i_{2}j_{l}}(e_{2}),\ldots,\phi_{i_{k}j_{l}}(e_{k})\right\}\) and \(I^{\prime}_{2}=\left\{\phi_{j_{1}j_{l}}(f_{1}),\phi_{j_{2}j_{l}}(f_{2}),\ldots,\phi_{j_{l}j_{l}}(f_{l})\right\}\) are independent sets of \(ad^{2j_{l}}M\). Since \(ad^{2j_{l}}M\) is a matroid, the independent sets \(I^{\prime}_{1}\) and \(I^{\prime}_{2}\) satisfy the independence augmentation property in \(ad^{2j_{l}}M\). This implies that \(I_{1}\) and \(I_{2}\) meet the independence augmentation property in \(\bar{M}\) as well. Hence, the ordered pair \((\bar{E},\bar{\mathcal{I}})\) satisfies the independence axioms of a matroid; namely, \(\bar{M}\) is a matroid. Moreover, recalling the definition of \(\bar{\mathcal{I}}\) again, we easily obtain that \(|I|\leq r\) for any member \(I\in\bar{\mathcal{I}}\), and that \(\{\bar{e}_{1},\bar{e}_{2},\ldots,\bar{e}_{r}\}\) is an independent set of \(\bar{M}\) for a basis \(\{e_{1},e_{2},\ldots,e_{r}\}\) of \(M\). So, the rank of \(\bar{M}\) equals \(r\). Obviously, \(\bar{M}\) is simple since each matroid \(ad^{2i}M\) is simple. In addition, taking any members \(\bar{e}\) and \(\bar{f}\) of \(\bar{E}\), we may assume that \(e,f\in E(ad^{2j}M)\) for some non-negative integer \(j\). Then we obtain from Lemma 3.4 that \(ad^{2j}M\) is connected since \(M\) is connected. It follows that \(ad^{2j}M\) has a circuit \(C\) containing \(e,f\). Let \(C=\{e,f,g_{1},\ldots,g_{i}\}\) and \(\bar{C}=\{\bar{e},\bar{f},\bar{g}_{1},\ldots,\bar{g}_{i}\}\). Immediately, the construction of \(\bar{M}\) implies that \(\bar{C}\) is a circuit of \(\bar{M}\). Thus \(\bar{M}\) is connected. The proof is completed. **Proposition 5.7**.: _Let \(M\) be a simple connected matroid. 
If \(M\) has an infinite adjoint sequence \(ad^{0}M,adM,ad^{2}M,\ldots\), then \(\bar{M}\) is a direct limit of the directed system \(\left(\mathbb{N},\{ad^{2i}M\},\{\phi_{ij}\}\right)\), namely, \(\bar{M}=\varinjlim\limits_{i}ad^{2i}M\)._ Proof.: Lemma 5.6 states that \(\bar{M}\) is a matroid. We define a map \(\phi_{i}:ad^{2i}M\rightarrow\bar{M}\) sending \(e\in E(ad^{2i}M)\) to its equivalence class \(\bar{e}\in\bar{E}\). Obviously, \(\phi_{i}\) is injective. Note from the construction of \(\bar{\mathcal{I}}\) that for any subset \(I=\{e_{1},\ldots,e_{k}\}\) of \(E(ad^{2i}M)\), if \(I\) is an independent set of \(ad^{2i}M\), then \(\{\bar{e}_{1},\ldots,\bar{e}_{k}\}\) is independent in \(\bar{M}\). Namely, the image \(\phi_{i}(ad^{2i}M)\) of \(\phi_{i}\) is a submatroid of \(\bar{M}\). So, \(\phi_{i}\) is an embedding map. Additionally, for any integers \(i\leq j\) in \(\mathbb{N}\) and \(e\in E(ad^{2i}M)\), we have \(\phi_{i}(e)=\phi_{j}\circ\phi_{ij}(e)=\bar{e}\) since \(\phi_{ij}(e)\sim e\). Thus, we have obtained \(\phi_{i}=\phi_{j}\circ\phi_{ij}\) for any integers \(i\leq j\) in \(\mathbb{N}\). Namely, \(\phi_{i}\) satisfies property (i) in Definition 5.5. Next we will show that \(\phi_{i}\) satisfies property (ii) in Definition 5.5. Given a matroid \(N\in\mathcal{M}\) and embedding maps \(\psi_{i}:ad^{2i}M\to N\) such that \(\psi_{i}=\psi_{j}\circ\phi_{ij}\) for any integers \(i\leq j\) in \(\mathbb{N}\), we need to show that there is a unique embedding map \(\psi:\bar{M}\to N\) for which \(\psi_{i}=\psi\circ\phi_{i}\). Define a map \(\psi:\bar{M}\to N\) such that \(\psi(\bar{e})=\psi_{i}(e)\) if \(\bar{e}=\phi_{i}(e)\) for some \(e\in E(ad^{2i}M)\). The relation \(\psi_{i}=\psi_{j}\circ\phi_{ij}\) implies that \(\psi\) is well defined. Firstly, we shall prove the injectivity of \(\psi\). 
Given two distinct members \(\bar{e}\) and \(\bar{f}\) of \(\bar{E}\), we may assume that \(\psi(\bar{e})=\psi_{i}(e)\) for \(e\in E(ad^{2i}M)\) and \(\psi(\bar{f})=\psi_{j}(f)\) for \(f\in E(ad^{2j}M)\) with \(i\leq j\) in \(\mathbb{N}\). Since \(\bar{e}\neq\bar{f}\), we have \(\phi_{ij}(e)\neq f\) in \(ad^{2j}M\). Then the injectivity of \(\psi_{j}\) means that \(\psi_{i}(e)=\psi_{j}\circ\phi_{ij}(e)\neq\psi_{j}(f)\). Namely, \(\psi\) is injective. Secondly, we will verify that the image \(\psi(\bar{M})\) of \(\psi\) is a submatroid of \(N\). This problem reduces to showing that for a fixed independent set \(\bar{I}=\{\bar{e}_{1},\bar{e}_{2},\ldots,\bar{e}_{j}\}\) of \(\bar{M}\), the set \(\psi(\bar{I}):=\{\psi(\bar{e}_{1}),\psi(\bar{e}_{2}),\ldots,\psi(\bar{e}_{j})\}\) is independent in \(N\). Suppose \(\psi(\bar{e}_{k})=\psi_{i_{k}}(e_{k})\) for some \(e_{k}\in E(ad^{2i_{k}}M)\) and \(i_{1}\leq i_{2}\leq\cdots\leq i_{j}\). According to the construction of \(\bar{M}\), we arrive at that \(\{\phi_{i_{1}i_{j}}(e_{1}),\phi_{i_{2}i_{j}}(e_{2}),\ldots,\phi_{i_{j}i_{j}}(e_{j})\}\) is an independent set of \(ad^{2i_{j}}M\) since \(\bar{I}\) is independent in \(\bar{M}\). Noticing that \(\psi(\bar{e}_{k})=\psi_{i_{j}}\circ\phi_{i_{k}i_{j}}(e_{k})\) for \(k=1,2,\ldots,j\), we obtain that \(\psi(\bar{I})\) is independent in \(N\) since \(\psi_{i_{j}}\) is an embedding map from \(ad^{2i_{j}}M\) to \(N\). Up to now, we have verified that \(\psi\) is an embedding map and \(\psi_{i}=\psi\circ\phi_{i}\). Note that the embedding maps \(\psi_{i}\) and \(\phi_{i}\) completely determine the map \(\psi\), that is, \(\psi\) is unique. Hence, \(\bar{M}\) is a direct limit of \(\big{(}\mathbb{N},\{ad^{2i}M\},\{\phi_{ij}\}\big{)}\). We complete the proof. Proposition 5.7 states that \(\lim\limits_{\stackrel{{\rightarrow}}{{i}}}ad^{2i}M\) is a matroid. 
The following result further shows that \(\lim\limits_{\stackrel{{\rightarrow}}{{i}}}ad^{2i}M\) is a projective geometry, which is implicitly contained in [8, Hint of Exercise 7.17]. We shall omit its straightforward proof. **Proposition 5.8**.: _Let \(M\) be a simple connected matroid of rank \(r\geq 3\). If \(M\) has an infinite adjoint sequence \(ad^{0}M,adM,ad^{2}M,\ldots\), then \(\lim\limits_{\stackrel{{\rightarrow}}{{i}}}ad^{2i}M\) is a projective geometry._ Analogous to \(\lim\limits_{\stackrel{{\rightarrow}}{{i}}}ad^{2i}M\), we can define \(\lim\limits_{\stackrel{{\rightarrow}}{{i}}}ad^{2i+1}M\) by making a minor change. Additionally, using the same arguments as in Lemma 5.6, Proposition 5.7 and Proposition 5.8, we can verify that if a connected matroid \(M\) with rank greater than two has an infinite adjoint sequence \(ad^{0}M,adM,ad^{2}M,\ldots\), then \(\lim\limits_{\stackrel{{\rightarrow}}{{i}}}ad^{2i+1}M\) is a projective geometry. We are now ready to characterize the infinite non-repeating adjoint sequences by infinite projective geometries. **Theorem 5.9**.: _Let \(M\) be a simple connected matroid of rank \(r\geq 3\). If \(M\) has an infinite adjoint sequence \(ad^{0}M,adM,ad^{2}M,\ldots\) such that \(ad^{i}M\not\cong ad^{j}M\) for any non-negative integers \(i<j\), then_ * _if_ \(r=3\)_,_ \(\lim\limits_{\stackrel{{\rightarrow}}{{i}}}ad^{2i}M\) _(_\(\lim\limits_{\stackrel{{\rightarrow}}{{i}}}ad^{2i+1}M\)_) is an infinite projective plane;_ * _if_ \(r\geq 4\)_,_ \(\lim\limits_{\stackrel{{\rightarrow}}{{i}}}ad^{2i}M\) _and_ \(\lim\limits_{\stackrel{{\rightarrow}}{{i}}}ad^{2i+1}M\) _are isomorphic to the same infinite projective geometry_ \(PG(r-1,\mathbb{F})\) _for some infinite field_ \(\mathbb{F}\)_._ Proof.: For \(r\geq 4\), according to Theorem 4.6 and Proposition 5.8, we obtain that \(\lim\limits_{\stackrel{{\rightarrow}}{{i}}}ad^{2i}M\) is isomorphic to the projective geometry \(PG(r-1,\mathbb{F})\) for some field \(\mathbb{F}\). 
Suppose \(\mathbb{F}\) is a finite field \(GF(q)\) with \(q\) elements. Then \(PG(r-1,\mathbb{F})\) contains only finitely many distinct submatroids (up to isomorphism). Notice from Lemma 5.2 that \(ad^{2i}M\) can be viewed as a submatroid of \(PG(r-1,\mathbb{F})\) for all \(i\geq 0\). The preceding arguments then force \(ad^{i}M\cong ad^{j}M\) for some non-negative integers \(i<j\), which contradicts the assumption that \(ad^{i}M\not\cong ad^{j}M\) for all \(0\leq i<j\). So \(\mathbb{F}\) is an infinite field; namely, \(PG(r-1,\mathbb{F})\) is an infinite projective geometry. This implies that each \(ad^{2i}M\) is representable over \(\mathbb{F}\). Recall from [7, Lemma 2.8] that if \(M\) has an adjoint \(adM\) and \(M\) is representable over a field \(\mathbb{F}\), then \(adM\) is isomorphic to some type I adjoint of \(M\). Hence, every \(ad^{2i+1}M\) is representable over the field \(\mathbb{F}\). Similar to the argument for \(\varinjlim_{i}ad^{2i}M\), we can arrive at that \(\varinjlim_{i}ad^{2i+1}M\) is also isomorphic to the same infinite projective geometry \(PG(r-1,\mathbb{F})\). For \(r=3\), recall from Proposition 5.8 that \(\varinjlim_{i}ad^{2i}M\) (\(\varinjlim_{i}ad^{2i+1}M\)) is a projective plane \((P,L,\iota)\). Using the same argument as in the proof of the case \(r\geq 4\), we can also obtain that \((P,L,\iota)\) is an infinite projective plane. We complete the proof. Next we are ready to handle the classification problem of adjoint sequences for arbitrary matroids. Let us recall from Lemma 3.4 that an adjoint of a direct sum of two matroids is the direct sum of adjoints of these matroids. Immediately, the following result is a direct consequence of Theorem 5.1, Theorem 5.4 and Theorem 5.9. **Corollary 5.10**.: _Let \(M\) be a simple matroid, written as a direct sum of its connected components \(M_{1},\ldots,M_{n}\). Then_ 1. 
_if the rank of each component of_ \(M\) _is no more than two, then the_ \(k\)_th adjoint_ \(ad^{k}M\) _always exists and_ \(ad^{k}M\cong M\) _for all_ \(k\geq 0\)_;_ 2. _if_ \(M\) _has an adjoint sequence_ \(ad^{0}M,adM,\ldots\) _such that_ \(ad^{i}M\cong ad^{j}M\) _for some non-negative integers_ \(i<j\)_, then the_ \(k\)_th adjoint_ \(ad^{k}M\) _always exists and each connected component of_ \(ad^{k}M\) _is isomorphic to the free matroid_ \(U_{1,1}\) _or a finite projective geometry for all_ \(k\geq i\)_;_ 3. _if_ \(M\) _has an infinite adjoint sequence_ \(ad^{0}M,adM,ad^{2}M,\ldots\) _such that_ \(ad^{i}M\not\cong ad^{j}M\) _for any non-negative integers_ \(i<j\)_, then each connected component of the direct limit_ \(\varinjlim_{i}ad^{2i}M\) (\(\varinjlim_{i}ad^{2i+1}M\)) _is the free matroid_ \(U_{1,1}\) _or a projective geometry, and_ \(M\) _has at least one component_ \(M_{k}\) _such that_ \(\varinjlim_{i}ad^{2i}M_{k}\) _is an infinite projective geometry._ We close this section by discussing the link between matroid representability and infinite adjoint sequences. A basic question in matroid theory is how to decide whether a matroid is representable. As far as we know, this question is very challenging and still open. Note that if a matroid \(M\) is representable over some field, then it has an infinite type I adjoint sequence. Conversely, suppose \(M\) with rank greater than three has an infinite adjoint sequence; applying Proposition 5.8 to this adjoint sequence, we arrive at that \(M\) can be viewed as a submatroid of a projective geometry. Immediately, the Coordinatization Theorem (Theorem 4.6) means that \(M\) is representable. We conclude with the following result. **Corollary 5.11**.: _Let \(M\) be a simple connected matroid with rank greater than three. Then \(M\) is representable if and only if \(M\) has an infinite adjoint sequence._ ## 6 Type I adjoint ### Type I adjoint sequences To our knowledge, a matroid may fail to admit an adjoint. 
However, if a matroid is representable, it always has a type I adjoint. It is worth noting that the type I adjoint of a matroid depends on the choice of representation. As a byproduct of Section 5, this section will classify the type I adjoint sequences associated with a fixed \(\mathbb{F}\)-represented matroid into two types: cyclic and convergent. For a field \(\mathbb{F}\), let \(M\) be an \(\mathbb{F}\)-representable matroid on ground set \(E(M)=\{e_{1},e_{2},\ldots,e_{m}\}\), and let \(\varphi:E(M)\to\mathbb{F}^{n}\) be a representation of \(M\). The matrix \(A\) whose columns are the vectors \(\varphi(e_{1}),\varphi(e_{2}),\ldots,\varphi(e_{m})\) is the matrix corresponding to \(\varphi\); namely, \(M\) is \(M[A]\). The matrix \(A\) is known as an \(\mathbb{F}\)-representation of \(M\). Moreover, the pair \((M,\varphi)\), or equivalently the pair \((M,A)\), denotes an \(\mathbb{F}\)-represented matroid. Let \(M\) be an \(\mathbb{F}\)-represented matroid on ground set \(E(M)=\{e_{1},e_{2},\ldots,e_{m}\}\) of rank \(r\) with the representation \(\varphi:E(M)\to\mathbb{F}^{r}\). For each hyperplane \(H\in\mathcal{H}(M)\), all the vectors \(\varphi(e_{i})\) with \(e_{i}\in H\) generate a hyperplane \(\operatorname{span}(H)\) in \(\mathbb{F}^{r}\). Let \(\boldsymbol{h}_{H}\) be the normal vector of \(\operatorname{span}(H)\) in \(\mathbb{F}^{r}\) for \(H\in\mathcal{H}(M)\). This yields an \(\mathbb{F}\)-represented matroid \((\sigma M,\sigma\varphi)\) with ground set \(\mathcal{H}(M)\) such that \((\sigma\varphi)(H)=\boldsymbol{h}_{H}\) for all hyperplanes \(H\) in \(\mathcal{H}(M)\). The _type I adjoint_ \((\sigma M,\sigma\varphi)\) of \((M,\varphi)\) is defined as \[(\sigma M,\sigma\varphi):=M\big{[}\boldsymbol{h}_{H}\mid H\in\mathcal{H}(M)\big{]}.\] We shall frequently write \(\sigma M\) for \((\sigma M,\sigma\varphi)\). Moreover, let \((\sigma^{0}M,\sigma^{0}\varphi)=(M,\varphi)\). 
It is also possible to repeat the procedure of taking the type I adjoint: for any positive integer \(k\), the _\(k\)th type I adjoint_ \((\sigma^{k}M,\sigma^{k}\varphi)\) of \(M\) is the type I adjoint of \((\sigma^{k-1}M,\sigma^{k-1}\varphi)\). By the construction of the type I adjoint, \((M,\varphi)\) has an infinite type I adjoint sequence \((\sigma^{0}M,\sigma^{0}\varphi)\), \((\sigma^{1}M,\sigma^{1}\varphi)\), \((\sigma^{2}M,\sigma^{2}\varphi),\ldots\), which is also referred to as the Crapo sequence in [19]. To obtain a more precise classification of the type I adjoint sequences, let us first introduce Desargues' theorem, one of the most fundamental and beautiful results in projective geometry. It states that given three distinct lines \(a_{1}b_{1},a_{2}b_{2}\) and \(a_{3}b_{3}\) in \(PG(2,\mathbb{F})\), if the three lines meet at the point \(o\), then the points \(c_{1}=a_{1}a_{2}\cap b_{1}b_{2}\), \(c_{2}=a_{1}a_{3}\cap b_{1}b_{3}\) and \(c_{3}=a_{2}a_{3}\cap b_{2}b_{3}\) are collinear. The following theorem describes another beautiful property: if a projective plane satisfies Desargues' theorem, then it is isomorphic to a projective plane \(PG(2,\mathbb{F})\) obtained from a vector space \(\mathbb{F}^{3}\). **Theorem 6.1**.: [11] _Let \((P,L,\iota)\) be a projective plane. Then \((P,L,\iota)\) is isomorphic to \(PG(2,\mathbb{F})\) for a field \(\mathbb{F}\) if and only if Desargues' theorem holds. In particular, if \((P,L,\iota)\) is a finite projective plane, then \((P,L,\iota)\) is isomorphic to \(PG(2,q)\) for some prime power \(q\) if and only if Desargues' theorem holds._ Now we are ready to show that if an \(\mathbb{F}\)-represented matroid \((M,\varphi)\) is a projective plane, then Desargues' theorem holds. **Lemma 6.2**.: _Let \((M,\varphi)\) be an \(\mathbb{F}\)-represented matroid with the representation \(\varphi:E(M)\to\mathbb{F}^{3}\). 
If \(M\) is a projective plane, then \(M\cong PG(2,\mathbb{K})\) for some subfield \(\mathbb{K}\) of \(\mathbb{F}\)._ Proof.: By Theorem 6.1, it is sufficient to show that Desargues' theorem holds in \(M\). Since \(M\) is an \(\mathbb{F}\)-represented matroid, \(M\cong PG(2,\mathbb{F})|\varphi(E)\), where \(\varphi(E)\) is a finite subset of \(PG(2,\mathbb{F})\). We may thus assume that \(M=PG(2,\mathbb{F})|\varphi(E)\), so that three points in \(M\) are collinear if and only if their representing vectors are linearly dependent. Let \(a_{1}b_{1},a_{2}b_{2}\) and \(a_{3}b_{3}\) be three lines in \(M\) all meeting at the point \(o\), with \(a_{i},b_{i}\in\varphi(E)\) for \(i=1,2,3\) and \(o\in\varphi(E)\). Since \(a_{i}\neq b_{i}\) for \(i=1,2,3\), the vectors \(a_{i}\) and \(b_{i}\) form a basis of the 2-dimensional subspace corresponding to the line \(a_{i}b_{i}\) for \(i=1,2,3\). Since the point \(o\) is incident with each of the lines \(a_{i}b_{i}\), there exist \(\lambda_{i},\mu_{i}\in\mathbb{F}\) such that \[o=\lambda_{1}a_{1}+\mu_{1}b_{1},\ o=\lambda_{2}a_{2}+\mu_{2}b_{2},\ o=\lambda_{3}a_{3}+\mu_{3}b_{3}\] hold. From the above equations, we have \[c_{1}=\lambda_{1}a_{1}-\lambda_{2}a_{2}=\mu_{2}b_{2}-\mu_{1}b_{1},\ c_{2}=\lambda_{2}a_{2}-\lambda_{3}a_{3}=\mu_{3}b_{3}-\mu_{2}b_{2},\ c_{3}=\lambda_{3}a_{3}-\lambda_{1}a_{1}=\mu_{1}b_{1}-\mu_{3}b_{3}.\] Note that \(c_{1}\) is a linear combination of \(a_{1}\) and \(a_{2}\), and \(c_{1}\) is also a linear combination of \(b_{1}\) and \(b_{2}\). This implies that \(c_{1}\) lies on the lines \(a_{1}a_{2}\) and \(b_{1}b_{2}\). Since \(M\) is a projective plane, \(a_{1}a_{2}\) and \(b_{1}b_{2}\) have a unique common point, which implies \(c_{1}\in\varphi(E)\) and \(c_{1}\in a_{1}a_{2}\cap b_{1}b_{2}\). In the same way, we get \(c_{2}\in\varphi(E)\) with \(c_{2}\in a_{2}a_{3}\cap b_{2}b_{3}\), and \(c_{3}\in\varphi(E)\) with \(c_{3}\in a_{1}a_{3}\cap b_{1}b_{3}\). 
The equality \[c_{1}+c_{2}+c_{3}=(\lambda_{1}a_{1}-\lambda_{2}a_{2})+(\lambda_{2}a_{2}- \lambda_{3}a_{3})+(\lambda_{3}a_{3}-\lambda_{1}a_{1})=0\] means that \(c_{1},c_{2}\) and \(c_{3}\) are linearly dependent, i.e., \(c_{1},c_{2}\) and \(c_{3}\) are collinear. Hence the lemma is proved. It is clear that Lemma 6.2 holds for infinite matroids as well. Then, combining Lemma 6.2 with Theorem 5.1, Theorem 5.4 and Theorem 5.9 applied to the type I adjoint sequences, we directly obtain the following classification of the type I adjoint sequences, which is implicit in the work of Kung [19]. **Theorem 6.3**.: _Let \((M,\varphi)\) be a simple connected \(\mathbb{F}\)-represented matroid of rank \(r\) with the representation \(\varphi:E(M)\to\mathbb{F}^{r}\). Then_ * _when_ \(r=1,2\)_, we have_ \(\sigma^{i}M\cong U_{1,1}\) _and_ \(\sigma^{i}M\cong U_{2,m}\) _for all_ \(i\geq 0\)_, respectively;_ * _when_ \(r\geq 3\)_, if_ \(\sigma^{i}M\cong\sigma^{j}M\) _for some non-negative integers_ \(i<j\)_, then_ \(\mathbb{F}\) _is a finite field and_ \(\sigma^{k}M\cong PG(r-1,q)\) _for all_ \(k\geq i\)_, where_ \(GF(q)\) _with_ \(q\) _elements is a subfield of_ \(\mathbb{F}\)_;_ * _when_ \(r\geq 3\)_, if_ \(\sigma^{i}M\ncong\sigma^{j}M\) _for any non-negative integers_ \(i<j\)_, then_ \(\mathbb{F}\) _is an infinite field and_ \(\lim\limits_{i}\sigma^{2i}M\) _and_ \(\lim\limits_{i}\sigma^{2i+1}M\) _are isomorphic to the same infinite projective geometry_ \(PG(r-1,\mathbb{K})\)_, where_ \(\mathbb{K}\) _is an infinite subfield of_ \(\mathbb{F}\)_._ ### Duality When \(M\) is a vector matroid, Bixby and Coullard [7] constructed an adjoint of \(M\) in two equivalent ways: one from the cocircuits of \(M\), the other from the hyperplane flats of \(M\). From their constructions, we realize that there is a duality phenomenon between the adjoint and the derived matroid of vector matroids. Such a phenomenon has appeared in [12] as well, where Falk studied the discriminantal arrangements of general position configurations. 
For a field \(\mathbb{F}\), let \((M,A):=M[A]\) be an \(\mathbb{F}\)-represented matroid of rank \(r\) on ground set \(E(M)=\{e_{1},e_{2},\ldots,e_{m}\}\), where the columns of the matrix \(A\in\mathbb{F}^{r\times m}\) are labelled, in order, \(e_{1},e_{2},\ldots,e_{m}\). For every circuit \(C\in\mathcal{C}(M)\), there is a unique vector \(\mathbf{c}_{C}=(c_{1},c_{2},\ldots,c_{m})\) (up to a non-zero scalar multiple) in \(\mathbb{F}^{m}\) such that \(\sum_{i=1}^{m}c_{i}A_{e_{i}}=\mathbf{0}\), where \(c_{i}\neq 0\) if and only if \(e_{i}\in C\), called the _circuit vector_ of \(C\). The _Oxley-Wang derived matroid_\(\delta_{OW}M\) is defined as \[\delta_{OW}M:=M[\mathbf{c}_{C}\mid C\in\mathcal{C}(M)]. \tag{6.1}\] Let \(A^{\prime}\in\mathbb{F}^{(m-r)\times m}\) be a matrix whose rows form a basis of the solution space of \(A\mathbf{x}=\mathbf{0}\). It is well known from matroid theory [22, SS2.2] that the dual matroid \(M^{*}\) of \(M\) can be naturally represented by the columns of \(A^{\prime}\), i.e., \((M^{*},A^{\prime}):=M[A^{\prime}]\), where the columns of the matrix \(A^{\prime}\) are also labelled, in order, \(e_{1},e_{2},\ldots,e_{m}\). Accordingly, we can define the Oxley-Wang derived matroid \(\delta_{OW}M\) and the type I adjoint \(\sigma M^{*}\) via \(A\) and \(A^{\prime}\), respectively. Below we present a detailed proof of the duality relation in (1.1), which is an illuminating relation for future studies of the combinatorial derived matroid defined in [14]. 
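The construction in (6.1) can be made concrete with a small computation. The following is a minimal sketch (not from the paper; the matrix \(A\) below is a hypothetical example) that enumerates the circuits of \(M[A]\) over the rationals with exact arithmetic, computes their circuit vectors, and checks that the derived matroid has rank \(m-r\):

```python
from fractions import Fraction
from itertools import combinations

def rref(rows):
    """Reduced row echelon form over Q; returns (matrix, pivot column indices)."""
    m = [[Fraction(x) for x in r] for r in rows]
    pivots, r = [], 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

def rank(rows):
    return len(rref(rows)[1]) if rows else 0

def circuit_vector(A, C):
    """Circuit vector c_C (up to scale): spans the 1-dimensional kernel of the columns in C."""
    sub = [[A[i][j] for j in C] for i in range(len(A))]
    m, pivots = rref(sub)
    free = [j for j in range(len(C)) if j not in pivots][0]
    kern = [Fraction(0)] * len(C)
    kern[free] = Fraction(1)
    for r, p in enumerate(pivots):
        kern[p] = -m[r][free]
    full = [Fraction(0)] * len(A[0])
    for j, cj in zip(C, kern):
        full[j] = cj
    return full

# Hypothetical rank-2 matrix over Q on 4 elements (columns = ground set).
A = [[1, 0, 1, 1],
     [0, 1, 1, 2]]
col = lambda j: [A[i][j] for i in range(len(A))]
circuits = []
for k in range(1, len(A[0]) + 1):
    for S in combinations(range(len(A[0])), k):
        dependent = rank([col(j) for j in S]) < len(S)
        if dependent and not any(set(C) <= set(S) for C in circuits):
            circuits.append(S)

vectors = [circuit_vector(A, C) for C in circuits]
print(len(circuits), rank(vectors))   # rank of delta_OW(M) should be m - r = 2
```

For this matrix every 3-element column set is a circuit, and the four circuit vectors span the 2-dimensional null space of \(A\), in agreement with the rank \(m-r\) predicted by the duality relation.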
**Proposition 6.4**.: _If \((M,A)\) and \((M^{*},A^{\prime})\) are defined as above, then_ \[\delta_{OW}M\cong\sigma M^{*}.\] Proof.: Recall from (2.2) and (6.1) that \[\delta_{OW}M=M\big{[}\boldsymbol{c}_{C}\mid C\in\mathcal{C}(M)\big{]},\quad \sigma M^{*}=M\big{[}\boldsymbol{h}_{H}\mid H\in\mathcal{H}(M^{*})\big{]}.\] Note from matroid theory that \(C\) is a circuit of \(M\) if and only if \(C^{*}:=E(M)\setminus C\) is a hyperplane of \(M^{*}\), see [22, Proposition 2.1.6]. It suffices to show that for any circuits \(C_{1},C_{2},\ldots,C_{k}\in\mathcal{C}(M)\), the sets \(\{\boldsymbol{c}_{C_{1}},\boldsymbol{c}_{C_{2}},\ldots,\boldsymbol{c}_{C_{k}}\}\) and \(\{\boldsymbol{h}_{C_{1}^{*}},\boldsymbol{h}_{C_{2}^{*}},\ldots,\boldsymbol{h}_ {C_{k}^{*}}\}\) have the same rank. Let \(C\in\mathcal{C}(M)\) be a circuit with circuit vector \(\boldsymbol{c}_{C}=(c_{1},c_{2},\ldots,c_{m})\), i.e., \(A\boldsymbol{c}_{C}^{\top}=\boldsymbol{0}\) and \(c_{i}\neq 0\) if and only if \(e_{i}\in C\). Since \(\boldsymbol{c}_{C}\) lies in the solution space of \(A\mathbf{x}=\mathbf{0}\), it can be written uniquely as a linear combination of the rows of \(A^{\prime}\), say \(\boldsymbol{c}_{C}=\boldsymbol{h}A^{\prime}\), where obviously \(\boldsymbol{h}\neq\boldsymbol{0}\). Since \(c_{i}=0\) if and only if \(e_{i}\in C^{*}\), we have \(\boldsymbol{h}A^{\prime}_{e_{i}}=0\) for all \(e_{i}\in C^{*}\). Namely, \(\boldsymbol{h}\) is a normal vector of the hyperplane \(\operatorname{span}\{A^{\prime}_{e_{i}}:e_{i}\in C^{*}\}\) in \(\mathbb{F}^{m-r}\) spanned by the columns of \(A^{\prime}\) labelled by \(e_{i}\in C^{*}\). So by the definition of \(\boldsymbol{h}_{C^{*}}\) we may assume \(\boldsymbol{h}_{C^{*}}=\boldsymbol{h}\). 
From the above arguments, we have, for any circuits \(C_{1},C_{2},\ldots,C_{k}\in\mathcal{C}(M)\), \[\left[\begin{array}{c}\boldsymbol{c}_{C_{1}}\\ \vdots\\ \boldsymbol{c}_{C_{k}}\end{array}\right]=\left[\begin{array}{c}\boldsymbol{ h}_{C_{1}^{*}}\\ \vdots\\ \boldsymbol{h}_{C_{k}^{*}}\end{array}\right]A^{\prime}.\] Since the rows of \(A^{\prime}\) are linearly independent, the rank of \(\{\boldsymbol{c}_{C_{1}},\boldsymbol{c}_{C_{2}},\ldots,\boldsymbol{c}_{C_{k}}\}\) equals the rank of \(\{\boldsymbol{h}_{C_{1}^{*}},\boldsymbol{h}_{C_{2}^{*}},\ldots,\boldsymbol{h}_ {C_{k}^{*}}\}\), which completes the proof. In general, \(\sigma M\) may depend on the \(\mathbb{F}\)-representation of \(M\). Oxley and Wang [23, Theorem 8] showed that the Oxley-Wang derived matroid \(\delta_{OW}M\) of an \(\mathbb{F}\)-represented matroid \((M,A)\) does not depend on the \(\mathbb{F}\)-representation \(A\) if and only if \(\mathbb{F}\) is \(GF(2)\) or \(GF(3)\). An immediate consequence of this, by Proposition 6.4, is the following. **Corollary 6.5**.: _Let \(\mathbb{F}\) be a field. Then, for all \(\mathbb{F}\)-represented matroids \((M,A)\) of rank \(r\) and size \(m\), the type I adjoint \(\sigma M\) does not depend on the \(\mathbb{F}\)-representation \(A\) if and only if \(\mathbb{F}\) is \(GF(2)\) or \(GF(3)\), where the matrix \(A\in\mathbb{F}^{r\times m}\)._ ## 7 Further research In this section, our primary motivation is to study the relevant questions raised in [14] concerning combinatorial derived matroids. Recently, as a generalization of the Oxley-Wang derived matroid, Freij-Hollanti, Jurrius and Kuznetsova [14] constructed the combinatorial derived matroid for arbitrary matroids. Unlike the adjoint of matroids, this construction guarantees that an arbitrary matroid always has a derived matroid. 
In particular, they showed that by choosing an appropriate representation for a representable matroid, its Oxley-Wang derived matroid may coincide with the combinatorial derived matroid. Moreover, [14] also stated some close connections between the combinatorial derived matroid and the adjoint of matroids, and further proposed a number of particularly interesting and insightful questions. Before proceeding further, we first introduce the definition of the combinatorial derived matroid. **Definition 7.1**.: Let \(M\) be a matroid and let \(\mathcal{D}_{0}=\big\{D\subseteq\mathcal{C}(M)\ \big|\ |D|>n\big(\bigcup_{C\in D}C\big)\big\}\), where \(n\) denotes the nullity function of \(M\). Inductively, let \(\mathcal{D}_{i+1}=\uparrow\epsilon(\mathcal{D}_{i})\) for \(i\geq 0\), and \(\mathcal{D}=\bigcup_{i\geq 0}\mathcal{D}_{i}\), where \[\epsilon(\mathcal{D}_{i}):=\mathcal{D}_{i}\cup\{(D_{1}\cup D_{2})\setminus\{C\}\ |\ D_{1},D_{2}\in\mathcal{D}_{i},\,D_{1}\cap D_{2}\notin\mathcal{D}_{i},\,C \in D_{1}\cap D_{2}\}\] and \[\uparrow\epsilon(\mathcal{D}_{i}):=\{D\subseteq\mathcal{C}(M)\mid\exists D^{ \prime}\in\epsilon(\mathcal{D}_{i}):D^{\prime}\subseteq D\}.\] The matroid \(\delta M:=(\mathcal{C}(M),\mathcal{D})\) with ground set \(\mathcal{C}(M)\) is called the _combinatorial derived matroid_ of \(M\), where \(\mathcal{D}\) is the collection of all dependent sets of \(\delta M\). Associated with the combinatorial derived matroid, our immediate aim is to attempt to answer the following conjecture. **Conjecture 7.2** ([14], Conjecture 7.6).: _Let \(M\) be a matroid of rank \(r\) and size \(m\) such that its dual matroid \(M^{*}\) has an adjoint. Then \(\delta M\) is isomorphic to one of the adjoints of \(M^{*}\). In particular, the rank of \(\delta M\) equals \(m-r\)._ Inspired by the duality relation in Proposition 6.4, the standard duality argument motivates an alternative characterization of the adjoint in terms of cocircuits. **Proposition 7.3**.: _Let \(M\), \(adM\) be two matroids of the same rank \(r\) and \(M^{*}\) be the dual matroid of \(M\). 
Then \(adM\) is an adjoint of \(M\) if and only if its ground set can be regarded as \(E(adM):=\mathcal{C}(M^{*})\) such that the sets \(C^{*}[e]:=\{C^{*}\in\mathcal{C}(M^{*})\mid e\notin C^{*}\}\) are hyperplanes of \(adM\) for all \(e\in E(M)\) except for loops._ Proof.: For the sufficiency, suppose \(adM\) has ground set \(\mathcal{C}(M^{*})\) such that the sets \(C^{*}[e]=\{C^{*}\in\mathcal{C}(M^{*})\mid e\notin C^{*}\}\) are hyperplanes of \(adM\) for all \(e\in E(M)\) except for loops. Note that there is a natural one-to-one correspondence between the cocircuits and the hyperplanes of \(M\): each cocircuit \(C^{*}\) of \(M\) corresponds to the hyperplane \(H_{C^{*}}=E(M)\setminus C^{*}\). Then we can replace every element \(C^{*}\) in \(adM\) with \(H_{C^{*}}\in\mathcal{H}(M)\). This yields a matroid \(N\) on the ground set \(\mathcal{H}(M)\) such that \(N\cong adM\), and every set \(H[e]=\{H\in\mathcal{H}(M)\mid e\in H\}\) is a hyperplane of \(N\) except for loops of \(M\). It follows from Proposition 3.1 that \(N\) is an adjoint of \(M\). So \(adM\) is an adjoint of \(M\). Similarly, we can verify that the necessity holds. Comparing the two different characterizations of the adjoint in Proposition 3.1 and Proposition 7.3, it is not difficult to see that the description of an adjoint via cocircuits in Proposition 7.3 is closer to the combinatorial derived matroid. So we expect this characterization to be a key bridge between combinatorial derived matroids and adjoints, and even to play an important role in solving the relevant problems in [14]. For example, the cocircuit characterization of an adjoint makes it easier to see that the Oxley-Wang derived matroid of an \(\mathbb{F}\)-represented matroid \((M,\varphi)\) is always an adjoint of its dual matroid. 
Given a basis \(B\) of a matroid \(M\) and an element \(e\in E(M)\setminus B\), the unique circuit \(C(e;B)\) contained in \(B\cup\{e\}\) is called the _fundamental circuit_ of \(e\) with respect to \(B\). If \(e\in B\), denote \(C_{M^{*}}(e;E(M)\setminus B)\) by \(C^{*}(e;B)\), and call it the _fundamental cocircuit_ of \(e\) with respect to \(B\). **Example 7.4**.: Let \((M,\varphi)\) be an \(\mathbb{F}\)-represented matroid. [23, Lemma 3] states that given a basis \(B\) of \(M\), \(\{C(e;B)\mid e\in E(M)\setminus B\}\) forms a basis of \(\delta_{OW}M\). Based on this property of \(\delta_{OW}M\), suppose \(e_{0}\) is not a coloop of \(M\); then there is a basis \(B_{e_{0}}\) of \(M\) containing no \(e_{0}\). Let \(C[e_{0}]:=\{C\in\mathcal{C}(M)\mid e_{0}\notin C\}\). Then \(C[e_{0}]\) contains all the fundamental circuits \(C(e;B_{e_{0}})\) with respect to \(B_{e_{0}}\) except for \(C(e_{0};B_{e_{0}})\). It follows from elementary linear algebra that the subspace \(\operatorname{span}\bigl{\{}\mathbf{c}_{C(e;B_{e_{0}})}\mid e\in E(M)\setminus(B_{e_{0}}\cup\{e_{0}\})\bigr{\}}\) and the subspace \(\operatorname{span}\bigl{\{}\mathbf{c}_{C}\mid C\in C[e_{0}]\bigr{\}}\) are the same hyperplane in the space \(\operatorname{span}\bigl{\{}\mathbf{c}_{C}\mid C\in\mathcal{C}(M)\bigr{\}}\), where \(\mathbf{c}_{C}\) denotes the circuit vector of a circuit \(C\). This implies that every set \(C[e]:=\{C\in\mathcal{C}(M)\mid e\notin C\}\) is a hyperplane of \(\delta_{OW}M\) unless \(e\) is a coloop of \(M\). So \(\delta_{OW}M\) is an adjoint of the dual matroid \(M^{*}\) by Proposition 7.3. Namely, \(\delta_{OW}M\) is isomorphic to the type I adjoint \(\sigma M^{*}\) of \(M^{*}\). From this perspective, the duality relation in Proposition 6.4 is a special case of the two equivalent characterizations of the adjoint: one via hyperplanes and the other via cocircuits. 
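The hyperplane property used in Example 7.4 can also be checked mechanically on a small represented matroid. The sketch below (hypothetical data, exact rational arithmetic; not taken from [23]) verifies that for every non-coloop \(e\), the circuit vectors of \(C[e]\) span a hyperplane of the full circuit space:

```python
from fractions import Fraction
from itertools import combinations

def rref(rows):
    """Reduced row echelon form over Q; returns (matrix, pivot column indices)."""
    m = [[Fraction(x) for x in r] for r in rows]
    pivots, r = [], 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

def rank(rows):
    return len(rref(rows)[1]) if rows else 0

def circuit_vector(A, C):
    """Circuit vector c_C: spans the 1-dimensional kernel of the columns in C."""
    sub = [[A[i][j] for j in C] for i in range(len(A))]
    m, pivots = rref(sub)
    free = [j for j in range(len(C)) if j not in pivots][0]
    kern = [Fraction(0)] * len(C)
    kern[free] = Fraction(1)
    for r, p in enumerate(pivots):
        kern[p] = -m[r][free]
    full = [Fraction(0)] * len(A[0])
    for j, cj in zip(C, kern):
        full[j] = cj
    return full

# Hypothetical rank-2 matrix on 5 elements, with two parallel pairs (no coloops).
A = [[1, 0, 1, 1, 0],
     [0, 1, 1, 0, 1]]
col = lambda j: [A[i][j] for i in range(len(A))]
circuits = []
for k in range(1, len(A[0]) + 1):
    for S in combinations(range(len(A[0])), k):
        if rank([col(j) for j in S]) < len(S) and not any(set(C) <= set(S) for C in circuits):
            circuits.append(S)
vectors = {C: circuit_vector(A, C) for C in circuits}

full_rank = rank(list(vectors.values()))          # = m - r = 3
for e in range(len(A[0])):                        # every e is a non-coloop here
    avoiding = [vectors[C] for C in circuits if e not in C]
    assert rank(avoiding) == full_rank - 1        # span of C[e] is a hyperplane
print("hyperplane property verified for all elements")
```

Here there are six circuits (two parallel pairs and four triangles), the circuit space has dimension \(m-r=3\), and for each element the circuit vectors avoiding it span a 2-dimensional subspace, exactly as Example 7.4 predicts.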
More generally, for an arbitrary matroid \(M\), if \(M\) has an adjoint \(adM\), then the fundamental cocircuits of \(M\) satisfy the same property as in the Oxley-Wang derived matroid. **Lemma 7.5**.: _Let \(M\) be a matroid of rank \(r\) and let \(adM\) be an adjoint of \(M\) with ground set \(\mathcal{C}(M^{*})\). If \(B\) is a basis of \(M\), then \(\{C^{*}(e;B)\mid e\in B\}\) is a basis of \(adM\)._ Proof.: Let \(B=\{e_{1},e_{2},\ldots,e_{r}\}\) be a basis of \(M\). Since \(r(adM)=r\), the proof reduces to showing that the set \(\{C^{*}(e_{i};B)\mid e_{i}\in B\}\) is independent. Otherwise, this set contains a circuit \(C\) of \(adM\); we may assume \(C^{*}(e_{r};B)\in C\). On the other hand, the hyperplane \(C^{*}[e_{r}]\) contains all the fundamental cocircuits \(C^{*}(e_{i};B)\) except for \(i=r\), so \(C\setminus\{C^{*}(e_{r};B)\}\subseteq C^{*}[e_{r}]\). Since \(C\) is a circuit, \(C^{*}(e_{r};B)\) lies in the closure of \(C\setminus\{C^{*}(e_{r};B)\}\), and hence \(C^{*}(e_{r};B)\in C^{*}[e_{r}]\), which contradicts the fact that \(e_{r}\in C^{*}(e_{r};B)\). The property in Lemma 7.5 may be an important feature to distinguish the combinatorial derived matroid \(\delta M\) of a matroid \(M\) from the adjoints of its dual matroid \(M^{*}\); at least it is a necessary condition for \(\delta M\) to be isomorphic to an adjoint of \(M^{*}\). Based on the preceding arguments, although we could neither prove nor disprove Conjecture 7.2, we pose the following problems, which may offer a more concrete route towards Conjecture 7.2. **Problem 7.6**.: _Let \(M\) be a matroid and \(B\) a basis of \(M\). If \(M^{*}\) has an adjoint, then the set \(\{C(e;B)\mid e\in E(M)\setminus B\}\) is a basis of the combinatorial derived matroid \(\delta M\)._ **Problem 7.7**.: _Let \(M\) be a matroid. If every set \(\{C(e;B)\mid e\in E(M)\setminus B\}\) is a basis of \(\delta M\) for any basis \(B\) of \(M\), then \(\delta M\) is an adjoint of the dual matroid \(M^{*}\) of \(M\)._ At the end of this section, we list a noticeable question for further research. 
**Question 7.8**.: How can one construct a combinatorial adjoint for an arbitrary matroid?
2301.02495
Backlund transformation of the Geng-Xue system
We construct a Backlund transformation for the Geng-Xue system with the help of reciprocal and gauge transformations. Furthermore, we derive N-Backlund transformation for the Geng-Xue system resorting to Bianchi's permutability. As an application, we obtain some exact solutions of the Geng-Xue system including multi-kink, bell-shaped soliton. Finally, we discuss Backlund transformations for the Degasperis-Procesi and the Novikov equations, which are two reductions of the Geng-Xue system.
Lihua Wu, Nianhua Li
2023-01-06T13:14:18Z
http://arxiv.org/abs/2301.02495v1
# Backlund transformation of the Geng-Xue system ###### Abstract We construct a Backlund transformation for the Geng-Xue system with the help of reciprocal and gauge transformations. Furthermore, we derive an \(N\)-Backlund transformation for the Geng-Xue system resorting to Bianchi's permutability. As an application, we obtain some exact solutions of the Geng-Xue system including multi-kink and bell-shaped solitons. Finally, we discuss Backlund transformations for the Degasperis-Procesi and the Novikov equations, which are two reductions of the Geng-Xue system. Mathematical Subject Classification: 37K10, 37K35, 37K40, 35C08 keywords: Geng-Xue system, Degasperis-Procesi equation, Novikov equation, Backlund transformation, exact solutions. ## 1 Introduction The Camassa-Holm (CH) equation [1] \[m_{t}+um_{x}+2u_{x}m=0,\ \ m=u-u_{xx}, \tag{1}\] arises as a model for long waves in shallow water by the asymptotic approximation of the Hamiltonian for Euler's equations. It is a completely integrable system since it has a Lax pair with a bi-Hamiltonian structure, and may be solved by the Backlund transformation [2] as well as the inverse scattering transformation [3; 4]. The CH equation can be linked to the first negative flow of the KdV hierarchy by a reciprocal transformation [5]. One important feature of the CH equation is that it admits peakon solutions [6; 7; 8], which have discontinuities in the \(x\)-derivative, but both one-sided derivatives exist and differ only by a sign at the crest. Hence, integrable equations with peakon solutions have attracted much attention in recent years [9]. The Geng-Xue (GX) system [10] \[\begin{array}{ll}m_{t}+3u_{x}vm+uvm_{x}=0,&m=u-u_{xx},\\ n_{t}+3v_{x}un+uvn_{x}=0,&n=v-v_{xx},\end{array} \tag{2}\] is a coupled integrable CH type system with cubic nonlinearity and admits a Lax pair and an associated bi-Hamiltonian structure [11]. It is reciprocally connected with a first negative flow of a modified Boussinesq hierarchy [12]. 
Lundmark and Szmigielski thoroughly studied the inverse spectral problem and obtained multi-peakon solutions of the GX system [13]. Very recently, multi-kink solutions of the GX system were obtained by Darboux transformation [14]. In addition, the GX system is closely related to the Degasperis-Procesi (DP) equation [15] \[m_{t}+um_{x}+3u_{x}m=0,\quad m=u-u_{xx}, \tag{3}\] and the Novikov equation [16] \[m_{t}+u^{2}m_{x}+3uu_{x}m=0,\quad m=u-u_{xx}, \tag{4}\] since they can be reduced from (2) as \(v=1\) and \(v=u\), respectively. There are many works on their Lax representations, bi-Hamiltonian structures, reciprocal partners and exact solutions [17]-[27]. Backlund transformations (BTs), originating from differential geometry, play an important role in the theory of integrable systems, such as in searching for exact solutions, integrable discretization, as well as constructing symmetries, etc. [28, 29, 30]. However, in view of the speciality of the spectral problem for the CH type equations, it is hard to construct their BTs directly. Recently, Rasin and Schiff discussed a BT for the CH equation with the help of a reciprocal transformation and concluded that it involves not only the dependent variables but also the independent spatial variable [2]. Later on, Mao, Liu et al. constructed BTs for the DP, the Novikov and the short pulse equations [31, 32, 33, 34]. As far as we know, there are no results on the BT of the GX system. The aim of this paper is to construct the \(N\)-BT of the GX system. The paper is arranged as follows. In section 2, we first introduce a reciprocal transformation to relate the GX system with an associated GX (aGX) system, and further to a negative flow of the Boussinesq hierarchy by a gauge transformation. With the aid of these two transformations, we get a BT for the GX system from the Darboux transformation of the negative Boussinesq flow. In section 3, using Bianchi's permutability, we derive the 2- and \(N\)-BT for the GX system. 
In section 4, we apply the BT to obtain exact solutions for the GX system such as multi-kink and bell-shaped solitons. In section 5, BTs for the DP equation and the Novikov equation are discussed. ## 2 Backlund transformation of the Geng-Xue system According to Ref. [10], the GX system (2) admits the Lax pair \[\psi_{x}=U\psi,\hskip 36.135pt\psi_{t}=V\psi, \tag{5}\] where \(\psi=(\psi_{1},\psi_{2},\psi_{3})^{T}\) and \[U=\begin{bmatrix}0&\lambda m&1\\ 0&0&\lambda n\\ 1&0&0\end{bmatrix},\quad V=\begin{bmatrix}-u_{x}v&\frac{u_{x}}{\lambda}- \lambda uvm&u_{x}v_{x}\\ \frac{v}{\lambda}&-\frac{1}{\lambda^{2}}+u_{x}v-uv_{x}&-\lambda uvn-\frac{v_{ x}}{\lambda}\\ -uv&\frac{u}{\lambda}&uv_{x}\end{bmatrix}.\] It was shown that the GX system has infinitely many conservation laws [10; 12], of which the first one is \[q_{t}=(-uvq)_{x},\hskip 36.135ptq=(mn)^{\frac{1}{3}}.\] This naturally defines a reciprocal transformation \[dy=qdx-uvqdt,\hskip 36.135ptd\tau=dt. \tag{6}\] Applying (6) to the Lax pair (5), we have \[\psi_{y}=F\psi,\hskip 36.135pt\psi_{\tau}=G\psi, \tag{7}\] where \[F=\begin{bmatrix}0&\lambda p&\frac{1}{q}\\ 0&0&\lambda\frac{q}{p}\\ \frac{1}{q}&0&0\end{bmatrix},\quad G=\begin{bmatrix}-u_{y}vq&\frac{u_{y}q}{ \lambda}&uv+u_{y}v_{y}q^{2}\\ \frac{v}{\lambda}&u_{y}vq-uv_{y}q-\frac{1}{\lambda^{2}}&-\frac{v_{y}q}{\lambda} \\ 0&\frac{u}{\lambda}&uv_{y}q\end{bmatrix},\] and \(p=\frac{m}{q}\). A direct calculation shows that the compatibility condition of the linear system (7) yields the aGX system \[\begin{array}{ll}p_{\tau}=pq(uv_{y}-2u_{y}v),&u_{yy}q^{2}+u_{y}qq_{y}+pq-u=0,\\ q_{\tau}=-q^{2}(uv)_{y},&v_{yy}q^{2}+qq_{y}v_{y}+p^{-1}q^{2}-v=0.\end{array} \tag{8}\] Eliminating \(\psi_{1},\psi_{2}\) from (7), we obtain a scalar spectral problem for the wave function \(\psi_{3}\). 
Under a gauge transformation \(\psi_{3}=p^{\frac{1}{3}}q^{-\frac{2}{3}}\phi\), the scalar spectral problem is converted to the classical spectral problem of the Boussinesq hierarchy \[(\partial_{y}^{3}+Q_{1}\partial_{y}+Q_{2})\phi=(\partial_{y}-r)(\partial_{y}-s) (\partial_{y}+r+s)\phi=\lambda^{2}\phi, \tag{9}\] where \[r=\frac{2p_{y}}{3p}-\frac{q_{y}}{3q},\hskip 36.135pts=-\frac{p_{y}}{3p}-\frac{q_ {y}}{3q}-\frac{1}{q}. \tag{10}\] With the aid of the classical DT of the Boussinesq hierarchy [35], we get a DT for the aGX system (8). **Proposition 1**.: _The Lax presentation (7) is covariant under the DT:_ \[\begin{array}{ll}\psi_{[1]}=T(\lambda_{1},a_{1},b_{1})\psi,&T( \lambda_{1},a_{1},b_{1})=\begin{bmatrix}-\frac{a_{1}}{c_{1}}&\frac{\lambda(a_ {1}^{2}-1)}{\lambda_{1}b_{1}c_{1}}&\frac{1}{c_{1}}\\ 0&-1&\frac{\lambda b_{1}}{\lambda_{1}}\\ \frac{1}{c_{1}}&0&-\frac{a_{1}}{c_{1}}\end{bmatrix},\\ p_{[1]}=\frac{q(a_{1}^{2}-1)}{pb_{1}^{2}c_{1}},&q_{[1]}=\frac{a_{1}^{2}-1}{ \lambda_{1}pb_{1}},\\ u_{[1]}=\frac{1}{c_{1}}(ua_{1}-u_{y}q),&v_{[1]}=\frac{c_{1}}{a_{1}^{2}-1}(va_ {1}-v_{y}q-\frac{b_{1}}{\lambda_{1}}),\end{array} \tag{11}\] _where \(a_{1}=\frac{\varphi_{1}}{\varphi_{3}},b_{1}=\frac{\varphi_{2}}{\varphi_{3}}\), \(c_{1}=\sqrt{|a_{1}^{2}-1|}\), and \((\varphi_{1},\varphi_{2},\varphi_{3})^{T}\) is a special solution of (7) or (5) at \(\lambda=\lambda_{1}\)._ To construct a BT for the GX system, it is important to observe that \[\frac{1}{q_{[1]}}=\frac{1}{q}+\frac{a_{1,y}}{a_{1}^{2}-1},\hskip 36.135ptu_{[1]}v _{[1]}=uv+\frac{a_{1,\tau}}{a_{1}^{2}-1}. \tag{12}\] Taking (6) and (12) into account, we arrive at \[dx_{[1]}=\frac{1}{q_{[1]}}dy+u_{[1]}v_{[1]}d\tau=d(x-\frac{1}{2}\text{ln}| \frac{a_{1}+1}{a_{1}-1}|).\] Integrating on both sides of this equation and choosing the integration constant to be zero, we obtain \[x_{[1]}=x-\frac{1}{2}\text{ln}|\frac{a_{1}+1}{a_{1}-1}|. \tag{13}\] Given these preparations, the following proposition holds. 
**Proposition 2**.: _The GX system admits a BT_ \[\begin{split}& x_{[1]}=x-\frac{1}{2}\mathrm{ln}|\frac{a_{1}+1}{a_{1 }-1}|,\qquad\qquad t_{[1]}=t,\\ & u_{[1]}=\frac{1}{c_{1}}(ua_{1}-u_{x}),\\ & v_{[1]}=\frac{c_{1}}{a_{1}^{2}-1}(va_{1}-v_{x}-\frac{a_{1,x}+a _{1}^{2}-1}{\lambda_{1}^{2}m}),\end{split} \tag{14}\] _where \(c_{1}=\sqrt{|a_{1}^{2}-1|}\), and \(a_{1}\) is controlled by the system_ \[\begin{split}& a_{1,xx}=(\frac{m_{x}}{m}-a_{1})(a_{1x}+a_{1}^{2}-1 )-2a_{1}a_{1x}+\lambda_{1}^{2}mn,\\ & a_{1,t}=\frac{u_{x}-ua_{1}}{\lambda_{1}^{2}m}(a_{1,x}+a_{1}^{2 }-1)-(uva_{1})_{x}+uv+u_{x}v_{x}.\end{split} \tag{15}\] ## 3 \(N\)-Backlund transformation of the Geng-Xue system In this section, we shall first deduce a 2-BT for the GX system, and then extend it to the \(N\)-BT. To begin with, let us show the diagram of Bianchi's permutability as follows. Using this permutability, we have \[T(\lambda_{2},a_{12},b_{12})T(\lambda_{1},a_{1},b_{1})=T(\lambda_{1},a_{21},b_ {21})T(\lambda_{2},a_{2},b_{2}), \tag{16}\] which leads to \[\begin{split}& a_{12}=\frac{\lambda_{2}b_{2}(a_{1}^{2}-1)+\lambda_ {1}b_{1}(1-a_{1}a_{2})}{\lambda_{1}b_{1}(a_{2}-a_{1})},\ \ a_{21}=\frac{\lambda_{1}b_{1}(a_{2}^{2}-1)+\lambda_{2}b_{2}(1-a_{1}a_{2})}{ \lambda_{2}b_{2}(a_{1}-a_{2})},\\ & b_{12}=\frac{(\lambda_{2}b_{1}-\lambda_{1}b_{2})c_{1}}{\lambda _{1}(a_{2}-a_{1})},\qquad\quad b_{21}=\frac{\lambda_{1}c_{2}}{\lambda_{2}c_{1 }}b_{12},\qquad\quad c_{21}=\frac{(a_{2}^{2}-1)\lambda_{1}b_{1}c_{1}}{(a_{1}^{2 }-1)\lambda_{2}b_{2}c_{2}}c_{12}.\end{split}\] Then, based on Proposition 2, we have a 2-BT for the GX system. The main result is stated as follows. 
Figure 1: Bianchi permutability **Proposition 3**.: _The GX system admits a 2-BT_ \[x_{[12]} =x-\frac{1}{2}\mathrm{ln}|\frac{(a_{1}+1)(a_{12}+1)}{(a_{1}-1)(a_{1 2}-1)}|,\qquad\qquad\quad t_{[12]}=t, \tag{17}\] \[u_{[12]} =\frac{1}{c_{1}c_{12}}[u(a_{1}a_{12}+1)-u_{x}(a_{1}+a_{12})-\frac {a_{1}^{2}-1}{\lambda_{1}b_{1}}],\] \[v_{[12]} =\frac{c_{1}c_{12}}{(a_{1}^{2}-1)(a_{12}^{2}-1)}[v(a_{1}a_{12}+1) -(v_{x}+\frac{b_{1}}{\lambda_{1}})(a_{1}+a_{12})]\] \[\qquad-\frac{c_{1}c_{12}}{(a_{2}-a_{1})(a_{12}^{2}-1)}(\frac{b_{1 }}{\lambda_{1}}-\frac{b_{2}}{\lambda_{2}}).\] _Here \(c_{12}=\sqrt{|a_{12}^{2}-1|}\), \(a_{2}=\frac{h_{1}}{h_{3}},b_{2}=\frac{h_{2}}{h_{3}}\), and \((h_{1},h_{2},h_{3})^{T}\) is a special solution of (5) at \(\lambda=\lambda_{2}\)._ Next, we will derive the \(N\)-BT of the GX system. For convenience, let us denote the natural permutation from \(1\) to any positive integer \(N\) by \(\widehat{N}\), i.e. \(\widehat{N}=12\cdots N\). Then, constructing the \(N\)-BT comes down to giving compact forms for \(x_{[\widehat{N}]},a_{[\widehat{N}]},u_{[\widehat{N}]}\) and \(v_{[\widehat{N}]}\). In fact, it follows from Proposition 1 that \[x_{[\widehat{N}]}=x-\frac{1}{2}\mathrm{ln}|\frac{(a_{1}+1)(a_{12}+1)...(a_{ \widehat{N}}+1)}{(a_{1}-1)(a_{12}-1)...(a_{\widehat{N}}-1)}|. \tag{18}\] Since it is not easy to obtain a compact form for \(a_{[\widehat{N}]}\) directly, we define \(w_{\widehat{N}}\) by \[\frac{(a_{1}+1)(a_{12}+1)...(a_{\widehat{N}}+1)}{(a_{1}-1)(a_{12}-1)...(a_{ \widehat{N}}-1)}=\frac{w_{\widehat{N}}+1}{w_{\widehat{N}}-1}, \tag{19}\] which implies that \[a_{\widehat{N}}=\frac{1-w_{\widehat{N-1}}w_{\widehat{N}}}{w_{\widehat{N}}-w_{ \widehat{N-1}}}. \tag{20}\] We first derive a recurrence relation for \(w_{\widehat{N}}\) in order to obtain a compact expression for it, and hence for \(a_{\widehat{N}},\ x_{\widehat{N}},\ u_{\widehat{N}},\ v_{\widehat{N}}\). 
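The change of variables (19)-(20) is a simple algebraic fact and can be sanity-checked with exact rational arithmetic. In the sketch below (plain Python; the sample values of the \(w\)'s are arbitrary hypothetical data), the \(a\)'s recovered from prescribed \(w\)'s via (20) indeed make the product in (19) telescope:

```python
from fractions import Fraction as F

def a_from_w(w_prev, w_next):
    """Formula (20): a = (1 - w_prev * w_next) / (w_next - w_prev)."""
    return (1 - w_prev * w_next) / (w_next - w_prev)

# Arbitrary sample values of w_1, w_12, w_123, ... (hypothetical data).
w = [F(3), F(5, 2), F(-7, 3), F(9, 4)]

# Recover a_1, a_12, a_123, ...; for N = 1, (19) gives a_1 = w_1 directly.
a = [w[0]] + [a_from_w(w[i - 1], w[i]) for i in range(1, len(w))]

# Check (19): the running product of (a + 1)/(a - 1) equals (w + 1)/(w - 1).
prod = F(1)
for k, ak in enumerate(a):
    prod *= (ak + 1) / (ak - 1)
    assert prod == (w[k] + 1) / (w[k] - 1)
print("telescoping identity (19) verified")
```

The check works for any choice of distinct rational values, since (20) is exactly the solution of \((a+1)/(a-1)=\frac{(w_{\widehat{N}}+1)(w_{\widehat{N-1}}-1)}{(w_{\widehat{N}}-1)(w_{\widehat{N-1}}+1)}\).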
Resorting to Bianchi's permutability, we get the recurrence relations \[a_{\widehat{N}} =\frac{(a_{\widehat{N-1}}^{2}-1)\lambda_{N}b_{\widehat{N-2}N}+ \lambda_{N-1}b_{\widehat{N-1}}(1-a_{\widehat{N-2}N}a_{\widehat{N-1}})}{\lambda _{N-1}b_{\widehat{N-1}}(a_{\widehat{N-2}N}-a_{\widehat{N-1}})}, \tag{21}\] \[b_{\widehat{N}} =\frac{\lambda_{N}b_{\widehat{N-1}}-\lambda_{N-1}b_{\widehat{N-2 }N}}{\lambda_{N-1}(a_{\widehat{N-2}N}-a_{\widehat{N-1}})}c_{\widehat{N-1}}, \tag{22}\] where \(c_{\widehat{N}}=\sqrt{|a_{\widehat{N}}^{2}-1|}\). Moreover, introducing \[\sigma_{\widehat{N-1}}^{N}=\frac{b_{\widehat{N-2}N}}{b_{\widehat{N-1}}},\ \ \ \ \ \ \ \ \ \ N\geq 2, \tag{23}\] and inserting (20) into (21), one infers \[w_{\widehat{N}}=\frac{\lambda_{N}\sigma_{\widehat{N-1}}^{N}w_{\widehat{N-1}}( w_{\widehat{N-2}N}-w_{\widehat{N-2}})+\lambda_{N-1}w_{\widehat{N-2}N}(w_{ \widehat{N-2}}-w_{\widehat{N-1}})}{\lambda_{N}\sigma_{\widehat{N-1}}^{N}(w_{ \widehat{N-2}N}-w_{\widehat{N-2}})+\lambda_{N-1}(w_{\widehat{N-2}}-w_{\widehat {N-1}})}, \tag{24}\] or equivalently \[\sigma_{\widehat{N-1}}^{N}=\frac{\lambda_{N-1}}{\lambda_{N}}\frac{(w_{\widehat {N-2}}-w_{\widehat{N-1}})(w_{\widehat{N-2}N}-w_{\widehat{N}})}{(w_{\widehat{N -2}}-w_{\widehat{N-2}N})(w_{\widehat{N-1}}-w_{\widehat{N}})}. \tag{25}\] We are now in a position to obtain determinant expressions for \(w_{\widehat{N}}\) and \(\sigma_{\widehat{N-1}}^{N}\). A natural idea is to guess their expressions by observation of the explicit formulae for \(N\leq 3\) and then prove them. 
In fact, it is not hard to show that the first several members in (24) and (25) are \[w_{1}=a_{1},\ \ \ \ \ \ \ \ \ w_{12}=\frac{\lambda_{1}b_{1}a_{2}- \lambda_{2}b_{2}a_{1}}{\lambda_{1}b_{1}-\lambda_{2}b_{2}},\] \[w_{123}=\frac{\lambda_{1}b_{1}(a_{3}\lambda_{2}^{2}-a_{2}\lambda_ {3}^{2})+\lambda_{2}b_{2}(a_{1}\lambda_{3}^{2}-a_{3}\lambda_{1}^{2})+\lambda_{ 3}b_{3}(a_{2}\lambda_{1}^{2}-a_{1}\lambda_{2}^{2})}{\lambda_{1}b_{1}(\lambda_{ 2}^{2}-\lambda_{3}^{2})+\lambda_{2}b_{2}(\lambda_{3}^{2}-\lambda_{1}^{2})+ \lambda_{3}b_{3}(\lambda_{1}^{2}-\lambda_{2}^{2})}, \tag{26}\] \[\sigma_{1}^{2}=\frac{b_{2}}{b_{1}},\ \ \ \ \ \ \ \ \sigma_{12}^{3}=\frac{b_{13}}{b_{12}}=\frac{(\lambda_{3}b_{1}- \lambda_{1}b_{3})(a_{2}-a_{1})}{(\lambda_{2}b_{1}-\lambda_{1}b_{2})(a_{3}-a_{ 1})}.\] In view of (26), we introduce the following determinant \[\Delta_{N}=\begin{cases}\begin{vmatrix}1&a_{1}&\lambda_{1}b_{1}&\cdots&\lambda_{1}^{2k}\\ \vdots&\vdots&\vdots&&\vdots\\ 1&a_{N}&\lambda_{N}b_{N}&\cdots&\lambda_{N}^{2k}\end{vmatrix},&N=3k+1,\\ \begin{vmatrix}1&a_{1}&\lambda_{1}b_{1}&\cdots&\lambda_{1}^{2k}a_{1}\\ \vdots&\vdots&\vdots&&\vdots\\ 1&a_{N}&\lambda_{N}b_{N}&\cdots&\lambda_{N}^{2k}a_{N}\end{vmatrix},&N=3k+2,\\ \begin{vmatrix}1&a_{1}&\lambda_{1}b_{1}&\cdots&\lambda_{1}^{2k+1}b_{1}\\ \vdots&\vdots&\vdots&&\vdots\\ 1&a_{N}&\lambda_{N}b_{N}&\cdots&\lambda_{N}^{2k+1}b_{N}\end{vmatrix},&N=3k+3,\end{cases} \tag{27}\] for \(k\in\mathbf{N}\). 
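The guess can be checked numerically before proving the general statement: with arbitrary rational sample data, ratios of minors of the determinants (27) reproduce the explicit formulae (26). A minimal sketch (exact arithmetic; the sample values \(a_i,b_i,\lambda_i\) are hypothetical):

```python
from fractions import Fraction as F

def det(M):
    """Determinant by cofactor expansion along the first row (exact, small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

# Arbitrary sample data: a_i, b_i and spectral parameters lambda_i.
a = [F(2), F(3), F(5), F(7)]
b = [F(1, 2), F(4, 3), F(2), F(1, 5)]
lam = [F(1), F(2), F(3), F(4)]

def row(i, N):
    """Row i of Delta_N following (27), written out for the small cases N = 3, 4."""
    if N == 3:   # N = 3k+3 with k = 0: columns 1, a, lambda*b
        return [F(1), a[i], lam[i] * b[i]]
    if N == 4:   # N = 3k+1 with k = 1: columns 1, a, lambda*b, lambda^2
        return [F(1), a[i], lam[i] * b[i], lam[i] ** 2]
    raise ValueError(N)

def minor(N, drop_row, drop_col):
    """Delta_N with one row and one column removed (unsigned minor)."""
    M = [row(i, N) for i in range(N) if i != drop_row]
    return det([r[:drop_col] + r[drop_col + 1:] for r in M])

# N = 2: A_2/B_2 against the closed form for w_12 in (26).
w12 = (lam[0] * b[0] * a[1] - lam[1] * b[1] * a[0]) / (lam[0] * b[0] - lam[1] * b[1])
assert minor(3, 2, 0) / minor(3, 2, 1) == w12

# N = 3: A_3/B_3 against the closed form for w_123 in (26).
num = (lam[0] * b[0] * (a[2] * lam[1] ** 2 - a[1] * lam[2] ** 2)
       + lam[1] * b[1] * (a[0] * lam[2] ** 2 - a[2] * lam[0] ** 2)
       + lam[2] * b[2] * (a[1] * lam[0] ** 2 - a[0] * lam[1] ** 2))
den = (lam[0] * b[0] * (lam[1] ** 2 - lam[2] ** 2)
       + lam[1] * b[1] * (lam[2] ** 2 - lam[0] ** 2)
       + lam[2] * b[2] * (lam[0] ** 2 - lam[1] ** 2))
assert minor(4, 3, 0) / minor(4, 3, 1) == num / den
print("determinant ratios match (26) for N = 2, 3")
```

Here \(A_N\) and \(B_N\) are realized as \(\Delta_{N+1}\) with the last row and the first (respectively second) column removed; the agreement for \(N=2,3\) is the base case of the induction that follows.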
**Theorem 1**.: _The expressions for \(\sigma_{\widehat{N-1}}^{N}\) and \(w_{\widehat{N}}\) in terms of the determinant \(\Delta_{N}\) read_ \[w_{\widehat{N}}=\frac{A_{N}}{B_{N}},\ \ N\geq 1, \tag{28}\] \[\sigma_{\widehat{N-1}}^{N}=\frac{\lambda_{N-1}(A_{N-2}B_{N-1}-A_ {N-1}B_{N-2})(C_{N-1}B_{N}-A_{N}D_{N-1})}{\lambda_{N}(A_{N-2}D_{N-1}-C_{N-1}B_ {N-2})(A_{N-1}B_{N}-A_{N}B_{N-1})},\quad N\geq 3, \tag{29}\] _where_ \[A_{N}=\Delta_{N+1}\left[\begin{matrix}N+1\\ 1\end{matrix}\right], C_{N-1}=\Delta_{N}\left[\begin{matrix}N-1\\ 1\end{matrix}\right],\] \[B_{N}=\Delta_{N+1}\left[\begin{matrix}N+1\\ 2\end{matrix}\right], D_{N-1}=\Delta_{N}\left[\begin{matrix}N-1\\ 2\end{matrix}\right].\] Here \(J\left[\begin{array}{ccc}i_{1}&i_{2}&\cdots&i_{k}\\ j_{1}&j_{2}&\cdots&j_{k}\end{array}\right]\) denotes the determinant obtained by removing rows \(i_{1},\cdots,i_{k}\) and columns \(j_{1},\cdots,j_{k}\) from the determinant \(J\). To prove the theorem, we need two useful identities displayed in the following lemma. **Lemma 1**.: _Assume that \(\pi\) is an \((N+2)\times N\) matrix and \(\chi_{k}\) are \(N+2\) order column vectors. Then we have_ 1. _The Plücker relation_ \[|\pi,\chi_{1},\chi_{2}||\pi,\chi_{3},\chi_{4}|-|\pi,\chi_{1},\chi_{3}||\pi, \chi_{2},\chi_{4}|+|\pi,\chi_{1},\chi_{4}||\pi,\chi_{2},\chi_{3}|=0.\] 2. _The Jacobi identity_ \[J\times J\left[\begin{array}{cc}i_{1}&i_{2}\\ j_{1}&j_{2}\end{array}\right]=J\left[\begin{array}{c}i_{1}\\ j_{1}\end{array}\right]\times J\left[\begin{array}{c}i_{2}\\ j_{2}\end{array}\right]-J\left[\begin{array}{c}i_{1}\\ j_{2}\end{array}\right]\times J\left[\begin{array}{c}i_{2}\\ j_{1}\end{array}\right].\] **Proof of Theorem 1:** Here we only prove the case of \(N=3k+2\) by mathematical induction, because the other two cases can be verified similarly. First, it is easy to check that both (28) and (29) are true for \(N\leq 3\). Next, assume that (28) and (29) hold for \(N-1\); our task is to verify them for \(N\). 
In view of (22) and (23), it is straightforward to know that \[\sigma_{\widehat{N}}^{N+1}=\frac{b_{\widehat{N-1}N+1}}{b_{\widehat{N}}}=\frac{ a_{\widehat{N-2}N}-a_{\widehat{N-1}}}{a_{\widehat{N-2}N+1}-a_{\widehat{N-1}}} \frac{\lambda_{N+1}-\lambda_{N-1}\sigma_{\widehat{N-1}}^{N+1}}{\lambda_{N}- \lambda_{N-1}\sigma_{\widehat{N-1}}^{N}}. \tag{30}\] On the one hand, it follows from (20) that \[\frac{a_{\widehat{N-2}N}-a_{\widehat{N-1}}}{a_{\widehat{N-2}N+1}-a_ {\widehat{N-1}}} =\frac{(w_{\widehat{N-2}N}-w_{\widehat{N-1}})(w_{\widehat{N-2}}-w_ {\widehat{N-2}N+1})}{(w_{\widehat{N-2}N+1}-w_{\widehat{N-1}})(w_{\widehat{N-2} }-w_{\widehat{N-2}N})}\] \[=\frac{(C_{N-1}B_{N-1}-A_{N-1}D_{N-1})(A_{N-2}H_{N-1}-B_{N-2}E_{N- 1})}{(E_{N-1}B_{N-1}-A_{N-1}H_{N-1})(A_{N-2}D_{N-1}-C_{N-1}B_{N-2})}, \tag{31}\] with \[E_{N-1}=\Delta_{N+1}\left[\begin{array}{cc}N-1&N\\ 1&N+1\end{array}\right],\quad H_{N-1}=\Delta_{N+1}\left[\begin{array}{cc}N-1& N\\ 2&N+1\end{array}\right].\] On the other hand, by inductive hypotheses, one infers \[\frac{\lambda_{N+1}-\lambda_{N-1}\sigma_{\widehat{N-1}}^{N+1}}{\lambda_{N}- \lambda_{N-1}\sigma_{\widehat{N-1}}^{N}}=\frac{\lambda_{N}R_{1}}{\lambda_{N+1 }R_{2}}\frac{(A_{N-2}D_{N-1}-B_{N-2}C_{N-1})(A_{N-1}B_{N}-A_{N}B_{N-1})}{(A_{ N-2}H_{N-1}-B_{N-2}E_{N-1})(A_{N-1}D_{N}-B_{N-1}C_{N})}, \tag{32}\] where \[R_{1} =\lambda_{N+1}^{2}(A_{N-2}H_{N-1}-B_{N-2}E_{N-1})(A_{N-1}D_{N}-C_ {N}B_{N-1})\] \[\quad-\lambda_{N-1}^{2}(A_{N-2}B_{N-1}-A_{N-1}B_{N-2})(E_{N-1}D_ {N}-C_{N}H_{N-1}),\] \[R_{2} =\lambda_{N}^{2}(A_{N-2}D_{N-1}-B_{N-2}C_{N-1})(A_{N-1}B_{N}-A_{ N}B_{N-1})\] \[\quad-\lambda_{N-1}^{2}(C_{N-1}B_{N}-A_{N}D_{N-1})(A_{N-2}B_{N-1 }-A_{N-1}B_{N-2}).\] Substituting (31) and (32) into (30), one has \[\sigma_{\widehat{N}}^{N+1}=\frac{\lambda_{N}(A_{N-1}B_{N}-A_{N}B_{N-1})}{ \lambda_{N+1}(A_{N-1}D_{N}-B_{N-1}C_{N})}\frac{R_{1}(C_{N-1}B_{N-1}-A_{N-1}D_ {N-1})}{R_{2}(E_{N-1}B_{N-1}-A_{N-1}H_{N-1})}. 
\tag{33}\] Before proceeding further, let us list some useful identities obtained from the Jacobi identity and the Plücker relation, as follows: \[\begin{split} C_{N-1}B_{N-1}-A_{N-1}D_{N-1}&=\Delta_{N}\Delta_{N}\left[\begin{array}{cc}N-1&N\\ 1&2\end{array}\right],\\ E_{N-1}B_{N-1}-A_{N-1}H_{N-1}&=\Delta_{N+1}\left[\begin{array}{cc}N\\ N+1\end{array}\right]\Delta_{N}\left[\begin{array}{cc}N-1&N\\ 1&2\end{array}\right],\end{split} \tag{34}\] \[A_{N-1}B_{N}-A_{N}B_{N-1} =\Delta_{N}\Delta_{N+1}\left[\begin{array}{cc}N&N+1\\ 1&2\end{array}\right],\] \[C_{N-1}B_{N}-A_{N}D_{N-1} =\Delta_{N}\Delta_{N+1}\left[\begin{array}{cc}N-1&N+1\\ 1&2\end{array}\right],\] \[A_{N-1}D_{N}-C_{N}B_{N-1} =\Delta_{N+1}\left[\begin{array}{c}N\\ N+1\end{array}\right]\Delta_{N+1}\left[\begin{array}{cc}N&N+1\\ 1&2\end{array}\right], \tag{35}\] \[E_{N-1}D_{N}-C_{N}H_{N-1} =\Delta_{N+1}\left[\begin{array}{c}N\\ N+1\end{array}\right]\Delta_{N+1}\left[\begin{array}{cc}N-1&N\\ 1&2\end{array}\right],\] \[A_{N-2}H_{N-1}-B_{N-2}E_{N-1} =\Delta_{N+1}\left[\begin{array}{cc}N-1&N\\ N&N+1\end{array}\right]\Delta_{N}\left[\begin{array}{cc}N-1&N\\ 1&2\end{array}\right],\] whose proofs will be given in the appendix.
With the help of (34) and (35), a direct calculation gives rise to \[\sigma_{\widehat{N}}^{N+1}=\frac{\lambda_{N}(A_{N-1}B_{N}-A_{N}B_{N-1})}{ \lambda_{N+1}(A_{N-1}D_{N}-B_{N-1}C_{N})}\frac{R_{3}}{R_{4}}, \tag{36}\] where \[R_{3} =\lambda_{N+1}^{2}\Delta_{N+1}\left[\begin{array}{cc}N-1&N\\ N&N+1\end{array}\right]\Delta_{N+1}\left[\begin{array}{cc}N&N+1\\ 1&2\end{array}\right]\] \[\quad-\lambda_{N-1}^{2}\Delta_{N-1}\Delta_{N+1}\left[\begin{array} []{cc}N-1&N\\ 1&2\end{array}\right],\] \[R_{4} =\lambda_{N}^{2}\Delta_{N}\left[\begin{array}{c}N-1\\ N\end{array}\right]\Delta_{N+1}\left[\begin{array}{cc}N&N+1\\ 1&2\end{array}\right]\] \[\quad-\lambda_{N-1}^{2}\Delta_{N-1}\Delta_{N+1}\left[\begin{array} []{cc}N-1&N+1\\ 1&2\end{array}\right].\] Using the Jacobi identity, we have (proven in appendix) \[R_{3} =\frac{1}{\lambda_{1}^{2}\lambda_{2}^{2}\cdots\lambda_{N-2}^{2}} \Delta_{N+2}\left[\begin{array}{cc}N&N+2\\ 1&2\end{array}\right]\Delta_{N+1}\left[\begin{array}{cc}N-1&N&N+1\\ 1&2&3\end{array}\right], \tag{37}\] \[R_{4} =\frac{1}{\lambda_{1}^{2}\lambda_{2}^{2}\cdots\lambda_{N-2}^{2}} \Delta_{N+2}\left[\begin{array}{cc}N+1&N+2\\ 1&2\end{array}\right]\Delta_{N+1}\left[\begin{array}{cc}N-1&N&N+1\\ 1&2&3\end{array}\right].\] Substituting (37) into (36) and noting the first two equalities in (35), we get \[\sigma_{\widehat{N}}^{N+1} =\frac{\lambda_{N}(A_{N-1}B_{N}-A_{N}B_{N-1})\Delta_{N+2}\left[ \begin{array}{cc}N&N+2\\ 1&2\end{array}\right]}{\lambda_{N+1}(A_{N-1}D_{N}-B_{N-1}C_{N})\Delta_{N+2} \left[\begin{array}{cc}N+1&N+2\\ 1&2\end{array}\right]} \tag{38}\] \[=\frac{\lambda_{N}}{\lambda_{N+1}}\frac{(A_{N-1}B_{N}-A_{N}B_{N- 1})(C_{N}B_{N+1}-A_{N+1}D_{N})}{(A_{N-1}D_{N}-B_{N-1}C_{N})(A_{N}B_{N+1}-B_{N} A_{N+1})},\] which proves (29). 
Furthermore, it follows from inductive hypotheses, (25), (34) and (35) that \[w_{\widehat{N+1}} =\frac{\lambda_{N+1}\sigma_{\widehat{N}}^{N+1}w_{\widehat{N}}(w_ {\widehat{N-1}N+1}-w_{\widehat{N-1}})+\lambda_{N}w_{\widehat{N-1}N+1}(w_{ \widehat{N-1}}-w_{\widehat{N}})}{\lambda_{N+1}\sigma_{\widehat{N}}^{N+1}(w_{ \widehat{N-1}N+1}-w_{\widehat{N-1}})+\lambda_{N}(w_{\widehat{N-1}}-w_{\widehat {N}})}\] \[=\frac{\frac{(A_{N-1}B_{N}-A_{N}B_{N-1})(C_{N}B_{N+1}-A_{N+1}D_{N })}{(A_{N-1}D_{N}-B_{N-1}C_{N})(A_{N}B_{N+1}-B_{N}A_{N+1})}\frac{A_{N}}{B_{N} }(\frac{C_{N}}{D_{N}}-\frac{A_{N-1}}{B_{N-1}})+\frac{C_{N}}{D_{N}}(\frac{A_{ N-1}}{B_{N-1}}-\frac{A_{N}}{B_{N}})}{\frac{(A_{N-1}B_{N}-A_{N}B_{N-1})(C_{N}B_{N+1}-A_{N+1 }D_{N})}{(A_{N-1}D_{N}-B_{N-1}C_{N})(A_{N}B_{N+1}-B_{N}A_{N+1})}(\frac{C_{N}}{ D_{N}}-\frac{A_{N-1}}{B_{N-1}})+\frac{A_{N-1}}{B_{N-1}}-\frac{A_{N}}{B_{N}}}\] \[=\frac{\Delta_{N+2}\left[\begin{array}{cc}N+1&N+2\\ 1&2\end{array}\right]\Delta_{N+1}\left[\begin{array}{cc}N\\ 1\end{array}\right]-\Delta_{N+2}\left[\begin{array}{cc}N&N+2\\ 1&2\end{array}\right]\Delta_{N+1}\left[\begin{array}{cc}N+1\\ 1\end{array}\right]}{\Delta_{N+2}\left[\begin{array}{cc}N+1&N+2\\ 1&2\end{array}\right]\Delta_{N+1}\left[\begin{array}{cc}N+1\\ 2\end{array}\right]}\] \[=\frac{A_{N+1}\left[\begin{array}{cc}N+1\\ 1\end{array}\right]A_{N+1}\left[\begin{array}{cc}N\\ N+1\end{array}\right]-A_{N+1}\left[\begin{array}{cc}N\\ 1\end{array}\right]A_{N+1}\left[\begin{array}{cc}N+1\\ N+1\end{array}\right]}{B_{N+1}\left[\begin{array}{cc}N+1\\ 1\end{array}\right]B_{N+1}\left[\begin{array}{cc}N+1\\ N+1\end{array}\right]}\] \[=\frac{A_{N+1}}{B_{N+1}},\] which completes the proof of Theorem 1. Now, according to Theorem 1, it directly infers from (20) that \[a_{\widehat{N}}=\frac{A_{N-1}A_{N}-B_{N-1}B_{N}}{A_{N-1}B_{N}-B_{N-1}A_{N}}. \tag{39}\] Thus, we may summarize what we have obtained as the following Theorem. 
**Theorem 2**: _The GX system admits the \(N\)-BT_ \[x_{[\widehat{N}]} =x-\frac{1}{2}\mathrm{ln}|\frac{A_{N}+B_{N}}{A_{N}-B_{N}}|,\qquad \quad t_{[\widehat{N}]}=t,\] \[u_{[\widehat{N}]} =\frac{1}{c_{\widehat{N}}}(u_{[\widehat{N-1}]}a_{\widehat{N}}- \frac{u_{[\widehat{N-1}],x}}{x_{[\widehat{N-1}],x}}), \tag{40}\] \[v_{[\widehat{N}]} =\frac{c_{\widehat{N}}}{a_{\widehat{N}}^{2}-1}[v_{[\widehat{N-1}] }a_{\widehat{N}}-\frac{v_{[\widehat{N-1}],x}}{x_{[\widehat{N-1}],x}}-\frac{1}{ \lambda_{N}^{2}m_{[\widehat{N-1}]}}(\frac{a_{\widehat{N},x}}{x_{[\widehat{N-1}],x}}+a_{\widehat{N}}^{2}-1)],\] _where \(a_{\widehat{N}}\) is given by (39), \(c_{\widehat{N}}=\sqrt{|a_{\widehat{N}}^{2}-1|},\) and \(m_{\widehat{N-1}}=u_{\widehat{N-1}}-\frac{1}{x_{[\widehat{N-1}],x}}(\frac{u_{ [\widehat{N-1}],x}}{x_{[\widehat{N-1}],x}})_{x}.\)_ ## 4 Exact solutions As an application of the BT, we shall deduce some exact solutions of the GX system. Choose \(u=u_{0},v=v_{0},u_{0}v_{0}\neq 0\) as an initial solution of the GX system. Let \(\alpha_{j},\beta_{j},-\alpha_{j}-\beta_{j},(\ 1\leq j\leq N)\) be three roots of the equation \(\gamma^{3}-\gamma-\lambda_{j}^{2}u_{0}v_{0}=0\), and \(f_{j}\) be solutions of the following system \[\begin{split}&\varphi_{xxx}-\varphi_{x}-\lambda_{j}^{2}u_{0}v_{0} \varphi=0,\\ &\varphi_{t}-\frac{1}{\lambda_{j}^{2}}\varphi_{xx}+u_{0}v_{0} \varphi_{x}+\frac{1}{\lambda_{j}^{2}}\varphi=0,\end{split} \tag{41}\] which is a scalar form of (5) at \(\lambda=\lambda_{j},u=u_{0},v=v_{0}\). **Example 1:** 1-soliton solutions. If \(27\lambda_{1}^{4}u_{0}^{2}v_{0}^{2}-4<0\), then \(\alpha_{1},\beta_{1},-\alpha_{1}-\beta_{1}\) are three different real roots. 
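This discriminant condition is easy to confirm numerically. The sketch below (the values of \(u_0,v_0,\lambda_1\) are illustrative samples, not taken from the paper) checks that when \(27\lambda_1^4u_0^2v_0^2-4<0\), the cubic \(\gamma^3-\gamma-\lambda_1^2u_0v_0=0\) has three distinct real roots which sum to zero, so they can indeed be labelled \(\alpha_1,\beta_1,-\alpha_1-\beta_1\):

```python
import numpy as np

u0, v0, lam1 = 1.0, 1.0, 0.55        # sample values: 27*lam1**4*u0**2*v0**2 - 4 < 0
assert 27 * lam1**4 * u0**2 * v0**2 - 4 < 0

c = lam1**2 * u0 * v0
roots = np.roots([1.0, 0.0, -1.0, -c])      # gamma^3 - gamma - c = 0
assert np.max(np.abs(roots.imag)) < 1e-9    # all three roots are real

real = np.sort(roots.real)
assert real[0] < real[1] < real[2]          # pairwise distinct
assert abs(real.sum()) < 1e-9               # no gamma^2 term: alpha + beta + (-alpha-beta) = 0
```

The zero sum of the roots follows from the absence of a \(\gamma^2\) term, which is exactly why the third root can be written as \(-\alpha_j-\beta_j\).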
We take \[f_{1}=e^{\frac{\xi_{1}+\eta_{1}}{2}}(e^{\theta_{1}}+\delta_{1}e^{-\theta_{1}}),\] where \(\xi_{1}=\alpha_{1}x-\frac{\lambda_{1}^{2}u_{0}^{2}v_{0}^{2}}{\alpha_{1}^{2}}t+\xi_{10},\ \eta_{1}=\beta_{1}x-\frac{\lambda_{1}^{2}u_{0}^{2}v_{0}^{2}}{\beta_{1}^{2}}t+\eta_{10},\ \theta_{1}=\frac{\mu_{1}}{2}[x+\frac{u_{0}v_{0}(4-\mu_{1}^{2})}{\mu_{1}^{2}-1}t]+\theta_{10},\ \mu_{1}=\alpha_{1}-\beta_{1},\ \delta_{1}=\pm 1,\ \theta_{10}=\frac{1}{2}(\xi_{10}-\eta_{10})\), and \(\xi_{10},\eta_{10}\) are two constants. For convenience, assume \(\mu_{1}>0\) and let \(\nu_{1}=\alpha_{1}+\beta_{1}\); it then follows that \(\mu_{1}=\sqrt{4-3\nu_{1}^{2}}\). Then \[a_{1}=\frac{(\mu_{1}+\nu_{1})e^{\theta_{1}}+\delta_{1}(\nu_{1}-\mu_{1})e^{-\theta_{1}}}{2(e^{\theta_{1}}+\delta_{1}e^{-\theta_{1}})}=\left\{\begin{array}{ll}\frac{\nu_{1}+\mu_{1}\tanh\theta_{1}}{2},&\delta_{1}=1,\\ \frac{\nu_{1}+\mu_{1}\coth\theta_{1}}{2},&\delta_{1}=-1,\end{array}\right.\] which together with Proposition 2 yields the tanh-type 1-soliton solution \[x_{[1]}=x-\frac{1}{2}\ln|\frac{\nu_{1}+2+\mu_{1}\tanh\theta_{1}}{\nu_{1}-2+\mu_{1}\tanh\theta_{1}}|,\] \[u_{[1]}=\frac{u_{0}(\nu_{1}+\mu_{1}\tanh\theta_{1})}{\sqrt{|(\nu_{1}+\mu_{1}\tanh\theta_{1})^{2}-4|}}, \tag{42}\] \[v_{[1]}=-\frac{v_{0}\nu_{1}(\nu_{1}^{2}-2+\mu_{1}\nu_{1}\tanh\theta_{1})\sqrt{|(\nu_{1}+\mu_{1}\tanh\theta_{1})^{2}-4|}}{(1-\nu_{1}^{2})[(\nu_{1}+\mu_{1}\tanh\theta_{1})^{2}-4]},\] and the coth-type 1-soliton solution \[x_{[1]}=x-\frac{1}{2}\ln|\frac{\nu_{1}+2+\mu_{1}\coth\theta_{1}}{\nu_{1}-2+\mu_{1}\coth\theta_{1}}|,\] \[u_{[1]}=\frac{u_{0}(\nu_{1}+\mu_{1}\coth\theta_{1})}{\sqrt{|(\nu_{1}+\mu_{1}\coth\theta_{1})^{2}-4|}}, \tag{43}\] \[v_{[1]}=-\frac{v_{0}\nu_{1}(\nu_{1}^{2}-2+\mu_{1}\nu_{1}\coth\theta_{1})\sqrt{|(\nu_{1}+\mu_{1}\coth\theta_{1})^{2}-4|}}{(1-\nu_{1}^{2})[(\nu_{1}+\mu_{1}\coth\theta_{1})^{2}-4]}.\] It is easy to see that \(x_{[1]}\rightarrow\pm\infty\) when \(x\rightarrow\pm\infty\) in (42) and (43).
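The reduction of \(a_1\) to the tanh/coth forms used in (42) and (43) is a purely algebraic identity. The sketch below verifies both branches numerically (the sample values of \(\nu_1\) and \(\theta_1\) are illustrative only):

```python
import numpy as np

nu1, theta1 = 1.1, 0.7
mu1 = np.sqrt(4 - 3 * nu1**2)        # mu_1 = sqrt(4 - 3*nu_1^2), requires |nu_1| < 2/sqrt(3)

def a1(delta):
    # a_1 as a ratio of exponentials, before the hyperbolic simplification
    num = (mu1 + nu1) * np.exp(theta1) + delta * (nu1 - mu1) * np.exp(-theta1)
    return num / (2 * (np.exp(theta1) + delta * np.exp(-theta1)))

# delta_1 = +1 gives the tanh branch, delta_1 = -1 the coth branch
assert np.isclose(a1(+1), (nu1 + mu1 * np.tanh(theta1)) / 2)
assert np.isclose(a1(-1), (nu1 + mu1 / np.tanh(theta1)) / 2)
```

Splitting the numerator as \(\nu_1(e^{\theta_1}+\delta_1 e^{-\theta_1})+\mu_1(e^{\theta_1}-\delta_1 e^{-\theta_1})\) makes the two hyperbolic forms immediate.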
Further analysis shows that the map from \(x_{[1]}\) to \(x\) in (42) is bijective and \(x_{[1]},u_{[1]},v_{[1]}\) are nonsingular when \(1<|\nu_{1}|<\frac{2}{\sqrt{3}}\), while in (43) \(x_{[1]},u_{[1]},v_{[1]}\) are singular for any \(\nu_{1}\). It follows that (42) gives smooth soliton solutions for \(1<|\nu_{1}|<\frac{2}{\sqrt{3}}\), whereas (43) gives singular solutions, which we do not consider further. The profiles of the 1-soliton solutions (42) are shown in Fig. 2-3.

Figure 2: The profiles of the smooth 1-soliton solution (42) at \(u_{0}=1,v_{0}=1,\nu_{1}=1.1\).

**Example 2:** 2-soliton solutions and their interactions. Applying Proposition 3, we obtain the 2-soliton solution \[x_{[12]} =x-\frac{1}{2}\mathrm{ln}|\frac{(a_{1}+1)(a_{12}+1)}{(a_{1}-1)(a_{12}-1)}|,\qquad t_{[12]}=t, \tag{44}\] \[u_{[12]} =\frac{1}{c_{1}c_{12}}[u_{0}(a_{1}a_{12}+1)-\frac{a_{1}^{2}-1}{\lambda_{1}b_{1}}],\] \[v_{[12]} =\frac{c_{1}c_{12}}{(a_{1}^{2}-1)(a_{12}^{2}-1)}[v_{0}(a_{1}a_{12}+1)-\frac{b_{1}}{\lambda_{1}}(a_{1}+a_{12})]\] \[\quad-\frac{c_{1}c_{12}}{(a_{2}-a_{1})(a_{12}^{2}-1)}(\frac{b_{1}}{\lambda_{1}}-\frac{b_{2}}{\lambda_{2}})\] with \(c_{1}=\sqrt{|a_{1}^{2}-1|},\ c_{12}=\sqrt{|a_{12}^{2}-1|}.\) We call (44) a tanh-tanh type or a tanh-coth type 2-soliton solution, respectively, when \[a_{1} =\frac{\nu_{1}+\mu_{1}\tanh\theta_{1}}{2},\qquad a_{2} =\frac{\nu_{2}+\mu_{2}\tanh\theta_{2}}{2},\] \[b_{1} =\frac{-\nu_{1}(\nu_{1}-\mu_{1}\tanh\theta_{1})}{2\lambda_{1}u_{0}},\qquad b_{2} =\frac{-\nu_{2}(\nu_{2}-\mu_{2}\tanh\theta_{2})}{2\lambda_{2}u_{0}},\] \[a_{12} =\frac{4-(\nu_{1}+\mu_{1}\tanh\theta_{1})(\nu_{2}+\mu_{2}\tanh\theta_{2})}{2(\nu_{2}-\nu_{1}+\mu_{2}\tanh\theta_{2}-\mu_{1}\tanh\theta_{1})}\] \[\quad-\frac{\nu_{2}(\nu_{2}-\mu_{2}\tanh\theta_{2})[4-(\nu_{1}+\mu_{1}\tanh\theta_{1})^{2}]}{2\nu_{1}(\nu_{1}-\mu_{1}\tanh\theta_{1})(\nu_{2}-\nu_{1}+\mu_{2}\tanh\theta_{2}-\mu_{1}\tanh\theta_{1})},\]

Figure 3: The profiles of the smooth 1-soliton solution (42) at \(u_{0}=1,v_{0}=-1,\nu_{1}=-1.12\).
\[a_{1} =\frac{\nu_{1}+\mu_{1}\tanh\theta_{1}}{2},\qquad a_{2} =\frac{\nu_{2}+\mu_{2}\coth\theta_{2}}{2},\] \[b_{1} =\frac{-\nu_{1}(\nu_{1}-\mu_{1}\tanh\theta_{1})}{2\lambda_{1}u_{0}},\qquad b_{2} =\frac{-\nu_{2}(\nu_{2}-\mu_{2}\coth\theta_{2})}{2\lambda_{2}u_{0}},\] \[a_{12} =\frac{4-(\nu_{1}+\mu_{1}\tanh\theta_{1})(\nu_{2}+\mu_{2}\coth\theta_{2})}{2(\nu_{2}-\nu_{1}+\mu_{2}\coth\theta_{2}-\mu_{1}\tanh\theta_{1})}\] \[\quad-\frac{\nu_{2}(\nu_{2}-\mu_{2}\coth\theta_{2})[4-(\nu_{1}+\mu_{1}\tanh\theta_{1})^{2}]}{2\nu_{1}(\nu_{1}-\mu_{1}\tanh\theta_{1})(\nu_{2}-\nu_{1}+\mu_{2}\coth\theta_{2}-\mu_{1}\tanh\theta_{1})}.\] Here \(\theta_{1}=\frac{\mu_{1}}{2}[x+\frac{u_{0}v_{0}(4-\mu_{1}^{2})}{\mu_{1}^{2}-1}t]+\theta_{10},\ \theta_{2}=\frac{\mu_{2}}{2}[x+\frac{u_{0}v_{0}(4-\mu_{2}^{2})}{\mu_{2}^{2}-1}t]+\theta_{20},\ \mu_{1}=\sqrt{4-3\nu_{1}^{2}},\ \mu_{2}=\sqrt{4-3\nu_{2}^{2}},\ \lambda_{1}^{2}=\frac{\nu_{1}(1-\nu_{1}^{2})}{u_{0}v_{0}},\ \lambda_{2}^{2}=\frac{\nu_{2}(1-\nu_{2}^{2})}{u_{0}v_{0}},\) and \(\nu_{1},\ \nu_{2}\) are two constants. Analysis shows that the \(\tanh\)-\(\tanh\) type 2-soliton solution gives a smooth kink-antikink or antikink-kink solution if \(1<|\nu_{1}|,|\nu_{2}|<\frac{2}{\sqrt{3}},\ \nu_{1}\nu_{2}<0\). Interestingly, one finds that the \(\tanh\)-\(\tanh\) type 2-soliton solution degenerates into a bell-shaped 1-soliton solution when \(\nu_{1}=-\nu_{2}\). Moreover, the \(\tanh\)-\(\coth\) type 2-soliton solution gives kink-kink or antikink-antikink solutions when \(1<|\nu_{2}|<|\nu_{1}|<\frac{2}{\sqrt{3}},\ \nu_{1}\nu_{2}>0\). The profiles of the 2-soliton solutions (44) and their interactions are shown in Fig. 4-6.

## 5 Bäcklund transformations for the Degasperis-Procesi and Novikov equations

### The Degasperis-Procesi case

As we know, the GX system reduces to the DP equation when \(v=1\). Therefore, we shall study the BT for the DP equation.
The DP equation (3) possesses a Lax pair [19] \[\psi_{x}=U_{1}\psi,\hskip 36.135pt\psi_{t}=V_{1}\psi, \tag{45}\] where \(\psi=(\psi_{1},\psi_{2},\psi_{3})^{T}\) and \[U_{1}=\begin{bmatrix}0&\lambda m&1\\ 0&0&\lambda\\ 1&0&0\end{bmatrix},\qquad\quad V_{1}=\begin{bmatrix}-u_{x}&\frac{u_{x}}{\lambda}-\lambda um&0\\ \frac{1}{\lambda}&-\frac{1}{\lambda^{2}}+u_{x}&-\lambda u\\ -u&\frac{u}{\lambda}&0\end{bmatrix}.\] Naturally, the reciprocal transformation (6) of the Geng-Xue system reduces to that of the DP equation, \[dy=qdx-uqdt,\qquad\quad d\tau=dt, \tag{46}\] with \(q=m^{\frac{1}{3}}\). Applying this transformation, the Lax representation (45) is converted to \[\psi_{y}=F_{1}\psi,\qquad\quad\psi_{\tau}=G_{1}\psi, \tag{47}\] where \[F_{1}=\begin{bmatrix}0&\lambda q^{2}&\frac{1}{q}\\ 0&0&\frac{1}{q}\\ \frac{1}{q}&0&0\end{bmatrix},\qquad\quad G_{1}=\begin{bmatrix}-u_{y}q&\frac{u_{y}q}{\lambda}&u\\ \frac{1}{\lambda}&u_{y}q-\frac{1}{\lambda^{2}}&0\\ 0&\frac{u}{\lambda}&0\end{bmatrix}.\] The compatibility condition of the Lax pair (47) yields the associated DP (aDP) equation [17, 36] \[q_{\tau}=-u_{y}q^{2},\qquad\quad u-q^{3}-q(u_{y}q)_{y}=0. \tag{48}\] It is straightforward to verify that, under the gauge transformation \(\chi=q\psi_{2}\), the scalar form of the spectral problem in (47) is converted to \[(\partial_{y}^{3}+U_{1}\partial_{y}+\frac{1}{2}U_{1y})\chi=\lambda_{1}^{2}\chi,\qquad\quad U_{1}=-2\frac{q_{yy}}{q}+\frac{q_{y}^{2}-1}{q^{2}},\] which is just the classical spectral problem of the KK hierarchy [35]. Making use of the DT for the KK hierarchy, we get the following Proposition.
**Proposition 4**.: _The Lax pair (47) is covariant under the DT_ \[\begin{array}{l}\psi_{[1]}=T\psi,\\ T=I+\begin{bmatrix}\frac{(\lambda_{1}\sigma_{1}-\sigma_{2})(\lambda_{1}^{2}- \sigma_{2}^{2})}{\lambda_{1}^{2}-2\lambda_{1}\sigma_{1}\sigma_{2}+\sigma_{2}^{ 2}}&\sigma_{2}&0\\ 0&\lambda\sigma_{2}&0\\ 0&0&\lambda_{1}\end{bmatrix}\begin{bmatrix}\lambda^{2}&\lambda&-\lambda^{2} \\ \lambda_{1}^{2}&-\lambda&-\lambda^{2}_{1}\\ \lambda_{1}^{2}&-\lambda&-\lambda^{2}_{1}\end{bmatrix}T_{1},\\ q_{[1]}=\frac{q(\lambda_{1}^{2}-\sigma_{2}^{2})}{\lambda_{1}^{2}-2\lambda_{1} \sigma_{1}\sigma_{2}+\sigma_{2}^{2}},\\ u_{[1]}=u+\frac{2(\lambda_{1}^{2}u_{y}u_{Z}-\lambda_{1}^{3}u+\lambda_{1} \sigma_{1}-\sigma_{2})}{\lambda_{1}(\lambda_{1}^{2}-\sigma_{2}^{2})},\end{array} \tag{49}\] _or the DT_ \[\begin{array}{l}\psi_{[1]}=\tilde{T}\psi,\quad\tilde{T}=\mathrm{diag}[-1,-1,1]T,\\ q_{[1]}=-\frac{q(\lambda_{1}^{2}-\sigma_{2}^{2})}{\lambda_{1}^{2}-2\lambda_{1} \sigma_{1}\sigma_{2}+\sigma_{2}^{2}},\\ u_{[1]}=-u+\frac{2(\sigma_{2}-\lambda_{1}\sigma_{1}+\lambda_{1}^{3}u-\lambda_{1 }^{2}u_{y}h\sigma_{2})}{\lambda_{1}(\lambda_{1}^{2}-\sigma_{2}^{2})},\end{array} \tag{50}\] _where \(T_{1}=\frac{2}{(\lambda^{2}+\lambda_{1}^{2})(\lambda_{1}^{2}-\sigma_{2}^{2})} \mathrm{diag}[\sigma_{2},\lambda_{1}\sigma_{1}-\sigma_{2},\lambda_{1}]\), \(\sigma_{i}=\frac{g_{i}}{g_{3}},i=1,2\), and \((g_{1},g_{2},g_{3})^{T}\) is a special solution of the linear system (47) at \(\lambda=\lambda_{1}\)._ Since the process is very similar, we just consider the second DT in Proposition 4. It is easy to check that \[\frac{1}{q_{[1]}}=\frac{1}{q}+\frac{2\vartheta_{y}}{\vartheta^{2}-1},\quad \quad\quad u_{[1]}=u+\frac{2\vartheta_{\tau}}{\vartheta^{2}-1},\quad\quad \quad\vartheta=\frac{\lambda_{1}}{\sigma_{2}}. 
\tag{51}\] Then, with the aid of (46), we obtain \[dx_{[1]}=d(x-\ln\lvert\frac{\vartheta+1}{\vartheta-1}\rvert),\] which implies that \[x_{[1]}=x-\ln\lvert\frac{\vartheta+1}{\vartheta-1}\rvert.\] Here the integration constant is taken to be zero. **Corollary 1**.: _The DP equation has a BT of the form_ \[\begin{array}{l}x_{[1]}=x-\ln\lvert\frac{\vartheta+1}{\vartheta-1}\rvert,\quad\quad\quad t_{[1]}=t,\\ u_{[1]}=u-\frac{2\vartheta}{\lambda_{1}^{2}}+\frac{2\lambda_{1}^{2}(u-u_{x}\vartheta)-2\vartheta\vartheta_{x}}{\lambda_{1}^{2}(\vartheta^{2}-1)},\end{array} \tag{52}\] _where \(\vartheta\) is determined by_ \[\vartheta_{xx}=\lambda_{1}^{2}m-3\vartheta\vartheta_{x}+\vartheta-\vartheta^{3},\quad\quad\vartheta_{t}=u-(u\vartheta)_{x}+\lambda_{1}^{-2}(\vartheta-\vartheta^{3}-\vartheta\vartheta_{x}).\] **Remark:** In fact, considering the first DT in Proposition 4, one may get a BT equivalent to the one in [31]; the two are related by \(a=\frac{f_{2}^{2}p^{2}}{\int f_{2}^{2}p^{2}dy}\). Moreover, one can also discuss the \(N\)-BT for the DP equation as in Section 2, which we do not reproduce here.

### The Novikov equation

Now we consider the BT for the Novikov equation (4), which is another reduction of the Geng-Xue system, obtained when \(u=v\). The Novikov equation admits the following Lax pair [18] \[\psi_{x}=U_{2}\psi,\hskip 28.452756pt\psi_{t}=V_{2}\psi, \tag{53}\] where \(\psi=(\psi_{1},\psi_{2},\psi_{3})^{T}\) and \[U_{2}=\begin{bmatrix}0&\lambda m&1\\ 0&0&\lambda m\\ 1&0&0\end{bmatrix},\quad V_{2}=\begin{bmatrix}-uu_{x}&\frac{u_{x}}{\lambda}-\lambda u^{2}m&u_{x}^{2}\\ \frac{u}{\lambda}&-\frac{1}{\lambda^{2}}&-\lambda u^{2}m-\frac{u_{x}}{\lambda}\\ -u^{2}&\frac{u}{\lambda}&uu_{x}\end{bmatrix}.\] In such a case, the reciprocal transformation (6) reduces to \[dy=p^{2}dx-u^{2}p^{2}dt,\hskip 28.452756ptd\tau=dt, \tag{54}\] with \(p=m^{\frac{1}{3}}\).
This is a reciprocal transformation of the Novikov equation which changes the Lax pair (53) to \[\psi_{y}=F_{2}\psi,\hskip 28.452756pt\psi_{\tau}=G_{2}\psi, \tag{55}\] where \[F_{2}=\begin{bmatrix}0&\lambda p&\frac{1}{p^{2}}\\ 0&0&\lambda p\\ \frac{1}{p^{2}}&0&0\end{bmatrix},\hskip 28.452756ptG_{2}=\begin{bmatrix}-u_{y}up^ {2}&\frac{u_{y}p^{2}}{\lambda}&u^{2}+u_{y}^{2}p^{4}\\ \frac{u}{\lambda}&-\frac{1}{\lambda^{2}}&-\frac{u_{y}p^{2}}{\lambda}\\ 0&\frac{u}{\lambda}&u_{y}up^{2}\end{bmatrix}.\] The compatibility condition of (55) yields the associated Novikov (aNovikov) equation [18; 37] \[p_{\tau}=-p^{3}uu_{y},\hskip 28.452756ptu_{yy}p^{4}+2p^{3}p_{y}u_{y}+p^{3}-u=0. \tag{56}\] It is easy to show that the scalar spectral problem of Lax pair (55) with respect to \(\psi_{2}\) is just that of the SK hierarchy \[[\partial_{y}^{3}-(\frac{p_{yy}}{p}+\frac{1}{p^{4}})\partial_{y}]\psi_{2}= \lambda^{2}\psi_{2}. \tag{57}\] With the help of DT for the SK hierarchy [35], the following Proposition holds. 
**Proposition 5**.: _The Lax pair (55) is covariant with respect to the DT_ \[\begin{array}{l}\psi_{[1]}=T\psi,\quad T=\mathrm{diag}[\frac{p_{[1]}}{p},1,\frac{p}{p_{[1]}}]((\lambda^{2}+\lambda_{1}^{2})I-\begin{bmatrix}T_{11}&T_{12}&T_{13}\\ \frac{2\lambda\lambda_{1}}{\sigma_{2}}&2\lambda_{1}^{2}&-2\lambda\lambda_{1}\frac{\sigma_{1}}{\sigma_{2}}\\ -2\frac{\lambda_{1}^{2}}{\sigma_{2}^{2}}&2\frac{\lambda\lambda_{1}}{\sigma_{2}}&2\frac{\lambda_{1}^{2}\sigma_{1}}{\sigma_{2}^{2}}\end{bmatrix}),\\ p_{[1]}^{2}=p^{2}|(1-2\frac{\sigma_{1}}{\sigma_{2}^{2}})^{2}-\frac{4}{\sigma_{2}^{4}}|,\\ u_{[1]}=-\frac{p}{p_{[1]}}(u+\frac{2p^{2}u_{y}-2u\sigma_{1}}{\sigma_{2}^{2}}+\frac{2}{\lambda_{1}\sigma_{2}}),\end{array} \tag{58}\] _where_ \[\begin{array}{l}T_{11}=2\frac{\lambda_{1}^{2}}{\sigma_{2}^{2}}(p^{2}\frac{p_{[1],y}}{p_{[1]}}-pp_{y}-\sigma_{1})+4p^{3}\frac{\lambda_{1}^{3}}{\sigma_{2}^{3}},\\ T_{12}=2\frac{\lambda\lambda_{1}}{\sigma_{2}}(\sigma_{1}-p^{2}\frac{p_{[1],y}}{p_{[1]}}+pp_{y})-\frac{4\lambda\lambda_{1}p^{3}}{\sigma_{2}^{2}},\\ T_{13}=(\lambda^{2}+\lambda_{1}^{2}-2\frac{\lambda_{1}^{2}\sigma_{1}}{\sigma_{2}^{2}})(2\frac{\lambda_{1}p^{3}}{\sigma_{2}}+p^{2}\frac{p_{[1],y}}{p_{[1]}}-pp_{y})+2\frac{\lambda_{1}^{2}\sigma_{1}^{2}}{\sigma_{2}^{2}},\end{array}\] _and \(\sigma_{1}=\frac{g_{1}}{g_{3}},\sigma_{2}=\frac{g_{2}}{g_{3}}\), where \((g_{1},g_{2},g_{3})^{T}\) is a special solution of the linear system (55) at \(\lambda=\lambda_{1}\)._ Now, we shall establish a BT for the Novikov equation with the help of the reciprocal transformation (54). It follows from Proposition 5 that \[p_{[1]}^{2}=\frac{4p^{2}}{\sigma_{2}^{4}}|\vartheta^{2}-1|,\quad\quad\quad\vartheta=\frac{1}{2}\sigma_{2}^{2}-\sigma_{1}.
\tag{59}\] If \(\vartheta^{2}-1>0\), a direct calculation shows that \[\frac{1}{p_{[1]}^{2}}=\frac{1}{p^{2}}-\frac{\vartheta_{y}}{\vartheta^{2}-1},\quad\quad\quad u_{[1]}^{2}=u^{2}-\frac{\vartheta_{\tau}}{\vartheta^{2}-1}. \tag{60}\] Substituting (60) into (54) and taking the integration constant to be zero, we obtain \[x_{[1]}=x+\frac{1}{2}\mathrm{ln}|\frac{\vartheta+1}{\vartheta-1}|. \tag{61}\] If \(\vartheta^{2}-1<0\), a similar process gives rise to \[x_{[1]}=-x-\frac{1}{2}\mathrm{ln}|\frac{\vartheta+1}{\vartheta-1}|. \tag{62}\] **Corollary 2**.: _A BT of the Novikov equation reads_ \[\begin{split}& x_{[1]}=x+\frac{1}{2}{\rm ln}|\frac{\vartheta+1}{\vartheta-1}|,\hskip 28.452756ptt_{[1]}=t,\\ & u_{[1]}=\pm\frac{1}{\sqrt{\vartheta^{2}-1}}(u\vartheta+u_{x}+\frac{\eta}{\lambda_{1}}),\end{split} \tag{63}\] _if \(\vartheta^{2}-1>0\), and_ \[\begin{split}& x_{[1]}=-x-\frac{1}{2}{\rm ln}|\frac{\vartheta+1}{\vartheta-1}|,\hskip 28.452756ptt_{[1]}=t,\\ & u_{[1]}=\pm\frac{1}{\sqrt{1-\vartheta^{2}}}(u\vartheta+u_{x}+\frac{\eta}{\lambda_{1}}),\end{split} \tag{64}\] _if \(\vartheta^{2}-1<0\). Here \(\vartheta=\frac{1}{2}\eta^{2}+\frac{\eta_{x}}{\eta}-\lambda_{1}\frac{m}{\eta}\) and \(\eta\) is determined by_ \[\begin{split}&\eta_{xx}=\lambda_{1}m_{x}-\eta-\lambda_{1}m\eta^{2}+(2\eta_{x}^{2}-3\lambda_{1}m\eta_{x}+\lambda_{1}^{2}m^{2})/\eta,\\ &\eta_{t}=-(u^{2}+\frac{u}{\lambda_{1}\eta})\eta_{x}-\eta(uu_{x}+\frac{1}{\lambda_{1}^{2}})-\frac{u_{x}}{\lambda_{1}}+\frac{um}{\eta}-\frac{u\eta^{2}}{\lambda_{1}}.\end{split}\] **Remark:** Comparing the BT (63) with the one in [32], we may show that they are related by \(a=\frac{2\lambda_{1}p}{\sigma_{2}}\).

## 6 Appendix: proofs of the identities (34), (35) and (37)

In this section, we give the proofs of the identities (34), (35) and (37). Actually, (34) is a direct result of the Jacobi identity. Since the proofs of the identities in (35) are similar, we only prove one of them; the same applies to (37).
Here we prove the first identity in (35) for \(N=3k+2\). For convenience, we define \[\vec{\alpha}_{N}=(\alpha_{1},...,\alpha_{N})^{T},\quad\lambda^{k}\vec{\alpha}_{N}=(\lambda_{1}^{k}\alpha_{1},...,\lambda_{N}^{k}\alpha_{N})^{T},\quad\vec{1}_{N}=(1,...,1)^{T}, \tag{65}\] and define \(\vec{1}_{N}(i)\) as the column vector whose \(i\)-th element is \(1\) and whose other elements are \(0\). Then, using the Plücker relation, we compute \[A_{N-1}B_{N}-A_{N}B_{N-1}\] \[\quad=\left|\vec{a}_{N-1}\quad\lambda\vec{b}_{N-1}\quad\cdots\quad\lambda^{2k}\vec{a}_{N-1}\right|\left|\vec{1}_{N}\quad\lambda\vec{b}_{N}\quad\cdots\quad\lambda^{2k+1}\vec{b}_{N}\right|\] \[\quad\quad-\left|\vec{a}_{N}\quad\lambda\vec{b}_{N}\quad\cdots\quad\lambda^{2k+1}\vec{b}_{N}\right|\left|\vec{1}_{N-1}\quad\lambda\vec{b}_{N-1}\quad\cdots\quad\lambda^{2k}\vec{a}_{N-1}\right|\] \[\quad=\left|\vec{a}_{N}\quad\lambda\vec{b}_{N}\quad\cdots\quad\lambda^{2k}\vec{a}_{N}\quad\vec{1}_{N}(N)\right|\left|\vec{1}_{N}\quad\lambda\vec{b}_{N}\quad\cdots\quad\lambda^{2k+1}\vec{b}_{N}\right|\] \[\quad\quad-\left|\vec{a}_{N}\quad\lambda\vec{b}_{N}\quad\cdots\quad\lambda^{2k+1}\vec{b}_{N}\right|\left|\vec{1}_{N}\quad\lambda\vec{b}_{N}\quad\cdots\quad\lambda^{2k}\vec{a}_{N}\quad\vec{1}_{N}(N)\right|\] \[\quad=\left|\vec{a}_{N}\quad\lambda\vec{b}_{N}\quad\cdots\quad\lambda^{2k}\vec{a}_{N}\quad\vec{1}_{N}\right|\left|\vec{1}_{N}(N)\quad\lambda\vec{b}_{N}\quad\cdots\quad\lambda^{2k}\vec{a}_{N}\quad\lambda^{2k+1}\vec{b}_{N}\right|\] \[\quad=\Delta_{N}\Delta_{N+1}\left[\begin{array}{cc}N&N+1\\ 1&2\end{array}\right].\] Now, we proceed to prove the first identity of (37) for \(N=3k+2\).
With the aid of the Jacobi identity, we have \[R_{3} =\lambda_{N+1}^{2}\left|\begin{array}{cccc}\vec{1}_{N-2}&\vec{ a}_{N-2}&\cdots&\lambda^{2k}\vec{1}_{N-2}\\ 1&a_{N+1}&\cdots&\lambda_{N+1}^{2k}\end{array}\right|\left|\lambda\vec{b}_{N- 1}\quad\cdots\quad\lambda^{2k+1}\vec{b}_{N-1}\right|\] \[\quad-\lambda_{N-1}^{2}\left|\vec{1}_{N-1}\quad\vec{a}_{N-1} \quad\cdots\quad\lambda^{2k}\vec{1}_{N-1}\right|\left|\begin{array}{cccc} \lambda\vec{b}_{N-2}&\cdots&\lambda^{2k+1}\vec{b}_{N-2}\\ \lambda_{N+1}b_{N+1}&\cdots&\lambda_{N+1}^{2k+1}b_{N+1}\end{array}\right|\] \[=\prod_{i=1}^{N-2}\lambda_{i}^{-2}\{\left|\begin{matrix}\lambda^ {2}\vec{1}_{N-2}&\cdots&\lambda^{2k+2}\vec{1}_{N-2}\\ \lambda_{N+1}^{2}&\cdots&\lambda_{N+1}^{2k+2}\end{matrix}\right|\left|\lambda \vec{b}_{N-1}\quad\cdots\quad\lambda^{2k+1}\vec{b}_{N-1}\right|\] \[\quad-\left|\lambda^{2}\vec{1}_{N-1}\quad\cdots\quad\lambda^{2k+2 }\vec{1}_{N-1}\right|\left|\begin{matrix}\lambda\vec{b}_{N-2}&\cdots&\lambda^{2 k+1}\vec{b}_{N-2}\\ \lambda_{N+1}b_{N+1}&\cdots&\lambda_{N+1}^{2k+1}b_{N+1}\end{matrix}\right|\}\] \[=\prod_{i=1}^{N-2}\lambda_{i}^{-2}\{\bigtriangleup_{N+2}\left[ \begin{matrix}N-1&N&N+2\\ 1&2&3\end{matrix}\right]\bigtriangleup_{N+1}\left[\begin{matrix}N&N+1\\ 1&2\end{matrix}\right]\] \[\quad-\bigtriangleup_{N+2}\left[\begin{matrix}N&N+1&N+2\\ 1&2&3\end{matrix}\right]\bigtriangleup_{N+1}\left[\begin{matrix}N-1&N\\ 1&2\end{matrix}\right]\}\] \[=\prod_{i=1}^{N-2}\lambda_{i}^{-2}\{\bigtriangleup_{N+2}\left[ \begin{matrix}N&N+2&N-1\\ 1&2&3\end{matrix}\right]\bigtriangleup_{N+2}\left[\begin{matrix}N&N+2&N+1\\ 1&2&N+2\end{matrix}\right]\] \[\quad-\bigtriangleup_{N+2}\left[\begin{matrix}N&N+2&N+1\\ 1&2&3\end{matrix}\right]\bigtriangleup_{N+2}\left[\begin{matrix}N&N+2&N-1\\ 1&2&N+2\end{matrix}\right]\}\] \[=\prod_{i=1}^{N-2}\lambda_{i}^{-2}\triangle_{N+2}\begin{bmatrix}N&N+2\\ 1&2\end{bmatrix}\triangle_{N+1}\begin{bmatrix}N-1&N&N+1\\ 1&2&3\end{bmatrix}.\] ## Acknowledgements This work is partially supported by the 
National Natural Science Foundation of China (Grant Nos. 12271190 and 11871232), and the Youth Innovation Foundation of Xiamen (project no. 3502Z20206011).
2308.11684
User Identity Linkage in Social Media Using Linguistic and Social Interaction Features
Social media users often hold several accounts in their effort to multiply the spread of their thoughts, ideas, and viewpoints. In the particular case of objectionable content, users tend to create multiple accounts to bypass the combating measures enforced by social media platforms and thus retain their online identity even if some of their accounts are suspended. User identity linkage aims to reveal social media accounts likely to belong to the same natural person so as to prevent the spread of abusive/illegal activities. To this end, this work proposes a machine learning-based detection model, which uses multiple attributes of users' online activity in order to identify whether two or more virtual identities belong to the same real natural person. The model's efficacy is demonstrated on two cases of abusive and terrorism-related Twitter content.
Despoina Chatzakou, Juan Soler-Company, Theodora Tsikrika, Leo Wanner, Stefanos Vrochidis, Ioannis Kompatsiaris
2023-08-22T15:10:38Z
http://arxiv.org/abs/2308.11684v1
# User Identity Linkage in Social Media Using Linguistic and Social Interaction Features

###### Abstract.

Social media users often hold several accounts in their effort to multiply the spread of their thoughts, ideas, and viewpoints. In the particular case of objectionable content, users tend to create multiple accounts to bypass the combating measures enforced by social media platforms and thus retain their online identity even if some of their accounts are suspended. User identity linkage aims to reveal social media accounts likely to belong to the same natural person so as to prevent the spread of abusive/illegal activities. To this end, this work proposes a machine learning-based detection model, which uses multiple attributes of users' online activity in order to identify whether two or more virtual identities belong to the same real natural person. The model's efficacy is demonstrated on two cases of abusive and terrorism-related Twitter content.

Keywords: Actor identity resolution, Abusive and Illegal content, Twitter

Footnote †: journal: Information Technology

## 1. Introduction

In its somewhat more than 20 years of existence, social media have become an integral part of the lives of more than 2.6\(B\) people around the globe. Originally envisaged as a means to stay connected with friends, get informed, or be entertained, they have become a very powerful instrument for public opinion formation and for the dissemination of all kinds of not always harmless content. Particularly worrying is the spread of abusive, extremist, and terrorism-related content via widely used online social platforms, such as Twitter and Facebook. In order to address this problem, social media administrators implement filtering methods and suspend accounts once harmful content is detected (Wandel, 2017).
However, to counter such measures and overcome the suspension policies, users seeking to widely disseminate deleterious material often follow various strategies, the most popular being the setting up of multiple (back-up) accounts that allow them to keep contact with individuals of the same disposition (e.g., violent extremists) and exchange content, even after one of their accounts gets suspended (Kompatsiaris, 2017; Kompatsiaris, 2018). It is thus of paramount importance to be able to detect user accounts (alias _user identities_) likely to belong to the same person, so as to stop the propagation of harmful behavior on a large scale, including the spread of abusive or terrorism-related material.1

Footnote 1: Linking users is also important in other contexts, e.g., cutting the spread of spam or fake content.

User identity linkage (i.e., detection of multiple user identities) has been studied both _across_ social networks (e.g., (Zhou et al., 2017; Wandel et al., 2017)) and _within_ the same social network (e.g., (Kompatsiaris, 2018; Wandel et al., 2017)). This paper focuses on the latter case and, particularly, on Twitter. Twitter has been selected as it is one of the most popular social media platforms and often contains abusive (Bak et al., 2017; Wandel et al., 2017) or terrorism-related (Kompatsiaris, 2018; Wandel et al., 2017) material. Moreover, Twitter is a rather challenging platform for investigating this phenomenon, since tweets are short and often contain grammatical and orthographic errors, thus making it harder to use off-the-shelf natural language processing tools to analyze them in the context of such investigations. As a consequence, Twitter is often avoided as a single social media source for the study of user identity linkage. Furthermore, user identity linkage research has thus far been mainly conducted on English data sources.
Since the dissemination of deleterious (e.g., abusive and terrorism-related) material is not limited to English, the consideration of other languages is also necessary.

**Overview & Contributions.** In this paper, we design, implement, and evaluate a methodology geared to identify the linkage between online user accounts within the same social network. Specifically, this work proposes a framework which considers a wide range of profile, linguistic, activity, and network characteristics (the latter two are also referred to as _social interaction_ features) for representing users' online presence, and employs machine learning and deep learning-based classifiers for identifying accounts potentially linked to the same natural person. Our main contributions can be summarized as follows: to the best of our knowledge, this is the first user identity linkage work to employ (i) a wide range of features extracted from social networks constructed based on users' activity, (ii) advanced syntactic features based on dependency trees, (iii) semantic similarities based on word embeddings, and (iv) deep neural networks in such a classification setup. Moreover, comprehensive evaluation experiments are performed on two Twitter datasets related to abusive behaviors and terrorism phenomena, with English and Arabic material, respectively, and the experimental results are promising, achieving up to 99.50% AUC.

The rest of the paper is organized as follows. Section 2 reviews the related work. Section 3 presents the proposed framework, the extracted features, and the techniques for modeling the data and predicting possible user linkage. Section 4 describes the employed datasets, the process for constructing the ground truth, and the experimental methodology, while Section 5 presents the experimental results. Finally, Section 6 draws some conclusions and outlines future work.

## 2. Related Work

Numerous studies have examined user identity linkage _across_ online social networks; see, e.g., (Zhou et al., 2017; Zhang et al., 2018; Malhotra et al., 2019). Malhotra et al. (2019) proposed to disambiguate profiles of the same user based on their digital footprint in both Twitter and LinkedIn. Twitter has also been jointly considered in many works as one of the studied platforms in relation to other social networks, e.g., Yelp (Yelp, 2018), Flickr (Yelp, 2018), Foursquare (Sandhi et al., 2019), Instagram (Sandhi et al., 2019), and Facebook (Zhou et al., 2017). For instance, the authors in (Sandhi et al., 2019) proposed a method that examines whether two accounts belong to the same mobile user by exploiting location information when they are active on both Twitter and Instagram. Identity linkage _within_ a single social network has also been explored. For instance, an Irish forum was studied (Fischer et al., 2019) to first unmask authors' identities and then detect matching aliases. The so-called 'sockpuppetry' (i.e., blocked users initiating new accounts) has been studied considerably on Wikipedia (Wikipedia, 2019; Wikipedia, 2019). Finally, user identity linkage has been explored on popular online news sites, such as _The Guardian_ and _SPIEGEL ONLINE_, to assist their providers in detecting manipulations of public opinion (Zhou et al., 2017).

Profile, content, and network attributes are often exploited to build such detection models. User name, screen name, and biography are common profile attributes (Zhou et al., 2017; Zhang et al., 2018). In relation to the posted content, temporal (e.g., timestamps) and spatial (e.g., geotags) information (Fischer et al., 2019; Sandhi et al., 2019; Sandhi et al., 2019), as well as stylometric features (e.g., part-of-speech n-grams, etc.) (Fischer et al., 2019; Zhou et al., 2017; Zhang et al., 2018), are widely employed.
The way a user's social network is formed, together with their communication patterns, can also provide useful information about a user's identity; hence, network attributes have been used to detect an actor's identity across multiple social networks (Zhou et al., 2017; Sandhi et al., 2019). For instance, a user's immediate or non-immediate neighborhood can be exploited by considering friendship relations.

Building upon such features, supervised, unsupervised, and semi-supervised methods have been considered. For instance, a probabilistic classifier based on Naive Bayes has been employed to link user identities across social media (Sandhi et al., 2019). Decision Trees, SVM, and kNN algorithms have also been tested (Zhou et al., 2017). Moreover, an alignment algorithm has also been used, where an affinity score based on timestamped sparse and dense location-based properties is computed to find the most likely matching identities using a maximum weighted matching scheme (Sandhi et al., 2019). Regarding semi-supervised models, a multi-objective framework has been built for modeling heterogeneous behaviors and structural consistency maximization (Zhou et al., 2017).

Table 1 compares our method to those that are most relevant to our problem setting (i.e., identity linkage _within_ the same platform). Most of such works use "classic" (traditional) machine learning classifiers, such as SVMs (Fischer et al., 2019; Wikipedia, 2019; Wikipedia, 2019), Naive Bayes (Fischer et al., 2019), and Random Forest (Fischer et al., 2019; Wikipedia, 2019). Moreover, matching approaches based on similarity measures (e.g., cosine similarity or Euclidean distance) (Fischer et al., 2019), as well as threshold-based approaches, have also been employed (Zhou et al., 2017). Under the features category, three main types of features are listed, i.e., activity-, linguistic-, and network-based.
Depending on the considered platform, different activity-based features are used, such as number of posts and replies, down- and up-votes, number of total revisions, etc. Moreover, users' activity is often examined in relation to the temporal dimension, by considering for instance the mean time between two consecutive posts or the posting activity in relation to different timeframes (such as hours, period of day, and month). The linguistic-based features are highly related to a user's behavioral and writing style, e.g., average word length, average number of characters per word and/or sentence, upper-cased letters, and part-of-speech tags (such as verbs, nouns, and adverbs). Finally, the network-based features so far have been related to a reply-based network (Fischer et al., 2019), examining users' tendency to cluster with others (based on clustering coefficient) and quantifying the extent to which users reciprocate the reply communication they receive from other users (reciprocity). Overall, apart from English, Irish (Fischer et al., 2019; Fischer et al., 2019) and German (Zhou et al., 2017) textual sources have been studied.

**Contributions.** Compared to existing works, we use a wide range of linguistic features (driven by well-established approaches used in similar tasks, e.g., author profiling and identification), while to our knowledge we are the first to employ dependency and tree features in addition to part-of-speech (as syntactic features) in this context. Moreover, we advance the state of the art by considering various social interaction features, which contribute significantly to successfully detecting accounts likely to belong to the same person within a social network. Specifically, we employ a "conversation-based network", which considers mentions, replies, and retweets, to first construct the network and then estimate various network features.
To the best of our knowledge, we are the first to employ the conversation-based network and all these features in this context. To be in alignment with the literature, we evaluate various traditional machine learning methods, i.e., probabilistic, tree-based, and ensemble classifiers. In addition, we study the application of deep learning on the user identity linkage task. The designed neural network architecture digests both textual information and various numerical metadata (i.e., activity, linguistic, and network features). Finally, since the propagation of objectionable material is not limited to English, we conduct comprehensive experiments in two case studies related to abusive and terrorism phenomena, associated with English and Arabic textual sources, respectively.

[Table 1. Comparison of our method with past works. A: Activity, L: Linguistic (CH: character, W: word, S: sentence, D: dictionary, SY: syntactic), N: Network (DI: distribution, SE: segmentation, CO: connection); ML methods used: classic classifiers vs. neural nets.]

## 3. Discovery of Account Linkage

This section details the proposed framework for detecting the possible linkage of user accounts in social media based on models of user behavior. To this end, a wide range of user characteristics are considered for representing users' online presence, and, based on these extracted features, machine learning and deep learning-based classifiers are employed for distinguishing between _linked accounts_ (i.e., accounts belonging to the same person) and non-linked accounts.

### Individual User Account Features

Various attributes can be exploited in social media to model the behavior of _each individual user_, namely:

1. _Profile Features_ (P), extracted from a user's profile, such as demographic information, biography, avatar (i.e., image provided by the user to visually present themselves), etc.
2. _Activity Features_ (A), related to a user's posting behavior, such as number of posts, replies, mentions, etc.
3. _Linguistic Features_ (L), extracted from users' posted content, that may be used to model users with respect to, e.g., their writing style or topics of interest.
4. _Network Features_ (N), extracted from the social network interactions between users.

Below, we detail the set of features considered per _individual user account_ for each of the aforementioned categories.

**Profile Features.** Features in this category include the age of the account (i.e., number of days since its creation), whether the account is verified or not (i.e., acknowledged by Twitter as an account linked to a user of "public interest"), and whether or not the user has provided information about their location.

**Activity Features.** These features provide an overview of a user's online presence with respect to the considered social network and include the number of: posts, lists subscribed to, shares, favorited tweets, mentions, and hashtags, as well as the posts' inter-arrival time.
For instance, mentions can be used to directly interact with another user (and possibly perform direct attacks in an abusive context), while the use of hashtags (particularly of popular ones) is a way to increase a post's visibility.

**Linguistic Features.** This set of features analyzes the writing style of the author of a tweet. Based on the posted content, surface-oriented and deeper stylistic features are extracted. In particular, five subcategories of features are considered (Sutskever et al., 2017), as described next.

1. _Character-based features_: ratio of the number of each of the following characters to the total number of characters: upper-cased, periods, commas, parentheses, exclamations, colons, number digits, semicolons, hyphens, and quotation marks.
2. _Word-based features_: mean number of characters per word, vocabulary richness (i.e., different words being used), acronyms, stopwords, first person pronouns, usage of words composed of two or three characters, standard deviation (STD) of word length, and the difference between the longest and shortest words.
3. _Sentence-based features_: mean number and standard deviation of words per sentence, and difference between the maximum and minimum number of words per sentence in a text.
4. _Dictionary-based features_: the ratio of each of the following types of tokens to the total number of words in a text: discourse markers, interjections, abbreviations, curse words, and polar (positive/negative) words (Sutskever et al., 2017).
5.
_Syntactic features_: three types of syntactic features are taken into account: (i) Part-of-Speech (POS) features: relative frequency of each POS tag in a text; (ii) Dependency features: occurrence of syntactic dependency relations in the dependency trees of the text;2 to this end, we extract the frequency of each individual dependency relation per sentence, the usage ratio of the passive voice, and the number of coordinate/subordinate clauses per sentence; and (iii) Tree features: measures of the tree width, the tree depth, and the ramification factor, where _tree depth_ is defined as the maximum number of nodes between the root and a leaf node, _tree width_ is the maximum number of siblings at any level of the tree, and the _ramification factor_ is the mean number of children per level; in other words, the tree features characterize the complexity of the inner structure of the sentences (simple clauses, as well as subordinate and coordinate clauses). To extract syntactic features, the parser presented in (Sutskever et al., 2017) has been trained on English and Arabic material annotated with Universal Dependencies.

Footnote 2: Syntactic dependency trees are unordered rooted trees that represent the syntactic structure of a sentence according to a specific grammar. Their nodes correspond to the words of the sentence and are connected via binary asymmetrical dependencies.

**Network Features.** This feature category aims to measure the popularity of a user based on different criteria, such as the number of followers (_in-degree centrality_), friends (_out-degree centrality_), and their ratio; since Twitter allows users to follow anyone without their approval, this ratio can quantify a user's popularity. Overall, these measures can quantify a user's opportunity to have a positive or negative impact on their ego-network in a direct way.
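As an illustration, the three tree features above can be computed from a dependency tree encoded as a head-index array. This is a sketch under our own assumptions: the `heads` representation (with `-1` marking the root) and the exact averaging convention for the ramification factor are not specified in the paper.

```python
from collections import defaultdict

def tree_features(heads):
    """Depth, width, and ramification factor of a dependency tree.

    heads[i] is the index of token i's head, or -1 for the root.
    Depth: max number of nodes from root to a leaf; width: max number of
    nodes at any single level (approximating "max siblings per level");
    ramification: mean number of children per level below the root.
    """
    children = defaultdict(list)
    root = None
    for i, h in enumerate(heads):
        if h == -1:
            root = i
        else:
            children[h].append(i)

    levels = defaultdict(int)  # level -> number of nodes at that level
    depth = 0
    stack = [(root, 1)]
    while stack:
        node, d = stack.pop()
        levels[d] += 1
        depth = max(depth, d)
        for c in children[node]:
            stack.append((c, d + 1))

    width = max(levels.values())
    child_counts = [levels[d] for d in levels if d > 1]
    ramification = sum(child_counts) / max(len(child_counts), 1)
    return depth, width, ramification
```

For a four-token sentence with `heads = [-1, 0, 0, 1]` (root with two children, one of which has a child), this yields depth 3, width 2, and ramification factor 1.5.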
To dig deeper into users' relations, we construct a "conversation-based network" based on the mentions, replies, and retweets between each pair of users, and extract (using Gephi (Gephi, 2017)) six network features grouped as follows: (i) Distribution metrics: hub, authority, Eigenvector, and PageRank centralities, which measure users' influence and connectivity in their immediate and extended neighborhoods; (ii) Connection metric: number of triangles a node belongs to; and (iii) Segmentation metric: clustering coefficient, which shows a user's tendency to cluster with others. To the best of our knowledge, we are the first to employ the conversation-based network and all these features in this context.

### User Modeling

The aforementioned feature categories (or _sets_) \(S=\{P,A,L,N\}\) can be exploited to model the behavior of each _individual user account_ in a social media platform. We thus define the feature vector for each user \(u_{i}\) and feature category \(S\) as \(V_{Su_{i}}=<f_{Si_{1}},f_{Si_{2}},\ldots,f_{Si_{n}}>\), where \(f_{Si_{j}}\) is the \(j\)th feature of category \(S\) for user \(u_{i}\), and \(n\) equals the total number of included features for this category. For instance, for the network features category, a feature vector can be created for every \(u_{i}\) as follows: \(V_{Nu_{i}}=<authority_{i},triangles_{i},eigenvector_{i},pagerank_{i},coef_{i},hub_{i}>\). A feature vector \(V_{Allu_{i}}\) can also be created by considering all features from all four sets.

To detect whether two accounts are likely to belong to the same person, we also need to jointly represent each _user pair_ so as to determine their potential relationship and use that as input to the classifier.
To this end, we jointly represent the behavior of each pair of users \(u_{i}\) and \(u_{j}\), \(\forall i,j\), where \(i\neq j\), as either (i) a feature vector of the absolute differences between the individual feature vectors of \(u_{i}\) and \(u_{j}\), or (ii) as a vector of four similarity scores, each estimated based on the similarity of the per-category \(\{P,A,L,N\}\) feature vector. To estimate these similarities, the cosine similarity, the Euclidean, and the Manhattan distance are used; for the latter two, normalization is applied, such that values \(\in[0,1]\). Apart from the above approaches to user pair modeling that take into account the extracted features, we can also measure the direct similarity of the evidence associated with each user, such as their posted content, social network, and profile. In particular, we focus on the similarity between the posts of two users, since users tend to express themselves in standard ways by frequently using the same words or expressions; moreover, due to daily social interactions, even different persons may result in using the same words in essentially the same way (Bahdan et al., 2017). We thus consider two additional features corresponding to the similarities between the posts of two users, measured in terms of their (i) _edit distance_, i.e., number of changes needed to convert a text to another, and (ii) _semantic similarity_. To this end, a preprocessing step is applied to remove all numbers, mentions, and URLs from the posts. _Edit distance_ is estimated with the Levenshtein distance (Levenshtein, 1979), which counts the minimum number of single-character edits needed to convert one string into another; for each pair of users, this is averaged out over all pairs of their posts. _Semantic similarity_ is estimated based on a vector space model approach, whereby each word in a post is represented as a word embeddings vector. 
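The averaged edit-distance feature can be sketched in plain Python as follows (the function names are ours; posts are assumed to be preprocessed strings):

```python
def levenshtein(a, b):
    """Classic dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def avg_edit_distance(posts_a, posts_b):
    """Average Levenshtein distance over all pairs of posts of two users."""
    dists = [levenshtein(p, q) for p in posts_a for q in posts_b]
    return sum(dists) / len(dists)
```

For example, `levenshtein("kitten", "sitting")` returns 3.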
Word embeddings allow modeling both semantic and syntactic relations of words, thus capturing more refined attributes and contextual cues inherent in language. Specifically, we use Word2Vec (Levenshtein, 1979) to: (1) first establish a vocabulary based on the words appearing in the corpus more times than a user-defined threshold, (2) apply a learning model so as to learn the words' vector representations in a \(D\)-dimensional space (50-300 dimensions can model hundreds of millions of words with high accuracy (Levenshtein, 1979)), and (3) output a vector representation for each word encountered in the input texts. Given the vector representations of all words in a post, the overall vector representation of the post is derived by averaging the vectors of all its words. Finally, the set of all posts by a user, referred to as document \(d\), is represented as a vector which contains the semantic center of all posts' vectors \(p\): \(Sem_{center}(d)=\sum_{p\in d}vec(p)/|d|\), where \(|d|\) is the number of the user's posts.

### Classification

To be in alignment with the state of the art, we proceed with both traditional machine learning methods and deep neural networks (NNs). Regarding the former, probabilistic (e.g., Naive Bayes, BayesNet), tree-based (e.g., J48, LADTree, LMT), and ensemble classifiers are considered. As an ensemble classifier, we use Random Forest, which constructs a forest of decision trees with random subsets of features during classification; an important advantage is its ability to reduce overfitting by averaging several trees during model construction. Moreover, Random Forests are quite efficient in terms of the time needed to train a model. To build the Random Forest classifier, we tune the number of generated trees to 100, while no limit is set on the maximum depth.
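For illustration, an equivalent Random Forest configuration can be sketched in scikit-learn (the paper uses WEKA; the toy pair-feature matrix and labels below are fabricated for the example):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 4))                   # toy per-pair similarity vectors
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # hypothetical linked/non-linked labels

# 100 trees, no limit on maximum depth (max_depth=None), as in the setup above
clf = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
clf.fit(X, y)
```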
Even though traditional machine learning approaches have been extensively used in similar tasks, they face an important drawback: they cannot successfully capture semantic and cultural nuances of the written language. For instance, taking into account the negation of words or sarcastic expressions with traditional machine learning approaches is quite challenging, as the structure of the sentence has to be effectively represented in the set of features. To overcome such difficulties, deep learning algorithms that build upon neural networks have been proposed; we therefore also employ them here.

Specifically, in the neural network setup, we build a model to combine raw text with metadata (i.e., profile, activity, linguistic, network, and user pair features), similar to (Bahdan et al., 2017). The combination of raw text with additional behavioral facts (such as users' popularity, social network, and account settings) allows us to capture different facets of users' behavior, and thus possibly detect more efficiently accounts likely to belong to the same user. We construct a single network architecture which combines both text classification and metadata networks (see below) before their inputs are translated into classification probabilities. Figure 1 depicts the deep neural network setup used in this work.

**Text Classification Network.** We employ a Recurrent Neural Network (RNN) (Levenshtein, 1979), which processes sequential data using recurrent connections between neural activations at consecutive time steps. RNNs were selected over other NN models since they have proven successful in understanding word sequences and interpreting their meaning. Specifically, we build upon a Gated Recurrent Unit (GRU), since it performs well on short texts (such as tweets) (Bahdan et al., 2017).
We employ a GRU with 100 units (neurons); we experimented with different sizes and this gave the best results for both datasets. To avoid over-fitting, we use a recurrent dropout with \(p=0.5\). Before moving through the RNN layers, the first layer performs a word embedding lookup, where all words are represented as high-dimensional vectors. For English, we use pre-trained word vectors from Twitter (Towards et al., 2017); for Arabic, we use AraVec (Vaswani et al., 2017), a pre-trained distributed word representation. Tweets' words are mapped onto 200- and 300-dimensional vectors, for English and Arabic, respectively.

**Metadata Network.** After feeding the data to the metadata neural network, a batch normalization layer is used to enable faster learning and higher overall accuracy. To learn the metadata, we use a simple dense layer with 100 units, i.e., the same dimensionality as the text classification network. Finally, we use _tanh_ as activation function, since it performs well with standardized numerical data.

**Combined Network.** We combine the text classification and metadata networks using a concatenation layer, followed by a fully connected output layer (i.e., dense layer) with one neuron per class we want to predict and _softmax_ as activation function.

[Figure 1. Deep neural network setup.]

## 4. Experiments

This section presents our evaluation experiments on abusive and terrorism-related datasets collected from Twitter.

### Datasets

The first step is to collect the necessary content from Twitter, i.e., one of the most popular social networks with \(\sim\)\(330M\) monthly active users (Nakamura et al., 2017), which also gives access to an important number of sample tweets via its open API. For our study, two datasets obtained from Twitter are used; we focus on these datasets since they are likely to involve users with multiple accounts (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017).
It should be noted that the collected data correspond to publicly available data, we did not attempt to de-anonymize users, and we fully comply with the terms of use of the APIs we use.

**Abusive Dataset.** The dataset provided by (Bordes et al., 2016) was used for studying abusive activities on Twitter. The authors collected a set of tweets between June and August 2016, using snowball sampling around the GamerGate controversy (Kumar et al., 2017), which is known to have produced many instances of cyber-bullying and cyber-aggression. GamerGate originated from alleged improprieties in video game journalism, which quickly grew into a larger campaign centered around sexism and social justice. The GamerGate controversy, and more specifically the hashtag #GamerGate, can serve as a relatively unambiguous reference to posts that are likely to involve abusive/aggressive behavior from a fairly mature and hateful online community, since individuals on both sides of the controversy were using this hashtag. Moreover, extreme cases of bullying and aggressive behavior (e.g., direct threats of rape and murder) have been associated with it. Overall, the dataset consists of \(600k\) tweets in English and \(312k\) users.

**Terrorism Dataset.** This dataset was created using Twitter's Search API, which returns tweets matching specified keywords. Specifically, we collected data from February 2017 to June 2018 using a set of terrorism-related Arabic keywords provided by Law Enforcement and domain experts. The dataset consists of \(65k\) tweets and \(35k\) users. Based on a language detection library (Levy et al., 2017), \(99\%\) of the posts in our dataset are in Arabic.

### Ground Truth

Due to the absence of ground truth that indicates which user accounts belong to the same person, the ground truth for each dataset is created as follows.
First, we filter out all users with fewer than 10 posts (thus removing all users associated with insufficient evidence), and then we randomly select a subset of user accounts (e.g., \(X\)=200 users) by applying stratified random sampling. To this end, the entire population is first divided into homogeneous groups based on the number of posted tweets; this number is varied between 10 and 60 with step 5, while the final group contains all users with more than 60 posts. Then a random sample is selected from each group, with the sample size being proportional to the group's size compared to the entire population.

As in (Kumar et al., 2017; Kumar et al., 2017), where no annotated datasets were available, we build the ground truth by splitting the posts of each selected user into two subsets, assigning to each subset a different user id (e.g., user \(u_{i}\) becomes \(u_{ia}\) and \(u_{ib}\), and the tweets of \(u_{i}\) are split between \(u_{ia}\) and \(u_{ib}\)). Thus, we come up with a dataset with double the number of user accounts (e.g., 400 users for \(X\)=200) and a set of known _linked accounts_ (i.e., accounts belonging to the same person). Two approaches are considered for splitting the tweets of the original accounts (e.g., \(u_{i}\)) into linked users (e.g., \(u_{ia}\) and \(u_{ib}\)): (i) random _assignment_ of an equal number of posts to each, and (ii) _interleaving_, where posts are initially sorted based on their timestamps and then alternately assigned to each of the linked accounts.

Hence, we have two sets of users available: \(A=\{u_{1a},u_{2a},\ldots,u_{Xa}\}\) and \(B=\{u_{1b},u_{2b},\ldots,u_{Xb}\}\). Comparing each user \(u_{ia}\) from set \(A\) with each user \(u_{jb}\) in set \(B\), \(\forall i,j\), where \(i\neq j\), we obtain overall \(Y=X\times(X-1)\) user pairs (e.g., for \(X=200\), \(Y=39,800\)), with each user pair in \(Y\) corresponding to a pair of non-linked accounts.
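The two splitting strategies can be sketched as follows (a toy illustration; representing `posts` as (timestamp, text) tuples is our assumption):

```python
import random

def split_user(posts, mode="interleaving", seed=0):
    """Split one user's posts into two pseudo-accounts (u_ia, u_ib).

    'random': shuffle and assign an equal number of posts to each half;
    'interleaving': sort by timestamp and alternate between the two halves.
    """
    if mode == "random":
        shuffled = posts[:]
        random.Random(seed).shuffle(shuffled)
        half = len(shuffled) // 2
        return shuffled[:half], shuffled[half:]
    ordered = sorted(posts)              # sort by timestamp
    return ordered[0::2], ordered[1::2]  # alternate assignment
```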
For each dataset, we opt for maintaining a proportion of \(10\%\) of linked and \(90\%\) of non-linked accounts, given that previous works, e.g., (Kumar et al., 2017), have indicated that about \(10\%\) of users within a dataset tend to exhibit bad behavior. Therefore, for a given \(X\), we randomly sample from \(Y\) so as to reflect the above observation; e.g., for \(X\)=200, the final dataset contains 200 linked accounts (\(u_{ia},u_{ib}\)) and \(Z=9\times 200=1,800\) non-linked accounts (\(u_{ia},u_{jb}\)), \(i\neq j\). We also (i) vary the number of randomly selected users \(X\) from 200 to 500 in steps of 100, and (ii) create unbalanced datasets by increasing the non-linked accounts; for this, we keep the same number of linked accounts and incrementally increase the number of non-linked accounts with step \(9\times X\). E.g., for \(X\)=200, \(Z\) ranges from \(1,800\) to \(39,800\) with step \(9\times 200=1,800\). In the last step, we consider all \(39,800\) (rather than the \(39,600\)) non-linked accounts.

### Features Selection

Section 3.1 described various features that could be considered for exploring whether two accounts belong to the same person. Given the ground truth creation process applied in this work, profile features, as well as the number of followers, friends, and their ratio of the network features, are excluded (as they would be the same for both linked accounts), while for activity features (for the same reason) we can only consider the number of mentions and hashtags, and the posts' inter-arrival time. Table 2 summarizes the examined features; in real scenarios, all features from the four categories could be considered and may be beneficial for the classification.

| **Category** | | **Features** |
|---|---|---|
| Activity | | avg. # mentions, avg. # hashtags, posts' inter-arrival time |
| Linguistic | _Character-based_ | ratios of upper-cased characters, periods, commas, parentheses, exclamations, colons, number digits, semicolons, hyphens, and quotation marks w.r.t. characters in a text |
| | _Word-based_ | mean # characters per word, vocabulary richness, acronyms, stopwords, first person pronouns, usage of words composed by 2 or 3 characters, STD of word length, difference between the longest and shortest words |
| | _Sentence-based_ | mean and STD of words per sentence, difference between the max. and min. number of words per sentence |
| | _Dictionary-based_ | ratios of discourse markers, interjections, abbreviations, curse words, and polar words w.r.t. words in a text |
| | _Syntactic_ | POS tag frequencies, dependency relations, passive voice ratio, coordinate/subordinate clauses per sentence, tree width, tree depth, ramification factor |
| Network | | hub, authority, Eigenvector and PageRank centralities, number of triangles, clustering coefficient |

Table 2. Summary of the examined features.

As expected, some of the features in Table 2 could be more distinguishing and thus assist the classification more. To this end, and towards feature selection, we examine the significance of differences between the distributions of linked and non-linked user accounts based on the two-sample Kolmogorov-Smirnov (KS) test. This test is used since it enables assessing whether two samples come from the same distribution based on their empirical cumulative distribution function (ECDF). We consider as statistically significant all cases with \(p{<}0.01\). Due to space limits, we only present the ECDF plots of some features; to improve readability, some plots are trimmed.

**Activity Features.** Figures 2(a)-2(b) plot the ECDF for the number of mentions and hashtags for the linked and non-linked users (\(p{<}0.01\)).
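The feature-screening step can be reproduced with SciPy's two-sample KS test; the sketch below uses synthetic feature values, while the \(p{<}0.01\) threshold follows the text:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
linked = rng.normal(0.0, 1.0, 500)      # hypothetical feature values, linked pairs
non_linked = rng.normal(0.5, 1.0, 500)  # hypothetical values, non-linked pairs

stat, p_value = ks_2samp(linked, non_linked)
keep_feature = p_value < 0.01           # retain only significantly different features
```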
We observe that non-linked users tend to exhibit higher differences in the number of mentions and hashtags than linked user accounts. As for the inter-arrival time between the posted tweets (not shown in the plots), the difference is also statistically significant (\(D{=}0.15849\)).

**Network Features.** Table 2 presents the estimated network-based features. To calculate such features, as already mentioned in Section 3.1, we consider the _conversation-based network_ constructed based on the mentions, replies, and retweets between each pair of users. For the hub and authority scores, the difference in distributions is statistically significant (\(D{=}0.46745\)), with mean (STD) values for the hub score equal to 0.0248 (0.0205) and 0.0061 (0.0087) for the linked and non-linked accounts, respectively, and for the authority score equal to 0.0238 (0.0196) and 0.0058 (0.0083) for the linked and non-linked accounts, respectively. Concerning the PageRank and eigenvector centrality measures, the difference is statistically significant (\(D{=}0.49974\) and \(D{=}0.43939\), respectively), which is not the case for the clustering coefficient and the number of triangles, where we cannot reject the null hypothesis that the distributions are the same.

**Linguistic Features.** To identify the linkage of two or more accounts, we consider a set of various linguistic attributes extracted from the available textual material. Driven by the author profiling and identification tasks, we assume that the writing style of an author is unique enough to be distinguishable from the style of other authors [35]. In the literature on author profiling and identification, a wide range of features is utilized; for instance, Burger et al. [3] use more than \(15M\) attributes, while Mukherjee and Liu use more than \(1K\) [28]. For our purposes, a more limited number of linguistic features is exploited, which has been shown to perform well in similar tasks [35].
This set of linguistic features is generic enough to capture the complexity and style of the discourse across different language families. Indicatively, Figures 1(c)-1(f) depict the ECDFs for the frequency of verbs, nouns, mean number of characters per word, and upper-cased characters features. Comparing the distributions among the linked and non-linked accounts, we observe that the differences are statistically significant with \(D{=}0.25181\), \(D{=}0.29595\), \(D{=}0.30405\), and \(D{=}0.29209\), respectively. Overall, in an effort to detect the linkage among users with the maximum possible efficiency we consider all the linguistic features presented in Table 2 (the difference in their distributions is statistically significant).

**Note.** The analysis presented thus far was conducted on the English (abusive) dataset. A similar analysis was conducted for the Arabic (terrorism-related) dataset; we omit the results due to space limits.

**Features Evaluation.** Table 3 shows the top 12 features for both the abusive and terrorism datasets based on the information gain approach, which ranks features by decreasing information gain. We observe that in both cases the network features, which describe the connectivity of users in the network, are among the most contributing ones. Especially for the abusive dataset such features seem to occupy the first places. Regarding the activity features, the _average number of mentions_ is among the top contributing ones in both cases, while especially for the terrorism-related dataset both the _average number of hashtags_ and _mentions_ seem to have a better discriminative ability compared to the rest. Focusing on the abusive dataset and the linguistic features, we observe that four out of seven are syntactic-based, which indicates the importance of such features in distinguishing between linked and non-linked accounts. Specifically, the most contributing syntactic-based features are the following: _adverbs_ (part-of-speech), _adverbial modifier_ (adverb or adverbial phrase that serves to modify a predicate or a modifier word), _passive nominal subject_ (a noun phrase which is the syntactic subject of a passive clause), and _coordination_ (the relation between an element of a conjunct and the coordinating conjunction word of the conjunct). With respect to the terrorism dataset and the linguistic features, we observe that the character-, word-, and syntactic-based ones tend to have an important discriminating power, with the _average number of punctuations_ and the _difference between the longest and shortest words_ features being among the most contributing ones.

\begin{table} \begin{tabular}{l|l} \hline **Dataset** & **Feature (preserving order)** \\ \hline \hline \multirow{4}{*}{\begin{tabular}{l} Abusive \\ (English) \\ \end{tabular} } & eigenvector (30\%), authority (10.29\%), hub (10.26\%) \\ & pagerank (0.55\%), periods (6.83\%), stopwords (5.28\%) \\ & diff. between longest - shortest words (4.98\%) \\ & adverbial mod. (4.98\%), passive nominal subject (4.68\%) \\ & mentions (4.51\%), coordination (4.43\%), adverbs (POS) (4.21\%) \\ \hline \multirow{4}{*}{\begin{tabular}{l} Terrorism \\ (Arabic) \\ \end{tabular} } & eigenvector (26.23\%), hashtags (8.45\%), punctuation (7.98\%) \\ & mentions (7.78\%), diff. between longest - shortest words (7.18\%) \\ & periods (0.73\%), adoption (6.93\%), mean max depth (6.46\%) \\ & STD of word length (5.62\%), pagerank (5.51\%) \\ & hub (5.42\%), authority (5.42\%) \\ \hline \end{tabular} \end{table} Table 3: Features evaluation.

Figure 2: ECDF of (a) Mentions, (b) Hashtags, (c) Verbs, (d) Nouns, (e) Mean # characters per word, and (f) Upper-cased characters.
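The information-gain ranking used for Table 3 can be sketched in a few lines: the gain of a feature is the entropy of the class labels minus the expected entropy after splitting on the feature's values. The labels and feature values below are illustrative, not from the paper:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Entropy of the labels minus the weighted entropy after splitting
    on the (discretized) feature values."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [lab for f, lab in zip(feature_values, labels) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Illustrative: a perfectly separating feature vs. an uninformative one.
labels = ['linked', 'linked', 'non', 'non']
ig_good = information_gain(['hi', 'hi', 'lo', 'lo'], labels)  # -> 1.0
ig_bad = information_gain(['hi', 'lo', 'hi', 'lo'], labels)   # -> 0.0
```

Ranking all features by this quantity in decreasing order yields a table of the form shown above.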
Overall, for the English (abusive) dataset, most of the features presented in Table 2 are useful (statistically significant) in discriminating between the two classes (i.e., linked and non-linked user accounts). However, some are not useful and are excluded to avoid adding noise. Specifically, two features are excluded: the number of triangles and the clustering coefficient. For the Arabic dataset, all features are useful and thus are used during the modeling analysis.

### Experimental Methodology

The features from the three categories \(\{A,L,N\}\) that are selected as described above are employed for user modeling, while user pairs are modeled based both on the absolute difference (_abs_) and on the similarity of feature vectors (_sim_); similarity is estimated based on Cosine similarity, and Euclidean and Manhattan distances. Therefore, the following approaches are evaluated: \(Activity_{abs}\), \(Linguistic_{abs}\), \(Network_{abs}\), \(All_{abs}\), and \(All_{sim}\). Moreover, the concatenation of \(All_{abs}\) and \(All_{sim}\) is also considered. In addition, the two features derived by modeling each pair of users using the edit distance and semantic similarities (see Section 3.2) are considered in conjunction with the above, resulting in five additional approaches (see Table 4). Overall, a total of 11 different methods are evaluated. We examined various machine learning algorithms, either probabilistic, tree-based, or ensemble classifiers, as well as deep neural networks. For each family of classifiers, we only present those that achieve the best results (due to space limits). Specifically, BayesNet, J48, and Random Forest (RF) are used as probabilistic, tree-based, and ensemble classifiers, respectively, along with the neural network setup. We use WEKA for the traditional classifiers, and Keras with Theano (Keras, 2016) for the deep learning models.
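The two pair-modeling schemes described above, the absolute difference of two users' feature vectors (_abs_) and their similarity scores (_sim_: cosine, Euclidean, Manhattan), can be sketched as follows; the feature vectors are illustrative, not from the paper:

```python
from math import sqrt

def pair_abs(u, v):
    # Element-wise absolute difference of the two users' feature vectors.
    return [abs(a - b) for a, b in zip(u, v)]

def pair_sim(u, v):
    # Cosine similarity, Euclidean and Manhattan distances as pair features.
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    cosine = dot / (nu * nv) if nu and nv else 0.0
    euclid = sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    manhattan = sum(abs(a - b) for a, b in zip(u, v))
    return [cosine, euclid, manhattan]

# Illustrative feature vectors for two accounts (e.g. mentions, hashtags, hub score).
u, v = [3.0, 1.0, 0.02], [2.0, 1.0, 0.01]
features = pair_abs(u, v) + pair_sim(u, v)  # one vector per user pair
```

Each user pair thus becomes one training instance, labeled linked or non-linked, that is fed to the classifiers.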
In all cases, we use repeated (5 times) 10-fold cross validation, which is less variable than the ordinary 10-fold cross validation (Keras, 2016).

**Baseline.** Among the 11 approaches, the first three (i.e., \(Activity_{abs}\), \(Linguistic_{abs}\), \(Network_{abs}\)) are our baselines. Our aim is to not only determine the most effective classification approach, but to also assess whether the consideration of further information in the classification model (i.e., the features combined under different schemes) improves the overall performance, regardless of the choice of the classification algorithm. As shown in Table 1, a wide range of activity, linguistic, and network features have been exploited in previous related research. In an effort to remain aligned with and comparable to the literature to the maximum extent possible, we consider a substantial number of these features. Specifically, we focus on those that are more applicable to our problem setting, since due to the inherent differences in the structure of the various social media platforms, different features are applicable to each case. At the same time, we further expand these features to better describe online user behavior. Specifically, as for the linguistic features, we consider both dependency and tree features in addition to other commonly used ones (e.g., part-of-speech). Moreover, a wider range of network features is extracted by building on top of the conversation-based network constructed using mentions, replies, and retweets; previous work has used only a reply-based network and considered only two network features.
Finally, to further improve the detection process, we also experiment with different combinations of features and user modeling approaches (i.e., absolute difference and similarity of feature vectors), while at the same time we further enhance the baseline by employing similarity-based features (i.e., edit distance and semantic similarity), which can encapsulate the authors' writing style in greater depth.

**Evaluation metrics.** To align with similar works, standard evaluation metrics are reported: precision (prec), recall (rec), weighted area under the ROC curve (AUC), and accuracy (Acc). In each table and for each evaluation metric (i.e., accuracy, AUC, precision, and recall), we highlight the top value in terms of performance.

## 5. Results

We first evaluate user identity linkage detection on the abusive dataset and then on the terrorism dataset. The results are first presented on datasets built for \(X\)=200 and \(Z\)=1,800, and then for varying \(X\) and \(Z\) values. Moreover, the presented results are based on randomly assigning tweets between linked accounts when building the ground truth; we achieve similar performance with interleaving (we omit these results due to space limits).

### Abusive Dataset (English tweets)

Table 4 shows that BayesNet achieves the best results when using the absolute difference for the user modeling, with AUC between **74.20%** and **98.22%** and accuracy between **91.26%** and **97.64%**. We achieve the best precision and recall with the network features, either on their own (i.e., **97.58%** and **97.60%**) or combined with the two texts' similarity measures, i.e., edit distance (_edits_) and semantic similarity (_sem_), (i.e., **97.58%** and **97.64%**). With regard to feature categories, the activity ones contribute the least, with **88.94%** precision, **91.28%** recall, and a moderate AUC of **74.20%**.
Similar to BayesNet, J48 achieves the best AUC (up to **95.30%**) based on the absolute difference between features, while again we achieve the best performance using the network features (i.e., **99.08%** precision and **99.10%** recall). Finally, texts' similarities appear to have an important role, since in most cases they tend to improve the classification results. \begin{table} \begin{tabular}{l c c c c|c c c c|c c c c|c c c c} & \multicolumn{3}{c|}{**BayesNet**} & \multicolumn{3}{c|}{**J48**} & \multicolumn{3}{c|}{**Random Forest**} & \multicolumn{3}{c}{**Neural Network**} \\ \cline{2-13} & **Acc** & **AUC** & **Prec** & **Rec** & **Acc** & **AUC** & **Prec** & **Rec** & **Acc** & **AUC** & **AUC** & **Prec** & **Rec** & **Acc** & **AUC** & **Prec** & **Rec** & **Acc** & **AUC** & **Prec** & **Rec** \\ \hline Baseline \(Activity_{abs}\) & 91.26 & 74.20 & 88.94 & 91.28 & 91.24 & 59.34 & 88.70 & 91.14 & 91.02 & 74.68 & 88.56 & 91.02 & 90.90 & 65.00 & 83.00 & 91.00 \\ Baseline \(Inquistik_{abs}\) & 93.25 & 95.60 & 93.96 & 93.24 & 93.19 & 79.88 & 92.86 & 93.18 & 94.02 & 98.44 & 94.40 & 94.08 & 94.77 & 91.65 & 94.00 & 95.00 \\ Baseline \(Network_{abs}\) & 97.60 & 96.78 & **97.58** & 97.60 & **99.08** & 94.92 & **99.08** & **99.10** & 97.80 & 98.48 & 97.80 & 90.86 & 81.41 & 83.00 & 91.00 \\ \hline \(All_{abs}\) & 95.20 & **98.22** & 95.80 & 95.22 & 97.17 & 89.40 & 97.02 & 97.18 & 95.11 & 99.30 & 95.38 & 95.10 & 95.90 & 90.65 & **96.00** & **96.00** \\ \hline \(Activity_{abs}\) + \(edits+sem\) & 97.02 & 97.52 & 96.96 & 97.02 & 98.77 & 95.00 & 98.78 & 97.43 & 98.76 & 97.44 & 97.44 & 99.00 & 86.77 & 83.00 & 91.00 \\ \(Inquistik_{abs}\) + \(edits+sem\) & 92.97 & 95.68 & 93.96 & 92.98 & 93.86 & 80.98 & 93.64 & 93.86 & 94.12 & 98.72 & 94.48 & 94.14 & 94.22 & 90.92 & 94.00 & 94.00 \\ \(Network_{abs}\) + \(edits+sem\) & **97.64** & **97.44** & **97.58** & **97.64** & 98.75 & **95.30** & 98.74 & 98.74 & **97.81** & **95.00** & **97.82** & **97.82** & **90.68** & 80.90 & 
83.00 & 91.00 \\ \hline \(All_{abs}\) + \(edits+sem\) & 95.06 & 98.22 & 95.80 & 95.06 & 96.67 & 87.50 & 95.44 & 96.68 & 95.13 & 99.00 & 95.38 & 95.12 & **95.95** & 95.91 & **96.00** & **96.00** \\ \hline \(All_{sim}\) + \(All_{abs}\) & 88.61 & 86.38 & 90.14 & 88.60 & 93.94 & 68.98 & 93.96 & 93.94 & 94.25 & 90.28 & 94.10 & 94.26 & 91.95 & 80.22 & 93.00 & 92.00 \\ \(All_{sim}\) + \(All_{abs}\) + \(edits+sem\) & 94.27 & 97.86 & 95.20 & 94.26 & 96.40 & 88.04 & 96.26 & 96.40 & 95.06 & 99.24 & 95.28 & 95.06 & 95.45 & **96.13** & 95.00 & 95.00 \\ \(All_{sim}\) + \(All_{abs}\) + \(edits+sem\) & 94.05 & 97.86 & 95.22 & 94.06 & 96.50 & 87.72 & 96.32 & 96.50 & 95.02 & 99.38 & 95.28 & 95.04 & 95.45 & 95.99 & 95.00 & 95.00 \\ \end{tabular} \end{table} Table 4. Classification results of BayesNet, J48, Random Forest, and Neural Network (Abusive Case, \(X\)=200). Figure 3. Varied linked (\(X\)=[200,500], step=100) and non-linked (\(X\)=200) instances. \begin{table} \begin{tabular}{l c c c|c c c c|c c c c|c c c c} & \multicolumn{3}{c|}{**BayesNet**} & \multicolumn{3}{c|}{**J48**} & \multicolumn{3}{c}{**Random Forest**} & \multicolumn{3}{c}{**Neural Network**} \\ \cline{2-13} & **Acc** & **AUC** & **Prec** & **Rec** & **Acc** & **AUC** & **Prec** & **Rec** & **Acc** & **AUC** & **Prec** & **Rec** & **Acc** & **AUC** & **Prec** & **Rec** \\ \hline Baseline \(Activity_{abs}\) & 92.00 & 81.20 & 90.74 & 91.60 & 91.70 & 77.22 & 96.88 & 91.70 & 89.59 & 81.14 & 87.66 & 89.58 & 90.90 & 78.23 & 83.00 & 91.00 \\ Baseline \(Inquistik_{abs}\) & 94.72 & 97.34 & 95.42 & 94.72 & 95.64 & 84.62 & 95.50 & 95.64 & 96.13 & 98.72 & 96.20 & 96.14 & 96.09 & 96.40 & 96.00 & 96.00 \\ Baseline \(Network_{abs}\) & 87.95 & 95.86 & 93.84 & 87.96 & 95.69 & 92.76 & 96.38 & 96.28 & 96.02 & 94.28 & 96.08 & 96.04 & 91.00 & 83.26 & 92.00 & 91.00 \\ \hline \(All_{abs}\) & 96.60 & 99.18 & 97.08 & 96.60 & 96.73 & 90.32 & 96.62 & 96.74 & 97.06 & 99.38 & 97.02 & 97.00 & 96.18 & 97.99 & 96.00 & 96.00 \\ \(Activity_{abs}\) 
+ \(edits+sem\) & 90.09 & 94.82 & 93.00 & 90.10 & 96.19 & 80.18 & 96.08 & 96.20 & 95.77 & 94.80 & 95.52 & 97.83 & 93.22 & 95.32 & 95.30 & 93.00 \\ \(Inquistik_{abs}\) + \(edits+sem\) & 94.73 & 97.44 & 95.60 & 94.74 & 96.85 & 85.50 & 96.74 & 96.56 & 96.63 & 99.00 & 96.26 & 96.62 & 96.68 & 97.24 & **97.00** & **97.00** \\ \(Network_{abs}\) + \(edits+sem\) & 95.22 & 99.22 & 96.52 & 95.22 & **97.71** & **94.16** & **97.68** & **97.70** & **97.56** & 98.84 & **97.48** & **97.56** & 94.54 & 92.12 & 94.00 & 95.00 \\ \hline \(All_{abs}\) + \(edits+sem\) & 96.70 & **99.30** & **97.26** & 96.70 & 97.57 & 93.28 & 97.58 & 97.10 & **99.50** & 97.10 & 97.10 & 96.59 & **98.45** & **97.00** & **97.00** \\ \hline \(All_{abs}\) & 94.92 & 94.10 & 94.94 & 94.92 & 94.66 & 85.60 & 94.38 & 94.86 & 95.66 & 94.58 Contrary to the traditional classifiers, the linguistic features perform better in the NN setup (i.e., **91.65%** AUC, **94%** precision, and **95%** recall) compared to the activity and network ones. Overall, we obtain the best performance in terms of AUC (**96.13%**) when all features are considered, when using both the absolute difference and similarity of features vectors for user modeling. This indicates that the more information as input to the NN, the better the performance. Finally, the Random Forest ensemble classifier achieves the best performance when network features are used in addition to texts' similarities. Specifically, AUC equals to **99.50%** with precision, recall, and accuracy around **97.80%**. Compared to the probabilistic, tree-based classifiers, and deep neural networks, the Random Forest model achieves the best AUC, with precision and recall values among the top; thus we use only this in the following experiments. Thus far, we used the ground truth created with \(X=200\) randomly selected users. Next, we vary \(X\) from 200 to 500 with step 100. 
Figure 2(a), which depicts the performance of the Random Forest model, shows that from 200 to 300 users there is a slight increase in precision, recall, and accuracy, after which the performance is quite stable with more than **99%** AUC in all cases. We also examine how the number of the non-linked instances (unbalanced dataset) affects the results. The selected number of linked accounts equals 200, thus the upper limit of non-linked accounts equals \(39,800\). Figure 2(b) indicates that even with the highest number of non-linked user accounts, AUC remains at quite satisfactory levels (**87.30%**). Precision and recall increase as more data is available, while after a point (\(\sim\)24\(k\) non-linked accounts) they are not significantly affected. This is mainly attributed to the higher precision and recall values for the non-linked accounts. Hence, even with a higher amount of non-linked accounts, the proposed model succeeds in effectively distinguishing between linked and non-linked users.

### Terrorism Dataset (Arabic tweets)

Table 5 shows that when using BayesNet, the linguistic features alone result in better performance compared to the activity and network ones. We achieve the best precision (**97.26%**) and recall (**96.78%**) when we consider all feature categories together using both the absolute difference and the similarity of feature vectors for user modeling. AUC remains above **94%** in all cases, except when only the activity features are considered (**81.20%** AUC). Contrary to the BayesNet results in the abusive dataset, here we see that when the similarity of feature vectors (combined with additional features) is used as a user modeling method, we achieve high precision and recall values (up to **97.26%** and **96.78%**, respectively). Out of the tree-based classifiers, J48 performs best (similar to the abusive case), also following a similar pattern in terms of the most well-performing feature categories and user modeling methods.
Network features appear to contribute more, with the best performance (i.e., **97.68%** precision, **97.70%** recall, **94.16%** AUC) achieved when combined with the texts' similarity measures. Similar to the abusive case, linguistic features contribute more in the NN setup (**96.40%** AUC, **96%** precision and recall) compared to activity and network ones. We obtain the best AUC (**98.45%**) when all feature categories are considered, in addition to the texts' similarities. In almost all cases, AUC, precision, and recall are higher than 90%, highlighting the stability of the used setup. Finally, the best performance for the Random Forest (**99.50%** AUC) is obtained when all features under the absolute difference modeling method are combined with the texts' similarities. Regarding the feature categories, linguistic features result in better performance compared to the rest (**98.72%** AUC), which is also the case when combined with the texts' similarities (**99%** AUC). Overall, Random Forest leads to the best AUC and therefore is used next. Figure 3(a) shows the performance of Random Forest when the number of the selected linked accounts changes. AUC is fairly stable, with its value above **99%** in all cases, which indicates the suitability of the proposed model. Concerning the other evaluation metrics, the increase of the linked accounts results in higher values. Figure 3(b) depicts how the proposed model performs with an unbalanced dataset (as in the abusive case: 200 linked and up to 39,800 non-linked accounts). Overall, AUC fluctuates from **94%** to **99.50%**, which again points out the stability of the proposed model, with precision and recall ranging from **97.1%** to **99%**.

### Classification Takeaways

Overall, our models perform well for both the abusive and terrorism-related datasets.
For instance, the high ROC area3 for the overall classification (99.50% in both cases) indicates that the proposed models can quite successfully discriminate between linked and non-linked accounts. Even though the performance differs slightly in terms of precision and recall values across the classification models, in both studied cases the traditional classifiers performed better. The lower performance of the neural network model can be justified by the limited number of instances used for building the model, since NNs perform better when large amounts of training data are available. Moreover, in most cases, a better performance is achieved when baseline features are enhanced with additional information. Footnote 3: The AUC of the ROC curve is typically used to evaluate the performance (sensitivity) of a model. Focusing on the specific feature categories, we observe that the network features contribute significantly to the classification (especially when traditional classifiers are used); this highlights the importance of considering the connectivity of a user in a network to more efficiently detect the linkage between users. A quite important observation is that the proposed models perform well in different languages, and the performance, in some cases, is slightly better in the Arabic dataset. This could possibly be attributed to the way that the initial data was collected. The abusive dataset was created based on #Gamergate as a seed word for querying Twitter, while during the collection process further filtering keywords were added in consecutive time intervals to select additional abusive-related content (Chen et al., 2017). On the contrary, the terrorism-related data was collected based on targeted filtering keywords from the very beginning. Hence, the abusive dataset is less focused than the terrorism-related one, and thus users' behavioral patterns may differ more, making the classification somewhat harder.
Overall, even with more targeted or broader data, the proposed ensemble models succeed in distinguishing quite effectively between linked and non-linked accounts. Moreover, we observe that, for both the abusive and terrorism datasets, the ensemble models built using the network features in addition to the texts' similarity measures result in high performance (AUC > 98% and Acc, Prec, Rec > 97%). Hence, since some linguistic features are language-dependent and thus additional effort would be needed for constructing such models for other languages, one could opt for the network-based model, which is easier to adapt to different languages (probably with a slight negative effect on the overall performance).

## 6. Conclusions & Future Work

Similar to the offline world, user-generated content in online social networks often relates to abusive or even illegal activities. While social media administrators often take intensive actions to remove the content and respective content producers not complying with their rules, users with non-legitimate or abnormal activity often tend to create multiple accounts in an effort to bypass, and stay a step ahead of, the applied countermeasures. This work proposed a framework for detecting accounts likely to belong to the same natural person in an attempt to combat multiple non-legitimate accounts. We considered several attributes of users' online activity, posts, and networks, and tested both traditional machine learning methods and deep neural networks. The results showed that our method is able to effectively detect linked accounts related to non-legitimate, or even illegal (abusive and terrorism-related) activities, in different languages: English and Arabic. As future work, we plan to conduct our analysis on other online social media platforms, such as YouTube and Facebook, so as to understand if our methods can be easily adapted within and across other social networks.
Moreover, the proposed method could be extended to consider additional linguistic attributes, like sarcasm and irony. Finally, we aim to also investigate the effectiveness of our framework in domains amenable to public opinion manipulation and propaganda, such as politics. ###### Acknowledgements. This research has received funding from the European Union's H2020 research and innovation programme as part of the CONNEXIONs (GA No 786731) and PREVISION (GA No 833115) projects.
2308.00951
From Sparse to Soft Mixtures of Experts
Sparse mixture of expert architectures (MoEs) scale model capacity without significant increases in training or inference costs. Despite their success, MoEs suffer from a number of issues: training instability, token dropping, inability to scale the number of experts, or ineffective finetuning. In this work, we propose Soft MoE, a fully-differentiable sparse Transformer that addresses these challenges, while maintaining the benefits of MoEs. Soft MoE performs an implicit soft assignment by passing different weighted combinations of all input tokens to each expert. As in other MoEs, experts in Soft MoE only process a subset of the (combined) tokens, enabling larger model capacity (and performance) at lower inference cost. In the context of visual recognition, Soft MoE greatly outperforms dense Transformers (ViTs) and popular MoEs (Tokens Choice and Experts Choice). Furthermore, Soft MoE scales well: Soft MoE Huge/14 with 128 experts in 16 MoE layers has over 40x more parameters than ViT Huge/14, with only 2% increased inference time, and substantially better quality.
Joan Puigcerver, Carlos Riquelme, Basil Mustafa, Neil Houlsby
2023-08-02T05:20:55Z
http://arxiv.org/abs/2308.00951v2
# From Sparse to Soft Mixtures of Experts

###### Abstract

Sparse mixture of expert architectures (MoEs) scale model capacity without large increases in training or inference costs. Despite their success, MoEs suffer from a number of issues: training instability, token dropping, inability to scale the number of experts, or ineffective finetuning. In this work, we propose Soft MoE, a _fully-differentiable_ sparse Transformer that addresses these challenges, while maintaining the benefits of MoEs. Soft MoE performs an implicit soft assignment by passing different weighted combinations of all input tokens to each expert. As in other MoE works, experts in Soft MoE only process a subset of the (combined) tokens, enabling larger model capacity at lower inference cost. In the context of visual recognition, Soft MoE greatly outperforms standard Transformers (ViTs) and popular MoE variants (Tokens Choice and Experts Choice). For example, Soft MoE-Base/16 requires 10.5\(\times\) lower inference cost (5.7\(\times\) lower wall-clock time) than ViT-Huge/14 while matching its performance after similar training. Soft MoE also scales well: Soft MoE Huge/14 with 128 experts in 16 MoE layers has over 40\(\times\) more parameters than ViT Huge/14, while inference time cost grows by only 2\(\%\), and it performs substantially better.

## 1 Introduction

Larger Transformers improve performance at increased computational cost. Recent studies suggest that model size and training data must be scaled together to optimally use any given training compute budget (Kaplan et al., 2020; Hoffmann et al., 2022; Zhai et al., 2022). A promising alternative that allows models to be scaled in size without paying their full computational cost is sparse mixtures of experts (MoEs).
Recently, a number of successful approaches have proposed ways to sparsely activate token paths across the network in language (Lepikhin et al., 2020; Fedus et al., 2022), vision (Riquelme et al., 2021), and multimodal models (Mustafa et al., 2022). At the core of sparse MoE Transformers lies a discrete optimization problem: deciding which modules should be applied to each input token. These modules are commonly referred to as _experts_ and are usually MLPs. Many techniques have been devised to find good token-to-expert matches: linear programs (Lewis et al., 2021), reinforcement learning (Bengio et al., 2015), deterministic fixed rules (Roller et al., 2021), optimal transport (Liu et al., 2022), greedy top-\(k\) experts per token (Shazeer et al., 2017), or greedy top-\(k\) tokens per expert (Zhou et al., 2022). In many cases, heuristic auxiliary losses are required to balance utilization of experts and minimize unassigned tokens. These challenges can be exacerbated in out-of-distribution scenarios: small inference batch sizes, novel inputs, or in transfer learning. We introduce a new approach, Soft MoE, that overcomes many of these challenges. Rather than employing a sparse and discrete router that tries to find a good _hard_ assignment between tokens and experts, Soft MoEs instead perform a _soft_ assignment by mixing tokens. In particular, we compute several weighted averages of all tokens--with weights depending on both tokens and experts--and then we process each weighted average by its corresponding expert. Soft MoE models avoid most of the challenges mentioned above which are caused by the discrete procedure at the core of sparse MoEs. Popular sparse MoE algorithms learn some router parameters, and the source of gradients is usually two-fold: post-multiplication of expert outputs with the _selected_ routing scores, and auxiliary losses that enforce some desired behaviour and also depend on the routing scores. 
It has been observed that these mechanisms are often no better than random fixed routing (Roller et al., 2021). Soft MoE sidesteps this issue as every routing (or mixing) parameter is directly updated based on every single input token. Soft routing can provide _stability_ while training a router; (Mustafa et al., 2022) observed that during training large fractions of input tokens can simultaneously change discrete routes through the network, leading to training challenges. Further, hard routing can be challenging with many experts, with most works training with just a few dozen. We show that Soft MoE scales to thousands of experts, and it is balanced by construction. Finally, there are no batch-effects at inference, where one input can affect routing (due to limited expert capacity), and hence prediction, for other inputs. Soft MoE L/16 beats ViT H/14 on upstream, fewshot and finetuning while requiring almost half the training time, and being \(\mathbf{2\times}\) **faster** at inference. Moreover, Soft MoE B/16 matches ViT H/14 on fewshot and finetuning and outperforms it on upstream metrics after a comparable amount of training. Remarkably, Soft MoE B/16 is \(\mathbf{5.7\times}\) **faster** at inference despite having \(5.5\times\) the number of parameters of ViT H/14. Section 4 demonstrates Soft MoE's potential to extend to other tasks: we train the text tower of a contrastive model against the frozen vision tower, showing that representations learned via soft routing preserve their benefits for image-text alignment.

## 2 Soft Mixture of Experts

### Algorithm description

The Soft MoE routing algorithm is depicted in Figure 2. We denote the input tokens for one sequence by \(\mathbf{X}\in\mathbb{R}^{m\times d}\), where \(m\) is the number of tokens and \(d\) is their dimension. Each MoE layer uses a set of \(n\) expert functions1 applied on individual tokens, namely \(\{f_{i}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\}_{1:n}\).
Each expert will process \(p\) _slots_, and each slot has a corresponding \(d\)-dimensional vector of parameters. We denote these parameters by \(\mathbf{\Phi}\in\mathbb{R}^{d\times(n\cdot p)}\).

Footnote 1: In practice, all experts apply the same function with different parameters, usually an MLP.

In particular, the input slots \(\tilde{\mathbf{X}}\in\mathbb{R}^{(n\cdot p)\times d}\) are the result of convex combinations of all the \(m\) input tokens, \(\mathbf{X}\):

\[\mathbf{D}_{ij}=\frac{\exp((\mathbf{X}\mathbf{\Phi})_{ij})}{\sum_{i^{\prime}=1}^{m}\exp((\mathbf{X}\mathbf{\Phi})_{i^{\prime}j})},\qquad\tilde{\mathbf{X}}=\mathbf{D}^{\top}\mathbf{X}. \tag{1}\]

Notice that \(\mathbf{D}\), which we call the _dispatch_ weights, is simply the result of applying a softmax over the _columns_ of \(\mathbf{X}\mathbf{\Phi}\). Then, as mentioned above, the corresponding expert function is applied on each slot (i.e. on rows of \(\tilde{\mathbf{X}}\)) to obtain the output slots:

\[\tilde{\mathbf{Y}}_{i}=f_{\lfloor i/p\rfloor}(\tilde{\mathbf{X}}_{i}). \tag{2}\]

Finally, the output tokens \(\mathbf{Y}\) are computed as a convex combination of all \((n\cdot p)\) output slots, \(\tilde{\mathbf{Y}}\), whose weights are computed similarly as before:

\[\mathbf{C}_{ij}=\frac{\exp((\mathbf{X}\mathbf{\Phi})_{ij})}{\sum_{j^{\prime}=1}^{n\cdot p}\exp((\mathbf{X}\mathbf{\Phi})_{ij^{\prime}})} \tag{3}\]

We refer to \(\mathbf{C}\) as the _combine_ weights, and it is the result of applying a softmax over the _rows_ of \(\mathbf{X}\mathbf{\Phi}\). The output tokens are then given by

\[\mathbf{Y}=\mathbf{C}\tilde{\mathbf{Y}}. \tag{4}\]

Figure 1: **Main differences between Sparse and Soft MoE layers.** While the router in Sparse MoE layers (left) learns to _assign_ individual input tokens to each of the available slots, in Soft MoE layers (right) each slot is the result of a (different) _weighted average_ of all the input tokens. Learning to make discrete assignments introduces several optimization and implementation issues that Soft MoE sidesteps.
Following the usual design for Sparse MoEs (Lepikhin et al., 2020; Fedus et al., 2022; Riquelme et al., 2021; Zhou et al., 2022), we replace a subset of the Transformer's MLP blocks with Soft MoE blocks. In particular, we typically replace the second half of MLP blocks. The total number of slots is a key hyperparameter of Soft MoE layers because the time complexity depends on the number of slots rather than on the number of experts. For example, one can set the number of slots equal to the input sequence length to match the FLOPs of the equivalent dense Transformer.

```python
def soft_moe_layer(X, Phi, experts):
  # Compute the dispatch and combine weights.
  logits = jnp.einsum('md,dnp->mnp', X, Phi)
  D = jax.nn.softmax(logits, axis=(0,))
  C = jax.nn.softmax(logits, axis=(1, 2))
  # The input slots are weighted averages of all the input tokens,
  # given by the dispatch weights.
  Xs = jnp.einsum('md,mnp->npd', X, D)
  # Apply the corresponding expert function to each input slot.
  Ys = jnp.stack(
      [f_i(Xs[i, :, :]) for i, f_i in enumerate(experts)],
      axis=0)
  # The output tokens are weighted averages of all the output slots,
  # given by the combine weights.
  Y = jnp.einsum('npd,mnp->md', Ys, C)
  return Y
```

**Algorithm 1** Simple JAX (Bradbury et al., 2018) implementation of a Soft MoE layer. Full code is available at [https://github.com/google-research/vmoe](https://github.com/google-research/vmoe).

Figure 2: **The Soft MoE routing algorithm.** Soft MoE first computes scores or logits for every pair of input token and slot, based on some learnable per-slot parameters. These logits are then normalized per slot (columns) and every slot computes a linear combination of all the input tokens based on these weights (in green). Each expert (an MLP in this work) then processes its slots (e.g. 2 slots per expert, in this diagram). Finally, the same original logits are normalized per token (i.e. by row) and used to combine all the slot outputs, for every input token (in blue). Dashed boxes represent learnable parameters.
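The normalization structure behind the dispatch and combine weights can be sanity-checked with a small NumPy sketch (the shapes and the identity "experts" below are illustrative, not from the paper): every column of the dispatch weights sums to one over tokens, every row of the combine weights sums to one over slots, and the layer maps \(m\) tokens back to \(m\) tokens regardless of the number of slots.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, n, p = 5, 4, 3, 2              # tokens, dim, experts, slots per expert

X = rng.normal(size=(m, d))
Phi = rng.normal(size=(d, n * p))
logits = X @ Phi                      # (m, n*p) token-slot scores

# Dispatch weights: softmax over tokens (columns); combine: over slots (rows).
D = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
C = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

Xs = D.T @ X                          # (n*p, d) input slots
Ys = Xs                               # identity "experts", just for the check
Y = C @ Ys                            # (m, d) output tokens
```

Because each row of `C` is a convex combination, each output token lies in the convex hull of the (expert-processed) slots.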
### Properties of Soft MoEs and connections with Sparse MoEs

**Fully differentiable.** At the core of all Sparse MoE algorithms there is an assignment problem between tokens and experts, which is usually subject to specific capacity and balance constraints. Different algorithms relax the problem or approximate the solution in different ways: the Top-\(k\) or "Token Choice" router (Shazeer et al., 2017; Lepikhin et al., 2020; Riquelme et al., 2021), for instance, selects the top-\(k\)-scored experts for each token, as long as those experts still have slots available (i.e. have not filled their _capacity_). The "Expert Choice" router (Zhou et al., 2022) selects the top-_capacity_-scored tokens for each expert. Other works suggest more advanced (and often costly) algorithms to compute the assignments, such as approaches based on Linear Programming (Lewis et al., 2021), Optimal Transport (Liu et al., 2022; Clark et al., 2022) or Reinforcement Learning (Clark et al., 2022). Nevertheless, virtually all of these approaches are discrete in nature, and thus non-differentiable. In contrast, all operations in Soft MoE layers are continuous and fully differentiable. Indeed, we can interpret the weighted averages with softmax scores as _soft_ assignments (which motivates our algorithm's name) rather than the _hard_ assignments that Sparse MoE methods typically use.

**No token dropping and expert unbalance.** The classical routing mechanisms mentioned above tend to suffer from issues such as "token dropping" (i.e. some tokens are not assigned to any expert) or "expert unbalance" (i.e. some experts receive far more tokens than others), and performance can be severely impacted as a consequence. For instance, the popular Top-\(k\) or "Token Choice" router (Shazeer et al., 2017) suffers from both, while the "Expert Choice" router (Zhou et al., 2022) only suffers from the former (see Appendix B for some experiments regarding dropping in both cases).
Soft MoEs are basically immune to token dropping and expert unbalance, since every slot is filled with a weighted average of all tokens, and all weights are (in theory) strictly positive thanks to the softmax (see Section 5 for detailed experiments).

**Fast.** The total number of slots is the main hyperparameter that determines the cost of a Soft MoE layer: every input is processed through that number of MLP applications. The total number of _experts_ is irrelevant in this calculation: few experts with many slots per expert or many experts with few slots per expert will have matching costs if the total number of slots is identical. The only constraint is that the number of slots has to be greater than or equal to the number of experts (as each expert must process at least one slot). The main advantage of Soft MoE is that it completely avoids sort or top-\(k\) operations, which are slow and typically not well suited for hardware accelerators. As a result, Soft MoE is significantly _faster_ than most sparse MoEs (Figure 6). See Section 2.3 for time complexity details.

**Features of both sparse and dense.** The _sparsity_ in Sparse MoEs comes from the fact that expert parameters are only applied to a subset of the input tokens. However, Soft MoEs are not technically sparse, since every slot is a weighted average of all the input tokens: every input token _fractionally_ activates all the model parameters. Likewise, all output tokens are fractionally dependent on all slots (and experts). Finally, notice also that Soft MoEs are not Dense MoEs, where every expert processes all input tokens, since every expert only processes a subset of the slots.

**Per-sequence determinism.** Under capacity constraints, all Sparse MoE approaches route tokens in _groups_ of a fixed size and enforce (or encourage) balance within the group. When groups contain tokens from different sequences or inputs, these tokens often _compete_ against each other for available spots in expert buffers.
As a consequence, the model is no longer deterministic at the sequence level, but only at the batch level, as some input sequences may affect the final prediction for other inputs. Models using larger groups tend to provide more freedom to the routing algorithm and usually perform better, while their computational cost is also higher. On the other hand, when groups contain tokens from a single sequence, the model is forced to use every expert on every input sequence. This may lead to more generalist experts. Moreover, changing the group size between training and inference can be problematic due to the potential distributional shift in token-to-expert assignments. We explore these aspects in Section 3.5. Soft MoE gracefully sidesteps all these challenges: since it combines all tokens in each input sequence, we simply set the group size to a single sequence. Every expert does handle tokens from every input, perhaps somewhat limiting the amount of high-level specialization. Yet this also implies that Soft MoE is per-example deterministic and fast, while typical instances of Sparse MoEs are not.

### Implementation

**Time complexity.** Assume the per-token cost of a single expert function is \(O(k)\). The time complexity of a Soft MoE layer is then \(O(mnpd+npk)\). By choosing \(p=O(m/n)\) slots per expert, i.e. the number of tokens over the number of experts, the cost reduces to \(O(m^{2}d+mk)\). Given that each expert function has its own set of parameters, increasing the number of experts \(n\) and scaling \(p\) accordingly allows us to increase the total number of parameters without any impact on the time complexity. Moreover, when the cost of applying an expert is large, the \(mk\) term dominates over \(m^{2}d\), and the overall cost of a Soft MoE layer becomes comparable to that of applying a single expert on all the input tokens.
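The reduction \(O(mnpd+npk)\rightarrow O(m^{2}d+mk)\) when \(p=m/n\) can be checked mechanically. A sketch with illustrative (hypothetical) sizes, counting only the two leading terms:

```python
def soft_moe_cost(m, n, p, d, k):
    """Leading-order cost: dispatch/combine O(m*n*p*d) plus expert work O(n*p*k)."""
    return m * n * p * d + n * p * k

# Illustrative sizes (not a configuration from the paper).
m, n, d, k = 1024, 128, 512, 10**6
p = m // n  # slots per expert chosen so that total slots == number of tokens

assert n * p == m
# With p = m/n, the cost collapses to m^2 * d + m * k, as claimed.
assert soft_moe_cost(m, n, p, d, k) == m * m * d + m * k
```

Note that `n` cancels out entirely once `n * p` is pinned to `m`, which is exactly why the number of experts can grow (adding parameters) without changing the FLOP count.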
Finally, even when \(m^{2}d\) is not dominated, this is the same as the (single-headed) self-attention cost, so it does not become a bottleneck in Transformer models.

**Normalization.** In Transformers, MoE layers are typically used to replace the feedforward layer in each encoder block. Thus, when using pre-normalization as in most modern Transformer architectures (Domhan, 2018; Xiong et al., 2020; Riquelme et al., 2021; Fedus et al., 2022), the inputs to the MoE layer are layer-normalized. This causes stability issues when scaling the model dimension \(d\), since the softmax approaches a one-hot vector as \(d\rightarrow\infty\) (see Appendix E). Thus, in Line 3 of Algorithm 1 we replace X and Phi with l2_normalize(X, axis=1) and scale * l2_normalize(Phi, axis=0), respectively; where scale is a trainable scalar, and l2_normalize normalizes the corresponding axis to have unit (L2) norm, as Algorithm 2 shows.

```
1 def l2_normalize(x, axis, eps=1e-6):
2   norm = jnp.sqrt(jnp.square(x).sum(axis=axis, keepdims=True))
3   return x * jnp.reciprocal(norm + eps)
```

Algorithm 2: JAX implementation of the L2 normalization used in Soft MoE layers.

For relatively small values of \(d\) (e.g. the model dimension used for ViT models up to ViT-H, which use \(d\leq 1280\)), the normalization has little impact on the model's quality. However, with the proposed normalization in the Soft MoE layer, we can eventually make the model dimension bigger and/or increase the learning rate (see Appendix E). Accordingly, we use it in all our experiments.

**Distributed model.** When the number of experts increases significantly, it is not possible to fit the entire model in memory on a single device, especially during training or when using MoEs on top of large model backbones. In these cases, we employ the standard techniques to distribute the model across many devices, as in (Lepikhin et al., 2020; Riquelme et al., 2021; Fedus et al., 2022) and other works training large MoE models.
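The effect of the L2-normalization substitution (replacing X and Phi in the logit computation) can be sketched as follows; NumPy replaces JAX only for portability, and `scale` is a stand-in for the trainable scalar. Each normalized token row and slot column has (near) unit norm, so the logit magnitudes stay bounded regardless of the model dimension \(d\):

```python
import numpy as np

def l2_normalize(x, axis, eps=1e-6):
    # Normalize the given axis to (approximately) unit L2 norm.
    norm = np.sqrt(np.square(x).sum(axis=axis, keepdims=True))
    return x / (norm + eps)

rng = np.random.default_rng(0)
m, d, slots = 8, 16, 4           # toy sizes for illustration
X = rng.normal(size=(m, d))      # token inputs (post layer-norm in practice)
Phi = rng.normal(size=(d, slots))
scale = 1.0                      # stands in for the trainable scalar

logits = l2_normalize(X, axis=1) @ (scale * l2_normalize(Phi, axis=0))

# Rows of the normalized X are unit vectors, so each logit is a dot product
# of unit vectors and is bounded by 1 in absolute value (Cauchy-Schwarz).
assert np.allclose(np.linalg.norm(l2_normalize(X, axis=1), axis=1), 1.0, atol=1e-4)
assert np.abs(logits).max() <= scale + 1e-5
```

Without the normalization, logit magnitudes grow roughly with \(\sqrt{d}\), which is what pushes the softmax toward one-hot behavior at large \(d\).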
Distributing the model typically adds an overhead to its cost, which is not captured by the FLOPs-based time complexity analysis derived above. To account for this difference, in all of our experiments we measure not only FLOPs, but also the wall-clock time in TPUv3-chip-hours.

### Connections with other methods

Many existing works _merge_, _mix_ or _fuse_ input tokens to reduce the input sequence length (Jaegle et al., 2021; Ryoo et al., 2021; Renggli et al., 2022; Wang et al., 2022), typically using attention-like weighted averages with fixed keys, in order to alleviate the quadratic cost of self-attention with respect to the sequence length. Although our dispatch and combine weights are computed in a similar fashion to these approaches, our goal is not to reduce the sequence length (although that is possible), and we actually recover the original sequence length after weighting the experts' outputs with the _combine weights_ at the end of each Soft MoE layer. Multi-headed attention also shows some similarities with Soft MoE, beyond the use of softmax in weighted averages: the \(h\) different _heads_ can be interpreted as different (linear) experts. The distinction is that, if \(m\) is the sequence length and each input token has dimensionality \(d\), each of the \(h\) heads processes \(m\) vectors of size \(d/h\). The \(m\) resulting vectors are combined using different weights for each of the \(m^{\prime}\) output tokens (i.e. the attention weights), on each head independently, and then the resulting \((d/h)\)-dimensional vectors from each head are concatenated into one of dimension \(d\). Our experts are non-linear and combine vectors of size \(d\), at the _input and output_ of such experts. Finally, there are also connections with other MoE works that use a weighted combination of the experts' parameters, rather than doing a sparse routing of the examples (Yang et al., 2019; Tian et al., 2020; Muqeeth et al., 2023).
These approaches are also fully differentiable, although they can have a much higher cost, since 1) they must average the parameters of the experts, which can become a time and/or memory bottleneck when experts with many parameters are used; and 2) they cannot take advantage of vectorized operations as broadly as Soft (and Sparse) MoEs, since _every input uses a different weighted combination of the parameters_. We recommend the "computational cost" discussion in (Muqeeth et al., 2023) that addresses these issues.

### Current limitations

**Auto-regressive decoding.** One of the key aspects of Soft MoE consists in smartly merging all tokens in the input. This makes the use of Soft MoEs in auto-regressive decoders difficult, since causality between past and future tokens has to be preserved during training. Although causal masks used in attention layers could be used, one must be careful not to introduce any correlation between token and slot _indices_, since this would bias which token indices each expert is trained on. The use of Soft MoE in auto-regressive decoders is a promising research avenue that we leave for future work.

**Lazy experts & memory consumption.** We extensively show in Section 3 that one slot per expert tends to be the optimal choice. In other words, rather than feeding one expert with two slots (or mixes of tokens), it is more effective from a performance standpoint to use two experts with one slot each. We hypothesize that same-expert slots tend to somewhat align and provide small informational gains, and single experts may lack the flexibility to accommodate very different slot projections. We show this in Appendix H. Consequently, Soft MoE tends to leverage a large number of experts and, while its cost is still similar to the dense backbone, the memory requirements of the model can grow large.

## 3 Image Classification Experiments

We present three types of experiments on image classification: **Training Pareto frontiers**.
First, in Section 3.3 we systematically compare dense ViT models at the Small, Base, Large and Huge sizes with their sparse counterparts based on the most common routing techniques (Tokens Choice, Experts Choice) and Soft MoE routing. We study the training FLOPs versus performance and training time versus performance plots to conclude that Soft MoE dominates all other approaches. **Inference-time optimized models**. Second, in Section 3.4, we present longer training runs ("overtraining"). Relative to ViT, Soft MoE brings large improvements in terms of inference speed (small models: S, B) and absolute performance (large models: L, H). **Model ablations**. Third, in Section 3.5 we investigate some of the central aspects of Soft MoE routing (such as the number of experts, slots per expert, etc.), and compare their behavior with other routing algorithms. We present the optimal configurations for Soft MoE and the source of the improvement benefits.

### Training and evaluation data

We pretrain our models on JFT-4B (Sun et al., 2017), a proprietary dataset whose latest version contains more than 4B images, covering more than 29k classes. During pretraining, we typically evaluate the models on two metrics: upstream validation precision-at-1 on JFT-4B, and ImageNet 10-shot accuracy. The latter is computed by freezing the model weights and replacing the head with a new one that is only trained on a dataset containing 10 images per class from ImageNet-1k (Deng et al., 2009). Finally, we provide the accuracy on the validation set of ImageNet-1k after finetuning on the training set of ImageNet-1k (\(1.3\) million images).

### Sparse routing algorithms

We compare to the following popular MoE routing algorithms: _Tokens Choice_. Every token selects the top-\(K\) experts with the highest routing score for the token (Shazeer et al., 2017). Increasing \(K\) typically leads to better performance at the expense of extra computational cost.
Batch Priority Routing (BPR) (Riquelme et al., 2021) significantly improves the model performance, especially in the case of \(K=1\) (see Appendix, Table 8). Accordingly, we use Top-\(K\) routing with BPR and \(K\in\{1,2\}\). We also optimize the number of experts (Appendix, Figure 15). _Experts Choice_. Alternatively, experts can select the top-\(C\) tokens in terms of routing scores (Zhou et al., 2022). In this case, \(C\) is the buffer size, and we typically set \(E\cdot C=c\cdot T\), where \(E\) is the number of experts, \(T\) is the total number of tokens in the group, and \(c\) is the capacity multiplier. When \(c=1\), all tokens can be served via the union of experts. Note that in this type of routing, it is common that some tokens are simultaneously selected by several experts whereas some other tokens are not selected at all. Figure 14 illustrates this phenomenon. We experiment with \(c=0.5\), \(c=1\) and \(c=2\).

### Training Pareto-optimal models

We train ViT-S/8, ViT-S/16, ViT-S/32, ViT-B/16, ViT-B/32, ViT-L/16, ViT-L/32 and ViT-H/14 models, and their sparse counterparts. We consider three routing algorithms for the sparse models: Token Choice, Expert Choice, and Soft MoE. In each case, we train several model variants (with different \(K\), \(C\) and number of experts, where applicable). In total, we train 106 models. The models are trained for 300k steps with batch size 4096 in all cases, with inverse square root learning rate decay. Figures 3a and 3b show the results for models in each class that lie on their respective training cost / performance Pareto frontiers. On both metrics, Soft MoE strongly outperforms dense and other sparse approaches for any given FLOPs or time budget. Table 9 in Appendix I lists all the models, with their parameters, performance and costs, which are also shown in Figure 22. Figures 23 to 25 in Appendix F compare Soft MoE individually to Dense, Token Choice and Expert Choice models, respectively.
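The Expert Choice buffer sizing above, \(E\cdot C=c\cdot T\), amounts to simple bookkeeping; a sketch with hypothetical values of \(E\) and \(T\) (not a configuration from the paper):

```python
def expert_choice_buffer(num_experts, num_tokens, capacity_multiplier):
    """Per-expert buffer size C such that E * C = c * T."""
    return int(capacity_multiplier * num_tokens / num_experts)

E, T = 32, 1024  # hypothetical group: 32 experts, 1024 tokens
for c in (0.5, 1, 2):
    C = expert_choice_buffer(E, T, c)
    # The union of all expert buffers holds exactly c * T token copies.
    assert E * C == c * T
```

At \(c=1\) the union of buffers exactly covers the group; \(c<1\) guarantees some tokens are dropped, while \(c>1\) lets popular tokens be selected by several experts.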
#### 3.4.1 Comparison with large-scale Vision Transformers

We trained a number of Soft MoEs on JFT, following a setting comparable to that used by Zhai et al. (2022a). We replace the last half of the blocks in ViT S/16, B/16, L/16, and H/14 with Soft MoE layers with 128 experts, using one slot per expert. We train models ranging from 1B to 54B parameters. Large Soft MoE models incur a small wall-clock time overhead compared to their dense counterparts due to the extra data transfers required by model parallelism. All variants were trained for 4M steps, except for H/14, which was trained for 2M steps for cost reasons. Figure 4 shows the JFT-4B precision, ImageNet 10-shot accuracy, and the ImageNet finetuning accuracy for Soft MoE and ViT versus training cost in ExaFLOPS. Table 2 contains all the results, and Figure 19 shows performance versus core-hours. Soft MoE models widely outperform Vision Transformer models for a given compute budget. For instance, Soft MoE S/16 performs better than ViT B/16 on JFT and 10-shot ImageNet, and it also improves finetuning scores on the full ImageNet data, even though its training (and inference) cost is significantly smaller. Similarly, Soft MoE B/16 outperforms ViT L/16 upstream, and only lags 0.5 points behind after finetuning, while being 3x faster and requiring almost 4x fewer FLOPs. Finally, the Soft MoE L/16 model outperforms the dense H/14 one while again being around 3x faster to train and serve at inference.

#### 3.4.2 Soft MoEs optimized for inference

Encouraged by the fact that Soft MoEs with smaller backbones can match the quality of larger Vision Transformers, we continue training the small backbones to obtain models of higher quality at very low inference cost. Even after this additional (over)training, the overall training time remains comparable to that of the larger ViT models.
For these long runs, we observe that longer cooldowns (the period where the learning rate is decreased linearly to zero (Zhai et al., 2022a)) work well for Soft MoE. Therefore, we increase the cooldown from 50k steps (used elsewhere) to up to 500k steps. Figure 5 presents these models. We now summarize our main results. Soft MoE B/16 trained for 1k TPUv3 days outperforms ViT H/14 trained on a similar time budget (see Table 1, ViT H/14, 1M steps) while being **10\(\times\) cheaper at inference** in FLOPs and 5.7\(\times\) in wall-clock time, and it almost matches the ViT H/14 model performance even if we **double** ViT-H/14's training budget (2M steps and 2039.8 train days for ViT H/14 versus 1011.4 days for Soft MoE B/16). Soft MoE L/16 beats all models substantially while being almost 2\(\times\) faster at inference than ViT H/14.

Figure 3: **Pareto Models. Soft MoE dominates both ViTs (dense) and popular MoEs (Experts Choice, Tokens Choice) on the training cost / performance Pareto frontier. Each point is a model trained for 300k steps, and larger marker sizes indicate larger models: S/32, S/16, B/32, B/16, L/16 and H/14. Cost is shown both in terms of FLOPS and realized TPU-v3 training time. MoE runs include different configurations; for clarity, only models on their respective Pareto frontier are displayed. Figure 22 in Appendix F shows all models.**

### Soft MoE Ablations

Here we establish the optimal configurations for Soft MoE models by exploring the following: _Optimal number of slots per expert_. One or two slots per expert work best. We demonstrate this by fixing the total number of slots (which determines the compute cost of the model) and changing the number of experts, i.e. the slots per expert (Figure 6). _Optimal number of experts_. Roughly the same number of experts as input tokens works best when using one slot per expert. The model is then similarly expensive in terms of FLOPs as its dense equivalent.
To show this, we increase the number of experts and train models for the same amount of time, and find the best performing model (Figure 8). _Architectural/algorithmic ablations_. To disentangle the source of the benefits, we compare Soft MoE to a number of ablated versions: route token \(i\) deterministically to expert \(i\), fixed uniform dispatch/combine weights, and others (Table 3). _MoE layers placement_. An additional ablation regarding where to place MoE layers is presented in Appendix D.

\begin{table} \begin{tabular}{l r r r r r r r r r} \hline \hline Model & Params & Train steps & Train days & exaFLOP & Eval ms/img & GFLOP/img & JFT P@1 & IN/10shot & IN/ft \\ \hline ViT S/16 & 33M & 4M (50k) & 153.5 & 227.1 & 0.5 & 9.2 & 51.3 & 67.6 & 84.0 \\ ViT B/16 & 108M & 4M (50k) & 410.1 & 864.1 & 1.3 & 35.1 & 56.2 & 76.8 & 86.6 \\ ViT L/16 & 333M & 4M (50k) & 1290.1 & 3025.4 & 4.9 & 122.9 & 59.8 & 81.5 & 88.5 \\ ViT H/14 & 669M & 2M (50k) & 2039.8 & 4120.3 & 8.6 & 334.2 & 59.7 & 83.3 & 88.9 \\ \hline Soft MoE S/14 256E & 1.8B & 10M (500k) & 494.7 & 814.2 & 0.9 & 13.2 & 60.1 & 80.6 & 87.5 \\ Soft MoE B/16 128E & 3.7B & 9M (500k) & 1011.4 & 1769.5 & 1.5 & 32.0 & 62.4 & 82.9 & 88.5 \\ Soft MoE L/16 128E & 13.1B & 4M (500k) & 1355.4 & 2734.1 & 4.8 & 111.1 & 63.0 & 84.3 & 89.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Training and finetuning results for Soft MoE and dense models. Finetuning performed on ImageNet at 384 resolution. Steps used for the linear cooldown are indicated in parentheses and included in the total train-step counts. Results are plotted in Figure 5.

Figure 4: **Long runs.** Soft MoE and ViT models trained for 4 million steps with batch size 4096 (H/14 models trained for 2 million steps instead). Equivalent model classes (S/16, B/16, L/16, H/14) have similar training costs, but Soft MoE outperforms ViT on all metrics.
We show ImageNet 10-shot (left), JFT precision at 1 (middle) and ImageNet accuracy after finetuning (right), versus total training FLOPs. See Table 2. We report training wall-clock time in Figure 19.

#### 3.5.1 Number of Experts and Slots per Expert

When applying Soft MoE to a given architecture and input sequence length, one must decide how many experts and how many slots per expert to use. The total number of slots determines the amount of work (FLOPs) applied in the MoE layer (ignoring the small routing cost). If the total number of slots is greater than the number of input tokens, the model will require more FLOPs than the dense Transformer: more "tokens" will be processed in the MoE layer. Conversely, if the number of slots is lower than the original number of tokens, Soft MoE will save some compute relative to the dense Transformer. Unless stated otherwise, the following experiments use a ViT-S/16 backbone trained for 300k steps with batch size 4096. The MoEs have expert layers in the last six of their twelve blocks. **Optimal number of slots per expert**. In this experiment the total amount of compute is fixed, and we compare different configurations. Specifically, we fix the total number of slots to 4096 and train models with different numbers of experts. MoE algorithms are often unable to scale well to a large number of experts (over 100). The model sizes range from just 38M (with 2 experts) to 9.7B parameters (with 4096 experts). Figure 6 (and Figure 26) shows the results in terms of pre-training precision (left) and the few-shot evaluation (middle). In the case of Experts Choice and Tokens Choice MoEs, the size of the union of all expert buffers is also 4096 per input image. We just vary the number of experts, keeping the total number of tokens processed across the union of experts constant, as for Soft MoE.
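In this matched-compute setup, the slot budget is fixed, so the number of slots per expert is just the budget divided by the number of experts; a trivial sketch of the bookkeeping (the 4096-slot budget is from this ablation, the expert counts are illustrative):

```python
TOTAL_SLOTS = 4096  # fixed slot budget, so the MoE-layer FLOPs are constant

for num_experts in (2, 256, 2048, 4096):
    slots_per_expert = TOTAL_SLOTS // num_experts
    # Same number of slots processed regardless of the expert count;
    # only the parameter count changes (one MLP per expert).
    assert num_experts * slots_per_expert == TOTAL_SLOTS
```

This is why the 2-expert and 4096-expert models in this sweep differ enormously in parameters (38M vs 9.7B) while costing roughly the same per step.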
For the Sparse MoEs (Experts/Tokens Choice), there is an implementation detail: the "group size" is the subset of the batch that is routed together. All tokens in a group compete to be selected by each expert. This can range from one image per group to the entire batch per group; the latter is more flexible, but increases the computational overhead of routing (sorting the items). In Figure 6, we use group size eight. Figure 20, Appendix, shows other options.

Figure 5: **Soft MoE optimized for inference. These plots show the quality on JFT-4B (Precision-at-1) and ImageNet (10-shot Accuracy) achieved by different models with different training and inference cost (measured both in TPUv3 time and FLOPs). Red and light blue curves correspond (respectively) to ViT and Soft MoE S/16, S/14, B/16, L/16 and H/14 trained for 4 million steps (except H/14, which was trained for 2 million steps), following a recipe similar to (Zhai et al., 2022a). Dark blue curves correspond to Soft MoE S/14, B/16, L/16 trained for additional steps as detailed in Table 1. We observe that the overtrained Soft MoE B/16 is better than the best ViT model (H/14) while using \(10\times\) less computation (\(5.7\times\) time). Soft MoE L/16 is the most performant model, requiring one third of the inference FLOPs (one half of the time). Detailed results in Tables 1 and 2.**

Figure 6 shows that Soft MoE scales with increased experts. The best configurations are 2048 and 4096 experts, at two and one slots per expert, respectively. In contrast, Experts Choice and Tokens Choice do not scale well with the number of experts, and performance degrades after 512 experts. In addition, Figure 6, right, shows the step time for each model. Because sorting leads to increased computational overhead, the Sparse MoEs' step time increases substantially with more experts, which is not the case for Soft MoE. **Optimal number of experts.** From the previous analysis, we set the number of slots per expert to one.
The next question is how many experts to use. Here, the costs of the models are _not_ matched: more experts will increase the cost (through more slots). Figure 7 shows that, both for Soft MoE and Experts Choice, more experts do better (up to 1024). Next, we match the total training time for each model by adjusting the number of training steps (Figure 8). At this scale (ViT-S), the optimal number of experts for a given training budget is around 128 or 256 experts. The number of input tokens is 196; this corresponds to the minimum number of experts that does not lead to a strong token bottleneck (many fewer than 196 slots) in the MoE layer. For any number of experts, Soft MoE outperforms Experts Choice. Both models have the same capacity, but Experts Choice is significantly more expensive, especially with large group size. **More slots per expert.** Appendix C explores how Soft MoE behaves when increasing the number of slots per expert. Appendix H looks at the (strong) correlation between the learned slot parameters in this case.

Figure 6: **Performance (left, center), and training step time (right) as a function of the number of experts, for models with a fixed number of slots (Soft MoE) or expert buffer capacity (Sparse MoEs) on a ViT-S/16 backbone with MoEs in the last two layers. Soft MoE achieves much better scaling with more experts, while cost is roughly constant. However, with Experts and Tokens Choice routers, having too many experts not only hurts performance but also significantly increases the cost (Tokens Choice reaches 3.9x with 4096 experts).**

#### 3.5.2 Algorithmic Ablations: Identity & Uniform Routing

Soft MoE relies on learning how to mix tokens for each expert. To understand the impact of finding useful linear combinations of input tokens, we ablate this aspect by testing some natural choices: _Identity routing._ Tokens are not mixed: the first token goes to the first expert, the second token goes to the second expert, etc. _Uniform Mixing._ Every slot mixes all input tokens in the same way: by uniformly averaging them, both for dispatching and combining. In this case, we must independently and randomly initialize every expert, as otherwise the additional capacity coming from different experts will not be used (we end up with copies). _Soft / Uniform._ We learn to mix tokens to create the slots (dispatch weights), but we uniformly average all expert outputs. This implies every input token is identically updated before the residual connection. _Uniform / Soft._ All slots are filled with the uniform average of the input tokens. We learn to mix the expert output tokens depending on the input tokens. Table 3 summarizes our results, and Appendix A contains further details. Learning to mix tokens for dispatching and for combining tokens after expert processing seems essential to perform well, and the dispatch mixing is slightly more important than the combine mixing. Dense underperforms all variants.

## 4 Contrastive learning experiments

We test whether the learned representations are also significantly better when used for other tasks. In this section we explore a popular paradigm, image-language contrastive learning. We follow the approach in Zhai et al. (2022b), where the image tower is pre-trained on an image classification task, and then frozen while training the text encoder on a dataset of image-text pairs. We re-use the models trained on JFT in the previous section and compare their performance on a number of downstream applications. For contrastive learning we train on WebLI (Chen et al., 2022), a proprietary dataset consisting of 10B images and their ALT texts crawled from the internet. The image encoder is frozen, while the text encoder is trained from scratch. Table 4 shows our results. Overall, the gaps we observed on image classification are preserved in this setting. For instance, Soft MoE-L/16 outperforms ViT-L/16 by more than 1% and 2% on ImageNet and CIFAR-100 zero-shot, respectively.
Retrieval numbers are generally modest.

## 5 Model Inspection

In this section, we take a look at various aspects of the routing that the model learns.

Figure 7: **Performance (left, center) and step time (right) for models trained with increased experts and one slot (or token) per expert for a fixed number of steps (300k). The performance of all models improves as their capacity increases. However, the cost of Experts Choice grows faster than that of Soft MoE, especially when the group size is larger (gs\(=32\)).**

**Tokens contributions to slots.** While there is no dropping in Soft MoE, it is still possible that some tokens contribute little to _all_ slots if their logits are much lower than those of other tokens. We would like to see if some tokens contribute to slots in a disproportionate manner. Figure 9 (left) shows the distribution across tokens of the total weight each token provides to slots (i.e. summed over all slots). This was computed over a batch with 256 images with 196 tokens each on a Soft MoE S/16 finetuned on ImageNet. We see there is a heavy tail of tokens that provide a stronger total contribution to slots, and the shape is somewhat similar across layers. Around 2-5% of the tokens provide a summed weight above 2. Also, between 15% and 20% of the tokens only contribute up to 0.25 in total weight. The last layer is slightly different, with a softer-tailed token contribution. Appendix G further explores this. **Experts contributions to outputs.** Similarly, we would like to understand how much different slots end up contributing to the output tokens. We focus on the case of one slot per expert. We can approximate the total contribution of each expert (equivalently, slot) by averaging their corresponding coefficients in the linear combinations for all output tokens in a batch. Figure 9 (center) shows such (normalized) importance across experts for different MoE layers.
We see that, depending on the layer, some experts can impact output tokens between 3x and 14x more than others. **Number of input tokens per slot.** For each slot, Figure 9 (right) shows how many input tokens are required to achieve a certain cumulative weight in its linear combination. The distribution varies significantly across slots. For a few slots, the top 20-25 tokens account for 90% of the slot weight, while for other slots the distribution is more uniform and many tokens contribute to fill in the slot. In general, we see that slots tend to mix a large number of tokens, unlike in standard Sparse MoEs. **Visual inspection.** In order to provide some intuition regarding how slots average input tokens, Figure 10 graphically shows the linear combinations for 8 different slots for the image shown in Figure 1. We shade patches inversely proportionally to their weight in the slots; note that all token representations are eventually combined into a single one (with hidden dimension \(h\)) before being passed to the expert (unlike in our plot, where they are arranged in the usual way). These plots correspond to a Soft MoE S/16 with 128 experts and one slot per expert, and we handpicked 8 out of the 128 slots to highlight how different slots tend to focus on different elements of the image.

Figure 8: **Performance of models trained with increasing experts (one slot/token per expert), with matched training duration. The total number of steps in each case is computed to match the total training time of 300k steps for 1024-expert Experts Choice with 32 images per group. For context, the dashed line corresponds to Dense ViT-S/16.
Here, Soft MoE outperforms Experts Choice at all capacities, and the optimum point is at around 512 experts.** \begin{table} \begin{tabular}{l r r r r r r} \hline \hline Model & Experts & IN/0shot & Cifar100/0shot & Pet/0shot & Coco Img2Text & Coco Text2Img \\ \hline ViT-S/16 & – & 74.2\% & 56.6\% & 94.8\% & 53.6\% & 37.0\% \\ Soft MoE-S/16 & 128 & 81.2\% & 67.2\% & 96.6\% & 56.0\% & 39.0\% \\ Soft MoE-S/14 & 256 & 82.0\% & 75.1\% & 97.1\% & 56.5\% & 39.4\% \\ \hline ViT-B/16 & – & 79.6\% & 71.0\% & 96.4\% & 58.2\% & 41.5\% \\ Soft MoE-B/16 & 128 & 82.5\% & 74.4\% & 97.6\% & 58.3\% & 41.6\% \\ \hline ViT-L/16 & – & 82.7\% & 77.5\% & 97.1\% & 60.7\% & 43.3\% \\ Soft MoE-L/16 & 128 & 83.8\% & 79.9\% & 97.3\% & 60.9\% & 43.4\% \\ Souped Soft MoE-L/16 & 128 & 84.3\% & 81.3\% & 97.2\% & 61.1\% & 44.5\% \\ \hline ViT-H/14 & – & 83.8\% & 84.7\% & 97.5\% & 62.7\% & 45.2\% \\ Soft MoE-H/14 & 256 & 84.6\% & 86.3\% & 97.4\% & 61.0\% & 44.8\% \\ \hline \hline \end{tabular} \end{table} Table 4: LIT-style evaluation with a ViT-g text tower trained for 18B input images (\(\sim 5\) epochs).
\begin{table} \begin{tabular}{l r r r r r r r r r} \hline \hline Model & Params & Train steps & Train days & Train exaFLOP & Eval ms/img & Eval GFLOP/img & JFT P@1 & IN/10s & IN/ft \\ \hline ViT S/16 & 33M & 4M (50k) & 153.5 & 227.1 & 0.5 & 9.2 & 51.3 & 67.6 & 84.0 \\ Soft MoE S/16 128E & 93M & 4M (50k) & 175.1 & 211.9 & 0.7 & 8.6 & 58.1 & 78.8 & 86.8 \\ Soft MoE S/16 128E & 93M & 10M (50k) & 437.7 & 529.8 & 0.7 & 8.6 & 59.2 & 79.8 & 87.1 \\ Soft MoE S/14 256E & 1.8B & 4M (50k) & 197.9 & 325.7 & 0.9 & 13.2 & 58.9 & 80.0 & 87.2 \\ Soft MoE S/14 256E & 1.8B & 10M (500k) & 494.7 & 814.2 & 0.9 & 13.2 & 60.9 & 80.7 & 87.7 \\ \hline ViT B/16 & 108M & 4M (50k) & 410.1 & 864.1 & 1.3 & 35.1 & 56.2 & 76.8 & 86.6 \\ Soft MoE B/16 128E & 3.7B & 4M (50k) & 449.5 & 786.4 & 1.5 & 32.0 & 60.0 & 82.0 & 88.0 \\ \hline ViT L/16 & 33M & 4M (50k) & 1290.1 & 3025.4 & 4.9 & 122.9 & 59.8 & 81.5 & 88.5 \\ Soft MoE L/16 128E & 13.1B & 1M (50k) & 338.9 & 683.5 & 4.8 & 111.1 & 60.2 & 82.9 & 88.4 \\ Soft MoE L/16 128E & 13.1B & 2M (50k) & 677.7 & 1367.0 & 4.8 & 111.1 & 61.3 & 83.3 & 88.9 \\ Soft MoE L/16 128E & 13.1B & 4M (50k) & 1355.4 & 2734.1 & 4.8 & 111.1 & 61.3 & 83.7 & 88.9 \\ \hline ViT H/14 & 669M & 1M (50k) & 1019.9 & 2060.2 & 8.6 & 334.2 & 58.8 & 82.7 & 88.6 \\ ViT H/14 & 669M & 2M (50k) & 2039.8 & 4120.3 & 8.6 & 334.2 & 59.7 & 83.3 & 88.9 \\ Soft MoE H/14 128E & 27.3B & 1M (50k) & 1112.7 & 1754.6 & 8.8 & 284.6 & 61.0 & 83.7 & 88.9 \\ Soft MoE H/14 128E & 27.3B & 2M (50k) & 2225.4 & 3509.2 & 8.8 & 284.6 & 61.7 & 84.2 & 89.1 \\ Soft MoE H/14 256E & 54.1B & 1M (50k) & 1276.9 & 2110.1 & 10.9 & 342.4 & 60.8 & 83.6 & 88.9 \\ Soft MoE H/14 256E & 54.1B & 2M (50k) & 2553.7 & 4220.3 & 10.9 & 342.4 & 62.1 & 84.3 & 89.1 \\ \hline \hline \end{tabular} \end{table} Table 2: Training and finetuning results for Soft MoE and dense models. Finetuning results on ImageNet at 384 resolution.
We use one slot per expert and did not increase this number during finetuning; thus, Soft MoEs become cheaper than ViT, as the number of input tokens grows to 576 (patch size 16x16) and 752 (patch size 14x14) but the number of slots is fixed to a much smaller number (either 128 or 256). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Method & Experts & Mixing & Learned Dispatch & Learned Combine & JFT P@1 & IN/10shot \\ \hline Soft MoE & ✓ & ✓ & ✓ & ✓ & 54.3\% & 74.8\% \\ Soft / Uniform & ✓ & ✓ & ✓ & & 53.6\% & 72.0\% \\ Uniform / Soft & ✓ & ✓ & & ✓ & 52.6\% & 71.8\% \\ Uniform & ✓ & ✓ & & & 51.8\% & 70.0\% \\ Identity & ✓ & & & & 51.5\% & 69.1\% \\ Dense ViT & & & & & 48.3\% & 62.3\% \\ \hline \hline \end{tabular} \end{table} Table 3: Algorithmic ablation on an S/14 backbone trained for 300k steps (with 256 experts). ## 6 Discussion Sparse models can face infrastructural challenges, which may have slowed down their broad adoption. Since these models were originally conceived to unlock massive model sizes, they tend to be distributed and most routing algorithms require additional communication costs: additional activations, gradients, or expert parameters are sent across devices. This is also true for Soft MoEs, where the experts may also be distributed. However, modern dense models are now sufficiently large that they are also distributed, thus closing the gap in this axis. In addition, the benefits of sparsity shine at small model scales, both in prior work (Riquelme et al., 2021) and with Soft MoE, fitting with the current needs of the industry for faster inference. We presented Soft MoE, a new sparse Transformer architecture that avoids the discrete token-to-expert assignment problem that is common in sparse mixture of experts models. By merging input tokens into linear combinations before dispatching them to experts, we are able to train a fast and fully-differentiable model.
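The soft routing just described (input tokens merged into slot inputs by a softmax over tokens, expert outputs combined back by a softmax over slots) can be sketched in a few lines of NumPy. This is our own minimal illustration, not the released implementation; all names and shapes are assumptions:

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_moe_layer(X, Phi, experts):
    """X: (m, d) input tokens; Phi: (d, n*p) learned slot parameters;
    experts: list of n callables, each mapping a (p, d) slot batch to (p, d)."""
    logits = X @ Phi                       # (m, n*p) token-slot logits
    D = softmax(logits, axis=0)            # dispatch: softmax over tokens, per slot
    slots = D.T @ X                        # (n*p, d) each slot is a convex combo of tokens
    n = len(experts)
    p = slots.shape[0] // n                # slots per expert
    outs = np.concatenate([experts[i](slots[i * p:(i + 1) * p]) for i in range(n)])
    C = softmax(logits, axis=1)            # combine: softmax over slots, per token
    return C @ outs                        # (m, d) each output token is a combo of slot outputs
```

With one slot per expert (p = 1), each slot is a single convex combination of all m input tokens, which is exactly the quantity inspected in Figures 9 and 10.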
We perform extensive image-classification and image-language contrastive learning experiments comparing the performance of dense models and several sparse methods (Tokens Choice, Experts Choice, Soft MoE). These experiments suggest Soft MoE is surprisingly effective and strongly outperforms the other approaches while often being computationally cheaper. How to deal with causal masking for language decoders is an exciting and impactful research direction for future work. ## Acknowledgements We thank Rodolphe Jenatton, who provided extremely valuable feedback on an earlier version of this manuscript; Ilya Tolstikhin, who suggested the "Identity router" used in Appendix A (or "Liquid router", as he dubbed it); and the rest of Google DeepMind folks for providing a supportive research environment, very especially to our colleagues in Europe. Figure 9: **(Left) Distribution of summed dispatch weights per token for different MoE layers. For instance, in layer 11, the dispatch weights for 90-95% of the input tokens summed over all the slots are at most 1. Only a tiny fraction of tokens contribute to slots by summing more than 3. (Middle) Distribution of combine weights per slot (or expert, as we use one slot per expert) summed across all input tokens. We normalize the sum by its minimum value across experts. (Right) Each curve corresponds to one slot. Dispatch weights from all tokens to each slot add up to 1. Distribution of how many input tokens are needed to achieve a certain fraction of the total weight for the slot.** Figure 10: Linear combinations for 8 slots when using the input image in Figure 1. Model is Soft MoE S/16 with 128 experts and one slot per expert, and it was finetuned on ImageNet. We show results for the first MoE layer (seventh block). The selected slots (among 128) are cherry-picked to highlight differences across slots.
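The three quantities plotted in Figure 9 can all be recovered from a layer's dispatch and combine weight matrices. A minimal sketch of that analysis, with our own helper name and the normalization conventions stated in the caption (`D` columns sum to 1 over tokens, `C` rows sum to 1 over slots) assumed:

```python
import numpy as np

def routing_stats(D, C, frac=0.9):
    """D: (tokens, slots) dispatch weights; C: (tokens, slots) combine weights."""
    # (Left) total dispatch weight each token contributes, summed over slots.
    token_total = D.sum(axis=1)
    # (Middle) per-slot combine importance, normalized by its minimum across slots.
    slot_importance = C.mean(axis=0)
    slot_importance = slot_importance / slot_importance.min()
    # (Right) number of tokens needed per slot to reach `frac` of its unit dispatch mass.
    sorted_w = -np.sort(-D, axis=0)                    # descending sort per slot column
    tokens_needed = (sorted_w.cumsum(axis=0) < frac).sum(axis=0) + 1
    return token_total, slot_importance, tokens_needed
```

For a heavy-tailed layer, `token_total` has a small set of large entries (the 2-5% of tokens above 2) and `tokens_needed` is small for peaked slots, large for slots that mix many tokens.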
2301.12686
GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Inverse Problems with Denoising Diffusion Restoration
Pre-trained diffusion models have been successfully used as priors in a variety of linear inverse problems, where the goal is to reconstruct a signal from noisy linear measurements. However, existing approaches require knowledge of the linear operator. In this paper, we propose GibbsDDRM, an extension of Denoising Diffusion Restoration Models (DDRM) to a blind setting in which the linear measurement operator is unknown. GibbsDDRM constructs a joint distribution of the data, measurements, and linear operator by using a pre-trained diffusion model for the data prior, and it solves the problem by posterior sampling with an efficient variant of a Gibbs sampler. The proposed method is problem-agnostic, meaning that a pre-trained diffusion model can be applied to various inverse problems without fine-tuning. In experiments, it achieved high performance on both blind image deblurring and vocal dereverberation tasks, despite the use of simple generic priors for the underlying linear operators.
Naoki Murata, Koichi Saito, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon
2023-01-30T06:27:48Z
http://arxiv.org/abs/2301.12686v2
# GibbsDDRM: A Partially Collapsed Gibbs Sampler for ###### Abstract Pre-trained diffusion models have been successfully used as priors in a variety of linear inverse problems, where the goal is to reconstruct a signal from noisy linear measurements. However, existing approaches require knowledge of the linear operator. In this paper, we propose GibbsDDRM, an extension of Denoising Diffusion Restoration Models (DDRM) to a blind setting in which the linear measurement operator is unknown. GibbsDDRM constructs a joint distribution of the data, measurements, and linear operator by using a pre-trained diffusion model for the data prior, and it solves the problem by posterior sampling with an efficient variant of a Gibbs sampler. The proposed method is problem-agnostic, meaning that a pre-trained diffusion model can be applied to various inverse problems without fine tuning. In experiments, it achieved high performance on both blind image deblurring and vocal dereverberation tasks, despite the use of simple generic priors for the underlying linear operators. Machine Learning, ICML ## 1 Introduction Inverse problems are frequently encountered in various science and engineering fields such as image processing, acoustic signal processing, and medical imaging. In an inverse problem, the goal is to restore a clean data signal from measurements generated by some forward (measurement) process. In image processing, problems such as deblurring (Zhu et al., 2018; Kupyn et al., 2019; Tu et al., 2022), inpainting (Yeh et al., 2017), and colorization (Larsson et al., 2016) are naturally formulated as inverse problems. In audio signal processing, problems such as dereverberation (Nakatani et al., 2010; Saito et al., 2022) and band extension (Larsen and Aarts, 2005) are also classic inverse problems. In medical imaging, many problems such as computed tomography (CT) (Zhu et al., 2018; Song et al., 2021) also rely on inverse problem solving. 
In general, inverse problems are ill-posed because information in the original data is lost through the measurement process (e.g., because of noise); the incorporation of prior knowledge about the original data is thus critical. In the past, assumptions such as sparsity (Candes and Wakin, 2008), low rank (Fazel et al., 2008), and total variation (Candes et al., 2006) were made for the data distribution to narrow the set of plausible candidate solutions. A more recent trend has been to solve inverse problems by using richer deep generative models (Rick Chang et al., 2017; Anirudh et al., 2018; Kadkhodaie and Simoncelli, 2020; Whang et al., 2021) trained with a large amount of data as priors. In particular, the evolution of methods related to diffusion models (Kawar et al., 2021, 2022; Chung et al., 2022a;b) has been significant, and many such methods are problem-agnostic, meaning that they do not require retraining of the generative model used for inference on each task (i.e., each inverse problem). Figure 1: Blind image deblurring results obtained by GibbsDDRM: (a) measurements, (b) restored clean images with blur kernels (bottom right insets), and (c) ground truth images and blur kernels. Existing approaches typically assume that the measurement process is known. However, in many settings the measurement process itself is (partially) unknown. This is known as a blind setting and includes problems such as blind image deblurring (Pan et al., 2016) and audio dereverberation (Nakatani et al., 2010). For example, in a blind image deblurring problem, the original image has to be restored from the convolution process where the blur kernel is unknown. To address this additional uncertainty, priors are introduced on both the data and the parameters of the linear operator involved (Chan and Wong, 1998; Krishnan and Fergus, 2009; Xu et al., 2013).
BlindDPS (Chung et al., 2022) is a method that uses pre-trained diffusion models for both data and parameters. However, while it can leverage widely available pre-trained diffusion models for signals such as images and audio, it requires training a diffusion model for the parameters of the linear operators of interest, severely restricting its applicability in practice. To overcome this limitation, we propose GibbsDDRM, which does not require a data-driven prior model of the measurement process. This method is an extension of Denoising Diffusion Restoration Models (DDRM) (Kawar et al., 2022) (a method designed for non-blind linear inverse problems) to the blind linear setting. It constructs a joint distribution of the data, the measurements, and the linear operator's parameter by using a pre-trained diffusion model for the data and a generic prior for the measurement parameters. Then, it performs approximate sampling from the corresponding posterior distribution of the data and parameters conditioned on the measurements. Here, we adopt a partially collapsed Gibbs sampler (PCGS) (Van Dyk and Park, 2008) to enable efficient sampling from the posterior distribution. The PCGS enables replacement of the intractable conditional distribution in a naive Gibbs sampler by a more tractable distribution without changing the stationary distribution. The PCGS alternately samples the data or latent variables and the linear operator's parameter, and the generative model's representational power can be exploited in the sampling of the parameter of the linear operator. This allows our method to accurately estimate both the data and the parameter despite the use of a simple prior for the parameters. We conducted experiments on the tasks of blind image deblurring in the image processing domain and vocal dereverberation in the acoustic signal processing domain.
The results confirm that high performance can be achieved on both tasks without assumptions on the prior for the linear operator's parameter. In the blind image deblurring task, GibbsDDRM demonstrates exceptional quantitative performance in terms of both image quality and faithfulness. It outperforms competing methods and BlindDPS by a large margin in LPIPS, which measures the perceptual closeness of images. The results also show that a faithful image can be restored even with large measurement noise. (Refer to Figure 1 for restored images and estimated blur kernels.) In vocal dereverberation, GibbsDDRM outperforms the comparison methods in terms of the quality of the processed vocal, the proximity of the signals, and the degree of reverberation removal. ## 2 Background Blind linear inverse problems. Blind linear inverse problems involve the estimation of both unknown clean data and the parameter of a linear operator from noisy measurements. This type of problem can be formulated as a linear system of equations of the following form: \[\mathbf{y}=\mathbf{H}_{\mathbf{\varphi}}\mathbf{x}_{0}+\mathbf{z}, \tag{1}\] where \(\mathbf{y}\in\mathbb{R}^{d_{\mathbf{y}}}\) is a vector of measurements, \(\mathbf{H}_{\mathbf{\varphi}}\in\mathbb{R}^{d_{\mathbf{y}}\times d_{\mathbf{x}_{0}}}\) is a linear operator parameterized by \(\mathbf{\varphi}\in\mathbb{R}^{d_{\mathbf{\varphi}}}\), and \(\mathbf{x}_{0}\in\mathbb{R}^{d_{\mathbf{x}_{0}}}\) is the unknown original clean data to be estimated. \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\sigma_{\mathbf{y}}^{2}\mathbf{I})\) is Gaussian measurement noise with known covariance \(\sigma_{\mathbf{y}}^{2}\mathbf{I}\), where \(\mathbf{I}\) is the identity matrix. For notational convenience, we index the clean data \(\mathbf{x}_{0}\) with "\(0\)" to distinguish it from latent variables of the diffusion model that are defined later.
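As a concrete toy instance of the measurement model in Eq. (1), consider 1-D circular blur: the parameter \(\mathbf{\varphi}\) is a blur kernel, \(\mathbf{H}_{\mathbf{\varphi}}\) is the corresponding circulant matrix, and the measurements are the blurred signal plus Gaussian noise. The helper below is our own hypothetical sketch, not code from the paper:

```python
import numpy as np

def circulant_operator(phi, d):
    """Circulant H_phi implementing circular convolution with kernel phi,
    a toy instance of the parameterized linear operator in Eq. (1)."""
    h = np.zeros(d)
    h[:len(phi)] = phi
    # Column i of H is the kernel cyclically shifted by i positions,
    # so (H @ x)[j] = sum_i h[(j - i) mod d] * x[i].
    return np.column_stack([np.roll(h, i) for i in range(d)])

# Forward (measurement) model: y = H_phi x_0 + z, with z ~ N(0, sigma_y^2 I).
d = 16
rng = np.random.default_rng(0)
x0 = rng.normal(size=d)                      # unknown clean signal
phi = np.array([0.5, 0.3, 0.2])              # unknown blur kernel (the parameter)
H = circulant_operator(phi, d)
sigma_y = 0.05
y = H @ x0 + sigma_y * rng.normal(size=d)    # noisy measurements
```

In the blind setting only `y` (and the noise level `sigma_y`) is observed; both `x0` and `phi` must be recovered from it.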
The aim here is to find estimates of both \(\mathbf{x}_{0}\) and \(\mathbf{\varphi}\) that fit the given noisy measurements \(\mathbf{y}\). The problem is ill-posed without any additional assumptions. To obtain a solution, it is assumed that \(\mathbf{x}_{0}\) is drawn from a generative model \(p_{\theta}(\mathbf{x}_{0})\) (close to the true data distribution), and a parameter \(\mathbf{\varphi}\) is drawn from a known prior \(p(\mathbf{\varphi})\) independently from the data. In the Bayesian framework, the optimal solution is to sample from the posterior \(p(\mathbf{x}_{0},\mathbf{\varphi}|\mathbf{y})\). Denoising Diffusion Probabilistic Models. Denoising Diffusion Probabilistic Models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song and Ermon, 2019; Song et al., 2021; Lai et al., 2022), or diffusion models for short, are generative models with a Markov chain \(\mathbf{x}_{T}\rightarrow\cdots\rightarrow\mathbf{x}_{t}\rightarrow\cdots\rightarrow\mathbf{x}_{0}\) represented by the following joint distribution: \[p_{\theta}(\mathbf{x}_{0:T})=p_{\theta}^{(T)}(\mathbf{x}_{T})\prod_{t=0}^{T-1}p_{\theta}^{(t)}(\mathbf{x}_{t}|\mathbf{x}_{t+1}), \tag{2}\] where the model output is \(\mathbf{x}_{0}\). To train a diffusion model, a fixed variational inference distribution is introduced: \[q(\mathbf{x}_{1:T}|\mathbf{x}_{0})=q^{(T)}(\mathbf{x}_{T}|\mathbf{x}_{0})\prod_{t=1}^{T-1}q^{(t)}(\mathbf{x}_{t}|\mathbf{x}_{t+1},\mathbf{x}_{0}), \tag{3}\] which gives an evidence lower bound (ELBO) on the maximum likelihood objective. With Gaussian parameterization for \(p_{\theta}\) and \(q\), the ELBO objective is reduced to the following denoising autoencoder objective: \[\sum_{t=1}^{T}\gamma_{t}\mathbb{E}_{(\mathbf{x}_{0},\mathbf{x}_{t})\sim p_{\text{data}}(\mathbf{x}_{0})q(\mathbf{x}_{t}|\mathbf{x}_{0})}\left[\left\|\mathbf{x}_{0}-f_{\theta}^{(t)}(\mathbf{x}_{t})\right\|_{2}^{2}\right].
\tag{4}\] Here, \(f^{(t)}_{\theta}\) is a \(\theta\)-parameterized neural network that estimates noiseless data \(\mathbf{x}_{0}\) from noisy \(\mathbf{x}_{t}\) and characterizes \(p_{\theta}\); \(\mathbf{x}_{\theta,t}\) denotes the estimate of noiseless data by \(f^{(t)}_{\theta}\); and \(\gamma_{t}\) are positive weighting coefficients determined by \(q\). Denoising Diffusion Restoration Models. Denoising Diffusion Restoration Models (DDRM) (Kawar et al., 2022) is a method that uses a pre-trained diffusion model as a prior for data in a non-blind linear inverse problem. It is defined as a Markov chain \(\mathbf{x}_{T}\rightarrow\mathbf{x}_{T-1}\rightarrow\cdots\rightarrow\mathbf{x}_{1}\rightarrow\mathbf{x}_{0}\) (where \(\mathbf{x}_{t}\in\mathbb{R}^{d_{\mathbf{x}_{0}}}\)) conditioned on the measurements \(\mathbf{y}\): \[p(\mathbf{x}_{0:T}|\mathbf{y})=p^{(T)}_{\theta}(\mathbf{x}_{T}|\mathbf{y})\prod_{t=0}^{T-1}p^{(t)}_{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1},\mathbf{y}), \tag{5}\] where \(\mathbf{x}_{0}\) is the model's output. The conditionals in DDRM are defined in terms of the denoising function \(f^{(t)}_{\theta}\) of a pre-trained diffusion model; intriguingly, the objective derived by using the ELBO coincides with that of the unconditional diffusion model, except for a constant factor. This means that the unconditionally pre-trained diffusion model can be used during inference without finetuning. The core idea of DDRM is to use the singular value decomposition (SVD) of a linear operator \(\mathbf{H}\) to transform both the unknown input \(\mathbf{x}_{0}\) and the observed output \(\mathbf{y}\), which is potentially corrupted by noise, to a shared spectral space. In this space, DDRM executes denoising on dimensions for which information from \(\mathbf{y}\) is available (i.e., when the singular values are nonzero).
When such information is not available (i.e., when the singular values are zero or the noise in the dimension is large), DDRM performs imputation while explicitly considering the measurement noise. Partially collapsed Gibbs sampler.A Gibbs sampler is a simple, widely used Markov chain Monte Carlo method for sampling from the joint distribution of a set of variables (Casella and George, 1992). The procedure entails iterative sampling from the fully conditional distributions of each variable, given the current values of the other variables. A blocked Gibbs sampler (Liu et al., 1994) is a variant in which, instead of sampling each variable individually, variables in a group or "block" of variables are sampled simultaneously while conditioned on the other variables. This approach is effective when the variables within a block are highly correlated, and it can improve the sampler's convergence behavior. A partially collapsed Gibbs sampler (PCGS) (Van Dyk and Park, 2008; Kail et al., 2012) is a generalization of a blocked Gibbs sampler that effectively explores the probability space through three basic operations in the sampling procedure: _marginalization_, _permutation_, and _trimming_, which are described in detail in (Van Dyk and Park, 2008) and Appendix A. In short, the removal of certain variables among the conditional variables does not alter the Gibbs sampler's stationary distribution, as long as these variables are not included among the conditional variables until the next time they are sampled. Hence, we can achieve efficient sampling when the distributions obtained after trimming are tractable. ## 3 GibbsDDRM: Partially Collapsed Gibbs Sampler with DDRM ### Target joint distribution for blind linear inverse problems In this paper, we seek to solve blind linear inverse problems by sampling from the posterior of the joint distribution of the data and the linear operator's parameter, given the measurements. 
The joint distribution of the data \(\mathbf{x}_{0}\), parameter \(\boldsymbol{\varphi}\), and measurements \(\mathbf{y}\) is defined as follows: \[p(\mathbf{x}_{0},\mathbf{y},\boldsymbol{\varphi})=p_{\theta}(\mathbf{x}_{0})p(\boldsymbol{\varphi})\mathcal{N}(\mathbf{y}|\mathbf{H}_{\boldsymbol{\varphi}}\mathbf{x}_{0},\sigma_{\mathbf{y}}^{2}\mathbf{I}), \tag{6}\] where \(p_{\theta}(\mathbf{x}_{0})\) and \(p(\boldsymbol{\varphi})\) are the known prior distributions for the data and the parameter, respectively. The Gaussian distribution \(\mathcal{N}(\mathbf{y}|\mathbf{H}_{\boldsymbol{\varphi}}\mathbf{x}_{0},\sigma_{\mathbf{y}}^{2}\mathbf{I})\) comes from the measurement model given in Eq. (1). The aim is to sample from the joint posterior distribution \(p(\mathbf{x}_{0},\boldsymbol{\varphi}|\mathbf{y})\). Using a pre-trained generative model as a prior \(p_{\theta}(\mathbf{x}_{0})\) can drastically improve the solutions in inverse problems; however, inference can be challenging. Even in the non-blind setting where \(\boldsymbol{\varphi}\) is known, sampling from the posterior is intractable and requires approximations like in DDRM (Kawar et al., 2022). Here we model the data distribution using a pre-trained diffusion model as in Eq. (2). This leads to the following joint distribution over the data, its latent variables, and the parameter, as shown in Figure 2: \[\begin{split}&p(\mathbf{x}_{0:T},\boldsymbol{\varphi},\mathbf{y})\\ &=p^{(T)}_{\theta}(\mathbf{x}_{T})\prod_{t=0}^{T-1}p^{(t)}_{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1})p(\boldsymbol{\varphi})\mathcal{N}(\mathbf{y}|\mathbf{H}_{\boldsymbol{\varphi}}\mathbf{x}_{0},\sigma_{\mathbf{y}}^{2}\mathbf{I}).\end{split} \tag{7}\] Note that sampling from the posterior distribution \(p(\mathbf{x}_{0:T}|\boldsymbol{\varphi},\mathbf{y})\) under a fixed \(\boldsymbol{\varphi}\) corresponds to the objective of DDRM.
In addition, we also assume that the parameter's prior \(p(\boldsymbol{\varphi})\) is a generic and simple prior, such as a sparsity prior. Figure 2: Graphical model for the joint distribution in Eq. (7). ### Partially Collapsed Gibbs Sampler for the joint distribution To sample from the joint posterior in Eq. (7), we could attempt to sample from the joint posterior distribution that includes the latent variables of the diffusion model. However, it is still not feasible to run a naive Gibbs sampler for the posterior \(p(\mathbf{x}_{0:T},\mathbf{\varphi}|\mathbf{y})\), as it would require a conditional distribution for every individual variable, conditioned on all the other variables. For instance, the conditional distribution \(p(\mathbf{x}_{t}|\mathbf{x}_{0:t-1},\mathbf{x}_{t+1:T},\mathbf{\varphi},\mathbf{y})\) for the joint distribution defined in Eq. (7) is not obvious. A possible strategy is to use a blocked Gibbs sampler (Liu et al., 1994) with the variables divided into two groups, \(\mathbf{x}_{0:T}\) and \(\mathbf{\varphi}\), and sampled alternately. In more detail, after initializing \(\mathbf{\varphi}\), the sampling procedure of DDRM is performed keeping \(\mathbf{\varphi}\) fixed to obtain an estimate of the clean data \(\mathbf{x}_{0}\). Then, \(\mathbf{\varphi}\) is sampled such that it is consistent with the estimated data \(\mathbf{x}_{0}\) and measurements \(\mathbf{y}\). By repeating these operations, we can sample \(\mathbf{x}_{0}\) and \(\mathbf{\varphi}\) from the joint posterior. However, this approach may be inefficient because of the small number of updates made to \(\mathbf{\varphi}\): the entire sampling of \(\mathbf{x}_{0:T}\) must be performed for a single step of sampling \(\mathbf{\varphi}\), which results in slow convergence. Hence, we adopt a partially collapsed Gibbs sampler (PCGS) (Van Dyk and Park, 2008) for the joint posterior. This strategy's main advantage is that we can still use a sampling method similar to that of the original DDRM.
This enables simultaneous sampling of the latent variables \(\mathbf{x}_{1:T}\) and the linear operator's parameter \(\mathbf{\varphi}\) within a cycle of DDRM sampling, thus improving the convergence speed. In a naive Gibbs sampler, the order of sampling variables is arbitrary. In a PCGS, however, the sampling order must be carefully chosen to facilitate the trimming operation, which removes conditional variables from the conditional distribution. Specifically, once a variable has been marginalized and removed from the conditional set, it should not be added back until the next time it is sampled. We show a simple example of a PCGS in Appendix A. Figure 3 shows the sampling order of the proposed PCGS. After sampling \(\mathbf{x}_{T}\), the following operations are performed in descending order of \(t\), until \(t=0\): for each \(t\), \(\mathbf{x}_{t}\) is sampled once, and then \(\mathbf{\varphi}\) and \(\mathbf{x}_{t}\) are alternately sampled \(M_{t}\) times. One set of these operations constitutes a single cycle of the PCGS, and the operations are repeated for \(N\) cycles. The proposed PCGS is defined in Algorithm 1. The following proposition ensures that it samples from the true posterior distribution. **Proposition 3.1**.: _The PCGS defined in Algorithm 1 has the true posterior distribution \(p(\mathbf{x}_{0:T},\mathbf{\varphi}|\mathbf{y})\) as its stationary distribution if the approximations to the conditional distributions are exact._ We give the proof in Appendix A.

```
Input: Measurement \(\mathbf{y}\), initial values \(\mathbf{\varphi}^{(0,0)}\).
Output: Restored data \(\mathbf{x}_{0}^{(N,M_{0})}\), linear operator's parameter \(\mathbf{\varphi}^{(N,K)}\).
\(K\gets 0\)  # \(K\) counts the number of updates for \(\mathbf{\varphi}\) in a cycle.
for \(n=1\) to \(N\) do
    \(\mathbf{\varphi}^{(n,0)}\leftarrow\mathbf{\varphi}^{(n-1,K)}\), \(K\gets 0\)
    Sample \(\mathbf{x}_{T}^{(n,0)}\sim p(\mathbf{x}_{T}|\mathbf{\varphi}^{(n,K)},\mathbf{y})\)  # approximated by \(p_{\theta}(\mathbf{x}_{T}|\mathbf{\varphi},\mathbf{y})\)
    for \(t=T-1\) to \(0\) do
        \(\chi_{t}\leftarrow\{\mathbf{x}_{t+1}^{(n,M_{t+1})},\mathbf{x}_{t+2}^{(n,M_{t+2})},\cdots,\mathbf{x}_{T}^{(n,0)}\}\)
        Sample \(\mathbf{x}_{t}^{(n,0)}\sim p(\mathbf{x}_{t}|\mathbf{\varphi}^{(n,K)},\chi_{t},\mathbf{y})\)  # approximated by \(p_{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1},\mathbf{\varphi},\mathbf{y})\)
        for \(m=1\) to \(M_{t}\) do
            Sample \(\mathbf{\varphi}^{(n,K+1)}\sim p(\mathbf{\varphi}|\mathbf{x}_{t}^{(n,m-1)},\chi_{t},\mathbf{y})\)  # Langevin sampling with the approximated score \(\nabla_{\mathbf{\varphi}}\log p(\mathbf{y}|\mathbf{x}_{\theta,t},\mathbf{\varphi})\)
            \(K\gets K+1\)
            Sample \(\mathbf{x}_{t}^{(n,m)}\sim p(\mathbf{x}_{t}|\mathbf{\varphi}^{(n,K)},\chi_{t},\mathbf{y})\)  # approximated by \(p_{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1},\mathbf{\varphi},\mathbf{y})\)
        end for
    end for
end for
```
**Algorithm 1** Proposed PCGS for the posterior in Eq. (7)

Proposition 3.1 states that it is possible to sample reasonable data and parameters by executing the PCGS defined in Algorithm 1, but the conditional distributions that the PCGS includes are intractable. Hence, we replace each conditional distribution with approximations that can be efficiently sampled from. In the following paragraphs, we provide the details of the sampling procedures at each step. Sampling of \(\mathbf{x}_{T}\). The sampling of \(\mathbf{x}_{T}\) is performed with the distribution \(p(\mathbf{x}_{T}|\mathbf{\varphi},\mathbf{y})\), which is obtained by trimming \(\mathbf{x}_{0:T-1}\).
Because this conditional distribution is intractable, as discussed above, we use modified DDRM to approximate the conditional distribution. Figure 3: Sampling order of variables in the proposed PCGS, whose output entails the final sample of data \(\mathbf{x}_{0}\) and parameter \(\mathbf{\varphi}\). Here, in order to introduce the modified DDRM, we use the SVD of the linear operator \(\mathbf{H}_{\mathbf{\varphi}}\) and its spectral space, similarly to previous studies (Kawar et al., 2021, 2022). The SVD is given as \(\mathbf{H}_{\mathbf{\varphi}}=\mathbf{U}_{\mathbf{\varphi}}\mathbf{\Sigma}_{\mathbf{\varphi}}\mathbf{V}_{\mathbf{\varphi}}^{\mathsf{T}}\), where \(\mathbf{U}_{\mathbf{\varphi}}\in\mathbb{R}^{d_{\mathbf{y}}\times d_{\mathbf{y}}}\) and \(\mathbf{V}_{\mathbf{\varphi}}\in\mathbb{R}^{d_{\mathbf{x}_{0}}\times d_{\mathbf{x}_{0}}}\) are orthogonal matrices, and \(\mathbf{\Sigma}_{\mathbf{\varphi}}\in\mathbb{R}^{d_{\mathbf{y}}\times d_{\mathbf{x}_{0}}}\) is a rectangular diagonal matrix. Here we assume \(d_{\mathbf{y}}\leq d_{\mathbf{x}_{0}}\), but our method would work for \(d_{\mathbf{y}}>d_{\mathbf{x}_{0}}\). The diagonal elements of \(\mathbf{\Sigma}_{\mathbf{\varphi}}\) are the singular values of \(\mathbf{H}_{\mathbf{\varphi}}\) in descending order, denoted \(s_{1,\mathbf{\varphi}},s_{2,\mathbf{\varphi}},\cdots,s_{d_{\mathbf{y}},\mathbf{\varphi}}\). Hereafter, we omit the subscript \(\mathbf{\varphi}\) from the singular values for notational simplicity.
The values in the spectral space are represented as follows: \(\overline{\mathbf{x}}_{t}^{(i)}\) is the \(i\)-th element of \(\overline{\mathbf{x}}_{t}=\mathbf{V}_{\mathbf{\varphi}}^{\mathsf{T}}\mathbf{x}_{t}\), and \(\overline{\mathbf{y}}^{(i)}\) is the \(i\)-th element of \(\overline{\mathbf{y}}=\mathbf{\Sigma}_{\mathbf{\varphi}}^{\dagger}\mathbf{U}_{\mathbf{\varphi}}^{\mathsf{T}}\mathbf{y}\), where \(\mathbf{A}^{\dagger}\) is the Moore-Penrose pseudo-inverse of a matrix \(\mathbf{A}\). Note that the spectral space also depends on the parameter \(\mathbf{\varphi}\), which is unknown in our blind setting, unlike in DDRM. Our modified DDRM update for sampling \(\mathbf{x}_{T}\) is defined as follows: \[p_{\theta}^{(T)}\left(\overline{\mathbf{x}}_{T}^{(i)}\mid\mathbf{y},\mathbf{\varphi}\right)=\begin{cases}\mathcal{N}\left(\overline{\mathbf{y}}^{(i)},\sigma_{T}^{2}-\sigma_{\mathbf{y}}^{2}/s_{i}^{2}\right)&\text{if }s_{i}>0\\ \mathcal{N}\left(0,\sigma_{T}^{2}\right)&\text{if }s_{i}=0\end{cases}, \tag{8}\] where the only difference from the original DDRM is that the parameter \(\mathbf{\varphi}\) is treated as a random variable. Sampling of \(\mathbf{x}_{t}\). The sampling of \(\mathbf{x}_{t}\) (\(t<T\)) is performed by sampling from the conditional distribution \(p(\mathbf{x}_{t}|\mathbf{x}_{t+1:T},\mathbf{\varphi},\mathbf{y})\), which trims \(\mathbf{x}_{0:t-1}\) if \(t>0\). As in the sampling of \(\mathbf{x}_{T}\), we approximate the conditional distribution by modifying DDRM. Denoting the prediction of \(\mathbf{x}_{0}\) at every time step \(t\) by \(\mathbf{x}_{\theta,t}\), which is made by the diffusion model as in Sec.
2, modified DDRM is defined as follows: \[p_{\theta}^{(t)}\left(\overline{\mathbf{x}}_{t}^{(i)}\mid \mathbf{x}_{t+1},\mathbf{\varphi},\mathbf{y}\right)=\] \[\begin{cases}\mathcal{N}\left(\overline{\mathbf{x}}_{\theta,t}^{(i) }+\sqrt{1-\eta^{2}}\sigma_{t}\frac{\overline{\mathbf{x}}_{t+1}^{(i)}-\overline {\mathbf{x}}_{\theta,t}^{(i)}}{\sigma_{t+1}},\eta^{2}\sigma_{t}^{2}\right)& \text{ if }s_{i}=0\\ \mathcal{N}\left(\overline{\mathbf{x}}_{\theta,t}^{(i)}+\sqrt{1-\eta^{2}} \sigma_{t}\frac{\overline{\mathbf{y}}^{(i)}-\overline{\mathbf{x}}_{\theta,t}^{(i)}}{\sigma_{\mathbf{y}}/ s_{i}},\eta^{2}\sigma_{t}^{2}\right)&\text{ if }\sigma_{t}<\frac{\sigma_{\mathbf{y}}}{s_{i}}\\ \mathcal{N}\left((1-\eta_{b})\overline{\mathbf{x}}_{\theta,t}^{(i)}+\eta_{b} \overline{\mathbf{y}}^{(i)},\sigma_{t}^{2}-\frac{\sigma_{\mathbf{y}}^{2}}{s _{i}^{2}}\eta_{b}^{2}\right)&\text{ if }\sigma_{t}\geq\frac{\sigma_{\mathbf{y}}}{s_{i}}\end{cases}, \tag{9}\] where \(0\leq\eta\leq 1\) and \(0\leq\eta_{b}\leq 1\) are hyperparameters, and \(0=\sigma_{0}<\sigma_{1}<\sigma_{2}<\cdots<\sigma_{T}\) are noise levels that are the same as those defined with the pre-trained diffusion model. Thus we have the approximation \[p(\mathbf{x}_{t}|\mathbf{x}_{t+1:T},\mathbf{\varphi},\mathbf{y}) \simeq p_{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1:T},\mathbf{\varphi}, \mathbf{y})\] \[=p_{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1},\mathbf{\varphi},\mathbf{y }), \tag{10}\] where the final equation comes from the Markov property of the modified DDRM. Sampling of \(\mathbf{\varphi}\).At time step \(t\), the sampling of the parameter \(\mathbf{\varphi}\) is done by using the conditional distribution \(p(\mathbf{\varphi}|\mathbf{x}_{t:T},\mathbf{y})\). For the joint distribution defined by Eq. (7), the conditional distribution is not easily obtained because, while \(\mathbf{\varphi}\) and \(\mathbf{x}_{t:T}\) are related through \(\mathbf{x}_{0}\), the distribution of \(\mathbf{x}_{0}\) cannot be evaluated at this point.
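As a concrete toy sketch (not the authors' implementation), the element-wise case analysis of the DDRM-style updates in Eqs. (8) and (9) can be written as follows; `x_bar_pred` stands for \(\overline{\mathbf{x}}_{\theta,t}\), and all inputs are hypothetical:

```python
import numpy as np

def sample_x_bar_T(y_bar, s, sigma_T, sigma_y, rng):
    """Eq. (8): initialization of x_T, element-wise in the spectral space."""
    n = len(y_bar)
    out = np.empty(n)
    for i in range(n):
        if s[i] > 0:
            out[i] = y_bar[i] + np.sqrt(sigma_T**2 - sigma_y**2 / s[i]**2) * rng.standard_normal()
        else:
            out[i] = sigma_T * rng.standard_normal()
    return out

def sample_x_bar_t(x_bar_pred, x_bar_next, y_bar, s,
                   sigma_t, sigma_next, sigma_y, eta, eta_b, rng):
    """Eq. (9): one modified-DDRM step (following DDRM's update rule)."""
    n = len(s)
    out = np.empty(n)
    for i in range(n):
        if s[i] == 0:                          # direction unseen by the measurement
            mean = x_bar_pred[i] + np.sqrt(1 - eta**2) * sigma_t * \
                   (x_bar_next[i] - x_bar_pred[i]) / sigma_next
            std = eta * sigma_t
        elif sigma_t < sigma_y / s[i]:         # measurement noisier than the state
            mean = x_bar_pred[i] + np.sqrt(1 - eta**2) * sigma_t * \
                   (y_bar[i] - x_bar_pred[i]) / (sigma_y / s[i])
            std = eta * sigma_t
        else:                                  # measurement is informative
            mean = (1 - eta_b) * x_bar_pred[i] + eta_b * y_bar[i]
            std = np.sqrt(sigma_t**2 - (sigma_y / s[i])**2 * eta_b**2)
        out[i] = mean + std * rng.standard_normal()
    return out
```

The only difference from plain DDRM is that the singular values `s` depend on the current sample of \(\mathbf{\varphi}\) and are therefore recomputed whenever \(\mathbf{\varphi}\) is resampled.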
Hence, we use the approximation in (Chung et al., 2022b;a) for the score of the conditional distribution and then perform sampling by Langevin dynamics (Langevin, 1908), as follows: \[\mathbf{\varphi}\leftarrow\mathbf{\varphi}+(\xi/2)\nabla_{\mathbf{\varphi}}\log p(\mathbf{ \varphi}|\mathbf{x}_{t:T},\mathbf{y})+\sqrt{\xi}\mathbf{\epsilon}, \tag{11}\] where \(\xi\) is a step size and \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). By Bayes' rule, the score \(\nabla_{\mathbf{\varphi}}\log p(\mathbf{\varphi}|\mathbf{x}_{t:T},\mathbf{y})\) can be decomposed into two terms: \[\nabla_{\mathbf{\varphi}}\log p(\mathbf{\varphi}|\mathbf{x}_{t:T},\mathbf{ y})=\] \[\nabla_{\mathbf{\varphi}}\log p(\mathbf{y}|\mathbf{x}_{t:T},\mathbf{\varphi})+ \nabla_{\mathbf{\varphi}}\log p(\mathbf{\varphi}|\mathbf{x}_{t:T}). \tag{12}\] Regarding the first term, we exploit the following theorem. **Theorem 3.2**.: _(modified version of Theorem 1 in (Chung et al., 2022b)) For the measurement model in Eq. (1), we have_ \[p(\mathbf{y}|\mathbf{x}_{t:T},\mathbf{\varphi})\simeq p(\mathbf{y}|\mathbf{x}_{ \theta,t},\mathbf{\varphi}), \tag{13}\] _and the approximation error can be quantified with the Jensen gap (Gao et al., 2017), which is upper bounded by_ \[\mathcal{J}\leq\frac{d_{\mathbf{x}_{0}}}{\sqrt{2\pi}\sigma_{\mathbf{y}}^{2}}e^{ -1/2}s_{1}m_{1}, \tag{14}\] _where \(m_{1}:=\int\|\mathbf{x}_{0}-\mathbf{x}_{\theta,t}\|p(\mathbf{x}_{0}|\mathbf{x}_{t: T})d\mathbf{x}_{0}\), and \(s_{1}\) is the largest singular value of \(\mathbf{H}_{\mathbf{\varphi}}\)._ By leveraging Theorem 3.2, we obtain the approximate gradient with respect to \(\mathbf{\varphi}\) for the Langevin dynamics: \[\nabla_{\mathbf{\varphi}}\log p(\mathbf{y}|\mathbf{x}_{t:T},\mathbf{\varphi})\simeq \nabla_{\mathbf{\varphi}}\log p(\mathbf{y}|\mathbf{x}_{\theta,t},\mathbf{\varphi}), \tag{15}\] and for our measurement model in Eq.
(1), the gradient is \[\nabla_{\mathbf{\varphi}}\log p(\mathbf{y}|\mathbf{x}_{\theta,t},\mathbf{\varphi})=- \frac{1}{2\sigma_{\mathbf{y}}^{2}}\nabla_{\mathbf{\varphi}}\|\mathbf{y}-\mathbf{H}_{ \mathbf{\varphi}}\mathbf{x}_{\theta,t}\|_{2}^{2}, \tag{16}\] which is tractable in practice. As for the second term in Eq. (12), the conditional variables can be eliminated since \(\mathbf{x}_{t:T}\) and \(\mathbf{\varphi}\) are independent from Eq. (7). As a result, we can use a simple prior distribution (e.g., a Gaussian prior) for \(\mathbf{\varphi}\) that does not depend on \(\mathbf{x}_{t:T}\). We now have the conditional score of \(\mathbf{\varphi}\) for the Langevin dynamics as follows: \[\nabla_{\mathbf{\varphi}}\log p(\mathbf{\varphi}|\mathbf{x}_{t:T},\mathbf{y})\simeq-\frac{1}{2\sigma_{\mathbf{y}}^{2}}\nabla_{\mathbf{\varphi}}\|\mathbf{y}-\mathbf{H}_{\mathbf{\varphi}}\mathbf{x}_{\theta,t}\|_{2}^{2}+\nabla_{\mathbf{\varphi}}\log p(\mathbf{\varphi}). \tag{17}\] Note that at a particular time step \(t\), \(\mathbf{x}_{t}\) varies because of the Gibbs sampling, and so does \(\mathbf{x}_{\theta,t}\). This iterative process can be viewed as feeding the information from the diffusion model to the parameter estimation. It allows for accurate parameter estimation even with simple priors. We refer to the proposed PCGS as the Gibbs Denoising Diffusion Restoration Models (GibbsDDRM), and we describe the details of its instantiation for each of our experimental tasks in Appendix B. ### Implementation considerations Initialization of \(\mathbf{\varphi}\).In GibbsDDRM, the initialization for \(\mathbf{\varphi}\) is arbitrary. If an existing simple method can be used to obtain an estimate of \(\mathbf{\varphi}\), then we can use that estimate as the initial value. In our experiments, for the blind image deblurring task, we initialize the blur kernel with a Gaussian blur kernel.
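The parameter-sampling step described above can be sketched in a toy setting. The sketch below (hypothetical, not the paper's code) assumes \(\mathbf{H}_{\mathbf{\varphi}}\mathbf{x}_{\theta,t}\) is linear in \(\mathbf{\varphi}\), i.e. \(\mathbf{H}_{\mathbf{\varphi}}\mathbf{x}=\mathbf{A}\mathbf{\varphi}\) (true, for example, for a convolution kernel), so the data-fit gradient of Eq. (16) is available in closed form instead of via automatic differentiation; the Laplace prior matches the one used in the experiments:

```python
import numpy as np

def score_phi(phi, A, y, sigma_y, lam):
    """Conditional score of phi (Eq. (12)): the data-fit term of Eq. (16)
    plus the gradient of a Laplace prior, -lam * sign(phi)."""
    grad_fit = A.T @ (y - A @ phi) / sigma_y**2   # -(1/2 sigma_y^2) grad ||y - A phi||^2
    grad_prior = -lam * np.sign(phi)
    return grad_fit + grad_prior

def langevin_step(phi, A, y, sigma_y, lam, xi, rng):
    """One Langevin update of Eq. (11) with step size xi."""
    return phi + 0.5 * xi * score_phi(phi, A, y, sigma_y, lam) \
               + np.sqrt(xi) * rng.standard_normal(phi.shape)

# Tiny demo: recover a 3-parameter phi from an 8-dimensional measurement
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
phi_true = np.array([0.5, -0.3, 0.2])
y = A @ phi_true
phi = np.full(3, 0.1)
for _ in range(300):
    phi = langevin_step(phi, A, y, sigma_y=0.1, lam=0.1, xi=1e-4, rng=rng)
```

In the blind tasks, `A` would be built from the current denoised estimate \(\mathbf{x}_{\theta,t}\), so the score, and hence the sampled \(\mathbf{\varphi}\), improves as the diffusion proceeds.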
For the vocal dereverberation task, the parameter is initialized with an estimate obtained by the weighted prediction error method (WPE) (Nakatani et al., 2010), which is an unsupervised method that is not based on machine learning, to accelerate convergence. Dependence of number of iterations, \(M_{t}\), on time step.When \(t\) is large, the estimation of \(\mathbf{x}_{0}\) (\(=\mathbf{x}_{\theta,t}\)) is difficult because of the large amount of noise in \(\mathbf{x}_{t}\). This uncertainty can lead to instability in the sampling of \(\mathbf{\varphi}\). The number of sampling steps for \(\mathbf{\varphi}\) can vary across the diffusion time steps and may even be zero. Accordingly, we use a strategy of not updating \(\mathbf{\varphi}\) when \(t\) is large. ## 4 Experiments We demonstrate our approach through two tasks: blind image deblurring in the image processing domain, and vocal dereverberation in the audio processing domain. ### Blind image deblurring. The aim of blind image deblurring is to restore a clean image from a noisy blurred image, without knowledge of the blur kernel. The details of the problem formulation and its instantiation as a linear inverse problem are given in Appendix B. Experimental settings.We conduct experiments on the Flickr Face High Quality (FFHQ) \(256\times 256\) dataset (Karras et al., 2019) and the Animal Faces-HQ (AFHQ) \(256\times 256\) dataset (Choi et al., 2020). We use a 1000-image validation set for FFHQ, and a 500-image test set for the dog class in AFHQ. All images are normalized to the range \([0,1]\). The blur type used is motion blur, and blur kernels of size \(64\times 64\) are generated via code 1, with an intensity value of \(0.5\). We use the pre-trained diffusion models from (Choi et al., 2021) 2 for FFHQ and from (Dhariwal and Nichol, 2021) for AFHQ, without finetuning for this task.
Measurements are generated by convolving the blur kernel with a ground truth image and adding Gaussian noise with \(\sigma_{\mathbf{y}}=0.02\). We use \(\eta=0.80\) and \(\eta_{b}=0.90\) for the proposed method. The number of steps, \(T\), is set to \(100\), and \(N\) is set to \(1\). Following the discussion in Section 3.3, \(M_{t}\) is set to 0 for \(70\leq t\leq 100\) and to 3 for \(t<70\). The number of iterations and the step size for Langevin dynamics (Eq. (16)) are set to \(500\) and \(1.0\times 10^{-11}\), respectively. The blur kernel is initialized with a Gaussian blur kernel. We use a Laplace prior for the parameter \(\mathbf{\varphi}\), which has the form \(\nabla_{\mathbf{\varphi}}\log p(\mathbf{\varphi})=-\lambda\nabla_{\mathbf{\varphi}}\|\mathbf{ \varphi}\|_{1}\). The hyperparameter \(\lambda\) is set to \(10^{3}\). Footnote 1: [https://github.com/LeviBorodenko/motionblur](https://github.com/LeviBorodenko/motionblur) Footnote 2: [https://github.com/jychoi118/ilvr_adm](https://github.com/jychoi118/ilvr_adm) Figure 4: Visualization of GibbsDDRM for the blind image deblurring task on the AFHQ dataset. Comparison methods.We compare GibbsDDRM with several other methods as baselines. These include MPRNet (Zamir et al., 2021) and DeblurGANv2 (Kupyn et al., 2019) as supervised learning-based baselines, the dark channel prior (Pan-DCP) (Pan et al., 2017) as an optimization-based method, and SelfDeblur (Ren et al., 2020), which utilizes the deep image prior (DIP) for co-estimation of the data and kernel. We also list the results for BlindDPS (Chung et al., 2022) as reported in that paper, though it uses a prior for the blur kernel that is trained in a supervised manner, thus giving it an unfair advantage.
Evaluation metrics.For quantitative comparison of the different methods, the main metrics are the peak signal-to-noise ratio (PSNR), the Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018), and the Frechet Inception Distance (FID) (Heusel et al., 2017). Results.Table 1 summarizes the quantitative results of blind image deblurring on FFHQ and AFHQ. GibbsDDRM outperforms all the other methods in terms of LPIPS, which measures faithfulness to the original image, while showing a lower FID score, which measures the quality of generated data. To investigate the performance limit of our method, we also list results of DDRM with a ground truth kernel. Figure 4 visualizes the evolution of the variables for \(N=2\). We can see that even in steps where \(\mathbf{x}_{t}\) is still quite noisy, the estimated \(\mathbf{x}_{\theta,t}\) is close to the ground truth. This leads to accurate sampling of the blur kernel, which is quite close to the ground truth at \(t=0\). Figure 5 shows the restoration results for different measurement noise levels. We can see that even with large noise, a faithful image can be restored via the SVD. We find that BlindDPS has a lower (better) FID score, but the restored images are relatively far from the original image in terms of the quantitative results. We think that this is because our method uses DDRM, which enables efficient treatment of information obtained from measurements through the SVD, whereas BlindDPS performs more generation than is necessary for noisy observations, which may negatively affect its faithfulness. Figure 6 shows the results obtained by our method and the comparison methods. The supervised method MPRNet achieves the highest PSNR among all the methods, but our method outperforms it in FID and LPIPS.
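For reference, the PSNR reported in Table 1 has a simple closed form for images normalized to \([0,1]\) (a standard definition, not code from the paper):

```python
import numpy as np

def psnr(x, x_hat, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, max_val]."""
    mse = np.mean((np.asarray(x, float) - np.asarray(x_hat, float))**2)
    return 10.0 * np.log10(max_val**2 / mse)
```

For example, a uniform error of 0.1 on a [0,1] image gives an MSE of 0.01 and hence a PSNR of 20 dB.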
It is observed that the images obtained by MPRNet exhibit a certain degree of blurriness when compared to the ground truth images, whereas the images obtained by GibbsDDRM look to be of superior quality in terms of visual perception. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{**FFHQ (\(256\times 256\))**} & \multicolumn{3}{c}{**AFHQ (\(256\times 256\))**} \\ \cline{2-7} **Method** & FID\(\downarrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & FID \(\downarrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) \\ \hline GibbsDDRM (ours) & 38.71 & **0.115** & 25.80 & 48.00 & **0.197** & 22.01 \\ \hline MPRNet (Zamir et al., 2021) & 62.92 & 0.211 & **27.23** & 50.43 & 0.278 & **27.02** \\ DeblurGANv2 (Kupyn et al., 2019) & 141.55 & 0.320 & 19.86 & 156.92 & 0.429 & 17.64 \\ Pan-DCP (Pan et al., 2017) & 239.69 & 0.653 & 14.20 & 185.40 & 0.632 & 14.48 \\ SelfDeblur (Ren et al., 2020) & 283.69 & 0.859 & 10.44 & 250.20 & 0.840 & 10.34 \\ \hline \hline BlindDPS (Chung et al., 2022)* & **29.49** & 0.281 & 22.24 & **23.89** & 0.338 & 20.92 \\ \hline DDRM (Kawar et al., 2022) with GT kernel & 33.97 & 0.062 & 30.64 & 24.60 & 0.078 & 29.37 \\ \hline \hline \end{tabular} \end{table} Table 1: Blind image deblurring results on FFHQ and AFHQ (\(256\times 256\)). The blurred images have additive Gaussian noise with \(\sigma_{\mathbf{y}}=0.02\). (*) The results for BlindDPS (Chung et al., 2022), as reported in the original paper, are also listed, though that method uses a pre-trained score function for blur kernels. The results of DDRM (Kawar et al., 2022) with the ground truth kernels (i.e., non-blind setting) are also listed. **Bold**: best. Underline: second best. Figure 5: Blurry images and restored images obtained with a restored blur kernel in blind image deblurring under different measurement noise conditions. The top row contains the ground truth images and blur kernels.
GibbsDDRM takes approximately 56 seconds of computation time per image using one RTX3090 with a batch size of 4. ### Vocal dereverberation Problem formulation.The objective of vocal dereverberation is to restore the original dry vocal from a noisy, reverberant (wet) vocal. Appendix B gives the details of the problem formulation and the specific implementation of the GibbsDDRM that we use for this task. Experimental settings.The proposed method is quantitatively evaluated on wet vocal signals. A pre-trained diffusion model is trained with dry vocal signals from an internal proprietary dataset of various genres and singers, with a total duration of 15 hours. A test dataset comprising 1000 wet vocal signals, with a total duration of around 1.4 hours, is prepared by adding artificial reverb to dry vocal signals from the NHSS dataset Sharma et al. (2020), which contains 100 English pop songs by different singers, with a total duration of 285.24 minutes. Both the training and testing data are monaural recordings sampled at 44.1 kHz. The artificial reverb is added with commercial software by using 10 presets with an RT60 shorter than 2 seconds. The wet vocal signals are prepared by creating \(100\times 10\) signals, dividing them into 5-second samples, and randomly selecting 1000 of the resulting signals. For the GibbsDDRM algorithm, the following parameter values are used: \(\eta=0.8\), \(\eta_{b}=0.8\), and \(\sigma_{y}=1.0\times 10^{-3}\). We set \(T=50\) for the number of sampling steps, and \(N=1\). The parameter \(M_{t}\) is set to zero for \(40\leq t\leq 50\), and to \(5\) for \(t<40\). The linear operator's parameter is initialized using results from the WPE algorithm Nakatani et al. (2010), which is an unsupervised method for dereverberation. The number of iterations and the step size for Langevin dynamics (Eq. (16)) are set to \(400\) and \(1.0\times 10^{-13}\), respectively.
We use a Laplace prior, and the hyperparameter \(\lambda\) is set to \(2.0\). Appendix C gives the details of the network architecture and the dataset. Comparison methods.We evaluate the proposed method against three baselines: Reverb Conversion (RC) Koo et al. (2021), Music Enhancement (ME) Kandpal et al. (2022), and Unsupervised Dereverberation (UD) Saito et al. (2022). RC is a state-of-the-art, end-to-end, DNN-based method that requires pairs of wet and dry vocal signals for dereverberation. It is trained with wet and dry vocal signals that are obtained with different commercial reverb plugins from those used for the test dataset. ME is a supervised method based on diffusion models that denoises and dereverberates music signals containing vocals. It is trained with pairs of 16-kHz reverberant noisy and clean music signals, and is evaluated at 16 kHz. UD is a method similar to ours, in that it uses DDRM; however, it differs in how it estimates the linear operator's parameter. Evaluation metrics.For quantitative comparison of the different methods, the metrics are the scale-invariant signal-to-distortion ratio (SI-SDR) Roux et al. (2019) improvement, the Frechet Audio Distance (FAD) Kilgour et al. (2018), and the speech-to-reverberation modulation energy ratio (SRMR) Santos et al. (2014). Because the FAD uses the pre-trained classification model VGGish Hershey et al. (2017), which is originally trained with \(16\) kHz audio samples, we downsample all the signals to \(16\) kHz to compute the FAD. Results.Table 2 lists the scores for each metric. GibbsDDRM outperforms the comparison methods on all metrics. In particular, the result for UD demonstrates that our proposed way of estimating the linear operator's parameter gives better performance than UD's way. Moreover, ME does not work at all, which may be because the distribution of its training dataset does not cover that of its test dataset.
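For reference, the SI-SDR used above has a standard closed form (the formulation of Le Roux et al., 2019; not the authors' code):

```python
import numpy as np

def si_sdr(reference, estimate):
    """Scale-invariant signal-to-distortion ratio in dB."""
    s = np.asarray(reference, float)
    s_hat = np.asarray(estimate, float)
    alpha = np.dot(s_hat, s) / np.dot(s, s)   # optimal scaling of the target
    e_target = alpha * s
    e_res = s_hat - e_target
    return 10.0 * np.log10(np.dot(e_target, e_target) / np.dot(e_res, e_res))
```

The projection onto the reference makes the metric invariant to a global rescaling of the estimate, which is why it is preferred over plain SDR for dereverberation.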
Indeed, the wet signals for ME's training are created using only simulated natural reverb with some background noise Kandpal et al. (2022). ## 5 Conclusion We have proposed GibbsDDRM, a method for solving blind linear inverse problems by sampling the data and the parameter of a linear operator from a posterior distribution by using a PCGS. The PCGS procedure ensures that the stationary distribution is unchanged from that of the original Gibbs sampler. GibbsDDRM performed well in experiments on blind image deblurring and vocal dereverberation, particularly in terms of preserving the original data, despite its use of a simple prior distribution for the parameter. Additionally, GibbsDDRM has problem-agnostic characteristics, which means that a single pre-trained diffusion model can be used for various tasks. One limitation of the proposed method is that it is not easily applicable to problems that involve linear operators for which the SVD is computationally infeasible. \begin{table} \begin{tabular}{c|c|c|c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{FAD \(\downarrow\)} & SI-SDR \(\uparrow\) & SRMR \(\uparrow\) \\ \hline Wet (unprocessed) & \(5.74\) & – & 7.11 \\ \hline Reverb Conversion Koo et al. (2021) & \(5.69\) & \(0.02\) & 7.23 \\ Music Enhancement Kandpal et al. (2022) & \(7.51\) & \(-23.9\) & 7.92 \\ Unsupervised Dereverberation Saito et al. (2022) & \(4.99\) & \(0.37\) & 7.94 \\ \hline **GibbsDDRM** & \(\mathbf{4.21}\) & \(\mathbf{0.59}\) & \(\mathbf{8.40}\) \\ \hline \end{tabular} \end{table} Table 2: Vocal dereverberation results. **Bold**: Best.
2305.19224
Scanning Gate Microscopy response for local tip potentials beyond perturbation theory
We propose an analytical formulation for the Scanning Gate Microscopy (SGM) response to local tips with arbitrary strength in two terminal nano-structures. The real space resolved conductance is expressed in terms of the unperturbed quantities underlying the scattering problem. Providing a non-dynamical approach for obtaining the SGM maps, the proposed expression enables for a significant reduction in the computational cost of SGM response calculations. This feature is particularly advantageous for deep learning-based approaches which have been recently proposed for accessing local properties and disorder landscapes from conductance measurements. This opens up new possibilities for the SGM technique and holds exciting prospects for quantum transport. Further, the formula's versatility extends beyond this specific application, offering a straightforward and computationally efficient method for obtaining the SGM response in a more general context.
Ousmane Ly
2023-05-30T17:17:19Z
http://arxiv.org/abs/2305.19224v2
# Scanning Gate Microscopy response for local tip potentials beyond perturbation theory ###### Abstract We propose an analytical formulation for the Scanning Gate Microscopy (SGM) response to local tips with arbitrary strength in two dimensional nanostructures. The real space resolved conductance is expressed in terms of the unperturbed quantities underlying the scattering problem. Providing a non-dynamical approach for obtaining the SGM maps, the proposed expression enables for a significant reduction in the computational cost of SGM response calculations. This feature is particularly advantageous for deep learning-based approaches which have been recently proposed for accessing local properties and disorder landscapes from conductance measurements. This opens up new possibilities for the SGM technique and holds exciting prospects for quantum transport. Further, the formula's versatility extends beyond this specific application, offering a straightforward and computationally efficient method for obtaining the SGM response in a more general context. Scanning gate microscopy is an experimental technique used to study space-resolved quantum transport features [1; 2; 3]. In this technique, an atomic force microscopy (AFM) tip is capacitively coupled to a two-dimensional electron gas located at a certain distance from the scanned surface, and the conductance of the system is measured as the tip is moved throughout the structure. Conductance maps obtained through SGM have been found to display interesting local properties, such as branching flow of electrons out of quantum point contacts, wavefunction-related features in quantum rings and mesoscopic cavities [4; 5; 6]. It was only more than a decade after the proposition of this experimental technique [1; 7], that an analytical formulation was proposed to describe the space resolved features underlying SGM. 
This was first worked out using a perturbation theory framework of the underlying scattering problem and applied in the context of transport calculations in disorder-free quantum point contacts [8; 9]. Further, the proposed perturbation theory has been utilized to establish a quantitative correspondence relation between SGM responses and partial local density of states (PLDOS) within the neighborhood of a quantum point contact in the presence of disorder [10]. This analytical derivation of the two lowest terms in tip strength of the SGM response has further triggered interesting efforts towards the establishment of more sophisticated experimental setups in the weakly invasive regime [11], where the responses are assumed to be governed by the PLDOS flowing out of quantum point contacts. Since most of the SGM experiments are performed in the non-perturbative regime, it is of foremost importance to formulate a more general theoretical framework to describe the response to an arbitrary tip strength, at least in special cases like the one of a local probe that we treat in this work. Re-summing a perturbative series to infinite order is a notoriously difficult problem that can only be achieved in specific cases. Among them is the recently obtained zero SGM conductance correction for the peculiar zero transverse energy mode of a metallic armchair graphene nanoribbon under a long range tip potential [12]. Another motivation for developing the analytical approach of SGM beyond perturbation theory is to avoid the costly implementation of numerical schemes where a full transmission calculation must be performed for each tip position. Such a situation is encountered in the training of machine-learning based algorithms, where a huge number of SGM maps is required, considering different parameters such as the structure's size, the tip potential strength and temperature.
This limitation has been recently pointed out in studies [13; 14] that use machine learning techniques to extract key local transport features, namely PLDOS and disorder background in two-dimensional systems. In the present letter, we undertake the challenging task of deriving an analytical formula for computing SGM responses. The proposed formula assumes only a delta-like SGM tip with arbitrary strength and without restrictions on the considered geometries, which can be disordered and/or of any shape. The delta-likeness is achieved using short-range tip potentials whose real-space extent is below the Fermi wavelength. This is the regime where the SGM-PLDOS correspondence is indeed expected [10], although under very moderate strengths of the probing tip. Interestingly, the full SGM response can be systematically deduced from the perturbation theory results. We find that the SGM conductance of the structure is simply related to the system's scattering matrix, its unperturbed scattering wave-functions and the real-space diagonal elements of the retarded Green's function. To derive the exact formula for the SGM response, we find it pedagogical to first recall the lowest order terms of the perturbation theory for the conductance corrections due to a local potential, as described in Refs. [8; 9]. Further, we demonstrate that the entire SGM response series can be deduced straightforwardly from these perturbative results through a mere renormalization of the tip potential matrix elements. To this end, we consider an arbitrary quantum scatterer, located at position \(x=0\) and attached to two semi-infinite leads. 
The asymptotic form of the unperturbed scattering wave-functions (in the absence of the tip potential) can be written in terms of the elements of the scattering matrix and the wave-functions (\(\varphi^{(\pm)}_{l\varepsilon a}\)) of the quasi one-dimensional free electron leads: \[\psi^{(0)}_{1\varepsilon a}(\mathbf{r}) =\left\{\begin{array}{ll}\varphi^{(-)}_{1\varepsilon a}(\mathbf{r})+\sum_{ b=1}^{N}r_{ba}\,\varphi^{(+)}_{1\varepsilon b}(\mathbf{r}),&x<0\\ \sum_{b=1}^{N}t_{ba}\,\varphi^{(+)}_{2\varepsilon b}(\mathbf{r}),&x>0\end{array}\right.\] \[\psi^{(0)}_{2\varepsilon a}(\mathbf{r}) =\left\{\begin{array}{ll}\sum_{b=1}^{N}t^{\prime}_{ba}\,\varphi^ {(+)}_{1\varepsilon b}(\mathbf{r}),&x<0\\ \varphi^{(-)}_{2\varepsilon a}(\mathbf{r})+\sum_{b=1}^{N}r^{\prime}_{ba}\,\varphi^{(+)}_{ 2\varepsilon b}(\mathbf{r}),&x>0\end{array}\right. \tag{1}\] Here, the \(\pm\) signs on the leads' wavefunctions denote, respectively, outgoing and impinging energy modes, and \(l=1,2\) stands for the lead number. The matrices \(r\), \(r^{\prime}\), \(t\), and \(t^{\prime}\) are elements of the scattering matrix \(S\) defined as: \[S=\left(\begin{array}{cc}r&t^{\prime}\\ t&r^{\prime}\end{array}\right). \tag{2}\] Next, we consider a potential \(V_{T}(\mathbf{r})\) that perturbs the initial system described above. The underlying scattering wave-functions in Eq. (1) are therefore modified. Up to linear order in the tip potential, the correction to the scattering wave-function in the presence of the tip is obtained using the Lippmann-Schwinger expansion (see supplemental materials [15]). Assuming a delta tip potential \(V_{\mathrm{T}}(\mathbf{r})=v_{\mathrm{T}}\delta(\mathbf{r}-\mathbf{r}_{ \mathrm{T}})\), the first order correction to the scattering wave function reads \[\psi_{l,\varepsilon,a}(\mathbf{r})=\psi^{(0)}_{l,\varepsilon,a}(\mathbf{r})+v _{\mathrm{T}}\mathcal{G}^{(0)}(\mathbf{r},\mathbf{r}_{\mathrm{T}}, \varepsilon)\psi^{(0)}_{l,\varepsilon,a}(\mathbf{r}_{\mathrm{T}}), \tag{3}\] with \(\mathcal{G}^{(0)}\) being the unperturbed retarded Green function.
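As a side illustration (not part of the derivation above), the block structure of Eq. (2) can be checked numerically. Current conservation makes \(S\) unitary, a standard property of coherent scattering matrices; a random unitary matrix stands in for \(S\) here:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3  # number of open channels per lead (hypothetical)

# Random unitary stand-in for S: QR of a complex Gaussian matrix,
# with a diagonal phase fix that keeps the result unitary.
M = rng.standard_normal((2*N, 2*N)) + 1j * rng.standard_normal((2*N, 2*N))
Q, R = np.linalg.qr(M)
S = Q * (np.diag(R) / np.abs(np.diag(R)))

# Block structure of Eq. (2)
r, tp = S[:N, :N], S[:N, N:]   # r and t'
t, rp = S[N:, :N], S[N:, N:]   # t and r'

# Unitarity in block form: r^dag r + t^dag t = 1 for modes incoming from lead 1
assert np.allclose(r.conj().T @ r + t.conj().T @ t, np.eye(N))

# Standard Landauer conductance, in units of the conductance quantum
g0 = np.trace(t.conj().T @ t).real
```

The unperturbed conductance `g0` is the quantity whose tip-position-dependent correction the perturbation theory below expands in powers of the tip strength.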
The two lowest order corrections to the conductance are obtained [15] as \[g^{(1)}=4\pi\mathrm{Im}\left\{\mathrm{Tr}\left[t^{\dagger}t\ \mathcal{V}^{11}+t^{ \dagger}r^{\prime}\ \mathcal{V}^{21}\right]\right\}, \tag{4}\] and \[g^{(2)}=4\pi^{2}\mathrm{Tr}\left\{\mathrm{Re}[\mathcal{V}^{11}(t^{\dagger}t \mathcal{V}^{11}+2t^{\dagger}r^{\prime}\mathcal{V}^{21}+r^{\prime\dagger}r^{ \prime}\mathcal{V}^{22})]\right\}. \tag{5}\] It can be observed that the first term in the square bracket of Eq. (4) is real. Therefore, the conductance correction can be further simplified, and only the second term would survive after taking the imaginary part. In the present context, we consider the most general expression as the formula will be applied to complex matrix elements as we will see in the following. In order to calculate the higher order terms of the scattering wave-function, we start again from the Lippmann-Schwinger expansion applied to a delta tip potential. The resulting corrected scattering wave-function reads \[\psi_{l\varepsilon a}(\mathbf{r})=\psi^{(0)}_{l\varepsilon a}(\mathbf{r})+v_{\mathrm{T}}\mathcal{G }^{(0)}(\mathbf{r},\mathbf{r}_{\mathrm{T}},\varepsilon)\psi_{l\varepsilon a}(\mathbf{r}_ {\mathrm{T}}). \tag{6}\] To obtain the above scattering wave-function only in terms of the unperturbed quantities, we first evaluate \(\psi_{l\varepsilon a}(\mathbf{r})\) at the tip position \(\mathbf{r}_{\mathrm{T}}\) \[\psi_{l\varepsilon a}(\mathbf{r}_{\mathrm{T}})=\beta_{\varepsilon}(\mathbf{r}_{\mathrm{ T}})\psi^{(0)}_{l\varepsilon a}(\mathbf{r}_{\mathrm{T}}), \tag{7}\] where \(\beta_{\varepsilon}(\mathbf{r}_{\mathrm{T}})\) is a tip-position-dependent complex function defined as \[\beta_{\varepsilon}(\mathbf{r}_{\mathrm{T}})=1/(1-v_{\mathrm{T}}\mathcal{G}^{ (0)}(\mathbf{r}_{\mathrm{T}},\mathbf{r}_{\mathrm{T}},\varepsilon)).
\tag{8}\] By plugging (7) into (6), we find that the scattering state at an arbitrary position \(\mathbf{r}\) is simply given by \[\psi_{l\varepsilon a}(\mathbf{r})=\psi^{(0)}_{l\varepsilon a}(\mathbf{r})+v_{\mathrm{T}}\beta_{ \varepsilon}(\mathbf{r}_{\mathrm{T}})\mathcal{G}^{(0)}(\mathbf{r},\mathbf{r}_ {\mathrm{T}},\varepsilon)\psi^{(0)}_{l\varepsilon a}(\mathbf{r}_{\mathrm{T}}). \tag{9}\] This expresses the scattering wave-function in terms of the unperturbed quantities, that is, in the absence of the perturbing probe. It can be noticed that Eq. (9) is equivalent to Eq. (3), up to the prefactor \(\beta_{\varepsilon}(\mathbf{r}_{\mathrm{T}})\) in the second term on the right-hand side of the former. This is the key aspect that allows for a simple deduction of the full conductance correction corresponding to (9) from the previous perturbation theory results. The generalized conductance corrections can be straightforwardly obtained from (4) and (5). The prefactor \(\beta_{\varepsilon}(\mathbf{r}_{\mathrm{T}})\) would simply alter the matrix elements \(\mathcal{V}^{ll^{\prime}}\), leading to a complex factor \(\beta_{\varepsilon}(\mathbf{r}_{\mathrm{T}})\) at the level of the first-order-like contribution, and \(|\beta_{\varepsilon}(\mathbf{r}_{\mathrm{T}})|^{2}\) at the level of the second-order-like correction. Therefore, we are left with the following expression \[g=4\pi\mathrm{Im}\{\beta_{\varepsilon}(\mathbf{r}_{\mathrm{T}})\mathrm{Tr}[t^ {\dagger}t\mathcal{V}^{11}+t^{\dagger}r^{\prime}\mathcal{V}^{21}]\}+4\pi^{2}| \beta_{\varepsilon}(\mathbf{r}_{\mathrm{T}})|^{2}\mathrm{Tr}\left\{\mathrm{Re}[ \mathcal{V}^{11}(t^{\dagger}t\mathcal{V}^{11}+2t^{\dagger}r^{\prime}\mathcal{V }^{21}+r^{\prime\dagger}r^{\prime}\mathcal{V}^{22})]\right\}. \tag{10}\] Equation (10) is the central result of this letter. It gives the analytical SGM response in the presence of a delta-like probe with arbitrary tip strength.
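Once the unperturbed quantities are known, Eq. (10) can be evaluated directly. The sketch below (not the paper's code) assumes, hypothetically and following the supplemental-material convention, local matrix elements \(\mathcal{V}^{ll^{\prime}}_{ab}=v_{\mathrm{T}}\,\psi^{(0)*}_{l\varepsilon a}(\mathbf{r}_{\mathrm{T}})\,\psi^{(0)}_{l^{\prime}\varepsilon b}(\mathbf{r}_{\mathrm{T}})\); random complex numbers stand in for the unperturbed amplitudes and Green's function:

```python
import numpy as np

def beta_tip(v_T, G0_TT):
    """Eq. (8): resummed tip factor beta = 1 / (1 - v_T G0(r_T, r_T))."""
    return 1.0 / (1.0 - v_T * G0_TT)

def sgm_response(v_T, G0_TT, psi1, psi2, t, rp):
    """Eq. (10) for a delta tip of strength v_T at a fixed position.

    psi1, psi2: unperturbed scattering amplitudes at the tip position for the
    N modes incoming from lead 1 and lead 2 (assumed matrix-element convention:
    V^{ll'}_{ab} = v_T * conj(psi_l[a]) * psi_l'[b])."""
    beta = beta_tip(v_T, G0_TT)
    V11 = v_T * np.outer(psi1.conj(), psi1)
    V21 = v_T * np.outer(psi2.conj(), psi1)
    V22 = v_T * np.outer(psi2.conj(), psi2)
    td, rpd = t.conj().T, rp.conj().T
    g1 = 4*np.pi * np.imag(beta * np.trace(td @ t @ V11 + td @ rp @ V21))
    g2 = 4*np.pi**2 * abs(beta)**2 * np.trace(
        np.real(V11 @ (td @ t @ V11 + 2*td @ rp @ V21 + rpd @ rp @ V22)))
    return g1 + g2
```

Two sanity checks follow from the text: the response vanishes as \(v_{\mathrm{T}}\to 0\), and `beta_tip` resums the geometric series in \(v_{\mathrm{T}}\mathcal{G}^{(0)}\) whenever \(|v_{\mathrm{T}}\mathcal{G}^{(0)}|<1\).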
The formula does not assume any particular shape of the scattering region, which may also include disorder. Although the formula is expressed in terms of the two lowest orders of the perturbation theory, the resulting conductance is neither linear nor quadratic in \(v_{\mathrm{T}}\). In fact, the complex coefficient \(\beta_{\varepsilon}(\mathbf{r}_{\mathrm{T}})\) contains higher-order terms, as it is expressed as a power series of \(v_{\mathrm{T}}\mathcal{G}^{(0)}\). In order to validate the proposed full SGM response Eq. (10), we performed numerical simulations on a disordered ring geometry defined on a tight-binding network with lattice parameter \(a\). The ring had an inner radius of \(25a\) and an outer radius of \(50a\). The underlying momentum-dependent Hamiltonian is given by \[H=\frac{1}{2m^{*}}(k_{x}^{2}+k_{y}^{2})+V_{d}(\mathbf{r})+V_{conf}(\mathbf{r}), \tag{11}\] where \(k_{x}\) and \(k_{y}\) are the momenta in the two dimensions of space. The terms \(V_{d}\) and \(V_{conf}\) stand respectively for the disorder and confining potentials. To compute the conductance of the system, the ring is attached to two free electron leads. The effective mass was taken as \(m^{*}=0.04m\), where \(m\) is the bare electron mass, as in Ref. [5]. The Hamiltonian Eq. (11) is discretized on the tight-binding lattice. Subsequently, the scattering problem is solved using the quantum transport package Kwant [16]. In Fig. 1, we plotted the real space resolved conductance at different tip potentials. The exact numerical simulations are displayed in the right column. In the central panels, the results corresponding to the implementation of the analytical formula (Eq. (10)) are shown. In the left column, the lowest order conductance correction of the perturbation theory [8] is computed. As expected, the perturbative results remain a good approximation of the response at low \(v_{\mathrm{T}}\).
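The discretization and the resummation behind Eqs. (7)-(9) can also be verified in a library-free toy setting (a hypothetical 1D chain with a small \(i\eta\) broadening standing in for the lead self-energies, rather than the Kwant setup used here):

```python
import numpy as np

# 1D tight-binding chain as a minimal stand-in for the discretized ring;
# hopping t_hop corresponds to 1/(2 m* a^2) in the units of Eq. (11).
L, t_hop = 40, 1.0
H = 2*t_hop*np.eye(L) - t_hop*(np.eye(L, k=1) + np.eye(L, k=-1))

E, eta = 1.3, 1e-2
G0 = np.linalg.inv((E + 1j*eta)*np.eye(L) - H)   # unperturbed retarded Green fn

# Delta tip of strength v_T on site j: exact Green's function of the
# perturbed chain versus the resummed (Dyson / Sherman-Morrison) form.
j, v_T = 17, 2.5
H_tip = H.copy()
H_tip[j, j] += v_T
G = np.linalg.inv((E + 1j*eta)*np.eye(L) - H_tip)

beta = 1.0 / (1.0 - v_T * G0[j, j])              # Eq. (8) at the tip site
assert np.allclose(G, G0 + v_T * beta * np.outer(G0[:, j], G0[j, :]))
```

The assertion holds to machine precision: for a rank-one (delta) perturbation, Dyson's equation resums exactly into the \(\beta\)-renormalized form, which is the structural content of Eq. (9).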
However, the analytical formula reproduces nicely the full numerical calculations at all considered tip strengths. Furthermore, we focus on the high-tip-voltage limit, as this is the regime where the perturbation theory breaks down. In Fig. 2, we plot the conductance of the ring versus the strength \(v_{\mathrm{T}}\) of the local tip. Each colored line corresponds to an arbitrarily chosen tip position in the disordered ring. The dashed lines correspond to the analytical prediction (Eq. (10)). The full SGM response obtained by means of the exact numerical computation is represented by the solid lines. It can be noticed that the SGM response at the considered positions behaves differently. Semi-classically speaking, some tip locations might favor the reflection of impinging trajectories, while others would only slightly deviate them. The former case would therefore favor a reduction of the transmission with respect to its unperturbed value, leading to a decrease in the computed conductance. In the latter scenario, the presence of the tip might trigger additional trajectories that are absent without it and therefore lead to an enhancement of the conductance with respect to its value in the unperturbed structure. One observes that the analytical predictions coincide perfectly with the fully numerically calculated SGM response. This provides a validation of the proposed formula. Yet, we shall emphasize that the perturbation theory fails drastically to reproduce the exact SGM conductance, even though a delta tip was assumed. The proposed analytical formula for the real-space-resolved conductance in the presence of a local tip is expressed in terms of the elements of the scattering matrix, the scattering states, and the unperturbed Green's function. An important simplification arises from the fact that these quantities only need to be computed once, even if the tip strength is modified. 
However, calculating the real-space Green's function throughout the studied geometry can be memory-consuming for very large systems if the brute-force diagonalization method is employed, and caution should be taken in this case. In our simulations, we used the Kwant software [16] to obtain the real-space retarded Green's function, but further theoretical efforts have been devoted to making the computation of \(\mathcal{G}_{0}\) more efficient for large systems [17]. Moreover, evaluating the Green's function by directly computing the underlying energy integrals is also feasible using the non-equilibrium transport package Tkwant [18], which also incorporates the Green's function formalism. The present analytical formulation can be particularly useful in machine-learning-based approaches designed to obtain local properties within an inverse-problem framework. These approaches require a large number of SGM maps to train the underlying neural networks, a task that would be very demanding, if not impossible, if the standard numerical procedure for computing SGM maps were followed. The formula we propose enables the optimization of these calculations and further facilitates the incorporation of pertinent parameters of the problem, such as the system's size and the tip strength in addition to temperature, with a minimal computational expense. We believe that our proposal will enable new possibilities in harnessing SGM data to provide unprecedented access to quantum transport properties in mesoscopic systems. Furthermore, our analytical formulation can be applied in the study of nonlinear quantum transport [19] and SGM-induced thermoelectric effects [20; 21; 22]. Although experimentally extended tips are often used in SGM, the delta-tip conductance formula effectively captures all the relevant features. 
It is important to note that in the presence of an extended tip, the electrostatic profile would blur the corresponding local responses and trigger a stronger response, which scales with the diameter of the probe [15]. Finally, we shall highlight that experimental efforts have been devoted to fabricating shielded coaxial tips [23; 24]. However, the derivation of a more general formula remains of interest for a more quantitative description of these experiments. The present formalism will certainly be insightful for approaching this task, which can be addressed at least in specific albeit relevant circumstances. In summary, we have demonstrated that the full SGM response series can be obtained in an exact fashion, provided that a very narrow scanning tip is assumed. By numerically implementing the formula, one can obtain SGM maps at a minimal computational cost. The analytical formula is demonstrated to be in perfect agreement with the exact numerical evaluation of the SGM response. The possibility of applying the obtained SGM expression in the context of machine-learning-based approaches has been discussed. We thank D. Weinmann and R. Jalabert for their careful reading of the manuscript, and for helpful discussions and suggestions. We also thank T. Ihn, X. Waintal and A. Abbott for useful discussions. We are grateful for the hospitality of IPCMS and the University of Strasbourg, where this work was initiated. We acknowledge computing resources on the supercomputer SHAHEEN granted by the KAUST Supercomputing Lab.
2305.06304
On the derivation of new non-classical hydrodynamic equations for Hamiltonian particle systems
We consider a Hamiltonian system of particles, interacting through a smooth pair potential. We look at the system on a space scale of order {\epsilon}^{-1}, times of order {\epsilon}^{-2}, and mean velocities of order {\epsilon}, with {\epsilon} a scale parameter, under initial conditions where the system is in a local Gibbs state with parameters corresponding to density and temperature with gradients of order 1. Assuming that the phase space density of the particles is given by a suitable series in {\epsilon}, the behavior of the system under this rescaling is described, to the lowest order in {\epsilon}, by new non-classical hydrodynamic equations that cannot be derived from the compressible Navier-Stokes equations in the small Mach number limit. The analogous equations in kinetic theory are called ghost effect equations.
Raffaele Esposito, Rossana Marra
2023-05-10T16:49:14Z
http://arxiv.org/abs/2305.06304v1
# On the derivation of new non-classical hydrodynamic equations for Hamiltonian particle systems. ###### Abstract. We consider a Hamiltonian system of particles, interacting through a smooth pair potential. We look at the system on a space scale of order \(\varepsilon^{-1}\), times of order \(\varepsilon^{-2}\), and mean velocities of order \(\varepsilon\), with \(\varepsilon\) a scale parameter, under initial conditions where the system is in a local Gibbs state with parameters corresponding to density and temperature with gradients of order \(1\). Assuming that the phase space density of the particles is given by a suitable series in \(\varepsilon\), the behavior of the system under this rescaling is described, to the lowest order in \(\varepsilon\), by new non-classical hydrodynamic equations that cannot be derived from the compressible Navier-Stokes equations in the small Mach number limit. The analogous equations in kinetic theory are called ghost effect equations. ## 1. Introduction The problem of deriving the hydrodynamical equations from the Hamiltonian equations of motion of atoms, in the limit when a scale parameter \(\varepsilon\) is small, is one of the main open problems of non-equilibrium Statistical Mechanics. The compressible Navier-Stokes system (CNSE) is a phenomenological description of dissipative hydrodynamics, and the incompressible Navier-Stokes-Fourier system (INSF) can be derived from the compressible one in the low Mach number limit. Unfortunately, the CNSE has no space-time scaling invariance and hence cannot be obtained from a microscopic description, while such an obstruction is not present for the INSF system; assuming the temperature constant at the \(\varepsilon^{0}\) order, a formal derivation from particles was given in [13], while a rigorous proof at the level of the Boltzmann equation was obtained, among others, in [7], [9]. 
The situation is very different when the assumption of constant temperature at \(\varepsilon^{0}\) order is removed. In this case, a set of new physically relevant hydrodynamical equations has been derived formally starting from the Boltzmann equation [27][19][20][7][3]. They are characterized by a correction to the Navier-Stokes stress tensor that depends on derivatives of the temperature. This effect was already known to Maxwell [23]. The relevance of this system is that it is non-classical, in the sense that it cannot be derived from the CNSE, and hence it indicates a failure of the CNSE in describing the real world. Since these equations are derived from the Boltzmann equation, the state equation is that of a perfect gas. Sone has given these equations the name of ghost effect system. The name is suggested by the fact that a vanishingly small velocity field produces finite-size modifications of the usual heat equation. There are situations (particular geometries, stationary cases, etc.) in which the classical heat-conduction equation fails to correctly describe the temperature field of the gas. These modifications are confirmed by many numerical experiments. There has been extensive theoretical, numerical and experimental work on this, and for the details we refer to [27] and references therein. It is natural to ask whether such equations can also be derived from a Hamiltonian particle system. In this paper we answer this question affirmatively, but only at a formal level. A system of many interacting particles, moving according to the Newton equations of motion, can be described on a space scale much larger than the typical microscopic scale (say, the range of the interaction) in terms of density, velocity and temperature fields satisfying hydrodynamic equations, like the Euler or Navier-Stokes equations. The scale separation and the local conservation laws are responsible for this reduced description. 
In fact, on the macroscopic scale the quantities which are locally conserved (slow modes) play a major role in the motion of the fluid. The derivation of the Euler equations is based on the assumption of local equilibrium. On times of order \(\varepsilon^{-1}\), the system is expected to be described approximately by a local Gibbs measure, with parameters varying on regions of order \(\varepsilon^{-1}\), \(\varepsilon\) being a scale parameter. The local equilibrium assumption implies that the parameters of the local Gibbs measures satisfy the Euler equations [24], [8], [6]. The microscopic structure (the potential) appears only in the state equation, which links pressure and internal energy to the other macroscopic parameters. The microscopic locally conserved quantities converge, as \(\varepsilon\to 0\), by a law of large numbers, to macroscopic fields. To make this correct, the many-particle Hamiltonian system must have good dynamical mixing properties to approach and stay in a state close to the local equilibrium. At the moment it is not understood how to establish such properties. Therefore the only rigorous results are obtained by adding some noise to the Hamiltonian evolution [25] (see [28] for a review of the rigorous results for stochastic systems). The derivation of the Navier-Stokes equations presents many more difficulties. These equations, which describe the behavior of a fluid in the presence of dissipative effects, do not have an immediate interpretation in terms of scale separation. This is not surprising, because the NS equations do not have a natural space-time scale invariance like the Euler equations. In fact, to see the effect of the viscosity and the thermal conduction one has to look at times such that neighboring regions in local equilibrium exchange an appreciable amount of momentum and energy. Simple considerations show that the right time scale is \(\varepsilon^{-2}\). 
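The \(\varepsilon^{-2}\) time scale is the usual diffusive scaling: since a diffusing quantity has a mean square displacement growing linearly in time, covering a distance of order \(\varepsilon^{-1}\) requires a time of order \(\varepsilon^{-2}\). A minimal, purely illustrative check with a symmetric random walk (unrelated to the actual particle dynamics considered in this paper):

```python
import random

def mean_square_displacement(steps, walkers=4000, seed=1):
    """Mean square displacement of symmetric +-1 random walks after
    the given number of steps, averaged over many walkers."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        x = sum(rng.choice((-1, 1)) for _ in range(steps))
        total += x * x
    return total / walkers
```

For a symmetric walk the exact variance after \(n\) steps is \(n\), so quadrupling the number of steps quadruples the mean square displacement: doubling the distance to be covered requires four times as much time.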
On the other hand, we cannot hope to find the compressible Navier-Stokes-Fourier behavior from the particle system under the parabolic rescaling \(x\to\varepsilon^{-1}x\) and \(t\to\varepsilon^{-2}t\), since the NS equations are not invariant under this scaling, due to the presence of the transport terms. A way out is to consider the incompressible limit simultaneously, because the incompressible Navier-Stokes-Fourier equations (INSF) have the required scaling invariance. Along this path, in [13] we gave a formal derivation of the INSF from a Hamiltonian particle system under the parabolic rescaling, in the low Mach number regime. In the paper [13] the main ingredient is the assumption that the non-equilibrium density solution of the rescaled Liouville equation can be expressed as a truncated series in the parameter \(\varepsilon\). We followed a procedure inspired by the Hilbert-Chapman-Enskog expansion [5], used to construct the solution of the rescaled Boltzmann equation. From the physical point of view, we think of the system as being in local equilibrium with parameters which are themselves given by a series in \(\varepsilon\). However, there is a non-hydrodynamic correction to the local equilibrium which depends on the non-conserved quantities in the system (fast modes), and we assume that this correction does not affect the first order in the expansion; that is, at first order the system is still described by a local Gibbs measure with parameters which differ from constants by terms of order \(\varepsilon\). The parameters conjugate to density and temperature are constants plus terms of order \(\varepsilon\), and the one conjugate to the velocity field is of order \(\varepsilon\). The latter is closely related to the incompressibility assumption and would be false in the case of finite Mach number. This assumption is the translation of the Hilbert expansion for the Boltzmann equation to the particle system case. 
On the other hand, the non-hydrodynamical corrections at the second order are important on the scale \(\varepsilon^{-2}\) and give rise to the N.S. terms. It is worth mentioning here that very strong and rather uncontrollable assumptions are necessary even to give sense to the formal calculations below: * the space of the invariant observables for the microscopic dynamics reduces to the locally conserved quantities, mass, momentum and energy, and functions of them; * some equilibrium time correlation functions decay sufficiently fast. Such assumptions are far from being sufficient for a mathematical proof. Under the same scaling, in the context of the Boltzmann equation, for initial conditions such that the density and temperature have gradients of order \(1\), the formal limiting equations are different: some new terms depending on gradients of the temperature appear in the momentum equation, and the velocity field is no longer divergence-free, see e.g. [27][19][7]. The rigorous derivation for the stationary Boltzmann equation has been obtained recently in [10]. Here, we try to formally derive the analogous equations as limiting equations from a system of interacting particles. Again, the main ingredient is the assumption that the non-equilibrium density can be expressed as a truncated series in the parameter \(\varepsilon\). The difference, to be consistent with the initial conditions, is that the first term of the expansion is the Gibbs measure with parameters conjugate to density and temperature depending on time and position. This is a particular _local_ Gibbs measure and we denote it by \(G_{0}\). We stress that it is no longer an equilibrium state. 
If the initial condition is the local Gibbs measure \(G_{0}\), plus corrections of order \(\varepsilon\), the empirical fields evolve on the macroscopic space-time scales close to the solution of the new non-classical equations, while if the initial condition is the global Gibbs measure plus corrections of order \(\varepsilon\), the empirical fields evolve on the macroscopic space-time scales close to the solution of the INSF, since in this case the perturbation of order \(\varepsilon\) is too small to generate the new terms in the limiting equations. In Section 2 we introduce the empirical fields and the microscopic evolution equations for them in terms of the currents. The currents cannot be expressed back in terms of the hydrodynamical fields, and one has to solve the closure problem in a suitable way to get the hydrodynamical equations. We consider the averages with respect to \(F^{\varepsilon}\), solution of the Liouville equation, and the lowest order in \(\varepsilon\) gives the following hydrodynamic equations, derived in Section 3, for the velocity field \(u\), the density \(\rho\) and the internal energy \(e\) \[\left\{\begin{array}{rcl}\nabla P&=&0,\\ \partial_{t}\rho+\nabla_{x}\cdot(\rho u)&=&0,\\ \rho[\partial_{t}u+u\cdot\nabla_{x}u]+\nabla_{x}\mathfrak{p}&=&\nabla_{x}\cdot \left(\tau^{(1)}-\tau^{(2)}\right),\\ \rho[\partial_{t}e+u\cdot\nabla e]+P\big{(}\nabla_{x}\cdot u\big{)}&=&\nabla_{ x}\cdot\left(\kappa\frac{\nabla_{x}T}{2T^{2}}\right),\end{array}\right. 
\tag{1.1}\] where, for \(\alpha,\beta=1,\ldots,d\), with \(d\) the space dimension, \[\tau^{(1)}_{\alpha\beta}:=\eta\left(\partial_{\alpha}u^{\beta}+\partial_{ \beta}u^{\alpha}-\frac{2}{d}\delta_{\alpha\beta}\sum_{\gamma}\partial_{\gamma}u^{\gamma} \right)+\zeta\delta_{\alpha\beta}\sum_{\gamma}\partial_{\gamma}u^{\gamma},\] \[\tau^{(2)}_{\alpha\beta}:=K_{1}\Big{(}\partial_{\alpha}T\partial_{\beta}T-\frac{\delta_{\alpha\beta}}{d}\sum_{\gamma}(\partial_{\gamma}T)^{2}\Big{)}+\omega_{1}\delta_{\alpha\beta}\sum_{\gamma}(\partial_{\gamma}T)^{2}+K_{2}\Big{(}\partial_{\alpha\beta}^{2}T-\frac{\delta_{\alpha\beta}}{d}\sum_{\gamma}\partial_{\gamma\gamma}^{2}T\Big{)}+\omega_{2}\delta_{\alpha\beta}\sum_{\gamma}\partial_{\gamma\gamma}^{2}T.\] Here, \(P\) and \(e\) are the thermodynamical pressure and internal energy, which are functions of \(\rho,T\) as determined by the Gibbs measure. These equations are a new set of hydrodynamic equations, which cannot be derived from the compressible Navier-Stokes equations. The divergence of the velocity field is not zero as in the INSF, even though we send the Mach number to zero. There are new "thermal stress" terms in the equation for the velocity field. We observe that, even though the thermal stress tensor \(\tau^{(2)}\) produces third-order derivatives of \(T\) in the momentum equation, these terms (the last two) appear in the equation as a gradient and hence can be absorbed in the unknown pressure \(\mathfrak{p}\)[22][27]. We also show that the entropy associated with the solutions of these equations is, as it should be, increasing in time. Our derivation gives the viscosity coefficient \(\eta\), the bulk viscosity \(\zeta\) and the conductivity \(\kappa\) in terms of local equilibrium time correlation functions. The fluctuation-dissipation theory relates the transport coefficients to time-integrated correlation functions (see [17], [21]). The expression we find agrees with the Green-Kubo formulas for the shear and bulk viscosities as well as for the conductivity. 
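The statement that the last two terms of the thermal stress enter the momentum equation as a gradient can be checked directly by computing the divergence of the \(K_{2}\) and \(\omega_{2}\) parts of \(\tau^{(2)}\); the short computation below is our own verification, writing the trace parts with explicit \(\delta_{\alpha\beta}\) and taking \(K_{2},\omega_{2}\) constant:

```latex
% alpha-component of the divergence of the K_2 and omega_2 parts of tau^{(2)}:
\[
\sum_{\beta}\partial_{\beta}\Big[K_{2}\Big(\partial^{2}_{\alpha\beta}T
  -\frac{\delta_{\alpha\beta}}{d}\,\Delta T\Big)
  +\omega_{2}\,\delta_{\alpha\beta}\,\Delta T\Big]
 =\partial_{\alpha}\Big[\Big(K_{2}\big(1-\tfrac{1}{d}\big)+\omega_{2}\Big)\Delta T\Big],
\]
% a pure gradient, which can indeed be absorbed into the unknown pressure
% term \nabla_x \mathfrak{p}.
```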
Moreover, in these new equations, in particular in the momentum equation, there are additional transport coefficients, and we find an expression for them in terms of the potential and of a double time integral in the Appendix. These expressions are reminiscent of those in the second-order dissipative hydrodynamics introduced in [30] ([18]). We remark that linear response theory cannot give these new transport coefficients. In the last section we compare these equations with the ones derived from the Boltzmann equation. ## 2. Conservation laws We consider a system of many, \(N\), identical particles of unit mass in a torus \(\Lambda_{\varepsilon}\) of size \(\varepsilon^{-1}\) in \(\mathbb{R}^{d}\), interacting via a central pair potential \(V\) of finite range. The Newton equations are \[\frac{d\xi_{i}}{d\tau}(\tau) =v_{i}(\tau),\] \[\frac{dv_{i}}{d\tau}(\tau) =-\sum_{i\neq j}\nabla V(|\xi_{i}-\xi_{j}|),\] where \(\xi_{i},v_{i},\tau\) denote the microscopic coordinates, velocities and time, and \(i=1,\ldots,N\). After rescaling space by \(\varepsilon^{-1}\) and time by \(\varepsilon^{-2}\), they become \[\frac{dx_{i}}{dt}(t)=\varepsilon^{-1}v_{i}(t), \tag{2.1}\] \[\frac{dv_{i}}{dt}(t)=-\varepsilon^{-2}\sum_{i\neq j}\nabla V( \varepsilon^{-1}|x_{i}-x_{j}|),\] with \(x_{i}=\varepsilon\xi_{i}\) the macroscopic coordinates and \(t=\varepsilon^{2}\tau\) the macroscopic time. The number of particles \(N\) is assumed to be of order \(\varepsilon^{-d}\) to keep the density finite. 
The rescaled Newton equations, with initial data randomly distributed, are equivalent to the Liouville equation for the evolution of a distribution \(\mu_{0}^{\varepsilon}(x_{1},\cdots,x_{N},v_{1},\cdots,v_{N}):=\mu_{0}( \varepsilon^{-1}x_{1},\cdots,\varepsilon^{-1}x_{N},v_{1},\cdots,v_{N})\) at time \(0\) on the \(N\)-particle phase space \[\frac{\partial}{\partial t}\mu_{t}^{\varepsilon}=\varepsilon^{-2}\mathscr{L} ^{*}\mu_{t}^{\varepsilon}, \tag{2.2}\] where \(\mathscr{L}^{*}\) is the Liouville operator \[\mathscr{L}^{*}\phi(x_{1},\cdots,x_{N},v_{1},\cdots,v_{N})=-\sum_{i=1}^{N} \sum_{k=1}^{d}\big{\{}\varepsilon v_{i}^{k}\frac{\partial\phi}{\partial x_{i}^ {k}}-\sum_{i\neq j}\partial_{k}V\Big{(}\varepsilon^{-1}|x_{i}-x_{j}|\Big{)} \frac{\partial\phi}{\partial v_{i}^{k}}\big{\}}, \tag{2.3}\] with \(\mathscr{L}^{*}=-\mathscr{L}\) the adjoint of \(\mathscr{L}\) with respect to the scalar product induced by the a priori measure \(d\mathbb{Z}=\frac{1}{N!}d^{d}x_{1}d^{d}v_{1}\,\ldots d^{d}x_{N}d^{d}v_{N}\). To be more explicit, we denote the average with respect to the a priori measure \(d\mathbb{Z}\) by \(\left\langle\phi\right\rangle\) and define the scalar product \(\left\langle\phi,\psi\right\rangle=\left\langle\phi\psi\right\rangle\) so that \[\left\langle\phi,\mathscr{L}^{*}\psi\right\rangle=-\langle\mathscr{L}\phi, \psi\rangle.\] The total number of particles, the \(d\) components of the total momentum and the total energy are the conserved quantities. We define the corresponding empirical fields: _empirical density_ \[z^{0}(x)=\varepsilon^{d}\sum_{i=1}^{N}\delta(x_{i}-x), \tag{2.4}\] _empirical velocity field density_ \[z^{\alpha}(x)=\varepsilon^{d}\sum_{i=1}^{N}v_{i}^{\alpha}\delta(x_{i}-x),\ \ \ \alpha=1,\ldots,d, \tag{2.5}\] _empirical energy density_ \[z^{d+1}(x)=\varepsilon^{d}\sum_{i=1}^{N}\frac{1}{2}\big{[}v_{i}^{2}+\sum_{j\neq i =1}^{N}V(\varepsilon^{-1}|x_{i}-x_{j}|)\big{]}\delta(x_{i}-x). 
\tag{2.6}\] Their meaning is as follows: the average of the integral of \(z^{\alpha}\) over a small region is equal to the average number of particles, momentum or energy associated with the region. We will use the short notation \[z^{\mu}(x)=\varepsilon^{d}\sum_{i=1}^{N}\delta(x_{i}-x)z_{i}^{\mu}, \tag{2.7}\] with \[z_{i}^{0}=1;\ \ z_{i}^{\alpha}=v_{i}^{\alpha},\alpha=1,\ldots,d;\ \ z_{i}^{d+1}= \frac{1}{2}[v_{i}^{2}+\sum_{i\neq j=1}^{N}V(\varepsilon^{-1}|x_{i}-x_{j}|)].\] Note that the correct way of writing these quantities would be to add an \(\varepsilon\) index: \(z_{\varepsilon}^{\mu}(x)=z^{\mu}(\xi)\). We will omit this index in the following, it being clear from the context whether we are referring to microscopic or macroscopic variables. The empirical fields satisfy the following local conservation laws, which are obtained by integrating (2.7) against a smooth \(\mathbb{R}^{d}\) test function \(f(x)\), \(t\)-differentiating and using the Newton equations (2.1): \[\frac{d}{dt}\varepsilon^{d}\sum_{i=1}^{N}f(x_{i})=\varepsilon^{-1}\varepsilon ^{d}\sum_{i=1}^{N}\sum_{\alpha=1}^{d}\partial_{\alpha}f(x_{i})v_{i}^{\alpha}, \tag{2.8}\] \[\frac{d}{dt}\varepsilon^{d}\sum_{i=1}^{N}f(x_{i})v_{i}^{\beta}=\varepsilon^{- 1}\varepsilon^{d}\sum_{i=1}^{N}\sum_{\alpha=1}^{d}\Big{\{}\partial_{\alpha} f(x_{i})v_{i}^{\alpha}v_{i}^{\beta}-\varepsilon^{-1}\sum_{j\neq i=1}^{N} \partial_{\beta}V(\varepsilon^{-1}|x_{i}-x_{j}|)f(x_{i})\Big{\}}, \tag{2.9}\] \[\frac{d}{dt}\varepsilon^{d}\sum_{i=1}^{N}f(x_{i})z_{i}^{d+1}=\varepsilon^{-1 }\varepsilon^{d}\sum_{i=1}^{N}\sum_{\alpha=1}^{d}\Big{\{}\partial_{\alpha}f(x _{i})v_{i}^{\alpha}z_{i}^{d+1}-\frac{1}{2}\varepsilon^{-1}\sum_{i\neq j=1}^{N} \partial_{\alpha}V(\varepsilon^{-1}|x_{i}-x_{j}|)v_{i}^{\alpha}f(x_{i})\Big{\}}. \tag{2.10}\] Here \(\partial_{\beta}V(\xi)=\partial V(\xi)/\partial\xi_{\beta}\). Because of the symmetry properties of the potential we can write, as usual, the second term on the r.h.s. 
of (2.9) as \[-\frac{1}{2}\varepsilon^{d-2}\sum_{i\neq j=1}^{N}\partial_{\beta}V(\varepsilon ^{-1}|x_{i}-x_{j}|)[f(x_{i})-f(x_{j})]. \tag{2.11}\] Since \(f\) is slowly varying on the microscopic scale, we can write \[f(\varepsilon\xi_{i})-f(\varepsilon\xi_{j})=\sum_{\gamma=1}^{d}\partial_{\gamma}f (x_{i})\varepsilon[\xi_{i}^{\gamma}-\xi_{j}^{\gamma}]+\varepsilon^{2}D_{0}+ \varepsilon^{3}D+O(\varepsilon^{4}), \tag{2.12}\] where \[D_{0}=\frac{1}{2}\sum_{\gamma,\nu=1}^{d}\partial_{\gamma\nu}^{2}f(x_{i})[\xi_{ i}^{\gamma}-\xi_{j}^{\gamma}][\xi_{i}^{\nu}-\xi_{j}^{\nu}], \tag{2.13}\] \[D=\frac{1}{6}\sum_{\gamma,\nu,\alpha=1}^{d}\partial_{\gamma\alpha\nu}^{3}f(x_{ i})[\xi_{i}^{\gamma}-\xi_{j}^{\gamma}][\xi_{i}^{\nu}-\xi_{j}^{\nu}][\xi_{i}^{ \alpha}-\xi_{j}^{\alpha}]. \tag{2.14}\] The last term of (2.9) becomes: \[\begin{array}{l}\frac{1}{2}\varepsilon^{-1}\varepsilon^{d}\sum_{i,j=1}^{N} \sum_{\gamma=1}^{d}\partial_{\gamma}f(x_{i})\Psi^{\beta\gamma}(\varepsilon^{- 1}|x_{i}-x_{j}|)\\ +\frac{1}{4}\varepsilon^{d}\sum_{i,j=1}^{N}\sum_{\gamma,\nu=1}^{d} \partial_{\gamma\nu}^{2}f(x_{i})\Phi_{0}^{\beta\nu\gamma}(\varepsilon^{-1}|x_ {i}-x_{j}|)\\ +\frac{1}{12}\varepsilon\varepsilon^{d}\sum_{i,j=1}^{N}\sum_{\gamma,\nu, \alpha=1}^{d}\partial_{\gamma\nu\alpha}^{3}f(x_{i})\Phi^{\beta\alpha\gamma\nu }(\varepsilon^{-1}|x_{i}-x_{j}|)+O(\varepsilon^{2}).\end{array} \tag{2.15}\] with \[\Psi^{\beta\gamma}(\xi)=-\partial_{\beta}V(\xi)\xi^{\gamma},\quad\Phi_{0}^{ \beta\nu\gamma}(\xi)=-\partial_{\beta}V(\xi)\xi^{\gamma}\xi^{\nu},\quad\Phi^{\beta \alpha\gamma\nu}(\xi)=-\partial_{\beta}V(\xi)\xi^{\gamma}\xi^{\alpha}\xi^{\nu}. 
\tag{2.16}\] We also set \[\begin{array}{l}\bar{\Phi}_{0}^{\beta\gamma\nu}(x)=\frac{1}{2} \varepsilon^{d}\sum_{ij=1}^{N}\delta(x_{i}-x)\Phi_{0}^{\beta\gamma\nu}( \varepsilon^{-1}|x_{i}-x_{j}|),\\ \bar{\Phi}^{\alpha\beta\gamma\nu}(x)=\frac{1}{2}\varepsilon^{d} \sum_{ij=1}^{N}\delta(x_{i}-x)\Phi^{\alpha\beta\gamma\nu}(\varepsilon^{-1}|x_ {i}-x_{j}|).\end{array} \tag{2.17}\] An analogous computation, which we omit for the sake of brevity, can be done for the energy equation. The general form of the rescaled local conservation laws is (up to higher orders in \(\varepsilon\)) \[\frac{\partial}{\partial t}\int dxf(x)z^{\beta}(x)=\varepsilon^{-1}\int dx \sum_{k=1}^{d}\partial_{k}f(x)w^{\beta k}(x), \tag{2.18}\] where \(w^{\beta k}\), \(\beta=0,\ldots,d+1;\ k=1,\ldots,d\) are the currents corresponding to the fields \(z^{\beta}\) and are explicitly given by \[w^{0k}(x)=\varepsilon^{d}\sum_{i=1}^{N}\delta(x_{i}-x)v_{i}^{k}, \tag{2.19}\] \[w^{\beta k}(x) =\varepsilon^{d}\sum_{i=1}^{N}\delta(x_{i}-x)\big{\{}v_{i}^{\beta}v_ {i}^{k}+\frac{1}{2}\sum_{j=1}^{N}\Psi^{\beta k}(\varepsilon^{-1}|x_{i}-x_{j}|) \big{\}}+\frac{1}{4}\varepsilon\sum_{\gamma=1}^{d}\partial_{\gamma}\bar{\Phi}_{0 }^{k\beta\gamma}(x)+\frac{1}{12}\varepsilon^{2}\sum_{\gamma,\nu=1}^{d} \partial_{\gamma\nu}^{2}\bar{\Phi}^{k\beta\gamma\nu}(x)+O(\varepsilon^{3}),\quad \beta=1,\ldots,d, \tag{2.20}\] \[w^{d+1\,k}(x) =\varepsilon^{d}\sum_{i=1}^{N}\delta(x_{i}-x)\big{\{}v_{i}^{k}z_{i}^{d+1}+\frac{ 1}{2}\sum_{j=1}^{N}\sum_{\gamma=1}^{d}\Psi^{\gamma k}(\varepsilon^{-1}|x_{i}-x _{j}|)\frac{1}{2}[v_{i}^{\gamma}+v_{j}^{\gamma}]\big{\}}+O(\varepsilon).\] We introduce the notation \(w_{i}^{\beta k}\) such that \[w^{\beta k}(x)=\varepsilon^{d}\sum_{i=1}^{N}\delta(x_{i}-x)w_{i}^{\beta k}.\] Hence \(w_{i}^{0k}=v_{i}^{k}\), and so on. We also denote the r.h.s. 
of the first line of (2.20) by \[w_{*}^{\beta k}(x):=\varepsilon^{d}\sum_{i=1}^{N}\delta(x_{i}-x)\big{\{}v_{i}^ {\beta}v_{i}^{k}+\frac{1}{2}\sum_{j=1}^{N}\Psi^{\beta k}(\varepsilon^{-1}|x_{i }-x_{j}|)\big{\}}. \tag{2.21}\] We stress that we keep terms of order \(\varepsilon\) and \(\varepsilon^{2}\) in (2.20), while we do not do the same in (2.21), and that such new terms were not taken into account in the derivation of the incompressible Navier-Stokes equations [13] because they do not contribute at the lowest order. In this new setting, instead, we will see that the \(D\) term gives a contribution at the lowest order in the momentum equation. A key point in the argument is the remark that the empirical fields \(z^{\alpha}(\xi)\) are _approximate integrals of the motion_ in the following sense: for any smooth function \(f(x)\) on the torus \(\mathbb{T}_{1}^{d}\), from the previous calculation it follows that \[\mathscr{L}\big{[}\varepsilon^{d}\sum_{i=1}^{N}f(\varepsilon\xi_{i})z_{i}^{ \alpha}\big{]}=O(\varepsilon). \tag{2.22}\] On the other hand, the evolution of a generic observable \(\Phi(x_{1},\cdots,x_{N},v_{1},\cdots,v_{N})\) according to the rescaled Newton equations is given by \[\partial_{t}\Phi=\varepsilon^{-2}\mathscr{L}\Phi. \tag{2.23}\] Therefore, while a generic observable has a time derivative of order \(\varepsilon^{-2}\), there are special observables, such as the empirical fields associated with mass, momentum and energy, that have a time derivative only of order \(\varepsilon^{-1}\). So they have a comparatively slow evolution, and this justifies the name of approximate integrals of motion used above. We stress that we _need to assume_ that the only observables with this property are the empirical fields (and of course any function of them as well). Now we turn to the definition of a special class of states, the local Gibbs states. 
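Incidentally, in a numerical experiment the delta functions in the empirical fields (2.4)-(2.7) are naturally regularized by binning the particles into cells of small macroscopic size. The sketch below is an illustrative discretization of ours (one space dimension, interaction part of \(z^{d+1}\) omitted, i.e. \(V\equiv 0\)); by construction the space integral of each binned field returns \(\varepsilon^{d}\) times the corresponding total conserved quantity:

```python
import numpy as np

def empirical_fields(x, v, eps, nbins=10, L=1.0):
    """Binned versions of the empirical fields (2.4)-(2.5) and of the
    kinetic part of (2.6), for particles on a 1d torus [0, L)
    (illustrative V = 0 case, so d = 1 and eps**d = eps)."""
    dx = L / nbins
    idx = np.minimum((x / dx).astype(int), nbins - 1)
    z0 = np.zeros(nbins)  # density z^0
    z1 = np.zeros(nbins)  # momentum density z^1
    z2 = np.zeros(nbins)  # kinetic-energy density z^{d+1}, V = 0
    np.add.at(z0, idx, 1.0)
    np.add.at(z1, idx, v)
    np.add.at(z2, idx, 0.5 * v ** 2)
    # Dividing by the cell width turns counts into densities, so that
    # (cell width) * field.sum() recovers eps * (total conserved quantity).
    return eps * z0 / dx, eps * z1 / dx, eps * z2 / dx
```

Summing each field against the cell width then reproduces \(\varepsilon\) times the total particle number, momentum and kinetic energy, mirroring the meaning of (2.4)-(2.6).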
A (global) Gibbs state in a macroscopic volume \(\Lambda\) (hence a volume \(\varepsilon^{-1}\Lambda\) in microscopic variables) is defined as a probability distribution on the phase space \(\bigcup_{N\geq 0}(\Lambda\times{\mathbb{R}}^{d})^{N}\), whose density with respect to the a priori measure \(d{\mathbb{Z}}\) is given (in the grand canonical setting) by \[G_{\Lambda}(x_{1},v_{1},\ldots,x_{N},v_{N})=Z_{\Lambda}^{-1}\prod_{j=1}^{N} \tilde{z}\exp\Big{[}-\frac{(v_{j}-u)^{2}}{2T}-\frac{1}{2T}\sum_{k\neq j}V\Big{(} \frac{|x_{j}-x_{k}|}{\varepsilon}\Big{)}\Big{]}. \tag{2.24}\] Here \(\tilde{z}\) is the activity and \(Z_{\Lambda}\) is a normalization factor. We can write the previous expression in terms of \(\underline{\lambda}=\{\lambda^{\alpha}\}\), \(\alpha=0,\ldots,d+1\), \(d+2\) real numbers called _chemical potentials_, associated with the mass, momentum and energy empirical fields and parametrizing the family of the Gibbs states \[G(\underline{\lambda})=Z_{\Lambda}^{-1}\exp\{\sum_{i=1}^{N}\sum_{\mu=0}^{d+1} \lambda^{\mu}z_{i}^{\mu}\}. \tag{2.25}\] Their explicit expression is \[\lambda^{0}=\log\tilde{z}-\frac{1}{2T}|u|^{2},\quad\lambda^{\nu}=\frac{1} {T}u^{\nu},\ \nu=1,\cdots,d,\quad\lambda^{d+1}=-\frac{1}{T}.\] A more conventional parametrization is given by \(\rho>0\), the _mass density_, \(u\in\mathbb{R}^{d}\), the _velocity field_, and \(T>0\), the _temperature_. Note that for a global Gibbs state, due to the Galilean invariance, the velocity field is usually assumed to vanish. In the definition of local Gibbs states, on the contrary, we keep a non-vanishing velocity field. 
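The explicit expression of the chemical potentials follows by expanding the Maxwellian factor of (2.24); the short computation below is our own rewriting (with \(\beta=1/T\)):

```latex
% Single-particle kinetic factor of (2.24):
\[
\tilde z\,e^{-\frac{(v_{j}-u)^{2}}{2T}}
 =\exp\Big[\underbrace{\log\tilde z-\frac{|u|^{2}}{2T}}_{\lambda^{0}z_{j}^{0}}
   +\underbrace{\sum_{\nu=1}^{d}\frac{u^{\nu}}{T}\,v_{j}^{\nu}}_{\sum_{\nu}\lambda^{\nu}z_{j}^{\nu}}
   -\underbrace{\frac{1}{T}\cdot\frac{v_{j}^{2}}{2}}_{\lambda^{d+1}\cdot\frac{1}{2}v_{j}^{2}}\Big],
\]
% so that, matching with (2.25) (where z_j^{d+1} also carries the
% interaction energy, matched by the potential term of (2.24)), one reads off
%   lambda^0 = log(z~) - |u|^2/(2T),  lambda^nu = u^nu/T,  lambda^{d+1} = -1/T.
```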
The notion of _local_ Gibbs state is related to the one of local equilibrium and is obtained formally by replacing the constant chemical potentials \(\{\lambda^{\alpha}\}\) by some smooth functions \(\{\lambda^{\alpha}(x,t)\}\) which are functions of \(x_{i}=\varepsilon\xi_{i}\): hence the local equilibrium state is defined, for fixed \(\varepsilon\), as \[\tilde{G}(\underline{\lambda})=\tilde{Z}_{\Lambda}^{-1}\exp\{\sum_{i=1}^{N} \sum_{\mu=0}^{d+1}\lambda^{\mu}(x_{i},t)z_{i}^{\mu}\}, \tag{2.26}\] or in the equivalent form \[\tilde{G}(\underline{\lambda})=\tilde{Z}_{\Lambda}^{-1}\exp\left[\int_{ \varepsilon^{-1}{\mathbb{T}}^{d}}d\xi\sum_{\alpha=0}^{d+1}\lambda^{\alpha}(x, t)z^{\alpha}(\xi)\right]=\tilde{Z}_{\Lambda}^{-1}\exp\varepsilon^{-d}\left[\int_{{ \mathbb{T}}^{d}}dx\sum_{\alpha=0}^{d+1}\lambda^{\alpha}(x,t)z^{\alpha}( \varepsilon^{-1}x)\right], \tag{2.27}\] with \(\tilde{Z}_{\Lambda}\) a suitable normalization factor and \({\mathbb{T}}^{d}\) the macroscopic torus. Note that the local Gibbs states corresponding to any choice of smooth chemical potentials, being given by the exponential of linear combinations of the approximate integrals of motion, satisfy the condition (2.22), so that \[{\mathscr{L}}^{*}G(\underline{\lambda}(\,\cdot\,,t))=O(\varepsilon). \tag{2.28}\] So, in a local Gibbs state, in a sufficiently small region and for a sufficiently small time, the state looks like being in equilibrium, hence the name _local equilibrium_. This situation is particularly convenient to describe a regime close to hydrodynamics and, as discussed in [24], [8], formally gives the Euler equations. However, this is not sufficient to describe the Navier-Stokes regime and the ghost regime we are going to discuss here. To show how the conservation laws give the hydrodynamic equations, we follow a procedure similar to the Hilbert expansion proposed to approximate the solutions of the Boltzmann equation. 
Let us start with the phase space distribution function \[F_{\varepsilon}(x_{1},\ldots,x_{N},v_{1},\ldots,v_{N},t)=F(\varepsilon^{-1}x_{1 },\ldots,\varepsilon^{-1}x_{N},v_{1},\ldots,v_{N},\varepsilon^{-2}t)\] for the rescaled system, which satisfies the rescaled Liouville equation \[\frac{\partial F_{\varepsilon}}{\partial t}=\varepsilon^{-2}\mathscr{L}^{*}F_ {\varepsilon}. \tag{2.29}\] We prepare the system at time zero as described by a local equilibrium distribution \(\bar{G}_{0}\) defined as \[\bar{G}_{0}=Z_{0}^{-1}\exp\{\sum_{i=1}^{N}\sum_{\mu=0}^{d+1}\lambda_{0}^{\mu }(x_{i},0)z_{i}^{\mu}\}, \tag{2.30}\] \[\lambda_{0}^{\mu}(x,0)=0,\quad\mu=1,\ldots,d;\quad\lambda_{0}^{0}(x,0)\neq 0,\quad\lambda_{0}^{d+1}(x,0)\neq 0, \tag{2.31}\] with \(\lambda_{0}^{0}\) and \(\lambda_{0}^{d+1}\) functions of \((x,t)\) with \(x\)-gradient of order \(1\). Note that \(\bar{G}_{0}\) is not a stationary state for (2.29). Writing \(F_{\varepsilon}\) as a series in \(\varepsilon\), \(F_{\varepsilon}=\sum_{n}\varepsilon^{n}F^{n}\), and substituting it in (2.29) we get the diverging terms \(\varepsilon^{-2}\mathscr{L}^{*}F^{0}\) and \(\varepsilon^{-1}\mathscr{L}^{*}F^{1}\), which we have to take care of. Due to the initial conditions \(\lambda_{0}^{\beta}(x,0)=0\), for \(\beta=1,\ldots,d\), in the absence of external forces we have \(\lambda_{0}^{\beta}(x,t)=0\) at the lowest order, and we are forced to choose \(F^{0}\) as a local Gibbs state \(G_{0}\) with \(\lambda_{0}^{0}\) and \(\lambda_{0}^{d+1}\) functions of \((x,t)\) to be determined, while \(\lambda_{0}^{\beta}=0\), for \(\beta=1,\ldots,d\). To single out the non-hydrodynamic contribution to \(F_{\varepsilon}\) let us decompose \(F_{\varepsilon}\) into a part which is Gibbsian, with parameters slowly depending on the microscopic variables and depending on \(\varepsilon\) by means of a series in \(\varepsilon\), and a remainder.
More explicitly, we put \[F_{\varepsilon}=G_{\varepsilon}+\varepsilon G_{0}R_{\varepsilon}, \tag{2.32}\] with \[G_{\varepsilon}=Z_{\varepsilon}^{-1}\exp\{\sum_{i=1}^{N}\sum_{\mu=0}^{d+1} \lambda_{\varepsilon}^{\mu}(x_{i},t)z_{i}^{\mu}\},\] \[\lambda_{\varepsilon}^{\mu}(x,t)=\sum_{n=0}^{\infty}\varepsilon^{n}\lambda_{ n}^{\mu}(x,t);\quad\lambda_{0}^{\mu}=0,\mu=1,\ldots,d;\quad\lambda_{0}^{0}(x,t) \neq 0,\lambda_{0}^{d+1}(x,t)\neq 0, \tag{2.33}\] where \(\lambda_{n}^{\mu}\) are functions of \((x,t)\) to be determined. Note that, if \(\lambda_{0}^{0}\) and \(\lambda_{0}^{d+1}\) do not depend on \((x,t)\), we are back to the case of the incompressible Navier-Stokes-Fourier system discussed in [13]. In our context \(G_{0}\), the zero order term in the expansion, is a particular time dependent _local_ equilibrium, but not all the relevant terms are included in \(G_{0}\). We include all the hydrodynamic terms in \(G_{\varepsilon}\), and we can assume that in \(R_{\varepsilon}\) there are no terms which are combinations of the invariant quantities \(z^{\alpha}\) with coefficients depending on the macroscopic variables, since these terms are already present in \(G_{\varepsilon}\). \(R_{\varepsilon}\) represents the non-equilibrium part of the distribution \(F_{\varepsilon}\), which takes into account the fast modes of the system, namely the non-conserved quantities. They appear at the hydrodynamic level only through dissipative effects and determine the expression of the transport coefficients. To make this concept more precise we refer to [29], [28], where the Hilbert space of local observables is introduced, equipped with the scalar product \[(\phi,\psi)=\int dx[\langle\phi\tau_{x}\psi\rangle_{G_{0}}-\langle\phi\rangle_{ G_{0}}\langle\psi\rangle_{G_{0}}]. \tag{2.34}\] Here \(\langle\cdot\rangle_{G_{0}}\) is the average with respect to the local Gibbs measure \(G_{0}\).
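In a discretized, sampled setting the scalar product (2.34) becomes a sum over lattice translations of centered correlations. A toy sketch (the discretization is ours, with i.i.d. samples standing in for draws from \(G_{0}\) on a periodic one-dimensional lattice):

```python
import numpy as np

def pair_inner_product(phi, psi):
    """Discrete analogue of (2.34): (phi,psi) = sum_x [ <phi tau_x psi> - <phi><psi> ].
    phi, psi: arrays of shape (n_samples, L) holding the values of two local
    observables at every site of a periodic 1-d lattice, one row per sample."""
    n_samples, L = phi.shape
    mphi, mpsi = phi.mean(), psi.mean()
    total = 0.0
    for x in range(L):
        shifted = np.roll(psi, x, axis=1)      # tau_x psi: translate by x sites
        total += (phi * shifted).mean() - mphi * mpsi
    return total
```

For spatially uncorrelated unit-variance observables only the \(x=0\) term survives, so \((\phi,\phi)\approx 1\); correlated fields pick up contributions from every translate, exactly as in (2.34).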
In terms of this scalar product we define the projector on the invariant space as \[\mathscr{P}\phi=\sum_{\mu,\nu=0}^{d+1}(\phi,z^{\mu})(z,z)^{-1}_{\mu\nu}z^{\nu}, \tag{2.35}\] where \((z,z)^{-1}\) denotes the inverse of the matrix with elements \(\langle z_{\mu}z_{\nu}\rangle_{G_{0}}\). We assume that \(R_{\varepsilon}\) has no component on the invariant space, in other words we ask \[\mathscr{P}[R_{\varepsilon}]=0. \tag{2.36}\] We also assume that \[G_{0}R_{\varepsilon}(t)=G_{0}R_{1}+\varepsilon G_{0}R_{2}(t)+O(\varepsilon^{2 }). \tag{2.37}\] We need explicit expressions for \(R_{i},i=1,2\), in terms of the empirical fields, so that, inserting (2.32) in the conservation laws averaged with respect to \(F_{\varepsilon}\), we can get closed equations for the empirical fields up to order \(\varepsilon\). To find such expressions, we insert the expansion (2.32) for \(F_{\varepsilon}\) in the Liouville equation (2.29) \[\begin{split}\partial_{t}G_{\varepsilon}+\varepsilon\partial_{t }(G_{0}R_{\varepsilon})=\varepsilon^{-2}\mathscr{L}^{*}G_{\varepsilon}+ \varepsilon^{-1}\mathscr{L}^{*}G_{0}R_{\varepsilon}\\ =\varepsilon^{-2}\mathscr{L}^{*}G_{0}+\varepsilon^{-1}\mathscr{L }^{*}G_{1}+\varepsilon^{-1}\mathscr{L}^{*}G_{0}R_{1}+\mathscr{L}^{*}G_{0}R_{ 2}+O(\varepsilon).\end{split} \tag{2.38}\] By (2.28), \(\varepsilon^{-1}\mathscr{L}^{*}G_{0}=O(1)\). In fact, the expression of \(\mathscr{L}^{*}G_{0}\) can be computed as \[\mathscr{L}^{*}G_{0}=\varepsilon G_{0}\sum_{i=1}^{N}\sum_{\mu=0,d+1}\sum_{ \gamma=1}^{d}\partial_{\gamma}\lambda_{0}^{\mu}(x_{i})w_{i}^{\mu\gamma}= \varepsilon G_{0}\sum_{i=1}^{N}\sum_{\gamma=1}^{d}[\partial_{\gamma}\lambda_{0}^{0}(x_{i})w_{i}^ {0\gamma}+\partial_{\gamma}\lambda_{0}^{d+1}(x_{i})w_{i}^{d+1\gamma}]. \tag{2.39}\] Next we compute \(\mathscr{L}^{*}G_{1}\). We write \[G_{1}=G_{0}g_{1}, \tag{2.40}\] where \[g_{1}=\sum_{j=1}^{N}\sum_{\mu=0}^{d+1}\lambda_{1}^{\mu}(x_{j},t)[z_{j}^{\mu}-\langle z_{j}^{\mu}\rangle_{G_{0}}].
\tag{2.41}\] Since \(g_{1}\) is a linear combination of the invariant quantities \(z\) with coefficients depending on the macroscopic variables, the action of \(\mathscr{L}^{*}\) on it gives a linear combination of the currents \(w\) with a factor \(\varepsilon\): \[-\varepsilon^{-1}\mathscr{L}^{*}G_{1}=\sum_{i=1}^{N}\sum_{\mu=0}^{d+1}\sum_{ \gamma=1}^{d}G_{0}\partial_{\gamma}\lambda_{1}^{\mu}(x_{i},t)w_{i}^{\mu\gamma }+\varepsilon^{-1}g_{1}\mathscr{L}^{*}G_{0}=O(1). \tag{2.42}\] We conclude that \(\varepsilon^{-1}\mathscr{L}^{*}G_{1}\), \(\partial_{t}G_{\varepsilon}\) and \(\mathscr{L}^{*}G_{0}R_{2}\) are of order at least \(1\). Multiplying (2.38) by \(\varepsilon\), in the limit \(\varepsilon\to 0\) we have \[\varepsilon^{-1}\mathscr{L}^{*}G_{0}+\mathscr{L}^{*}G_{0}R_{1}=0. \tag{2.43}\] Moreover, it is easy to check that \(\mathscr{L}^{*}G_{0}\) is odd under the exchange \(v\to-v\), because \(w_{i}^{\mu\gamma}\), for \(\mu=0,d+1\) and \(\gamma=1,\ldots,d\), is odd in \(v\). Hence, the condition (2.43) can determine only the odd part of \(R_{1}\), which we call \(R_{1}^{a}\). To find \(R_{1}^{s}\), the even part of \(R_{1}\), we apply \(\mathscr{L}^{*}\) again to (2.43) to get, in the limit \(\varepsilon\to 0\), \[\mathscr{L}^{*}\mathscr{L}^{*}G_{0}R_{1}^{s}=-\varepsilon^{-1}\mathscr{L}^{* }\mathscr{L}^{*}G_{0}. \tag{2.44}\] The term \(\varepsilon^{-1}\mathscr{L}^{*}\mathscr{L}^{*}G_{0}\) is even (see Appendix A.1), so that this determines \(R_{1}^{s}\). It has a part of order \(\varepsilon\), which enters in the form of the new transport coefficients in the momentum equation, and a part of order \(1\), which does not contribute to the equation. The expression of \(\varepsilon^{-2}\mathscr{L}^{*}\mathscr{L}^{*}G_{0}\) is very complicated and involves two space derivatives of the chemical potentials. To summarize, \(R_{1}=R_{1}^{a}+R_{1}^{s}\), with \(R_{1}^{a}\) odd of order \(1\) and \(R_{1}^{s}\) even, solutions of (2.43) and (2.44) respectively.
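The decomposition \(R_{1}=R_{1}^{a}+R_{1}^{s}\) is the usual symmetrization with respect to \(v\to-v\); schematically (a trivial sketch, our own helper):

```python
def split_odd_even(F, v):
    """Decompose F at the point v into odd and even parts under v -> -v:
    F(v) = F_a(v) + F_s(v), with F_a(v) = (F(v) - F(-v))/2 odd and
    F_s(v) = (F(v) + F(-v))/2 even."""
    odd = 0.5 * (F(v) - F(-v))
    even = 0.5 * (F(v) + F(-v))
    return odd, even

# Example: F(v) = v^3 + v^2 + 1 splits into v^3 (odd) and v^2 + 1 (even).
F = lambda v: v**3 + v**2 + 1.0
odd, even = split_odd_even(F, 2.0)   # odd = 8.0, even = 5.0
```

An equation that is odd in \(v\), such as (2.43), constrains only the odd part; this is why (2.44) is needed to pin down \(R_{1}^{s}\).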
Now we go back to (2.38) and get, in the limit \(\varepsilon\to 0\), \[\mathscr{L}^{*}G_{0}R_{2}=-\varepsilon^{-1}\mathscr{L}^{*}G_{1}-\partial_{t}G _{0}. \tag{2.45}\] This condition determines \(R_{2}\) in the limit \(\varepsilon\to 0\) as \[\mathscr{L}^{*}G_{0}R_{2}=G_{0}\sum_{i=1}^{N}\sum_{\mu=0}^{d+1}\sum_{\gamma=1 }^{d}\partial_{\gamma}\lambda_{1}^{\mu}(x_{i},s)w_{i}^{\mu\gamma}-\varepsilon ^{-1}g_{1}\mathscr{L}^{*}G_{0}-\partial_{t}G_{0}. \tag{2.46}\] For the previous equations to be satisfied, the r.h.s. cannot have components on the null space. This condition will be satisfied a posteriori, using the fact that \((\rho,e,u)\) is a solution of the hydrodynamic equations (see Appendix A.2). We assume that there exists a unique solution \(R_{1}^{a}(t)\) to (2.43) and \(R_{2}\) to (2.46) such that \(\mathscr{P}R_{1}^{a}(t)=0\), \(\mathscr{P}R_{2}(t)=0\), which are expressed formally in terms of \(\mathscr{L}^{*}{}^{-1}\). We assume also that there exists a unique solution to (2.44) with \(\mathscr{P}R_{1}^{s}(t)=0\), which is expressed formally in terms of \(\mathscr{L}^{*}{}^{-1}\mathscr{L}^{*}{}^{-1}\). This is the assumption we really need on the inverse of \(\mathscr{L}^{*}\) to get the result. For the sake of simplicity, we consider from now on a particular form for \(g_{1}\), namely we put to zero \(\lambda_{1}^{0}\) and \(\lambda_{1}^{d+1}\): \[\lambda_{1}^{0}=0,\quad\lambda_{1}^{d+1}=0. \tag{2.47}\] This is sufficient to obtain the limiting equation for \((\rho,e,u)\). Assuming a general form of \(g_{1}\) we would also get equations for the first corrections \(\rho_{1}\) and \(e_{1}\), which are not needed to obtain the ghost equations (1.1). ## 3. Hydrodynamic equations The incompressible limit corresponds to the assumption that the velocity field is small compared with the sound speed. In other words, we assume that \(U^{\mu}(x,t)\equiv\langle z^{\mu}(x)\rangle_{F_{\varepsilon}(t)}\), \(\mu=1,\dots,d\), starts with a term of order \(\varepsilon\).
Under the assumptions on \(F_{\varepsilon}\), this corresponds to choosing \(\lambda_{0}^{\mu}=0\) for \(\mu=1,\dots,d\). Moreover, we get \(U^{\mu}(x,t)=\varepsilon\rho T\lambda_{1}^{\mu}(x,t)+O(\varepsilon^{2})\), with \(T\), the temperature of the Gibbs state \(G_{0}\), given by \(-(\lambda_{0}^{d+1})^{-1}\), and \(\rho\) the density of the Gibbs state \(G_{0}\) corresponding to the chemical potential \(\lambda_{0}^{0}\). We denote by \(u^{\mu}(x,t)\) the rescaled velocity field given by \(u^{\mu}(x,t)=T\lambda_{1}^{\mu}(x,t)\). For the sake of simplicity, we will not write the explicit dependence on time in the chemical potentials from now on. We will use the notation \(\Big{\langle}f\Big{\rangle}_{G_{0}}=\Big{\langle}G_{0}f\Big{\rangle}\), where \(\Big{\langle}\cdot\Big{\rangle}\) is the already introduced integration w.r.t. the _a priori_ measure \(d\mathbb{Z}=\frac{1}{N!}d^{d}x_{1}d^{d}v_{1}\,\dots d^{d}x_{N}d^{d}v_{N}\). ### Continuity equation We first derive the continuity equation. To obtain it, we start from the conservation law for the empirical density (2.8) and we take the expectation with respect to the non-equilibrium measure \(F_{\varepsilon}(t)\) \[\Big{\langle}\varepsilon^{d}\sum_{i=1}^{N}f(x_{i})\Big{\rangle}_{F_{ \varepsilon}(t)}-\Big{\langle}\varepsilon^{d}\sum_{i=1}^{N}f(x_{i})\Big{\rangle} _{F_{\varepsilon}(0)}=\varepsilon^{-1}\int_{0}^{t}ds\Big{\langle} \varepsilon^{d}\sum_{i=1}^{N}\sum_{k=1}^{d}\partial_{k}f(x_{i})v_{i}^{k} \Big{\rangle}_{F_{\varepsilon}(s)}. \tag{3.1}\] Using (2.37) and the fact that \(\Big{\langle}\varepsilon^{d}\sum_{i=1}^{N}f(x_{i})\Big{\rangle}_{G_{0}}=\int dxf (x)\rho(x,t)\), we see that the l.h.s. of (3.1) gives \[\int dxf(x)\rho(x,t)-\int dxf(x)\rho(x,0)+O(\varepsilon).\] The term \[\varepsilon^{-1}\int_{0}^{t}ds\Big{\langle}\varepsilon^{d}\sum_{i=1}^{N} \sum_{k=1}^{d}\partial_{k}f(x_{i})v_{i}^{k}\Big{\rangle}_{G_{0}},\] in the r.h.s.
of (3.1) vanishes, since \(G_{0}\) is Gaussian in the velocities with zero mean, and the r.h.s. of (3.1) becomes \[\int_{0}^{t}ds\Big{\langle}g_{1}\varepsilon^{d}\sum_{i=1}^{N}\sum_{k=1}^{d} \partial_{k}f(x_{i})v_{i}^{k}\Big{\rangle}_{G_{0}}+\int_{0}^{t}ds\Big{\langle} \varepsilon^{d}\sum_{i=1}^{N}\sum_{k=1}^{d}\partial_{k}f(x_{i})v_{i}^{k}R_{1} \Big{\rangle}_{G_{0}}+O(\varepsilon). \tag{3.2}\] The second term in (3.2) is \(0\), using that \(\mathscr{P}R_{1}=0\). Now we discuss the first term. By (2.41), \[\int_{0}^{t}ds\Big{\langle}g_{1}\varepsilon^{d}\sum_{i=1}^{N}\sum_{k=1}^{d} \partial_{k}f(x_{i})v_{i}^{k}\Big{\rangle}_{G_{0}}=\int_{0}^{t}ds\Big{\langle} \sum_{j=1}^{N}\sum_{\mu=1}^{d}\lambda_{1}^{\mu}(x_{j})z_{j}^{\mu}\varepsilon^ {d}\sum_{i=1}^{N}\sum_{k=1}^{d}\partial_{k}f(x_{i})v_{i}^{k}\Big{\rangle}_{G_{ 0}}. \tag{3.3}\] We use the choice we made for \(\lambda_{1}\), so that the r.h.s. of (3.3) becomes \[\Big{\langle}\varepsilon^{d}\sum_{i=1}^{N}\sum_{\mu,k=1}^{d}\lambda_{1}^{\mu}(x_{ i})\partial_{k}f(x_{i})v_{i}^{\mu}v_{i}^{k}\Big{\rangle}_{G_{0}}. \tag{3.4}\] Since the average on \(G_{0}\) of \(v_{i}^{\mu}v_{i}^{k}\) contributes only for \(k=\mu\), in the limit \(\varepsilon\to 0\) we have, using \(u^{\mu}(x,t)=T\lambda_{1}^{\mu}(x,t)\), \[-\int_{0}^{t}ds\int dx\sum_{\mu=1}^{d}\partial_{\mu}(\rho u^{\mu})(x,s)f(x), \tag{3.5}\] for any test function \(f\) and for any \(t\). Hence \[\rho(x,t)-\rho(x,0)=-\int_{0}^{t}ds\operatorname{div}(\rho u)(x,s), \tag{3.6}\] or in differential form \[\partial_{t}\rho=-\operatorname{div}(\rho u).\] ### Pressure We now examine the second conservation law (2.9).
By averaging as before, for \(\beta=1,\ldots,d\), we get, in the limit \(\varepsilon\to 0\), \[\begin{split}&\Big{\langle}\varepsilon^{d}\sum_{i=1}^{N}f(x_{i})v_ {i}^{\beta}\Big{\rangle}_{F_{\varepsilon}(t)}-\Big{\langle}\varepsilon^{d} \sum_{i=1}^{N}f(x_{i})v_{i}^{\beta}\Big{\rangle}_{F_{\varepsilon}(0)}\\ &=\varepsilon^{-1}\int_{0}^{t}ds\Big{\langle}\varepsilon^{d}\int dx \sum_{k=1}^{d}\partial_{k}f(x)w^{\beta k}(x)\Big{\rangle}_{F_{\varepsilon}}+O (\varepsilon).\end{split} \tag{3.7}\] Using the assumptions on \(F_{\varepsilon}(t)\) we see that the l.h.s. is of order \(\varepsilon\), since the term of order \(1\) vanishes because \(\lambda_{0}^{\mu}\), \(\mu=1,\ldots,d\), are zero. We write the current \(w^{\beta k}\) using the decomposition (2.20) as \(w_{*}^{\beta k}+\varepsilon\sum_{\gamma=1}^{d}\partial_{\gamma}\bar{\Phi}_{0}^{k\beta\gamma}(x)+O(\varepsilon^{2})\), with \(w_{*}^{\beta k}\) given by (2.21). Using the assumptions (2.32) and (2.37) we have: \[\varepsilon^{-1}\Big{\langle}w_{*}^{\beta k}\Big{\rangle}_{F_{\varepsilon}}= \varepsilon^{-1}\Big{\langle}w_{*}^{\beta k}\Big{\rangle}_{G_{\varepsilon}}+ \Big{\langle}w_{*}^{\beta k}R_{\varepsilon}\Big{\rangle}_{G_{0}}, \tag{3.8}\] so that \[\varepsilon^{-1}\Big{\langle}w^{\beta k}\Big{\rangle}_{F_{\varepsilon}}= \varepsilon^{-1}\Big{\langle}w_{*}^{\beta k}\Big{\rangle}_{G_{\varepsilon}}+ \Big{\langle}\sum_{\gamma=1}^{d}\partial_{\gamma}\bar{\Phi}_{0}^{k\beta\gamma} (x)\Big{\rangle}_{G_{0}}+\Big{\langle}w_{*}^{\beta k}R_{\varepsilon} \Big{\rangle}_{G_{0}}+O(\varepsilon). \tag{3.9}\] The second term on the r.h.s. is zero, since \(\bar{\Phi}_{0}^{k\beta\gamma}(x)=\frac{1}{2}\varepsilon^{d}\sum_{i,j=1}^{N} \delta(x_{i}-x)\Phi_{0}^{k\beta\gamma}(\varepsilon^{-1}|x_{i}-x_{j}|)\) is antisymmetric under the exchange \(i\leftrightarrow j\) while \(G_{0}\) is symmetric. We introduce the currents \(\tilde{w}^{\beta k}\) as given by \(w_{*}^{\beta k}\) with the velocities \(v_{i}\) replaced by \(\tilde{v}_{i}=v_{i}-\varepsilon u(x_{i})\).
Then \[w_{*}^{\beta k}(x)=\tilde{w}^{\beta k}(x)+\varepsilon^{2}u^{k}(x)u^{\beta}(x) +\varepsilon u^{k}(x)\tilde{v}^{\beta}+\varepsilon u^{\beta}(x)\tilde{v}^{k}. \tag{3.10}\] By the symmetry of the measure \(G_{\varepsilon}\) we have \(\left\langle\tilde{w}^{\beta k}(x)\right\rangle_{G_{\varepsilon}}=O(\varepsilon^ {2})\) if \(k\neq\beta\). The average of \(\tilde{w}^{\beta\beta}\), \(\beta=1,\ldots,d\), with respect to the local Gibbs state \(G_{\varepsilon}\) is, by the virial theorem, the thermodynamic pressure \(P_{\varepsilon}\) in the state \(G_{\varepsilon}\)[26]. Now we consider the term \(\left\langle w_{*}^{\beta k}R_{\varepsilon}\right\rangle_{G_{0}}\) and use (2.39), (2.43) and the 'identity' \({\mathscr{L}}^{*-1}{\mathscr{L}}^{*}R_{1}^{a}=R_{1}^{a}\). For the sake of brevity we omit the time dependence in the chemical potentials. We obtain \[\left\langle w_{*}^{\beta k}R_{\varepsilon}\right\rangle_{G_{0}}=\left\langle w_{*}^{\beta k}R_{1}^{a}\right\rangle_{G_{0}}+O(\varepsilon)= \left\langle\sum_{i=1}^{N}\sum_{\gamma=1}^{d}[\partial_{\gamma}\lambda_{0}^{ 0}(x_{i})w_{i}^{0\gamma}+\partial_{\gamma}\lambda_{0}^{d+1}(x_{i})w_{i}^{d+1 \gamma}]{\mathscr{L}}^{-1}\tilde{w}^{\beta k}\right\rangle_{G_{0}}+O( \varepsilon).
\tag{3.11}\] We have \[\int dx\partial_{k}f(x)\Big{\langle}w_{*}^{\beta k}R_{\varepsilon}\Big{\rangle} _{G_{0}}=\int dx\partial_{k}f(x)\Big{\langle}{\mathscr{L}}^{-1}w_{*}^{\beta k}\sum_{i=1}^{N}\sum_{\gamma=1}^{d}[\partial_{\gamma}\lambda_{0}^{0}(x_{i} )w_{i}^{0\gamma}+\partial_{\gamma}\lambda_{0}^{d+1}(x_{i})w_{i}^{d+1\gamma}] \Big{\rangle}_{G_{0}}+O(\varepsilon).\] \[=\varepsilon^{-d}\int dx\partial_{k}f(x)\int dy\sum_{\nu=0,d+1}\sum_{\gamma=1 }^{d}\partial_{\gamma}\lambda_{0}^{\nu}(y)\Big{\langle}{\mathscr{L}}^{-1}w_{*}^{\beta k}(x)w^{\nu\gamma}(y)\Big{\rangle}_{G_{0}}+O(\varepsilon).\] The symmetries of the microscopic current-current correlations imply (see [28]) that the cross correlations between \(\mu=0,d+1\) and \(\beta=1,\ldots,d\) vanish. Summarizing, eq. (3.7) implies for \(P^{\varepsilon}=\left\langle w_{*}^{\beta\beta}\right\rangle_{G_{\varepsilon}}\) \[\varepsilon^{-1}\int dx\nabla f(x)P^{\varepsilon}(x,t)=O(\varepsilon). \tag{3.12}\] Since \(P^{\varepsilon}\) is a function of the thermodynamic parameters \(\lambda_{\varepsilon}\), we can expand it in a series in \(\varepsilon\) as \(\sum_{k}\varepsilon^{k}P_{k}\), where \(P_{k}=\frac{1}{k!}\frac{d^{k}P^{\varepsilon}}{d\varepsilon^{k}}\big{|}_{\varepsilon=0}\). We have that \(P_{0}\) is a function of \(\lambda_{0}^{0}\) and \(\lambda_{0}^{d+1}\), while \(P_{1}=\sum_{\mu=0}^{d+1}\frac{\partial P^{\varepsilon}}{\partial\lambda_{ \varepsilon}^{\mu}}\big{|}_{\varepsilon=0}\lambda_{1}^{\mu}\). In order to fulfill (3.12) for any test function \(f\), \[\nabla P_{0}=0,\quad\nabla P_{1}=0.\] In particular, the pressure \(P_{0}(\rho,T)\) is a function of \(\rho,T\), the quantities conjugate to \(\lambda_{0}^{0}\) and \(\lambda_{0}^{d+1}\) respectively, and has to be constant in \(x\). In the following, we will denote \(P_{0}\) simply by \(P\). ### Momentum equation To determine the equation for \(u^{\mu}(x,t)\), which is of order \(\varepsilon\), we have to rescale the empirical velocity field.
This means that we have to look at the empirical field \[\bar{z}^{\alpha}(x)=\varepsilon^{-1}\varepsilon^{d}\sum_{i}v_{i}^{\alpha} \delta(x_{i}-x),\ \ \ \alpha=1,\ldots,d. \tag{3.13}\] We proceed as we did before to obtain (3.7), but we have to look at the explicit form of the terms \(O(\varepsilon)\) and \(O(\varepsilon^{2})\), because they have to be divided by \(\varepsilon^{2}\). We have: \[\begin{split}&\Big{\langle}\varepsilon^{d-1}\sum_{i=1}^{N}f(x_{i}) v_{i}^{\beta}\Big{\rangle}_{F_{\varepsilon}(t)}-\Big{\langle}\varepsilon^{d-1} \sum_{i=1}^{N}f(x_{i})v_{i}^{\beta}\Big{\rangle}_{F_{\varepsilon}(0)}=\\ &\varepsilon^{-2}\int_{0}^{t}ds\Big{\langle}\varepsilon^{d}\sum_{ i=1}^{N}\sum_{k=1}^{d}\partial_{k}f(x_{i})\{v_{i}^{k}v_{i}^{\beta}+\frac{1}{2} \sum_{j\neq i}\Psi^{\beta k}(\varepsilon^{-1}(x_{i}-x_{j}))\}\Big{\rangle}_{F _{\varepsilon}(s)}+\\ &\quad+\varepsilon^{-1}\int_{0}^{t}ds\Big{\langle}\frac{1}{4} \varepsilon^{d}\sum_{i,j=1}^{N}\sum_{\gamma,\nu=1}^{d}\partial_{\gamma\nu}^{2 }f(x_{i})\Phi_{0}^{\nu\beta\gamma}(\varepsilon^{-1}(x_{i}-x_{j}))\Big{\rangle} _{F_{\varepsilon}(s)}\\ &\quad+\int_{0}^{t}ds\Big{\langle}\frac{1}{12}\varepsilon^{d} \sum_{i,j=1}^{N}\sum_{\gamma,\nu,\alpha=1}^{d}\partial_{\gamma\nu\alpha}^{3}f (x_{i})\Phi^{\alpha\beta\gamma\nu}(\varepsilon^{-1}(x_{i}-x_{j}))\Big{\rangle} _{F_{\varepsilon}(s)}\end{split} \tag{3.14}\] The l.h.s. of (3.14) converges to \[\int dxf(x)\rho[u^{\beta}(x,t)-u^{\beta}(x,0)], \tag{3.15}\] as \(\varepsilon\to 0\). To get the equation for the velocity field we have to compute the non-equilibrium average of the velocity current tensor \(w^{\beta k}\), but now there is a factor \(\varepsilon^{-2}\) in front of it. Therefore we see that in this case also the terms of order \(\varepsilon^{2}\) in (2.32) have to be taken into account. We start by discussing the term containing \(D_{0}\), namely the second line in the r.h.s. of (3.14).
The lowest order term in \(\varepsilon\) is \[\varepsilon^{-1}\int_{0}^{t}ds\Big{\langle}\frac{1}{2}\varepsilon^{d}\sum_{i, j=1}^{N}\sum_{\gamma,\nu=1}^{d}\partial_{\gamma\nu}^{2}f(x_{i})\Phi_{0}^{ \beta\gamma\nu}(\varepsilon^{-1}(x_{i}-x_{j}))[G_{0}+\varepsilon G_{0}g_{1}+ \varepsilon G_{0}R_{1}]\Big{\rangle}.\] The diverging term \[\varepsilon^{-1}\int_{0}^{t}ds\Big{\langle}\frac{1}{2}\varepsilon^{d}\sum_{i, j=1}^{N}\sum_{\gamma,\nu=1}^{d}\partial_{\gamma\nu}^{2}f(x_{i})\Phi_{0}^{ \beta\gamma\nu}(\varepsilon^{-1}(x_{i}-x_{j}))\Big{\rangle}_{G_{0}}\] is zero by the symmetry of the potential. In fact, \(\Phi_{0}^{\beta\gamma\nu}\) is antisymmetric in the exchange \(\xi_{i}\leftrightarrow\xi_{j}\) while \(G_{0}\) is symmetric. The term of order zero in \(\varepsilon\) is \[\int_{0}^{t}ds\Big{\langle}\frac{1}{2}\varepsilon^{d}\sum_{i,j=1}^{N}\sum_{ \gamma,\nu=1}^{d}\partial_{\gamma\nu}^{2}f(x_{i})\Phi_{0}^{\beta\gamma\nu}( \varepsilon^{-1}(x_{i}-x_{j}))[G_{0}g_{1}+G_{0}R_{1}^{a}]\Big{\rangle}.\] For the same reasons the part involving \(g_{1}\) is zero, because all the conserved quantities are symmetric. We are left with \[\int_{0}^{t}ds\Big{\langle}\frac{1}{4}\varepsilon^{d}\sum_{i,j=1}^{N}\sum_{ \gamma,\nu=1}^{d}\partial_{\gamma\nu}^{2}f(x_{i})\Phi_{0}^{\beta\gamma\nu}( \varepsilon^{-1}(x_{i}-x_{j}))R_{1}^{a}\Big{\rangle}_{G_{0}}.\] We can safely replace \(\Phi_{0}^{\beta\gamma\nu}\) with \(\tilde{\Phi}_{0}^{\beta\gamma\nu}:=\Phi_{0}^{\beta\gamma\nu}-\mathscr{P}\Phi_{ 0}^{\beta\gamma\nu}\), since \(\mathscr{P}R_{1}\) is zero.
By (2.36) we can use the 'identity' \((\mathscr{L}^{*})^{-1}\mathscr{L}^{*}R_{1}^{a}=R_{1}^{a}\) and the expression of \(R_{1}^{a}\) to get \[\int_{0}^{t}ds\Big{\langle}\frac{1}{4}\varepsilon^{d}\sum_{i,j=1}^{N}\sum_{ \gamma,\nu=1}^{d}\partial_{\gamma\nu}^{2}f(x_{i})\mathscr{L}^{-1}\tilde{\Phi }_{0}^{\beta\gamma\nu}(\varepsilon^{-1}(x_{i}-x_{j}))\sum_{k=1}^{d}[\partial _{k}\lambda_{0}^{0}(x_{i},s)w_{i}^{0k}+\partial_{k}\lambda_{0}^{d+1}(x_{i},s)w_{i}^{d+ 1k}]\Big{\rangle}_{G_{0}}.\] This term is zero by oddness in the velocity. We observe that the term containing \(D\), the third line in (3.14), is not zero, because the lowest order is given by an average with respect to \(G_{0}\) (a Gibbs measure with non constant parameters). The other terms are of higher order, hence do not contribute. The explicit expression is \[\begin{split}&\frac{1}{6}\int dx\sum_{\alpha,\gamma,\nu=1}^{d} \partial_{\alpha\gamma\nu}^{3}f(x)\Big{\langle}\bar{\Phi}_{\beta\alpha\gamma \nu}(x)\Big{\rangle}_{F_{\varepsilon}}=\\ &\Big{\langle}\frac{1}{12}\varepsilon^{d}\sum_{i\neq j=1}^{N}\sum_ {\alpha,\gamma,\nu=1}^{d}\partial_{\alpha\gamma\nu}^{3}f(\varepsilon\xi_{i}) \partial_{\beta}V(|\xi_{i}-\xi_{j}|)[\xi_{i}^{\gamma}-\xi_{j}^{\gamma}][\xi_{ i}^{\nu}-\xi_{j}^{\nu}][\xi_{i}^{\alpha}-\xi_{j}^{\alpha}]\Big{\rangle}_{G_{0}}+o( \varepsilon).\end{split} \tag{3.16}\] By symmetry, only the terms of the form \(\Big{\langle}\bar{\Phi}_{\alpha\alpha\beta\beta}\Big{\rangle}_{G_{0}}\) are different from zero. Define \(\Phi_{\alpha\beta}:=\bar{\Phi}_{\alpha\alpha\beta\beta}\) and \(\hat{\Phi}_{\alpha\beta}:=\Big{\langle}\Phi_{\alpha\beta}\Big{\rangle}_{G_{0}}\). \(\hat{\Phi}_{\alpha\beta}\) is a function of \(\rho,T\) and, using the condition \(\nabla P_{0}=0\), it can be seen as a function of \(T\) only, so that only the gradient of \(T\) will appear in the final term.
By two integrations by parts we move two \(x\)-derivatives in the main term in (3.16) onto \(\hat{\Phi}_{\alpha\beta}\), and (3.16) becomes \[\frac{1}{6}\int dx\sum_{\alpha=1}^{d}\partial_{\alpha}f\partial_{\alpha\beta}^ {2}\hat{\Phi}_{\alpha\beta}(\rho,T).\] We have \[\partial_{\alpha}\hat{\Phi}_{\alpha\beta}=\frac{\partial\hat{\Phi}_{\alpha \beta}}{\partial T}\partial_{\alpha}T:=\hat{\Phi}_{\alpha\beta}^{\prime} \partial_{\alpha}T,\] and \[\partial_{\alpha\beta}^{2}\hat{\Phi}_{\alpha\beta}=\partial_{\beta}(\hat{ \Phi}_{\alpha\beta}^{\prime}\partial_{\alpha}T)=\partial_{\alpha}T\partial_{ \beta}(\hat{\Phi}_{\alpha\beta}^{\prime})+\hat{\Phi}_{\alpha\beta}^{\prime} \partial_{\alpha\beta}^{2}T\] \[=\partial_{\alpha}T\frac{\partial\hat{\Phi}_{\alpha\beta}^{\prime}}{ \partial T}\partial_{\beta}T+\hat{\Phi}_{\alpha\beta}^{\prime}\partial_{\alpha \beta}^{2}T=\hat{\Phi}_{\alpha\beta}^{\prime\prime}\partial_{\alpha}T\partial_{ \beta}T+\hat{\Phi}_{\alpha\beta}^{\prime}\partial_{\alpha\beta}^{2}T.\] Therefore \[\begin{split}&\frac{1}{6}\int dx\sum_{\alpha=1}^{d}\partial_{\alpha}f \partial_{\alpha\beta}^{2}\hat{\Phi}_{\alpha\beta}(\rho,T)=\\ &\sum_{\alpha=1}^{d}\Big{[}\int dx\partial_{\beta}f[Y_{1}( \partial_{\alpha}T\partial_{\beta}T-\delta_{\alpha\beta}\frac{1}{d}\sum_{ \gamma=1}^{d}(\partial_{\gamma}T)^{2})+\bar{\omega}_{1}\delta_{\alpha\beta} \sum_{\gamma=1}^{d}(\partial_{\gamma}T)^{2}\\ &+Y_{2}(\partial_{\alpha\beta}^{2}T-\delta_{\alpha\beta}\frac{1}{ d}\sum_{\gamma=1}^{d}\partial_{\gamma\gamma}^{2}T)+\bar{\omega}_{2}\delta_{ \alpha\beta}\sum_{\gamma=1}^{d}\partial_{\gamma\gamma}^{2}T]\Big{]},\end{split} \tag{3.17}\] where \[Y_{1}=\frac{1}{6}\hat{\Phi}^{\prime\prime}_{\alpha\beta},\quad\alpha\neq\beta ;\quad Y_{2}=\frac{1}{6}\hat{\Phi}^{\prime}_{\alpha\beta},\quad\alpha\neq\beta ;\quad\bar{\omega}_{1}=\frac{1}{6d}\sum_{\gamma=1}^{d}\hat{\Phi}^{\prime\prime }_{\gamma\gamma},\quad\bar{\omega}_{2}=\frac{1}{6d}\sum_{\gamma=1}^{d}\hat{ \Phi}^{\prime}_{\gamma\gamma}.
\tag{3.18}\] Finally, we turn to the first term in the r.h.s. of (3.14), compute the contributions of the currents \(w_{*}\) and use the definition (3.10) of \(\tilde{w}\): \[\begin{split}&\varepsilon^{-2}\Big{\langle}w_{*}^{\beta k}(x) \Big{\rangle}_{G_{\varepsilon}}+\varepsilon^{-1}\Big{\langle}w_{*}^{\beta k}(x)R_{\varepsilon }\Big{\rangle}_{G_{0}}=\varepsilon^{-2}\Big{\langle}\tilde{w}^{\beta k} \Big{\rangle}_{G_{\varepsilon}}+\rho u^{\beta}u^{k}+\varepsilon^{-1}\Big{\langle} w_{*}^{\beta k}R_{\varepsilon}\Big{\rangle}_{G_{0}}+O(\varepsilon)=\\ &\varepsilon^{-2}P_{0}+\varepsilon^{-1}P_{1}+P_{2}+\rho u^{\beta }u^{k}+\varepsilon^{-1}\Big{\langle}G_{0}R_{1}w_{*}^{\beta k}(x)\Big{\rangle} +\Big{\langle}G_{0}R_{2}w_{*}^{\beta k}(x)\Big{\rangle}+O(\varepsilon).\end{split} \tag{3.19}\] The first two terms in the second line of (3.19) do not contribute, because \(P_{0}\) and \(P_{1}\) are constant. The fourth term in the r.h.s. of (3.19) gives the non linear transport term, while \(P_{2}\) represents the second order correction to the thermodynamic pressure \(P_{\varepsilon}\) and gives rise to the unknown pressure \(\mathfrak{p}\) appearing in equation (1.1). We now examine the terms involving \(R_{i}\). To compute these terms, let us first introduce \(\bar{w}^{\beta k}=w_{*}^{\beta k}-\mathscr{P}w_{*}^{\beta k}\) and notice that (2.36) implies \[\Big{\langle}R_{1}\mathscr{P}w^{\beta k}(x)\Big{\rangle}_{G_{0}}=0,\quad \Big{\langle}R_{2}\mathscr{P}w^{\beta k}(x)\Big{\rangle}_{G_{0}}=0. \tag{3.20}\] By (2.36) we can use the 'identity' \((\mathscr{L}^{*})^{-1}\mathscr{L}^{*}G_{0}R_{i}=G_{0}R_{i},\quad i=1,2\) and (2.46).
First of all, let us consider the diverging term due to \(R_{1}\) and start from the contribution due to \(R_{1}^{a}\): \[\varepsilon^{-1}\Big{\langle}\sum_{j=1}^{N}\sum_{\gamma=1}^{d}[\partial_{\gamma}\lambda_{0}^{0}(x_{j})w_{j}^{0\gamma}+\partial_{\gamma}\lambda_{0}^{d+1}(x_{j})w_{j}^{d+1\gamma}] \varepsilon^{d}\sum_{i=1}^{N}\partial_{k}f(x_{i})\mathscr{L}^{-1}\bar{w}_{i}^{ \beta k}\Big{\rangle}_{G_{0}}.\] The symmetries of the microscopic current-current correlations imply (see [28]) that the cross correlations between \(\mu=0,d+1\) and \(\beta=1,\ldots,d\) vanish. Hence this term is zero. On the other hand, the term involving \(\varepsilon^{-1}R_{1}^{s}\), \[\varepsilon^{-1}\Big{\langle}R_{1}^{s}\varepsilon^{d}\sum_{i=1}^{N}\bar{w}_{i}^{ \beta k}\partial_{k}f(x_{i})\Big{\rangle}_{G_{0}}, \tag{3.21}\] is by our assumption of order \(1\) and has to be computed. This term gives in the hydrodynamic equations a term similar to the one due to \(D\), and will add new terms to the transport coefficients \(Y_{1}\) and \(Y_{2}\). The explicit expression is computed in the Appendix A.1. The total transport coefficients in the equation will be denoted by \(K_{1}\) and \(K_{2}\). We discuss now the last term in the r.h.s. of (3.19) involving \(R_{2}\), \[\begin{split}&\int dx\sum_{k=1}^{d}\partial_{k}f\Big{\langle} \bar{w}^{\beta k}(x)(\mathscr{L}^{*})^{-1}\mathscr{L}^{*}G_{0}R_{2}\Big{\rangle} =\Big{\langle}\mathscr{L}^{*}G_{0}R_{2}\mathscr{L}^{-1}\varepsilon^{d}\sum_ {i=1}^{N}\sum_{k=1}^{d}\bar{w}_{i}^{\beta k}\partial_{k}f(x_{i})\Big{\rangle} \\ =&-\varepsilon^{-1}\Big{\langle}\Big{[}G_{0} \mathscr{L}^{*}g_{1}+g_{1}\mathscr{L}^{*}G_{0}-\varepsilon\partial_{t}G_{0} \Big{]}\mathscr{L}^{-1}\varepsilon^{d}\sum_{i=1}^{N}\sum_{k=1}^{d}\bar{w}_{i }^{\beta k}\partial_{k}f(x_{i})\Big{\rangle}.\end{split} \tag{3.22}\] The term \(\partial_{t}G_{0}\mathscr{L}^{-1}\dots\) in the r.h.s. does not contribute, because the range of \(\mathscr{L}^{-1}\) is orthogonal to \(\partial_{t}G_{0}\).
The second term is \[\begin{split}&\Big{\langle}g_{1}\sum_{j=1}^{N}\sum_{\gamma=1}^{d}[ \partial_{\gamma}\lambda_{0}^{0}(x)w_{j}^{0\gamma}+\partial_{\gamma}\lambda_{ 0}^{d+1}(x)w_{j}^{d+1\gamma}]\mathscr{L}^{-1}\varepsilon^{d}\sum_{i=1}^{N} \sum_{k=1}^{d}\bar{w}_{i}^{\beta k}\partial_{k}f(x_{i})\Big{\rangle}_{G_{0}} \\ =&\sum_{\ell=1}^{N}\sum_{\mu,\gamma=1}^{d}\sum_{j=1}^ {N}\partial_{\gamma}\lambda_{0}^{0}(x)\Big{\langle}\lambda_{1}^{\mu}(x_{\ell} )z_{\ell}^{\mu}w_{j}^{0\gamma}\mathscr{L}^{-1}\varepsilon^{d}\sum_{i=1}^{N} \sum_{k=1}^{d}\bar{w}_{i}^{\beta k}\partial_{k}f(x_{i})\Big{\rangle}_{G_{0}} \\ &+\sum_{\ell=1}^{N}\sum_{\mu,\gamma=1}^{d}\sum_{j=1}^{N}\lambda_{ 1}^{\mu}(x_{\ell})\partial_{\gamma}\lambda_{0}^{d+1}(x)\Big{\langle}z_{\ell}^ {\mu}w_{j}^{d+1\gamma}\mathscr{L}^{-1}\varepsilon^{d}\sum_{i=1}^{N}\sum_{k=1}^ {d}\bar{w}_{i}^{\beta k}\partial_{k}f(x_{i})\Big{\rangle}_{G_{0}}.\end{split} \tag{3.23}\] A term of a similar form will appear in (3.26). All these terms together ((3.23) plus the second term in (3.26)) give in the hydrodynamic equation a term of the form \[\sum_{j,k,l=1}^{d}\partial_{j}[\alpha_{\beta jkl}(T)u_{k}\partial_{\ell}T],\] which is Galilean invariant for any temperature only if \(\alpha_{\beta jkl}=0\) (see [3], page 1071). Hence \(\alpha_{\beta jkl}\) has to vanish on this physical ground. In the Appendix A.3 we will prove that this is indeed the case. We are left with the first term \[\Big{\langle}\sum_{j=1}^{N}\sum_{\mu=1}^{d}\sum_{l,k=1}^{d}\partial_{l}\lambda_ {1}^{\mu}(x_{j})\bar{w}_{j}^{\mu l}\mathscr{L}^{-1}\varepsilon^{d}\sum_{i} \bar{w}_{i}^{\beta k}\partial_{k}f(x_{i})\Big{\rangle}_{G_{0}}. \tag{3.24}\] We remark that \(\mathscr{L}^{-1}\) is "well defined" on \(\bar{w}\) by the assumptions discussed in section 2. The substitution of the current \(w\) with \(\bar{w}\) is correct, because the range of \(\mathscr{L}^{-1}\) is orthogonal to \(\mathscr{P}w\).
Since \(\lambda_{1}^{\mu}=\dfrac{u^{\mu}}{T}\) and \(T\) are not constant, we have two different contributions to (3.24): \[\sum_{j=1}^{N}\sum_{\mu=1}^{d}\sum_{l,k=1}^{d}\Big{\langle}\partial_{l} \lambda_{1}^{\mu}(x_{j})\bar{w}_{j}^{\mu l}\mathscr{L}^{-1}\varepsilon^{d} \sum_{i=1}^{N}\bar{w}_{i}^{\beta k}\partial_{k}f(x_{i})\Big{\rangle}_{G_{0}} \tag{3.25}\] \[=\sum_{j=1}^{N}\sum_{\mu=1}^{d}\sum_{l,k=1}^{d}\Big{\langle}[\frac{1}{T} \partial_{l}u^{\mu}(x_{j})-u^{\mu}\frac{1}{T^{2}}\partial_{l}T(x_{j})]\bar{w} _{j}^{\mu l}\mathscr{L}^{-1}\varepsilon^{d}\sum_{i=1}^{N}\bar{w}_{i}^{\beta k }\partial_{k}f(x_{i})\Big{\rangle}_{G_{0}}. \tag{3.26}\] The second term will combine with the terms in (3.23) to eliminate the term in the equation which is not Galilean invariant (see Appendix A.3). We discuss here the first term in (3.26). To find the expression of the transport coefficients we consider \[\Big{\langle}\sum_{j=1}^{N}\sum_{\mu,l=1}^{d} \frac{1}{T}\partial_{l}u^{\mu}(x_{j})w_{j}^{\mu l}\mathscr{L}^{-1} \varepsilon^{d}\sum_{i=1}^{N}\sum_{k=1}^{d}\bar{w}_{i}^{\beta k}\partial_{k}f (x_{i})\Big{\rangle}_{G_{0}}= \tag{3.27}\] \[\sum_{\mu,k,l=1}^{d}\int dy\frac{1}{T}\partial_{l}u^{\mu}(y)\int \varepsilon^{-d}dz\partial_{k}f(z)\Big{\langle}\bar{w}^{\mu l}(y)\mathscr{L}^ {-1}\bar{w}^{\beta k}(z)\Big{\rangle}_{G_{0}}.\] Since the Gibbsian state \(G_{0}\) is invariant under translations on \(\mathbb{R}^{d}\) we have \[\text{r.h.s.
of (3.27)}=\sum_{\mu,k,l=1}^{d}\int dy\,\frac{1}{T}\partial_{l}u^{\mu}(y)\int d\xi\,\partial_{k}f(y+\varepsilon\xi)\Big{\langle}\bar{w}^{\mu l}(0)\mathscr{L}^{-1}\bar{w}^{\beta k}(\xi)\Big{\rangle}_{G_{0}}.\] Writing \(\mathscr{L}^{-1}\) as a time integral of the evolved current, the relevant objects are the space-time correlations \(\int d\xi\big{\langle}\bar{w}^{\mu l}(\xi,\tau)\bar{w}^{\beta k}(0,0)\big{\rangle}_{G_{0}}\); by rotation and time-reversal invariance of the Gibbs state this tensor is isotropic and is determined by two independent scalar functions \(c(\tau)\) and \(c^{\prime}(\tau)\), introduced in (3.28)-(3.31). Therefore the time integral of (3.31) has only two independent coefficients \[\int_{0}^{\infty}d\tau c(\tau)=2\eta T;\quad\int_{0}^{\infty}d\tau c^{\prime}( \tau)=2T(\zeta-\frac{2}{d}\eta), \tag{3.32}\] where \(\eta\) and \(\zeta\) are the shear viscosity and the bulk viscosity respectively. They are finite if the correlations decay sufficiently fast to make the time integrals in (3.32) convergent. Notice that the subtraction of \(\mathscr{P}\tilde{w}^{\alpha\beta}\) has been crucial, because the self-correlation of the slow part of the current does not decay in time. Since \(\text{div}\ u\neq 0\), the term proportional to the bulk viscosity does appear in the limiting equation. The computation gives also the Green-Kubo formula for the bulk viscosity \(\zeta\) \[\zeta=\frac{1}{2d^{2}T}\int_{0}^{\infty}d\tau\int d\xi\Big{[}\Big{\langle}\sum _{\alpha}\bar{w}^{\alpha\alpha}(\xi,\tau)\sum_{\gamma}\bar{w}^{\gamma\gamma}(0,0)\Big{\rangle}_{G_{0}}-\Big{\langle}\sum_{\alpha}\bar{w}^{\alpha\alpha} \Big{\rangle}_{G_{0}}\Big{\langle}\sum_{\gamma}\bar{w}^{\gamma\gamma}\Big{\rangle}_{G_{0}}\Big{]}. \tag{3.33}\] The usual expression given in [28] is recovered using the explicit form of the projector \(\mathscr{P}\). 
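The Green-Kubo relations (3.32) reduce the transport coefficients to time integrals of current autocorrelations. A minimal numerical sketch, assuming a synthetic exponentially decaying autocorrelation \(c(\tau)=c_{0}e^{-\tau/\tau_{0}}\) in place of the true correlation (which would come from a simulation); the values of \(T\), \(c_{0}\), \(\tau_{0}\) are arbitrary stand-ins:

```python
import numpy as np

T = 1.5      # temperature (stand-in value)
c0 = 2.0     # c(0): amplitude of the current autocorrelation
tau0 = 0.3   # decay time; fast decay makes the time integral finite

# Synthetic autocorrelation c(tau) = c0 * exp(-tau/tau0) on a grid.
tau = np.linspace(0.0, 20 * tau0, 20001)
c = c0 * np.exp(-tau / tau0)

# Green-Kubo (3.32): int_0^infty c(tau) dtau = 2 * eta * T.
integral = np.sum((c[:-1] + c[1:]) * np.diff(tau)) / 2  # trapezoid rule
eta = integral / (2 * T)

print(abs(eta - c0 * tau0 / (2 * T)) < 1e-6)
```

For this correlation the exact value is \(\eta=c_{0}\tau_{0}/2T\); the quadrature recovers it, illustrating why the decay of the subtracted correlation is what makes \(\eta\) finite.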
The shear viscosity is given by \[\eta=\frac{1}{2T}\int_{0}^{\infty}d\tau\int d\xi\Big{\langle}\bar{w}^{12}(\xi,\tau)\bar{w}^{12}(0,0)\Big{\rangle}. \tag{3.34}\] Putting all the terms together we have the following equation to the lowest order in \(\varepsilon\): \[\begin{split}\int dxf(x)\rho[u^{\beta}(x,t)-u^{\beta}(x,0)]=\\ \int_{0}^{t}ds\Big{[}\int dy\sum_{k=1}^{d}\partial_{k}f(y)\{\rho u ^{\beta}(y,s)u^{k}(y,s)-\eta[\partial_{k}u^{\beta}]^{Tr}(y,s)-\delta_{\beta k}\,p(y,s)-\delta_{\beta k}\,\zeta\sum_{ \gamma=1}^{d}\partial_{\gamma}u^{\gamma}(y,s)\}\\ +\int dx\sum_{\alpha=1}^{d}\partial^{2}_{\alpha\beta}f(x) \hat{\Phi}_{\alpha\beta}(x,s)+\int dx\sum_{k=1}^{d}\partial_{k}f(x)\varepsilon ^{-1}\Big{\langle}R_{1}^{s}\bar{w}^{\beta k}\Big{\rangle}_{G_{0}}\Big{]},\end{split} \tag{3.35}\] for any test function \(f\), where \([A]^{Tr}\) denotes the traceless part of \(A\). The differential form is then \[\begin{split}\partial_{t}(\rho u)+\nabla\cdot(\rho u\otimes u)+\nabla p =\nabla\cdot(\tau^{(1)}-\tau^{(2)}),\\ \tau^{(1)}_{\alpha\beta}:=\eta\left(\partial_{\alpha}u^{\beta}+ \partial_{\beta}u^{\alpha}-\frac{2}{d}\delta_{\alpha\beta}\nabla\cdot u\right)+\zeta\delta_{\alpha\beta}\nabla\cdot u,\end{split}\] \[\tau^{(2)}_{\alpha\beta}:=K_{1}\Big{(}\partial_{\alpha}T\partial_{\beta}T-\frac{1}{d}\delta_{\alpha\beta}\sum_{\gamma=1}^{d}(\partial_{\gamma}T)^{2}\Big{)}+\omega_{1}\delta_{\alpha\beta}\sum_{\gamma=1}^{d}(\partial_{\gamma}T)^{2}+K_{2}\Big{(}\partial_{\alpha\beta}^{2}T-\frac{1}{d}\delta_{\alpha\beta}\sum_{\gamma=1}^{d}\partial_{\gamma\gamma}^{2}T\Big{)}+\omega_{2}\delta_{\alpha\beta}\sum_{\gamma=1}^{d}\partial_{\gamma\gamma}^{2}T, \tag{3.36}\] with \[K_{i}=Y_{i}+Z_{i},\quad\omega_{i}=\bar{\omega}_{i}+\phi_{i},\quad i=1,2,\] computed in the Appendix A.1. ### Energy equation Using the arguments developed before, it is also possible to find the equation for the energy. We consider the conservation law for the energy (2.10), and therefore look for the equation for the quantity \(z_{i}^{d+1}\). 
By (2.10) we have \[\frac{d}{dt}\varepsilon^{d}\sum_{i=1}^{N}f(x_{i})z_{i}^{d+1}=\varepsilon^{-1} \varepsilon^{d}\sum_{i=1}^{N}\sum_{k=1}^{d}\partial_{k}f(x_{i})w_{i}^{d+1\,k}+O (\varepsilon). \tag{3.37}\] We take the average of both sides w.r.t. \(F_{\varepsilon}\). We have \[\big{\langle}\varepsilon^{d}\sum_{i=1}^{N}f(x_{i})z_{i}^{d+1}\big{\rangle}_{F_ {\varepsilon}}=\int dxf(x)\rho(x)e(x)+O(\varepsilon),\] and, by using the expression of \(F_{\varepsilon}=G_{\varepsilon}+G_{0}R_{1}+O(\varepsilon)\), \[\varepsilon^{-1}\varepsilon^{d}\sum_{i=1}^{N}\sum_{k=1}^{d}\big{\langle} \partial_{k}f(x_{i})w_{i}^{d+1\,k}\big{\rangle}_{F_{\varepsilon}}=\int dx \sum_{k=1}^{d}\partial_{k}f(x)[\varepsilon^{-1}\big{\langle}w^{d+1\,k}(x)\big{\rangle}_{G_{ \varepsilon}}+\big{\langle}G_{0}R_{1}w^{d+1\,k}(x)\big{\rangle}]+O( \varepsilon).\] We need to evaluate \(\big{\langle}w^{d+1\,k}(x)G_{0}R_{1}\big{\rangle}\) and \(\varepsilon^{-1}\big{\langle}w^{d+1\,k}(x)\big{\rangle}_{G_{\varepsilon}}\). The first of them gives the diffusive correction. We introduce \(\tilde{w}^{d+1\,k}\) and \(\tilde{z}^{d+1}\) defined by (2.6) with \(v\) replaced by \(\tilde{v}\) and \[w_{i}^{d+1\,k}=\tilde{w}_{i}^{d+1\,k}+\varepsilon\big{\{}u^{k}(x_{i})z_{i}^{ d+1}+\sum_{\gamma=1}^{d}\big{(}u^{\gamma}\tilde{v}_{i}^{\gamma}\tilde{v}_{i}^{k}+ \sum_{j\neq i}\Psi^{\gamma k}(\varepsilon^{-1}|x_{i}-x_{j}|)\frac{1}{2}[u^{ \gamma}(x_{i})+u^{\gamma}(x_{j})]\big{)}\big{\}}. 
\tag{3.38}\] Then, \[\big{\langle}\tilde{w}^{d+1\,k}(x){\mathscr{L}^{*}}^{-1}{\mathscr{L}^{*}}G_{ 0}R_{1}\big{\rangle}=-\varepsilon^{-1}\big{\langle}{\mathscr{L}^{-1}}\bar{w}^ {d+1\,k}(x){\mathscr{L}^{*}}G_{0}\big{\rangle},\] and by (2.43) (only \(R_{1}^{a}\) enters since \(w^{d+1}\) is odd), \[\Big{\langle}\bar{w}^{d+1\,k}G_{0}R_{1}^{a}\Big{\rangle}=-\Big{\langle}\sum_{j =1}^{N}\sum_{l=1}^{d}\sum_{\mu=0,d+1}\partial_{l}\lambda_{0}^{\mu}(x_{j})\bar{w}_ {j}^{\mu l}{\mathscr{L}^{-1}}\bar{w}^{d+1\,k}(x)\Big{\rangle}_{G_{0}}+O( \varepsilon), \tag{3.39}\] where \(\bar{w}^{d+1\,k}=\tilde{w}^{d+1\,k}-{\mathscr{P}}(\tilde{w}^{d+1\,k})\). Because of time-reversal and rotation invariance of the Gibbs state the only correlations different from zero are (see [28]) \[\int dx\Big{\langle}\bar{w}^{d+1\,k}(x,\tau)\bar{w}^{d+1\,l}(0,0)\Big{\rangle}_ {G_{0}}=\delta_{lk}a(\tau), \tag{3.40}\] and \(\int d\tau a(\tau)=2\kappa T^{2}\). Therefore the conductivity \(\kappa\) is given by \[\kappa=\frac{1}{2dT^{2}}\int d\tau\Big{\{}\int d\xi\Big{\langle}\sum_{k,l=1} ^{d}w^{d+1\,k}(\xi,\tau)w^{d+1\,l}(0,0)\Big{\rangle}_{G_{0}}-d\,T(e+P)^{2}/\rho\Big{\}}. \tag{3.41}\] We observe that, since \(\lambda_{\varepsilon}=-(T_{\varepsilon})^{-1}\), \(\lambda_{0}^{d+1}\) is given by \(-T^{-1}\). Using the previous arguments one can see that the second term in (3.38) gives no contribution to \(\langle w^{d+1\,k}R_{\varepsilon}\rangle\) in the limit \(\varepsilon\to 0\). The mean of the energy current on the Gibbs state \(G_{\varepsilon}\), i.e. \(\langle w^{d+1\,k}\rangle_{G_{\varepsilon}}\), is nothing but \((\rho_{\varepsilon}e_{\varepsilon}+P_{\varepsilon})u_{\varepsilon}\) (see [28]), where \(e_{\varepsilon}=\langle z^{d+1}\rangle_{G_{\varepsilon}}\). Hence, \(\varepsilon^{-1}\langle w^{d+1\,k}\rangle_{G_{\varepsilon}}=(\rho e+P)u+O( \varepsilon)\) with \(e\) the order zero term for the internal energy and \(P\) the order zero term for the pressure. 
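The finiteness of \(\kappa\) in (3.41) again hinges on subtracting the slow part \(\mathscr{P}\tilde{w}^{d+1\,k}\), whose self-correlation does not decay. A small sketch with synthetic correlation data (the temperature, amplitudes and decay rate are arbitrary stand-ins):

```python
import numpy as np

T = 1.2

# Synthetic energy-current autocorrelation: a decaying "fast" part plus
# a non-decaying offset coming from the slow (projected) component.
tau = np.linspace(0.0, 8.0, 8001)
fast = 0.9 * np.exp(-tau / 0.25)   # correlation of w_bar: integrable
slow = 0.4 * np.ones_like(tau)     # correlation of P(w): does not decay

def time_integral(a):
    return np.sum((a[:-1] + a[1:]) * np.diff(tau)) / 2  # trapezoid rule

# Only the subtracted current gives a finite conductivity,
# via int a(tau) dtau = 2 * kappa * T^2  (cf. (3.40)).
kappa = time_integral(fast) / (2 * T**2)
exact = 0.9 * 0.25 / (2 * T**2)
print(abs(kappa - exact) < 1e-4)

# Without the subtraction the integral keeps growing with the window.
print(time_integral(fast + slow) > 10 * time_integral(fast))
```

The second check is the numerical counterpart of the remark that the self-correlation of the slow part of the current does not decay in time.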
Summarizing, if we take the average of (3.37) with respect to \(F_{\varepsilon}\) we obtain \[\begin{split}&\int dxf(x)[\rho e(x,t)-\rho e(x,0)]=\\ &\int_{0}^{t}ds\int dxf(x)\{-\mathrm{div}[(\rho e+P)u](x,s)+ \nabla\cdot(\kappa\nabla T)(x,s)\}.\end{split} \tag{3.42}\] In differential form \[\partial_{t}(\rho e)=-\mathrm{div}[(\rho e+P)u]+\nabla\cdot(\kappa\nabla T).\] By using the equation for \(\rho\) we also get \[\rho[\partial_{t}e+u\cdot\nabla e]=-P\nabla\cdot u+\nabla\cdot(\kappa\nabla T).\] ### Entropy Now we show that these equations imply the growth in time of the total thermodynamic entropy and hence the second law of thermodynamics. Let \(s(\rho,e)\) denote the entropy as a function of the density \(\rho\) and the internal energy \(e\), and notice that \(\frac{\partial s}{\partial e}=\frac{1}{T};\quad\frac{\partial s}{\partial\rho} =-\frac{P}{T\rho^{2}}\). We have \[\partial_{t}(\rho s)+\nabla\cdot(\rho us)=\rho\partial_{t}s+s\partial_{t}\rho +\nabla\cdot(\rho us)=\frac{\partial s}{\partial e}\rho\partial_{t}e+\frac{ \partial s}{\partial\rho}\rho\partial_{t}\rho+s\partial_{t}\rho+\nabla\cdot( \rho us)\] \[=\frac{1}{T}\rho\partial_{t}e-\frac{P}{T\rho}\partial_{t}\rho-s\nabla\cdot( \rho u)+\nabla\cdot(\rho us).\] By using the equation for the energy \[\partial_{t}(\rho s)+\nabla\cdot(\rho us)=-\frac{1}{T}\rho u\cdot\nabla e- \frac{P}{T}\nabla\cdot u+\frac{1}{T}\nabla\cdot(\kappa\nabla T)+\frac{P}{T\rho}\nabla \cdot(\rho u)+\rho u\cdot\nabla s\] \[=\nabla\cdot\Big{(}\frac{\kappa}{T}\nabla T\Big{)}+\frac{\kappa}{T^{2}}|\nabla T|^{2}.\] We have used the identity \[\rho u\cdot\Big{[}\frac{1}{T}\nabla e-\frac{P}{T\rho^{2}}\nabla\rho\Big{]}=\rho u\cdot\nabla s,\] which follows from the expressions of \(\frac{\partial s}{\partial e}\) and \(\frac{\partial s}{\partial\rho}\) above. Hence, by integrating over \(x\) on a torus, \[\partial_{t}\int dx\,\rho s=\int dx\,\frac{\kappa}{T^{2}}|\nabla T|^{2}\geq 0.\] ## 4. 
Comparison with the Boltzmann case It is well known that the incompressible Navier-Stokes-Fourier equations for the perfect gas can be derived from the Boltzmann equation under a suitable diffusive scaling (scale space as \(\varepsilon^{-1}\) and time as \(\varepsilon^{-2}\)), taking the Mach number proportional to \(\varepsilon\) in the limit \(\varepsilon\to 0\). Under the same scaling, but taking initial conditions with gradients of density and temperature of order 1 and/or diffusive boundary conditions with a gradient of temperature of order 1, the formal limiting equations are different (usually called ghost effect equations, following Sone) \[\left\{\begin{array}{rcl}P&=&\rho T,\qquad\nabla P=0,\\ \partial_{t}u+u\cdot\nabla u+\nabla\mathfrak{p}&=&\nabla\cdot\left(\bar{\tau} ^{(1)}-\bar{\tau}^{(2)}\right),\\ \partial_{t}\rho+\nabla\cdot\left(\rho u\right)&=&0,\\ \frac{3}{2}\partial_{t}P(t)+\frac{5}{2}P\big{(}\nabla\cdot u\big{)}&=&\nabla \cdot\left(\bar{\kappa}\frac{\nabla T}{2T^{2}}\right),\end{array}\right. \tag{4.1}\] for \(d=3\), where \(\bar{\kappa}(T)>0\) is the heat conductivity, \[\bar{\tau}_{ij}^{(1)}:=\lambda\left(\partial_{i}u_{j}+\partial_{j}u_{i}-\frac{2}{3 }\delta_{ij}\nabla\cdot u\right),\] \[\bar{\tau}_{ij}^{(2)}:=\frac{\lambda^{2}}{P}\Big{(}\bar{K}_{1}[\big{(}\partial_{i} \partial_{j}T\big{)}-\frac{1}{3}\delta_{ij}\sum_{l}\partial_{l}^{2}T]+\frac{ \bar{K}_{2}}{T}[\big{(}\partial_{i}T\big{)}\big{(}\partial_{j}T\big{)}-\frac{ 1}{3}\delta_{ij}\sum_{l}(\partial_{l}T)^{2}]\Big{)}\] for some smooth function \(\lambda(T)>0\), the viscosity coefficient, and positive constants \(\bar{K}_{1}\) and \(\bar{K}_{2}\). 
To give the expressions of the transport coefficients define the quantities \[\mu(x,v):=\frac{\rho(x)}{\big{(}2\pi T(x)\big{)}^{3/2}}\exp\bigg{(}-\frac{ \left|v\right|^{2}}{2T(x)}\bigg{)}, \tag{4.2}\] \[\bar{\mathscr{A}}:=v\left(\left|v\right|^{2}-5T\right)\sqrt{\mu}\in\mathbb{R}^{3 },\quad\mathscr{A}:=L^{-1}\left[\bar{\mathscr{A}}\right]\in\mathbb{R}^{3}, \tag{4.3}\] \[\bar{\mathscr{B}}:=\bigg{(}v\otimes v-\frac{\left|v\right|^{2}}{3}\mathbf{1} \bigg{)}\sqrt{\mu}\in\mathbb{R}^{3\times 3},\quad\mathscr{B}:=L^{-1}\bar{ \mathscr{B}}\in\mathbb{R}^{3\times 3}, \tag{4.4}\] \[\bar{\kappa}I:=\int_{\mathbb{R}^{3}}\left(\mathscr{A}\otimes\bar{\mathscr{A}}\right)\mathrm{ d}v,\quad\lambda:=\frac{1}{T}\int_{\mathbb{R}^{3}}\mathscr{B}_{ij}\bar{\mathscr{B}}_{ij} \,\mathrm{d}v\ \ \text{for}\ \ i\neq j, \tag{4.5}\] where \(Lf=\frac{1}{\sqrt{\mu}}\mathscr{L}(\sqrt{\mu}f)\) and \(\mathscr{L}\) is the linearized Boltzmann operator around the local Maxwellian \(\mu\) with zero mean velocity, for hard spheres, \[\frac{\lambda^{2}}{P}\bar{K}_{1}=\frac{1}{T^{2}}\int dv\,\mathscr{B}_{ij}v_{i} \mathscr{A}_{j},\quad i\neq j,\] \[\frac{\lambda^{2}}{TP}\bar{K}_{2}=\frac{1}{T^{4}}\int dv\,\mathscr{B}_{ij}\Big{[} \Gamma(\mathscr{A}_{i},\mathscr{A}_{j})+v_{i}\sqrt{\mu}^{-1}\frac{\partial}{ \partial T}\Big{(}\frac{\sqrt{\mu}}{T^{2}}\mathscr{A}_{j}\Big{)}\Big{]},\quad i\neq j,\] where \(\Gamma\) is the collision operator in the Boltzmann equation. We want to compare the equations we get for the Hamiltonian particle system with the ones derived from Boltzmann, taking into account that the state equation in the second case is the one of the perfect gas, \(P(\rho,T)=\rho T\), since the Boltzmann equation describes a rarefied gas. The continuity equation in kinetic theory is the same as for the particle system, and so is the condition \(\nabla P=0\). 
The equation for the energy simplifies with the state equation of a perfect gas: since \(e=\frac{3}{2}T\), it becomes \[\frac{3}{2}\partial_{t}P(t)=-\frac{5}{2}P\nabla\cdot u+\nabla\cdot(\kappa\nabla T).\] If the domain is a torus, \(\partial_{t}P(t)=0\) and we get \[\frac{5}{2}P\nabla\cdot u=\nabla\cdot(\kappa\nabla T).\] Comparing the equations for the momentum, we see that also in this case the structure is the same, including the new thermal stress terms. Obviously, the transport coefficients are different and we are not able to compare them, since the Boltzmann equation models a gas of hard spheres while for the particles we were considering a smooth potential. However, we notice that if we represent \(\mathscr{L}^{-1}\) as \[\mathscr{L}^{-1}f=\int_{0}^{\infty}ds\,e^{-s\mathscr{L}}f,\] the transport coefficients \(\lambda\) and \(\bar{\kappa}\) are also in kinetic theory expressed as time correlations of currents \[\lambda=-\frac{1}{T}\int_{0}^{\infty}ds\,\Big{\langle}\big{(}v_{i}v_{j}-\frac{1}{3}|v|^{2}\delta_{ij}\big{)}(s)\,\big{(}v_{i}v_{j}-\frac{1}{3}|v|^{2}\delta_{ij}\big{)}(0)\Big{\rangle}_{\mu}, \tag{4.6}\] \[\bar{\kappa}=-\int_{0}^{\infty}ds\,\Big{\langle}\frac{1}{2}(|v|^{2}-5T)v_{i}(s)\,\frac{1}{2}(| v|^{2}-5T)v_{i}(0)\Big{\rangle}_{\mu}. \tag{4.7}\] Notice that \(\zeta\) in kinetic theory is zero, and also \(\omega_{1}\) and \(\omega_{2}\) are not present there. The expressions of the transport coefficients given by the Green-Kubo formulas are similar, with the difference that in kinetic theory the linearized Boltzmann operator around the local Maxwellian appears, while for the particles the same role is played by the Liouville operator. Moreover, the averages are taken with respect to the local Maxwellian equilibrium with zero velocity \(\mu\) in kinetic theory and with respect to the _local_ Gibbs equilibrium \(G_{0}\) for the particles. 
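The representation \(\mathscr{L}^{-1}f=\int_{0}^{\infty}ds\,e^{-s\mathscr{L}}f\) can be checked in a finite-dimensional toy model, with a symmetric positive-definite matrix standing in for the (suitably restricted) generator, so that the semigroup decays and the time integral converges:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the linearized operator: symmetric positive definite,
# so exp(-s L) decays and the time integral converges.
A = rng.normal(size=(4, 4))
L = A @ A.T + np.eye(4)
f = rng.normal(size=4)

# exp(-s L) f via the spectral decomposition (L is symmetric).
lam, Q = np.linalg.eigh(L)
g = Q.T @ f
s = np.linspace(0.0, 40.0, 100001)
vals = (np.exp(-np.outer(s, lam)) * g) @ Q.T  # row i is exp(-s_i L) f

# Trapezoid quadrature of int_0^infty exp(-s L) f ds.
integral = np.sum((vals[:-1] + vals[1:]) * np.diff(s)[:, None], axis=0) / 2

# The time integral reproduces L^{-1} f.
print(np.allclose(integral, np.linalg.solve(L, f), atol=1e-5))
```

The same mechanism is what makes the Green-Kubo time integrals above well defined, provided the correlations (i.e. the semigroup acting on the currents) decay fast enough.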
Moreover, in kinetic theory \(\lambda\) and \(\bar{\kappa}\) are well defined thanks to the hypocoercivity property of \(L\), while for the particle system we need ergodic and mixing properties of the Liouville operator \(\mathscr{L}\). On the other hand, the new transport coefficients \(K_{1}\) and \(K_{2}\) have two different contributions: one involving spatial correlations, which are not present in kinetic theory, and one involving space and double time correlations, which are similar in this respect to the ones present in kinetic theory. To conclude, we remark that also in the Boltzmann case the entropy grows in time, as shown by Bobylev [3]. Finally, we want to stress that the argument for the particles is completely formal; some rigorous results in the case of the incompressible Navier-Stokes equations can be obtained for stochastic systems of particles on the lattice [14], [15]. The stochastic model of a lattice gas in [2] could be useful for deriving the new equations. In the case of kinetic theory the ghost effect equations were obtained in one-dimensional stationary cases, [1], [4], [12]. A recent advance has been obtained in the stationary case in a general domain for a rarefied gas in contact with a thermal reservoir with non-homogeneous temperature: the rigorous proof of the hydrodynamic limit [10], [11]. ## Appendix A ### Transport coefficients We examine more closely the expression of the transport coefficients \(K_{1},K_{2}\) in (3.36): first we compute \(Y_{1},Y_{2}\) in (3.17) and then the part \(Z_{i},i=1,2\), due to the term (3.21). \(\bullet\)_Computation of \(Y_{i},i=1,2\)._ \(Y_{2}(x,t)=\frac{\partial}{\partial T}\hat{\Phi}_{\alpha\beta}(x,t)\), \(\alpha\neq\beta\), is defined in (3.18). 
We have \[\hat{\Phi}_{\alpha\beta}:=\left\langle\bar{\Phi}_{\alpha\beta}\right\rangle_{G _{0}}=\left\langle G_{0}(x,t)\bar{\Phi}_{\alpha\beta}(x)\right\rangle\] where \[\bar{\Phi}_{\alpha\beta}(x)=\frac{1}{2}\varepsilon^{d}\sum_{i,j=1,i\neq j}^{N} \delta(x-x_{i})\partial_{\beta}V(\varepsilon^{-1}(x_{i}-x_{j}))[\xi_{i}^{\beta }-\xi_{j}^{\beta}][\xi_{i}^{\alpha}-\xi_{j}^{\alpha}]^{2}.\] We put \(\frac{\partial}{\partial T}\lambda_{0}^{\mu}=(\lambda_{0}^{\mu})^{\prime}\) and \(\frac{\partial^{2}}{\partial T^{2}}\lambda_{0}^{\mu}=(\lambda_{0}^{\mu})^{\prime\prime}\) for \(\mu=0,d+1\). Then, \[Y_{2}=\Big{\langle}\frac{\partial G_{0}}{\partial T}\bar{\Phi}_{\alpha\beta} \Big{\rangle}=\sum_{\mu=0,d+1}\Big{[}\Big{\langle}\sum_{k=1}^{N}z_{k}^{\mu}(\lambda_{0}^{\mu})^{\prime}( x_{k})\bar{\Phi}_{\alpha\beta}\Big{\rangle}_{G_{0}}-\Big{\langle}\sum_{k=1}^{N}z_{k}^ {\mu}(\lambda_{0}^{\mu})^{\prime}(x_{k})\Big{\rangle}_{G_{0}}\Big{\langle}\bar{ \Phi}_{\alpha\beta}\Big{\rangle}_{G_{0}}\Big{]}.\] Now call \(\tilde{z}^{\mu}=z^{\mu}-\left\langle z^{\mu}\right\rangle_{G_{0}}\) and \(\tilde{\Phi}_{\alpha\beta}=\bar{\Phi}_{\alpha\beta}-\left\langle\bar{\Phi}_{ \alpha\beta}\right\rangle_{G_{0}}\). Then, \[Y_{2}=\Big{\langle}\sum_{k=1}^{N}\tilde{z}_{k}^{0}(\lambda_{0}^{0})^{\prime}( x_{k})\tilde{\Phi}_{\alpha\beta}\Big{\rangle}_{G_{0}}+\Big{\langle}\sum_{k=1}^{N} \tilde{z}_{k}^{d+1}(\lambda_{0}^{d+1})^{\prime}(x_{k})\tilde{\Phi}_{\alpha \beta}\Big{\rangle}_{G_{0}}.\] Let \(h(x)\) be a test function and compute \(\int dxh(x)Y_{2}(x)\). 
\[\int dxh(x)Y_{2}(x)=\varepsilon^{-d}\sum_{\mu=0,d+1}\int dxh(x)\int dy(\lambda_ {0}^{\mu})^{\prime}(y)\Big{\langle}\tilde{z}^{\mu}(y)\tilde{\Phi}_{\alpha \beta}(x)\Big{\rangle}_{G_{0}}.\] We change variable \(x=y+\varepsilon\zeta\) to absorb \(\varepsilon^{-d}\) and get \[\int dxh(x)Y_{2}(x)=\sum_{\mu=0,d+1}\int dyh(y)(\lambda_{0}^{\mu})^{\prime}(y) \int d\zeta\Big{\langle}\tilde{z}^{\mu}(0)\tilde{\Phi}_{\alpha\beta}(\zeta) \Big{\rangle}_{G_{0}}.\] In conclusion \[Y_{2}(y)=\sum_{\mu=0,d+1}(\lambda_{0}^{\mu})^{\prime}(y)\int d\zeta\Big{\langle} \tilde{z}^{\mu}(0)\tilde{\Phi}_{\alpha\beta}(\zeta)\Big{\rangle}_{G_{0}}.\] For \(\alpha\neq\beta\) \[Y_{1}=\frac{\partial^{2}\hat{\Phi}_{\alpha\beta}}{\partial T^{2}}=\frac{ \partial Y_{2}}{\partial T}=\sum_{\mu=0,d+1}(\lambda_{0}^{\mu})^{\prime\prime}(y) \int d\zeta\Big{\langle}\tilde{z}^{\mu}(0)\tilde{\Phi}_{\alpha\beta}(\zeta) \Big{\rangle}_{G_{0}}\] \[+\sum_{\mu,\nu=0,d+1}(\lambda_{0}^{\mu})^{\prime}(y)(\lambda_{0}^{\nu})^{ \prime}(y)\int d\zeta\int d\zeta^{\prime}\Big{\langle}\tilde{z}^{\mu}(0)\tilde{z} ^{\nu}(\zeta^{\prime})\tilde{\Phi}_{\alpha\beta}(\zeta)\Big{\rangle}_{G_{0}}.\] \(\bullet\)_Computation of \(Z_{i},i=1,2.\)_ Now we compute the contribution to the transport coefficients due to the term \[C:=\varepsilon^{-1}\Big{\langle}R_{1}^{s}\varepsilon^{d}\sum_{i=1}^{N}\sum_{k =1}^{d}\bar{w}_{i}^{\beta k}\partial_{k}f(x_{i})\Big{\rangle}_{G_{0}}.\] We use the expression of \(R_{1}^{s}\) given by (2.44) \[C=\varepsilon^{-1}\Big{\langle}G_{0}R_{1}^{s}\varepsilon^{d}\sum_{i=1}^{N} \sum_{k=1}^{d}\bar{w}_{i}^{\beta k}\partial_{k}f(x_{i})\Big{\rangle}\] \[=\varepsilon^{-2}\Big{\langle}\mathscr{L}^{*-1}\mathscr{L}^{*-1}[\mathscr{L}^ {*}\mathscr{L}^{*}G_{0}]\varepsilon^{d}\sum_{i=1}^{N}\sum_{k=1}^{d}\bar{w}_{ i}^{\beta k}\partial_{k}f(x_{i})\Big{\rangle}\] \[=\varepsilon^{-1}\Big{\langle}\mathscr{L}^{*}[G_{0}\sum_{\ell=1}^{N}\sum_{\mu =0,d+1}\sum_{\gamma=1}^{d}\partial_{\gamma}\lambda_{0}^{\mu}(x_{\ell})w_{\ell}^
{\mu\gamma}]\mathscr{L}^{-1}\mathscr{L}^{-1}[\varepsilon^{d}\sum_{i=1}^{N} \sum_{k=1}^{d}\bar{w}_{i}^{\beta k}\partial_{k}f(x_{i})]\Big{\rangle}.\] We compute now \[A:=\varepsilon^{-1}[\mathscr{L}^{*}G_{0}]\sum_{\ell=1}^{N}\sum_{\mu=0,d+1}\sum_ {\gamma=1}^{d}\partial_{\gamma}\lambda_{0}^{\mu}(x_{\ell})w^{\mu\gamma}_{\ell }+\varepsilon^{-1}G_{0}\mathscr{L}^{*}[\sum_{\ell=1}^{N}\sum_{\mu=0,d+1}\sum_{\gamma=1}^{d}\partial_{\gamma}\lambda_{0}^{\mu}(x_{\ell})w^{\mu\gamma} _{\ell}]\] (A. 1.1) \[=G_{0}[\sum_{\ell=1}^{N}\sum_{\mu=0,d+1}\sum_{\gamma=1}^{d}\partial_{\gamma} \lambda_{0}^{\mu}(x_{\ell})w_{\ell}^{\mu\gamma}][\sum_{s=1}^{N}\sum_{\mu=0,d+1}\sum_ {\nu=1}^{d}\partial_{\nu}\lambda_{0}^{\mu}(x_{s})w^{\mu\nu}_{s}]\] \[+\varepsilon^{-1}G_{0}\sum_{\ell=1}^{N}\sum_{\mu=0,d+1}\sum_{\gamma=1}^{d} \mathscr{L}^{*}[\partial_{\gamma}\lambda_{0}^{\mu}(x_{\ell})w^{\mu\gamma}_{\ell}].\] The first term in the rhs of (A. 1.1) is \[\varepsilon\Big{[}\sum_{\mu=0,d+1}\varepsilon^{-d}\int dx( \lambda_{0}^{\mu})^{\prime}\sum_{\gamma=1}^{d}\partial_{\gamma}T(x)w^{\mu\gamma }(x)\quad\varepsilon^{-d}\int dy(\lambda_{0}^{\mu})^{\prime}\sum_{\nu=1}^{d} \partial_{\nu}T(y)w^{\mu\nu}(y)\] \[+2\varepsilon^{-d}\int dx(\lambda_{0}^{0})^{\prime}\sum_{\gamma=1}^{d} \partial_{\gamma}T(x)w^{0\gamma}(x)\quad\varepsilon^{-d}\int dy(\lambda_{0}^{ d+1})^{\prime}\sum_{\nu=1}^{d}\partial_{\nu}T(y)w^{d+1\nu}(y)\Big{]}.\] The contribution to \(C\) is \[\sum_{\mu=0,d+1}\sum_{\gamma,\nu=1}^{d}\varepsilon^{-d}\int dx(\lambda_{0}^{\mu})^{ \prime}\partial_{\gamma}T(x)\,\varepsilon^{-d}\int dy(\lambda_{0}^{\mu})^{\prime} \partial_{\nu}T(y)\] \[\times\sum_{k=1}^{d}\int dz\partial_{k}f(z)\int_{0}^{\infty}ds\int_{0}^{\infty}dt\Big{\langle} w^{\mu\gamma}(x,0)w^{\mu\nu}(y,0)\bar{w}^{\beta k}(z,s+t)\Big{\rangle}_{G_{0}}\] \[+2\sum_{\gamma,\nu=1}^{d}\varepsilon^{-d}\int dx(\lambda_{0}^{0})^{\prime}\partial_{\gamma}T(x)\,\varepsilon^{-d}\int dy(\lambda_{0}^{d+1})^{\prime}\partial_{\nu}T(y)\] \[\times\sum_{k=1}^{d}\int dz\partial_{k}f(z)\int_{0}^{\infty}ds\int_{0}^{\infty }dt\Big{\langle}w^{0\gamma}(x,0)w^{d+1\nu}(y,0)\bar{w}^{\beta k}(z,s+t)\Big{\rangle} _{G_{0}}.\] By the change of variables 
\(x=z+\varepsilon\xi,\quad y=z+\varepsilon\xi^{\prime},\quad s+t=\tau\) \[=\sum_{\mu=0,d+1}\sum_{\gamma,\nu,k=1}^{d}\int dz\partial_{k}f(z)(\lambda_{0}^ {\mu})^{\prime}\partial_{\gamma}T(z)(\lambda_{0}^{\mu})^{\prime}\partial_{ \nu}T(z)\] \[\times\int_{0}^{\infty}ds\int_{s}^{\infty}d\tau\int d\xi\int d\xi^{\prime} \Big{\langle}w^{\mu\gamma}(\xi\tau)w^{\mu\nu}(\xi^{\prime}\tau)\bar{w}^{\beta k }(00)\Big{\rangle}_{G_{0}}\] \[+2\sum_{\gamma,\nu,k=1}^{d}\int dz\partial_{k}f(z)(\lambda_{0}^{0})^{\prime} \partial_{\gamma}T(z)(\lambda_{0}^{d+1})^{\prime}\partial_{\nu}T(z)\] \[\times\int_{0}^{\infty}ds\int_{s}^{\infty}d\tau\int d\xi\int d\xi^{\prime} \Big{\langle}w^{0\gamma}(\xi\tau)w^{d+1\nu}(\xi^{\prime}\tau)\bar{w}^{\beta k }(00)\Big{\rangle}_{G_{0}}.\] These terms give rise in the equation for \(u^{\beta}\) to terms of the form \[-2\sum_{\gamma,\nu,k=1}^{d}\partial_{k}\Big{[}[(\lambda_{0}^{0})^{\prime} \partial_{\gamma}T(z)(\lambda_{0}^{d+1})^{\prime}\partial_{\nu}T(z)]\alpha_{ \gamma\nu\beta k}^{0d+1}\Big{]}\] and for \(\mu=0,d+1\) \[-\sum_{\gamma,\nu,k=1}^{d}\partial_{k}\Big{[}[(\lambda_{0}^{\mu})^{\prime} \partial_{\gamma}T(z)(\lambda_{0}^{\mu})^{\prime}\partial_{\nu}T(z)]\alpha_{ \gamma\nu\beta k}^{\mu\mu}\Big{]},\] where \[\alpha_{\gamma\nu\beta k}^{0d+1}:=\int_{0}^{\infty}ds\int_{s}^{\infty}d\tau \int d\xi\int d\xi^{\prime}\Big{\langle}w^{0\gamma}(\xi\tau)w^{d+1\nu}(\xi^{ \prime}\tau)\bar{w}^{\beta k}(00)\Big{\rangle}_{G_{0}}\] \[=a_{1}(t,z)\delta_{\beta k}\delta_{\gamma\nu}+a_{2}(tz)\delta_{\beta\nu} \delta_{\gamma k}+a_{3}(t,z)\delta_{\beta\gamma}\delta_{\nu k},\] \[\alpha_{\gamma\nu\beta k}^{00}:=\int_{0}^{\infty}ds\int_{s}^{\infty}d\tau \int d\xi\int d\xi^{\prime}\Big{\langle}w^{0\gamma}(\xi\tau)w^{0\nu}(\xi^{ \prime}\tau)\bar{w}^{\beta k}(00)\Big{\rangle}_{G_{0}}\] \[=b_{1}(t,z)\delta_{\beta k}\delta_{\gamma\nu}+b_{2}(tz)\delta_{\beta\nu} \delta_{\gamma k}+b_{3}(t,z)\delta_{\beta\gamma}\delta_{\nu k},\] \[\alpha^{d+1d+1}_{\gamma\nu\beta 
k}:=\int_{0}^{\infty}ds\int_{s}^{ \infty}d\tau\int d\xi\int d\xi^{\prime}\Big{\langle}w^{d+1\gamma}(\xi\tau)w^{d+ 1\nu}(\xi^{\prime}\tau)\bar{w}^{\beta k}(00)\Big{\rangle}_{G_{0}}\] \[\qquad\qquad\qquad=c_{1}(t,z)\delta_{\beta k}\delta_{\gamma\nu}+c _{2}(t,z)\delta_{\beta\nu}\delta_{\gamma k}+c_{3}(t,z)\delta_{\beta\gamma}\delta _{\nu k}.\] Since the potential is central we have \(a_{2}=a_{3}\), \(b_{2}=b_{3}\), \(c_{2}=c_{3}\)[28]. Now we compute the second term in (A. 1.1) \[B:=\varepsilon^{-1}G_{0}\sum_{\ell=1}^{N}\sum_{\mu=0,d+1}\sum_{\gamma=1}^{d} \mathscr{L}^{*}[\partial_{\gamma}\lambda^{\mu}_{0}(x_{\ell})w^{\mu\gamma}_{ \ell}].\] For \(\mu=0\) we have \[\varepsilon^{-1}\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\mathscr{L} ^{*}[\partial_{\gamma}\lambda^{0}_{0}(x_{\ell})w^{0\gamma}_{\ell}]=\varepsilon ^{-d}\sum_{\gamma,\nu=1}^{d}\int dx\partial_{\nu\gamma}^{2}\lambda^{0}_{0}(x)w ^{\nu\gamma}(x).\] The corresponding term in the equation for \(u^{\beta}\) is \[-\sum_{\gamma,\nu,k=1}^{d}\nabla_{k}\Big{[}[(\lambda^{0}_{0})^{\prime}\partial _{\nu\gamma}^{2}T(z)+(\lambda^{0}_{0})^{\prime\prime}\partial_{\gamma}T(z) \partial_{\nu}T(z)]h^{0}_{\gamma\nu\beta k}\Big{]},\] with \[h^{0}_{\gamma\nu\beta k}=\int_{0}^{\infty}ds\int_{s}^{\infty}d\tau\int d\xi \Big{\langle}\bar{w}^{\nu\gamma}(\xi\tau)\bar{w}^{\beta k}(00)\Big{\rangle}_{G _{0}}=h_{1}(t,z)\delta_{\beta k}\delta_{\gamma\nu}+h_{2}(t,z)\delta_{\beta\nu} \delta_{\gamma k}+h_{3}(t,z)\delta_{\beta\gamma}\delta_{\nu k}.\] The most difficult term is \(\mu=d+1\): \[\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\mathscr{L}^{*}[\partial_{\gamma}\lambda ^{d+1}_{0}(x_{\ell})w^{d+1\gamma}_{\ell}]:=\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d }\mathscr{L}^{*}[g_{\gamma}(x_{\ell})w^{d+1\gamma}_{\ell}],\] where, for simplicity, we put \(g_{\gamma}(x)=\partial_{\gamma}\lambda^{d+1}_{0}(x)\). 
\[\mathscr{H}:=\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\mathscr{L}^{*}[g_{\gamma}(x _{\ell})w^{d+1\gamma}_{\ell}]\] \[=-\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\Big{\{}\varepsilon\sum_{k=1}^{d}v^{k}_{ \ell}\partial_{k}g_{\gamma}(x_{\ell})w^{d+1\gamma}_{\ell}+\sum_{i=1}^{N}\sum_{ k=1}^{d}v^{k}_{i}g_{\gamma}(x_{\ell})\frac{\partial}{\partial x^{k}_{i}}w^{d+1 \gamma}_{\ell}\Big{\}}\] (A. 1.2) \[+\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}g_{\gamma}(x_{\ell})\sum_{i=1}^{N}\sum_{k=1}^{d}\sum_{j\neq i}\partial_{k}V(\varepsilon^{-1}(x_{i}-x_{j}))\frac{\partial}{\partial v^{k}_{i}}w^{d+1\gamma}_{\ell}.\] (A. 1.3) The first term is of order \(\varepsilon\) and will appear in \(C\). The others have to be examined and we start with the second term in (A. 1.2) \[-\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\sum_{i=1}^{N}\sum_{k=1}^{d}v^{k}_{i}g_{ \gamma}(x_{\ell})\frac{\partial}{\partial x^{k}_{i}}w^{d+1\gamma}_{\ell}\] \[=-\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\sum_{k=1}^{d}g_{\gamma}(x_{\ell}) \Big{[}\frac{1}{2}\sum_{j}\sum_{\nu}\partial_{k}\Psi^{\nu\gamma}(\varepsilon^{- 1}(x_{\ell}-x_{j}))(v_{j}^{k}-v_{\ell}^{k})\frac{1}{2}(v_{\ell}^{\nu}+v_{j}^{ \nu})+\sum_{i}\frac{1}{2}v_{\ell}^{\gamma}v_{i}^{k}\frac{\partial}{\partial x_ {i}^{k}}z_{\ell}^{d+1\gamma}\Big{]}.\] (A. 1.4) The first term will not give contribution to \(C\) because of the Gaussian integration on the velocities. Now we pass to the term in (A. 1.3) and add the last term in (A. 
1.4) \[\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}g_{\gamma}(x_{\ell})\sum_{i=1 }^{N}\Big{\{}\sum_{k=1}^{d}\sum_{j\neq i}\partial_{k}V(\varepsilon^{-1}(x_{i}- x_{j}))\frac{\partial}{\partial v_{i}^{k}}w_{\ell}^{d+1\gamma}+\frac{1}{2}v_{i}^ {k}v_{\ell}^{\gamma}\frac{\partial}{\partial x_{i}^{k}}z_{\ell}^{d+1\gamma} \Big{\}}\] \[= \sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}g_{\gamma}(x_{\ell})\sum_{i=1 }^{N}\Big{\{}\frac{1}{2}\sum_{k=1}^{d}v_{i}^{k}v_{\ell}^{\gamma}\frac{ \partial}{\partial x_{i}^{k}}z_{\ell}^{d+1\gamma}+\sum_{k=1}^{d}\sum_{j\neq i} \partial_{k}V(\varepsilon^{-1}(x_{i}-x_{j}))\Big{[}z_{\ell}^{d+1}\frac{ \partial}{\partial v_{i}^{k}}v_{\ell}^{\gamma}\] \[+v_{\ell}^{\gamma}\frac{\partial}{\partial v_{i}^{k}}\frac{1}{2}|v _{\ell}|^{2}+\frac{1}{2}\sum_{s=1}^{N}\sum_{\nu=1}^{d}\Psi^{\nu\gamma}( \varepsilon^{-1}(x_{\ell}-x_{s})\frac{1}{2}\frac{\partial}{\partial v_{i}^{k} }[v_{\ell}^{\nu}+v_{s}^{\nu}])\Big{]}\Big{\}}.\] (A. 1.5) We begin to deal with the second term in the rhs of (A. 1.5) \[\mathscr{M}:=\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}g_{\gamma}(x_{\ell})z_{\ell }^{d+1}\sum_{j\neq\ell}\partial_{\gamma}V(\varepsilon^{-1}(x_{\ell}-x_{j})).\] By using the antisymmetry of the gradient of the potential under the exchange \(\ell\to j\) we get \[\mathscr{M}=\frac{1}{2}\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\sum_{j\neq\ell,1} ^{N}[g_{\gamma}(x_{\ell})z_{\ell}^{d+1}-g_{\gamma}(x_{j})z_{j}^{d+1}]\partial _{\gamma}V(\varepsilon^{-1}(x_{\ell}-x_{j}))\] \[=\frac{1}{2}\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\sum_{j\neq\ell,1}^{n}\Big{[}[ g_{\gamma}(x_{\ell})-g_{\gamma}(x_{j})]z_{\ell}^{d+1}+g_{\gamma}(x_{j})[z_{\ell}^{d+ 1}-z_{j}^{d+1}]\partial_{\gamma}V(\varepsilon^{-1}(x_{\ell}-x_{j}))\Big{]}\] \[=\frac{1}{2}\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\sum_{j\neq\ell,1}^{N}\Big{[}[ g_{\gamma}(x_{\ell})-g_{\gamma}(x_{j})]z_{\ell}^{d+1}+\frac{1}{2}[g_{\gamma}(x_{j})+ g_{\gamma}(x_{\ell})[z_{\ell}^{d+1}-z_{j}^{d+1}]\partial_{\gamma}V( \varepsilon^{-1}(x_{\ell}-x_{j}))\Big{]}.\] As usual, we 
replace \([g_{\gamma}(x_{\ell})-g_{\gamma}(x_{j})]\) with \(\varepsilon\partial_{s}g_{\gamma}(x_{\ell})(\xi_{\ell}^{s}-\xi_{j}^{s})\) so that the first term is of order \(\varepsilon\) and will give contribution to the equation, while the second term is not of order \(\varepsilon\) and we have to show that it does not give any contribution. \[\mathscr{M}=\frac{1}{2}\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\sum_{j\neq\ell,1} ^{N}\Big{[}\varepsilon\sum_{s=1}^{d}z_{\ell}^{d+1}\partial_{s}g_{\gamma}(x_{ \ell})\Psi^{\gamma s}(\varepsilon^{-1}(x_{\ell}-x_{j}))\] \[+\frac{1}{4}[g_{\gamma}(x_{j})+g_{\gamma}(x_{\ell})][z_{\ell}^{d+1}-z_{j}^{d+1 }]\partial_{\gamma}V(\varepsilon^{-1}(x_{\ell}-x_{j}))\Big{]}\] (A. 1.6) In \(z_{k}^{d+1}=\frac{1}{2}\big{[}v_{k}^{2}+\sum_{j\neq k}V(\varepsilon^{-1}|x_{k }-x_{j}|)\big{]}\) the contribution of the second term \(\sum_{j\neq k}V(\varepsilon^{-1}|x_{k}-x_{j}|)\) is \[\sum_{\ell,j=1}^{N}\sum_{\gamma=1}^{d}\Big{[}\sum_{t=1}^{N}V(\varepsilon^{-1}(x_{\ell}-x_{t}))-\sum_{t=1}^{N}V(\varepsilon^{-1}(x_{j}-x_{t}))\Big{]}\partial_{\gamma}V( \varepsilon^{-1}(x_{\ell}-x_{j}))[g_{\gamma}(x_{j})+g_{\gamma}(x_{\ell})]\] and will not give contribution to \(C\) since the term \(\sum_{t=1}^{N}V(\varepsilon^{-1}(x_{\ell}-x_{t}))\) under the average on the positions in \(C\) will not depend on \(\ell\) by isotropy. Then, the second line in (A. 1.6) becomes \[\frac{1}{2}\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\sum_{j\neq\ell,1}^{N}\frac{1}{ 4}[g_{\gamma}(x_{j})+g_{\gamma}(x_{\ell})]\frac{1}{2}[|v_{\ell}|^{2}-|v_{j}|^{ 2}]\partial_{\gamma}V(\varepsilon^{-1}(x_{\ell}-x_{j})).\] This term will not give contribution to \(C\) because of the Gaussian integration on the velocities. Now, we compute the first and third terms in (A. 1.5). 
\[\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}g_{\gamma}(x_{\ell})\sum_{i=1}^{N}\Big{\{} \sum_{k=1}^{d}\sum_{j\neq i}\partial_{k}V(\varepsilon^{-1}(x_{i}-x_{j}))v_{ \ell}^{\gamma}\frac{\partial}{\partial v_{i}^{k}}\frac{1}{2}|v_{\ell}|^{2}+ \frac{1}{2}v_{i}^{k}v_{\ell}^{\gamma}\frac{\partial}{\partial x_{i}^{k}}z_{ \ell}^{d+1\gamma}\] \[=\sum_{\ell=1}^{N}\sum_{k=1}^{d}g_{\gamma}(x_{\ell})\frac{1}{2}\bigg{\{}-2v_{ \ell}^{\gamma}v_{\ell}^{k}\sum_{j:j\neq\ell}^{1,N}\partial_{k}V(\varepsilon^{- 1}(x_{\ell}-x_{j}))+\sum_{j:j\neq\ell}^{1,N}v_{\ell}^{\gamma}\partial_{k}V( \varepsilon^{-1}(x_{\ell}-x_{j}))(v_{\ell}^{k}-v_{j}^{k})\bigg{\}}\] \[=-\sum_{\ell=1}^{N}\sum_{k=1}^{d}g_{\gamma}(x_{\ell})\frac{1}{2}\sum_{j:j\neq \ell}^{1,N}v_{\ell}^{\gamma}\partial_{k}V(\varepsilon^{-1}(x_{\ell}-x_{j}))(v _{\ell}^{k}+v_{j}^{k}).\] By using the antisymmetry of \(\partial_{k}V\) we get \[-\frac{1}{4}\sum_{\gamma=1}^{d}\sum_{\ell=1}^{N}[g_{\gamma}(x_{\ell})v_{\ell}^ {\gamma}-g_{\gamma}(x_{j})v_{j}^{\gamma}]\{\sum_{j:j\neq\ell}^{1,N}\sum_{k=1}^ {d}\partial_{k}V(\varepsilon^{-1}(x_{\ell}-x_{j}))(v_{\ell}^{k}+v_{j}^{k})\}\] \[=-\frac{1}{4}\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}\Big{[}\varepsilon v_{\ell}^{ \gamma}\sum_{\nu=1}^{d}\partial_{\nu}g_{\gamma}(x_{\ell})\sum_{j:j\neq\ell}^{1,N}\sum_{k=1}^{d}\Psi^{\nu k}(\varepsilon^{-1}(x_{\ell}-x_{j}))\frac{1}{2}(v_{ \ell}^{k}+v_{j}^{k})\Big{]}\] \[+g_{\gamma}(x_{j})[v_{\ell}^{\gamma}-v_{j}^{\gamma}]\Big{\{}\sum_{j:j\neq\ell }^{1,N}\sum_{k=1}^{d}\partial_{k}V(\varepsilon^{-1}(x_{\ell}-x_{j}))\frac{1}{2} (v_{\ell}^{k}+v_{j}^{k})\}.\] The first term is of order \(\varepsilon\) and will appear in \(C\) while the second term is zero after average on the velocities in \(C\). Finally, we discuss the last term in (A. 1.5). 
\[\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}g_{\gamma}(x_{\ell})\sum_{k=1}^{d}\sum_{j\neq i }\partial_{k}V(\varepsilon^{-1}(x_{i}-x_{j}))\frac{1}{2}\sum_{s=1}^{N}\sum_{ \nu=1}^{d}\Psi^{\nu\gamma}(\varepsilon^{-1}(x_{\ell}-x_{s})\frac{1}{2}\frac{ \partial}{\partial v_{i}^{k}}[v_{\ell}^{\nu}+v_{s}^{\nu}])\big{]}\] \[=\sum_{\ell=1}^{N}\sum_{\gamma=1}^{d}g_{\gamma}(x_{\ell})\frac{1}{4}\sum_{s=1}^ {N}\sum_{k=1}^{d}\sum_{j}[\partial_{k}V(\varepsilon^{-1}(x_{\ell}-x_{j}))+ \partial_{k}V(\varepsilon^{-1}(x_{s}-x_{j}))]\Psi^{k\gamma}(\varepsilon^{-1}( x_{\ell}-x_{s})\big{]}\] The first term is equal to \[=\frac{1}{8}\sum_{\ell,j=1}^{N}\sum_{\gamma,k=1}^{d}\sum_{s=1}^{N}\partial_{k} V(\varepsilon^{-1}(x_{\ell}-x_{j}))\{[g_{\gamma}(x_{\ell})\Psi^{k\gamma}( \varepsilon^{-1}(x_{\ell}-x_{s})-g_{\gamma}(x_{j})\Psi^{k\gamma}(\varepsilon^ {-1}(x_{j}-x_{s})]\] \[=\frac{1}{8}\sum_{\ell,j=1}^{N}\sum_{\gamma,k=1}^{d}\sum_{s=1}^{N}[\partial_{k} V(\varepsilon^{-1}(x_{\ell}-x_{j}))[g_{\gamma}(x_{\ell})-g_{\gamma}(x_{j})]\Psi^{k \gamma}(\varepsilon^{-1}(x_{\ell}-x_{s})\] \[+g_{\gamma}(x_{j}[\Psi^{k\gamma}(\varepsilon^{-1}(x_{\ell}-x_{s}))-\Psi^{k \gamma}(\varepsilon^{-1}(x_{j}-x_{s}))]\}\] The first term is of order \(\varepsilon\) and the second has the same symmetry properties as the term dealt with in (A. 1.7) and hence does not give contribution. The other term is \[\sum_{\ell,j=1}^{N}\sum_{\gamma,k=1}^{d}g_{\gamma}(x_{\ell})\frac{1}{4}\sum_{ s=1}^{N}\partial_{k}V(\varepsilon^{-1}(x_{s}-x_{j}))\Psi^{k\gamma}(\varepsilon^{-1 }(x_{\ell}-x_{s}))\] \[=\sum_{\ell,j=1}^{N}\sum_{\gamma,k=1}^{d}g_{\gamma}(x_{\ell})\sum_{s=1}^{N} \partial_{k}V(\varepsilon^{-1}(x_{s}-x_{j}))[\Psi^{k\gamma}(\varepsilon^{-1}( x_{\ell}-x_{s}))-\Psi^{k\gamma}(\varepsilon^{-1}(x_{\ell}-x_{j}))]\] and does not give contribution to \(C\) for the same reasons. Putting all the terms of order \(\varepsilon\) together in (A. 1.2),(A. 1.6) and (A. 
1.9) we have that \[\varepsilon\varepsilon^{-d}\int dx\sum_{s,\gamma=1}^{d}\partial_{s}g_{\gamma}(x)(v^{\gamma}w^{d+1s})(x)+\varepsilon\varepsilon^{-d}\int dx\sum_{m,\gamma=1}^{d}\partial_{m}g_{\gamma}(x)\mathscr{A}^{\gamma m}(x)\] \[+\varepsilon\varepsilon^{-d}\sum_{s,\gamma=1}^{d}\int dx\partial_{s}g_{\gamma}(x)\mathscr{C}^{\gamma s}(x)+\varepsilon\varepsilon^{-d}\sum_{s,\gamma=1}^{d}\int dx\partial_{s}g_{\gamma}(x)\mathscr{N}^{\gamma s}(x),\] with \[\mathscr{C}^{\gamma s}(x)=\sum_{\ell,j=1}^{N}\delta(x_{\ell}-x)\frac{1}{4}\sum_{\nu=1}^{d}\Psi^{\nu\gamma}(\varepsilon^{-1}(x_{\ell}-x_{j}))v_{\ell}^{s}\frac{1}{2}[v_{\ell}^{\nu}+v_{j}^{\nu}],\] \[\mathscr{A}^{\gamma m}(x)=\sum_{\ell}\delta(x-x_{\ell})z_{\ell}^{d+1}\sum_{s}\Psi^{\gamma m}(\varepsilon^{-1}(|x_{\ell}-x_{s}|)),\] \[\mathscr{N}^{\gamma s}(x)=\sum_{\ell}\delta(x-x_{\ell})\sum_{s,j}\partial_{m}V(\varepsilon^{-1}(x_{\ell}-x_{j}))\Psi^{s\gamma m}(\varepsilon^{-1}(|x_{\ell}-x_{s}|)).\] The contribution to \(C\) is then \[\sum_{m,k,\gamma=1}^{d}\int dz\partial_{k}f(z)\Big{[}\partial_{m\gamma}^{2}(\lambda_{0}^{d+1})(z)\int_{0}^{\infty}ds\int_{s}^{\infty}d\tau\int d\xi\Big{<}(v^{\gamma}w^{d+1m}+\mathscr{A}^{\gamma m}(x))(\xi,\tau)\bar{w}^{\beta k}(0,0)\Big{>}_{G_{0}}\Big{]}\] with \[d_{\gamma\mu\beta k}:=\int_{0}^{\infty}ds\int_{s}^{\infty}d\tau\int d\xi\Big{<}[v^{\gamma}w^{d+1\mu}+\mathscr{A}^{\gamma\mu}(x)](\xi,\tau)\bar{w}^{\beta k}(0,0)\Big{>}_{G_{0}}\] \[=d_{1}(t,z)\delta_{\beta k}\delta_{\gamma\mu}+d_{2}(t,z)\delta_{\beta\mu}\delta_{\gamma k}+d_{3}(t,z)\delta_{\beta\gamma}\delta_{\mu k};\quad d_{2}=d_{3}\] and for the term \(\mathscr{C}\colon\varepsilon^{-d}\varepsilon\int dx\mathscr{C}^{s\gamma}(x)\partial_{s}g_{\gamma}(x)\) in the equation \[\partial_{k}[((\lambda_{0}^{d+1})^{\prime}\partial_{s\gamma}^{2}T+(\lambda_{0}^{d+1})^{\prime\prime}\partial_{s}T\partial_{\gamma}T)\int_{0}^{\infty}d\tau^{\prime}\int_{\tau^{\prime}}^{\infty}d\tau\int d\xi\Big{<}\bar{w}^{\beta k}(0,0)(\mathscr{C}^{\gamma s})(\xi,\tau)\Big{>}_{G_{0}}],\] which becomes for \(\beta=k\): \[\sum_{\gamma=1}^{d}\partial_{k}[((\lambda_{0}^{d+1})^{\prime}\partial_{\gamma\gamma}^{2}T+(\lambda_{0}^{d+1})^{\prime\prime}\partial_{\gamma}T\partial_{\gamma}T)g_{1}],\] with \[g_{1}:=\frac{1}{d}\sum_{\gamma=1}^{d}\int_{0}^{\infty}d\tau^{\prime}\int_{\tau^{\prime}}^{\infty}d\tau\int d\xi\Big{<}\bar{w}^{\beta k}(0,0)(\mathscr{C}^{\gamma\gamma})(\xi,\tau)\Big{>}_{G_{0}},\] \[\text{for}\quad\beta\neq k\colon\qquad\partial_{k}[((\lambda_{0}^{d+1})^{\prime}\partial_{\beta k}^{2}T+(\lambda_{0}^{d+1})^{\prime\prime}\partial_{k}T\partial_{\beta}T)g_{2}],\] with \[g_{2}:=\int_{0}^{\infty}d\tau^{\prime}\int_{\tau^{\prime}}^{\infty}d\tau\int d\xi\Big{<}\bar{w}^{\beta k}(0,0)(\mathscr{C}^{\beta\beta})(\xi,\tau)\Big{>}_{G_{0}}.\] And for the term \(\mathscr{N}\colon\varepsilon^{-d}\varepsilon\int dx\mathscr{N}^{s\gamma}(x)\partial_{s}g_{\gamma}(x)\) in the equation: for \(\beta\neq k\) it is zero, and for \(\beta=k\): \[\sum_{\gamma=1}^{d}\partial_{k}[((\lambda_{0}^{d+1})^{\prime}\partial_{\gamma\gamma}^{2}T+(\lambda_{0}^{d+1})^{\prime\prime}\partial_{\gamma}T\partial_{\gamma}T)f_{1}],\] with \[f_{1}:=\frac{1}{d}\sum_{\gamma=1}^{d}\int_{0}^{\infty}d\tau^{\prime}\int_{\tau^{\prime}}^{\infty}d\tau\int d\xi\Big{<}\bar{w}^{\beta k}(0,0)(\mathscr{N}^{\gamma\gamma})(\xi,\tau)\Big{>}_{G_{0}}.\] **Important remark** _This long calculation also proves that \(C\) is indeed of order \(1\), and hence the assumption in the definition of \(R_{1}^{s}\) is correct._ The transport coefficients in (3.36) in the stress thermal tensor \(\tau^{(2)}\) have two different contributions. We can write \(K_{i}=Y_{i}+X_{i}\), \(i=1,2\). We have already given the expression of \(Y_{i}\) and now we collect all the terms to give the expression of \(X_{i}\). 
\[X_{1}=\Big{[}[(\lambda_{0}^{0})^{\prime}(\lambda_{0}^{d+1})^{\prime}]4a_{2}(t,z)+2b_{2}(\lambda_{0}^{0})^{\prime}(\lambda_{0}^{0})^{\prime}+2c_{2}(\lambda_{0}^{d+1})^{\prime}(\lambda_{0}^{d+1})^{\prime}+(\lambda_{0}^{d+1})^{\prime\prime}[h_{2}+2d_{2}+g_{2}]\Big{]}\quad\beta\neq k,\] \[X_{2}=2(\lambda_{0}^{d+1})^{\prime}d_{2}+2(\lambda_{0}^{0})^{\prime}T(z)h_{2}+(\lambda_{0}^{d+1})^{\prime}g_{2},\quad\beta\neq k.\] Moreover, there are also new contributions to \(\omega_{i}\). We write \(\omega_{i}=\bar{\omega}_{i}+\phi_{i}\), \(i=1,2\). We get \[\phi_{1}=[(\lambda_{0}^{0})^{\prime\prime}h_{1}+(\lambda_{0}^{d+1})^{\prime\prime}d_{1}+a_{1}(\lambda_{0}^{d+1})^{\prime}(\lambda_{0}^{0})^{\prime}+b_{1}(\lambda_{0}^{0})^{\prime}(\lambda_{0}^{0})^{\prime}+c_{1}(\lambda_{0}^{d+1})^{\prime}(\lambda_{0}^{d+1})^{\prime}+(\lambda_{0}^{d+1})^{\prime\prime}(g_{1}+f_{1})],\] \[\phi_{2}=(\lambda_{0}^{0})^{\prime}h_{1}+(\lambda_{0}^{d+1})^{\prime}d_{1}+(\lambda_{0}^{d+1})^{\prime}(g_{1}+f_{1}).\] ### Compatibility conditions We need to check that in the definitions of the \(R_{i}\) we can apply \(\mathscr{L}^{-1}\) on the l.h.s. \(\bullet\)\(R_{1}^{a}:\quad\) The simplest case is the definition of \(R_{1}^{a}\) in (2.43). We need to know that \(\mathscr{L}^{*}G_{0}\) has zero projection on the null space. This is equivalent to proving \[\int dxz^{\mu}(x)\mathscr{L}^{*}G_{0}=0,\] since \(Z^{\mu}=\int dxz^{\mu}(x)\) are the total mass, the total momentum and the total energy, and they are conserved by the dynamics, namely \(\mathscr{L}Z^{\mu}=0\). Hence, for any observable \(\phi\), \(\int dxz^{\mu}(x)\mathscr{L}^{*}\phi=0\). We have \[\int dxz^{\mu}(x)\mathscr{L}^{*}G_{0}=\int dx\mathscr{L}z^{\mu}(x)G_{0}=\sum_{\gamma=1}^{d}\partial_{\gamma}\Big{\langle}\varepsilon^{d}\sum_{i}\delta(x-x_{i})w_{i}^{\mu\gamma}\Big{\rangle}_{G_{0}}.\] For \(\mu=0,d+1\), \(\int dxz^{\mu}(x)\mathscr{L}^{*}G_{0}\) is zero since \(\Big{\langle}v_{i}\Big{\rangle}_{G_{0}}=0\). 
For \(\mu=1\cdots d\) \[\int dxz^{\mu}(x)\mathscr{L}^{*}G_{0}=\sum_{\gamma=1}^{d}\partial_{\gamma}\Big{\langle}\varepsilon^{d}\sum_{i}w_{i}^{\mu\gamma}\Big{\rangle}_{G_{0}}=\sum_{\gamma=1}^{d}\delta_{\mu\gamma}\partial_{\gamma}\Big{\langle}\varepsilon^{d}\sum_{i}w_{i}^{\mu\mu}\Big{\rangle}_{G_{0}}=\partial_{\mu}P=0\] since \(P\) is constant. \(\bullet\)\(R_{2}:\quad R_{2}\) is defined by (2.46). It is not difficult to see that the condition in this case amounts to saying that \(\rho\) and \(e\) are solutions of the continuity equation and the energy equation. We have the condition to be verified for each \(\mu\) \[0=\int dxz^{\mu}(x)[\mathscr{L}^{*}G_{0}g_{1}+\partial_{t}G_{0}]=\sum_{k=1}^{d}\int dx\frac{u^{k}}{T}\Big{\langle}\mathscr{L}z^{\mu}(x)z^{k}\Big{\rangle}_{G_{0}}+\int dx\partial_{t}\Big{\langle}z^{\mu}\Big{\rangle}_{G_{0}}.\] This is naturally true for \(\mu=1\cdots d\). Then, \(\mu=0:\) \[\int dx\sum_{k=1}^{d}\partial_{k}[\frac{u^{k}}{T}\Big{\langle}z^{\mu}z^{k}\Big{\rangle}_{G_{0}}]+\int dx\partial_{t}\Big{\langle}z^{\mu}\Big{\rangle}_{G_{0}}=\int dx[\partial_{\mu}(\rho u^{\mu})+\partial_{t}\rho]=0.\] (A. 2.1) For \(\mu=d+1\) we can add in the condition \(\mathscr{L}^{*}G_{0}R_{1}\), since we have already proven that it is orthogonal to the invariants. \[\mu=d+1:\] \[\int dxz^{d+1}(x)[\mathscr{L}^{*}G_{0}g_{1}+\partial_{t}G_{0}+\mathscr{L}^{*}G_{0}R_{1}]=\] \[\int dx\sum_{k,\nu=1}^{d}\partial_{\nu}[\frac{u^{k}}{T}\Big{\langle}w^{d+1\nu}z^{k}\Big{\rangle}_{G_{0}}]+\int dx\partial_{t}\Big{\langle}z^{d+1}\Big{\rangle}_{G_{0}}+\int dx\Big{\langle}w^{d+1\nu}G_{0}R_{1}\Big{\rangle}_{G_{0}}\] \[=\int dx[\partial_{\mu}((\rho e+P)u^{\mu})+\partial_{t}(\rho e)-\nabla\cdot(k\nabla T)]=0.\] \(\bullet\)\(R_{1}^{s}:\) The most difficult case is the definition of \(R_{1}^{s}\) in (2.44). We need to show that \(\mathscr{L}^{*}\mathscr{L}^{*}G_{0}\) has zero projection on the null space. 
We can write \[\mathscr{L}^{*}\mathscr{L}^{*}G_{0}=\mathscr{L}^{*}[G_{0}\sum_{i=1}^{N}\sum_{\gamma=1}^{d}\sum_{\mu=0}^{d+1}\partial_{\gamma}\lambda_{0}^{\mu}(x_{i})w_{i}^{\mu\gamma}]=\varepsilon^{-d}\mathscr{L}^{*}[\sum_{\mu=0}^{d+1}\sum_{\gamma=1}^{d}\int dyG_{0}\partial_{\gamma}\lambda_{0}^{\mu}(y)w^{\mu\gamma}(y)].\] The condition to be satisfied is \[\varepsilon^{-d}\int dx\Big{\langle}z^{\alpha}(x)\mathscr{L}^{*}[\sum_{\mu=0}^{d+1}\sum_{\gamma=1}^{d}\int dyG_{0}\partial_{\gamma}\lambda_{0}^{\mu}(y)w^{\mu\gamma}(y)]\Big{\rangle}=0\quad\text{for}\ \alpha=0,\cdots,d+1.\] The l.h.s. is \[\varepsilon^{-d}\int dx\Big{\langle}\mathscr{L}z^{\alpha}(x)[\sum_{\mu=0}^{d+1}\sum_{\gamma=1}^{d}\int dyG_{0}\partial_{\gamma}\lambda_{0}^{\mu}(y)w^{\mu\gamma}(y)]\Big{\rangle}\] \[=\varepsilon^{-d}\int dx\Big{\langle}\nabla\cdot w^{\alpha}(x)[\sum_{\mu=0}^{d+1}\sum_{\gamma=1}^{d}\int dyG_{0}\partial_{\gamma}\lambda_{0}^{\mu}(y)w^{\mu\gamma}(y)]\Big{\rangle}\] \[=\varepsilon^{-d}\int dx\sum_{\nu,\gamma=1}^{d}\sum_{\mu=0}^{d+1}\partial_{\nu}\int dy\partial_{\gamma}\lambda_{0}^{\mu}(y)\Big{\langle}w^{\alpha\nu}(x)w^{\mu\gamma}(\xi)\Big{\rangle}_{G_{0}}\] \[=\sum_{\nu,\gamma=1}^{d}\sum_{\mu=0}^{d+1}\partial_{\nu}\int dy\partial_{\gamma}\lambda_{0}^{\mu}(y)\int d\xi\Big{\langle}w^{\alpha\nu}(0)w^{\mu\gamma}(\xi)\Big{\rangle}_{G_{0}}.\] We know that [28] \[\int d\xi\Big{\langle}w^{\alpha\nu}(0)w^{\mu\gamma}(\xi)\Big{\rangle}_{G_{0}}=0\quad\text{if}\ \alpha=1\cdots d,\ \mu=0,d+1;\qquad\int d\xi\Big{\langle}w^{0\nu}(0)w^{0\gamma}(\xi)\Big{\rangle}_{G_{0}}=\rho T\delta_{\nu\gamma};\] \[\int d\xi\Big{\langle}w^{d+1\nu}(0)w^{d+1\gamma}(\xi)\Big{\rangle}_{G_{0}}=\frac{T(\rho e+P)^{2}}{\rho}\delta_{\nu\gamma};\qquad\int d\xi\Big{\langle}w^{0\nu}(0)w^{d+1\gamma}(\xi)\Big{\rangle}_{G_{0}}=T(\rho e+P)\delta_{\nu\gamma},\] so that we have two conditions: \[\int dy[\partial_{\gamma}\lambda_{0}^{0}(y)\rho T(y)+\partial_{\gamma}\lambda_{0}^{d+1}T(\rho e+P)]=0,\] \[\int dy[\partial_{\gamma}\lambda_{0}^{0}(y)T(\rho e+P)(y)+\partial_{\gamma}\lambda_{0}^{d+1}\frac{T(\rho e+P)^{2}}{\rho}]=0.\] Now we prove that \[\partial_{\gamma}\lambda_{0}^{0}(y)\rho(y)+\partial_{\gamma}\lambda_{0}^{d+1}(\rho e+P)=0,\] so that both conditions are satisfied. \[\rho\partial_{\gamma}\log z-(\rho e+P)\partial_{\gamma}\beta=\rho\frac{\delta(\log z)}{\delta P}\partial_{\gamma}P+\rho\frac{\delta(\log z)}{\delta\beta}\partial_{\gamma}\beta+\log z\frac{\delta\rho}{\delta P}\partial_{\gamma}P-(\rho e+P)\partial_{\gamma}\beta.\] Since \(P\) is constant we are left with \[\rho\frac{\delta(\log z)}{\delta\beta}-(\rho e+P)=0\] by thermodynamic relations: writing \(\log z=\beta\mu\) with \(\mu\) the chemical potential, the Gibbs-Duhem relation gives \(\delta(\log z)/\delta\beta|_{P}=(\rho e+P)/\rho\), the enthalpy per unit mass. ### Galilean invariance We write (3.23) plus the second term in (3.26) as \[\sum_{\mu=1}^{d}\sum_{\gamma=1}^{d}\int dy\Big{[}\lambda_{1}^{\mu}(y)\partial_{\gamma}\lambda_{0}^{0}(y)\int d\xi\Big{\langle}z(\xi)^{\mu}w^{0\gamma}(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}_{G_{0}}\] \[+\sum_{\mu=1}^{d}\sum_{\gamma=1}^{d}\int dy\lambda_{1}^{\mu}(y)\partial_{\gamma}\lambda_{0}^{d+1}(y)\int d\xi\Big{\langle}z(\xi)^{\mu}w^{d+1\gamma}(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}_{G_{0}}\] (A. 3.1) \[-\sum_{\mu=1}^{d}\sum_{l=1}^{d}u^{\mu}\frac{1}{T^{2}}\int dy\partial_{l}T(y)\int d\xi\Big{\langle}\bar{w}^{\mu l}(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}_{G_{0}}\Big{]}=0\] and we want to prove that (A. 3.1) is zero. We start from the identity, for any \(\gamma,\beta,k=1,\cdots,d\), \[\mathscr{D}:=\int d\xi\Big{\langle}w^{0\gamma}(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}_{G_{0}}=0\] and define \(\mathscr{D}_{s}\) and \(\mathscr{D}_{s}^{1}\) below as the same quantities where all the velocities \(v\) are changed to \(v-su\). 
We have also \[\frac{d}{ds}\mathscr{D}_{s}|_{s=0}=0.\] We compute the derivative \[\frac{d}{ds}\mathscr{D}_{s}=\int d\xi\Big{\langle}G_{0}\frac{d}{ds}(w^{0\gamma})(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}+\int d\xi\Big{\langle}\frac{d}{ds}G_{0}(w^{0\gamma})(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}\] \[+\int d\xi\Big{\langle}w^{0\gamma}(\xi)\frac{d}{ds}[\mathscr{L}^{-1}]\bar{w}^{\beta k}(0)\Big{\rangle}+\int d\xi\Big{\langle}w^{0\gamma}(\xi)\mathscr{L}^{-1}[\frac{d}{ds}\bar{w}^{\beta k}(0)]\Big{\rangle}.\] We use the identity \[\frac{d}{ds}[\mathscr{L}^{-1}]=-\mathscr{L}^{-1}\frac{d}{ds}[\mathscr{L}]\mathscr{L}^{-1}.\] It is not difficult to see that the only surviving term evaluated at \(s=0\) is \[\int d\xi\Big{\langle}\frac{d}{ds}G_{0}(w^{0\gamma})(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}=-\sum_{\mu=1}^{d}\frac{u^{\mu}}{T}\int d\xi\Big{\langle}G_{0}(z^{\mu}w^{0\gamma})(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}\] and this implies that the first term in (A. 3.1) is zero. Now we start from the identity, for any \(\gamma,\beta,k=1,\cdots,d\), \[\mathscr{D}^{1}:=\int d\xi\Big{\langle}w^{d+1\gamma}(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}_{G_{0}}=0\] and \[\frac{d}{ds}\mathscr{D}^{1}{}_{s}|_{s=0}=0.\] We have \[\frac{d}{ds}\mathscr{D}^{1}{}_{s}|_{s=0}=-\sum_{\mu=1}^{d}u^{\mu}\int d\xi\Big{\langle}G_{0}(z^{\mu}w^{d+1\gamma})(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}+\int d\xi\Big{\langle}G_{0}\frac{d}{ds}(w^{d+1\gamma})(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}\] \[=\sum_{\mu=1}^{d}\Big{[}-\frac{u^{\mu}}{T}\int d\xi\Big{\langle}G_{0}(z^{\mu}w^{d+1\gamma})(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}-u^{\mu}\int d\xi\Big{\langle}G_{0}(w^{\mu\gamma})(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}\Big{]}=0.\] (A. 3.2) The sum of the last two terms in (A. 3.1) is \[\sum_{\mu=1}^{d}\Big{[}-\int dy\frac{u^{\mu}}{T}(y)\frac{1}{T^{2}}\partial_{\gamma}T(y)\int d\xi\Big{\langle}z(\xi)^{\mu}w^{d+1\gamma}(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}_{G_{0}}\] (A. 3.3) \[-u^{\mu}\frac{1}{T^{2}}\int dy\partial_{l}T(y)\int d\xi\Big{\langle}\bar{w}^{\mu l}(\xi)\mathscr{L}^{-1}\bar{w}^{\beta k}(0)\Big{\rangle}_{G_{0}}\Big{]}\] (A. 3.4) and is equal to zero by using the identity (A. 3.2).
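A one-line check of the operator identity \(\frac{d}{ds}[\mathscr{L}^{-1}]=-\mathscr{L}^{-1}\frac{d}{ds}[\mathscr{L}]\mathscr{L}^{-1}\) used above (understood on the subspace where \(\mathscr{L}^{-1}\) is defined): differentiating \(\mathscr{L}_{s}\mathscr{L}_{s}^{-1}=\mathrm{Id}\) in \(s\) gives

```latex
0=\frac{d}{ds}\bigl[\mathscr{L}_{s}\mathscr{L}_{s}^{-1}\bigr]
 =\frac{d\mathscr{L}_{s}}{ds}\,\mathscr{L}_{s}^{-1}
  +\mathscr{L}_{s}\,\frac{d}{ds}\bigl[\mathscr{L}_{s}^{-1}\bigr]
\quad\Longrightarrow\quad
\frac{d}{ds}\bigl[\mathscr{L}_{s}^{-1}\bigr]
 =-\mathscr{L}_{s}^{-1}\,\frac{d\mathscr{L}_{s}}{ds}\,\mathscr{L}_{s}^{-1}.
```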
2308.00789
Utilization of Additive Manufacturing for the Rapid Prototyping of C-Band RF Loads
Additive manufacturing is a versatile technique that shows promise in providing quick and dynamic manufacturing for complex engineering problems. Research has been ongoing into the use of additive manufacturing for potential applications in radiofrequency (RF) component technologies. Here we present a method for developing an effective prototype load produced out of 316L stainless steel on a direct metal laser sintering machine. The model was tested within simulation software to verify the validity of the design. The load structure was manufactured utilizing an online digital manufacturing company, showing the viability of using easily accessible tools to manufacture RF structures. The produced load was able to produce an S$_{11}$ value of -22.8 dB at the C-band frequency of 5.712 GHz while under vacuum. In a high power test, the load was able to terminate a peak power of 8.1 MW. Discussion includes future applications of the present research and how it will help to improve the implementation of future accelerator concepts.
Garrett Mathesen, Charlotte Wehner, Julian Merrick, Bradley Shirley, Ronald Agustsson, Robert Berry, Amirari Diego, Emilio A. Nanni
2023-08-01T19:00:23Z
http://arxiv.org/abs/2308.00789v1
# Utilization of Additive Manufacturing for the Rapid Prototyping of C-Band RF Loads ###### Abstract Additive manufacturing is a versatile technique that shows promise in providing quick and dynamic manufacturing for complex engineering problems. Research has been ongoing into the use of additive manufacturing for potential applications in radiofrequency (RF) component technologies. Here we present a method for developing an effective prototype load produced out of 316L stainless steel on a direct metal laser sintering machine. The model was tested within simulation software to verify the validity of the design. The load structure was manufactured utilizing an online digital manufacturing company, showing the viability of using easily accessible tools to manufacture RF structures. The produced load was able to produce an S\({}_{11}\) value of -22.8 dB at the C-band frequency of 5.712 GHz while under vacuum. In a high power test, the load was able to terminate a peak power of 8.1 MW. Discussion includes future applications of the present research and how it will help to improve the implementation of future accelerator concepts. _Keywords_ - direct metal laser sintering, RF load, C-band ## I Introduction Advances in additive manufacturing (AM) techniques have led to increased research into its applicability to various engineering challenges and to wider adoption within industry. Of particular interest is the ability of AM technology to simplify the manufacturing process of typically complex parts. This can be seen through the heavy research being performed in the aerospace, automotive, and biomedical industries [1]. Research is also being performed to develop engineered solutions for RF components, especially ones that require complex geometries, such as in the development of terahertz RF structures [2] and X-band klystron and terminating load structures [3, 4]. 
The following paper continues this research by investigating the application of AM techniques to the creation of a C-band load. A background is given on how additive manufacturing works and what design constraints it adds. Motivation for the paper includes the potential future applications which will require the rapid development of complex high power loads. A methodology is outlined describing how the simulations and solid models were developed. Results of these simulations, along with the results from tests of the realized spiral load (seen in Fig. 1), are shown and discussed. Fig. 1: Labelled image of printed spiral load, with interior structure shown in inset (A). Fig. 2: Diagram showing the common components of a DMLS machine. ### _Background_ The process of additive manufacturing can be simplified down to the core functionality of laying, binding, or solidifying layers of a given material on top of one another until a programmed design is produced. This process can be realized through various mechanical methods, depending mainly on the material chosen. Common AM materials include polymers, ceramics, metals and composite materials. Within the realm of metal additive manufacturing (MAM) there are two main methods utilized for realizing MAM parts: directed energy deposition (DED) and powder bed fusion. DED utilizes a wire or powder feeder that allows for the stock to be melted by an energy source as the material is extruded. The energy source can be a laser, arc-welder, or electron beam depending on the implementation chosen. This method results in relatively cheap and quickly produced parts, but has the limitation of low resolution and poor layer adhesion. The low resolution also inherently limits the complexity of the parts that can be produced [1]. 
Powder bed fusion, specifically a variety of the method known as direct metal laser sintering (DMLS), operates by utilizing a laser to sinter the metal rather than using a heat source to melt the metal particles together. The typical DMLS machine operates by first raising the feeder bed so that a recoater blade can push the upper layer of metal particles across the build platform. Any excess powder that does not cover the build plate is pushed into an excess bin on the opposite side of the build plate from the feeder bin. A laser then sinters the particles together in the desired pattern for the given layer. The build plate then lowers, another layer of particles is coated on top of the last layer, and the process repeats until the part is finalized. Fig. 2 shows how these components are typically laid out. The lack of the binding polymer necessary for other powder bed fusion methods also allows for higher resolution parts to be created [1]. One of the major drawbacks of the DMLS method is its high cost and longer print times compared to other methods [5]. During the design phase, considerations were made with regard to the general limitations of current DMLS technologies and how AM processes would affect the printed design. A driving factor of the solid model design due to the AM process was the issue of internal support material. Internal support material may be added in the event that an internal cavity has an excessive overhang, due to the layer-by-layer nature of AM. Unlike external support material, internal supports are impractical if not impossible to remove once they are printed, causing unexpected performance when compared with simulated performance. While limitations with DMLS exist, these limitations are reflected in most other forms of additive manufacturing. DMLS also provides many advantages, as previously mentioned, that favor its use over other methods such as DED. 
Furthermore, these identified drawbacks were mitigated through implementable design principles. ### _Motivation_ Previous research has been performed on the use of AM to develop components for accelerator concepts. Of particular interest were interim reports from research performed by CERN demonstrating the way in which different materials and designs affected the performance of AM X-band load structures. This research showed promising results within the X-band range, with simulations showing the optimized waveguide geometry in Ti-6Al-4V achieving an S\({}_{11}\) of -39.07 dB at a frequency of 11.9942 GHz [4]. Several variations of the design were produced in different metals such as Ti-6Al-4V and 316L stainless steel. Of the eight models produced in preliminary findings, two were manufactured in 316L stainless steel and performed around -20 dB in the S\({}_{11}\) at 11.9942 GHz [6]. These results showed the potential for further development in the utilization of this method within future collider and accelerator concepts. The Cool Copper Collider (C\({}^{3}\)) is a proposed lepton-collider Higgs factory [7, 8] which has a planned center C-band frequency of 5.712 GHz [9]. Using C-band rather than the more traditional S- and UHF-bands seen in many current high energy colliders greatly reduces the size necessary to obtain TeV-scale accelerators [10]. Utilizing additive manufacturing in the production process will allow for rapid and flexible manufacturing of the large number of terminating loads that will be necessary for such a collider. A third-party prototype manufacturing company was used for fabrication of the MAM parts of the load. This was done to demonstrate the practicality of using standard manufacturing pipelines for the creation of high power RF components. 
As DMLS machines are expensive to procure and require additional time and money for maintenance, using a third-party company with DMLS as part of their standard offerings greatly reduced the barrier to entry for the research performed. ## II Methodology The design for the load structure was based on the limitations of the chosen manufacturing method and two main design principles. DMLS is a flexible process, but is mainly limited by the necessity of support material and the size of the build platform. These limitations had to be considered throughout the design process. The first design principle implemented was designing the entire assembly to be produced on a standard DMLS machine. This limited the necessity to consider components that might complicate the design through interfacing between additive and traditionally machined components. The second design principle was ensuring that the designed load was broadband. This meant that there would be no resonant structures, which typically require more precise control of surface imperfections. All simulations were performed within Ansys High-Frequency Structure Simulator (HFSS) software. Simulations were performed to observe the change each major component of the design (taper, spiral, pump-out hardware) had on the overall performance. Initial designs were based on those done in the X-band [4] and modified to better match the frequency range of C-band. The goal of these models was to achieve an S\({}_{11}\) value of -20 dB or lower at 5.712 GHz with a wide bandwidth. The bandwidth was necessary to avoid problems due to manufacturing and tuning of the structure. The initial model generated was a straight-line, non-standard waveguide without any taper to a standard WR-187 waveguide. This was done to ensure that the non-standard waveguide model allowed for broadband transmission of an arbitrary signal around the C-band range. 
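For context on the -20 dB goal, the reflected-power fraction implied by a given S\({}_{11}\) value follows directly from the definition of the scattering parameter; a small illustrative script (not part of the original workflow) makes the conversion explicit:

```python
def reflected_power_fraction(s11_db):
    """Fraction of the incident power reflected back out of the port."""
    return 10.0 ** (s11_db / 10.0)

def reflection_coefficient(s11_db):
    """Magnitude of the voltage reflection coefficient |Gamma|."""
    return 10.0 ** (s11_db / 20.0)

# The -20 dB design goal corresponds to at most 1% of the incident
# power being reflected; the -22.8 dB vacuum cold-test result is ~0.5%.
print(reflected_power_fraction(-20.0))   # 0.01
print(reflected_power_fraction(-22.8))   # ~0.00525
```

In other words, the design target simply asks that at least 99% of the incident power be absorbed by the load across the band of interest.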
Tapers were used to transition from the standard WR-187 wavefront to the non-standard waveguide. Two different tapers were tested in the height dimension: a linear taper and a sinusoidal taper. A simple linear expansion from the narrower port width to the wider waveguide width was designed. Equation (1) defines the sinusoidal profile using the variables given in Table I; both the linear and sinusoidal tapers are shown in Fig. 3. \[x(z)=\frac{h_{wr}+h_{wg}}{2}+\frac{h_{wr}-h_{wg}}{2}\cos(\frac{\pi}{l_{t}}z)\,\ z\in(0,l_{t}) \tag{1}\] Once the models for the straight waveguide without any taper and the two with the linear and sinusoidal tapers were developed, models for the spiraled versions of them were generated. The straight portion of the waveguide was modelled by sweeping a rectangular face along a line defined by (2). Following the spiral equation of the straight waveguide, equations had to be developed for the two taper versions. The interior face of each taper was curved around the inner spirals along the line defined by (2) to match the straight models. The outer faces of the tapers had to be separately defined to maintain similar behavior to the straight-line model. Equation (4) shows the modified version of the linear taper in the spiral form and (6) shows the modified version of (1) in the spiral form. 
\[\begin{cases}x(t)=(R+\alpha t)\sin(2\pi t)\\ y(t)=(R+\alpha t)\cos(2\pi t)\end{cases}\ \ \ t\in(0,t_{l}) \tag{2}\] \[\alpha=h_{wg}+g \tag{3}\] \[\begin{cases}x(t)=(R+h_{wg}+(\alpha+\beta)t)\sin(2\pi t)\\ y(t)=(R+h_{wg}+(\alpha+\beta)t)\cos(2\pi t)\end{cases}\ \ \ t\in(0,t_{t}) \tag{4}\] \[\beta=\frac{h_{wr}-h_{wg}}{t_{t}} \tag{5}\] \[\begin{cases}x(t)=(R+h_{wg}+\kappa(t)\frac{t}{t_{t}})\sin(2\pi t)\\ y(t)=(R+h_{wg}+\kappa(t)\frac{t}{t_{t}})\cos(2\pi t)\end{cases}\ \ \ t\in(0,t_{t}) \tag{6}\] \[\kappa(t)=\left(\frac{h_{wg}-h_{wr}}{2}\right)\cos\left(\pi\frac{t}{t_{t}}\right)+\frac{h_{wg}}{4}+\frac{h_{wr}}{2}+gt_{t} \tag{7}\] Equations (4)-(7) define the x and y coordinates of the lines at \(z(t)=0\), but (8) must be added to include the width taper alongside the sinusoidal and linear height tapers \[z(t)=\frac{w_{wg}}{2}-\left(\frac{w_{wg}-w_{wr}}{2t_{t}}\right)t. \tag{8}\] Fig. 4 shows the way in which the various parameters of the spiral waveguide model were defined within HFSS. These values were modified slightly throughout the initial simulation runs to optimize the load performance. Table II, along with \(w_{wg}\) and \(h_{wg}\) from Table I, defines the parameters that were found to balance performance and print-ability of the spiral load structure. The last development in the model was to add the pump-out hardware into the vacuum space model. This was an important step as it would allow the results of the simulation to more closely match the real-world performance of the load. It would also have to accommodate the need to remove powder from the interior of the load after the printing process. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Variable Name & Symbol & Value & Units \\ \hline Port Width & \(w_{wr}\) & 47.55 & mm \\ Port Height & \(h_{wr}\) & 22.15 & mm \\ Waveguide Width & \(w_{wg}\) & 60 & mm \\ Waveguide Height & \(h_{wg}\) & 2 & mm \\ Taper Length & \(l_{t}\) & 400 & mm \\ Waveguide Length & \(l_{wg}\) & 7000 & mm \\ \hline \end{tabular} \end{table} TABLE I: Straight line model dimension definitions. Fig. 3: Definition of dimensions shown in Table I along with the profiles of the different tapers. Fig. 4: Definition of dimensions shown in Table II. The model shown in Fig. 5 includes the pump-out volume and WR-187 extension so that there was room for a flange to be added. The design shown also minimizes the inherent reflections caused by the slits and allows for the model to be split in two so that the final printed model is rendered in two halves. This allows for issues related to internal support material and powder removal to be mitigated. ## III Results ### _Simulation Results_ Using the straight waveguide model without any taper to a WR-187 port, it was then possible to compare the performance of the two different taper designs and dimensions shown in the Methodology section. Adding these tapers to the model and comparing them to the simulation run without the taper is shown in Fig. 6. This plot shows that there seems to be very little difference between the two taper designs in the straight load model. The S\({}_{11}\) data show slightly worse performance when the taper is added, observed at the desired center frequency of the load. This was expected to occur and can be explained by the additional reflections caused by the taper geometries. This is mainly due to the impedance mismatch from one side to the other. Further investigation was required into the potential effects that the tapers would have once the model was moved from a straight model to a spiral model before determining which taper to utilize in the design. Adding in the effect of the spiral load made it possible to differentiate the effects of the sinusoidal taper and the linear taper. Fig. 7 shows a plot of these differences. The sinusoidal taper produces an S\({}_{11}\) value of -22.7 dB at the desired frequency of 5.712 GHz. 
It performs better than the linear taper method (-19.5 dB) and similarly to the performance had there been no taper utilized (-21.9 dB). For these reasons, it was decided to move forward with the sinusoidal taper method in future simulation models, as it showed better performance than the linear taper in the S\({}_{11}\) plot. Following the use of the spiral load model, the final pump-out model with all of the additional vacuum space designed for pump-out could be simulated. This was done to show the potential effects the additional pump-out hardware might have on the reflections. The difference between the spiral model with and without the pump-out hardware can be seen in Fig. 8. The plot shows a slight variation of the S\({}_{11}\) data across the frequency sweep. At the desired frequency of 5.712 GHz, the plots show a close match, with the pump-out model showing an S\({}_{11}\) value of -21.2 dB and the model without showing a response of -21.7 dB. This shows that the designed pump-out does not cause a significant negative impact to the performance of the spiral load. Fig. 5: Final spiral load vacuum model and pump-out slits within the HFSS simulation software. Fig. 6: Plot containing comparison of various taper methods on a straight waveguide model. Fig. 7: Plot containing comparison of various taper methods on a spiral waveguide model. Fig. 8: Plot containing a comparison of the model with and without the pump-out volume. ### _Printed Model Results_ Before the welding of the two halves of the spiral load, a cold test was performed for initial validation of the simulations and to verify that there were no unexpected results due to the printing process. The data for all of the cold tests were recorded on an Agilent N5241A vector network analyzer (VNA) which was calibrated using the TRL (thru, reflect, line) calibration method. 
The initial cold test was performed by inserting the alignment pins and clamping the two halves of the spiral load together. The spiral load was then connected to the VNA, a photograph of which can be seen in Fig. 9. The results from this initial cold test, before any welding or machining occurred, can be seen in Fig. 10. These results were promising, with the initial cold test showing that at a frequency of 5.712 GHz, the load was able to perform much better than expected according to simulations, with an S\({}_{11}\) value of -38.7 dB. This result at the specific frequency is due to the fringing response being shifted; the overall power absorption is more consistent with the simulations. Following the initial cold test, the two halves were welded together and the instrumentation flange was kept to perform further tests on the load. Welding the two halves together had a significant effect on the S\({}_{11}\) curve, as shown in Fig. 11. This resulted in an S\({}_{11}\) value of -26.8 dB. While this was a significant difference at the specific frequency, the overall power absorption across the spectrum is consistent. So that the same printed load could be used throughout all tests performed, the instrumentation flange was removed from the spiral load and high vacuum interfaces were added. Following this process, the spiral load was cold tested again under atmospheric conditions to see if there was any shift in the response due to the additional welding and brazing processes. Fig. 12 shows the plot that resulted from this process under atmospheric conditions. It can be seen that at the center frequency of 5.712 GHz the S\({}_{11}\) value shifted up slightly from the previous cold test to a value of -25.1 dB. The load was then attached to a vacuum pump and an RF window was used to test the response while under vacuum conditions. A photograph of the bench setup is shown in Fig. 13 and the results of the test are shown in Fig. 14. 
Adding the RF window and pumping the spiral load down to vacuum decreased the performance slightly to an S\({}_{11}\) value of -22.8 dB at the center frequency of 5.712 GHz.

Fig. 9: Photograph of initial cold test setup.

Fig. 10: Plot of the S\({}_{11}\) curve of the spiral load during the initial cold test.

Fig. 11: Plot of the S\({}_{11}\) curve of the spiral load following the welding of the two halves.

Fig. 12: Plot of the S\({}_{11}\) response following the addition of the vacuum-rated hardware and cleaning of the load. This was recorded while the load was under atmospheric conditions.

The spiral load was sent to Radiabeam following all of the cold tests performed at SLAC. This was done to observe how the spiral load would perform under high power loading. The load was first conditioned to prevent breakdown and damage to the spiral load structure. Conditioning began with a 200 ns pulse width at 47 kW of power, with a repetition rate of 1 Hz. The power and rep rate were then gradually stepped up to a rate of 20 Hz and a maximum power of 8 MW. Once the load had been conditioned at the 200 ns pulse width, the pulse width was increased to 400, 700, and finally 1000 ns. Once conditioned, the load was able to terminate a peak power of 8.1 MW at a repetition rate of 20 Hz with the pulse width set to 700 ns. Fig. 15 shows the forward and reflected signal from the load during testing.

During the high power testing, the vacuum stayed stable at \(3\times 10^{-7}\) Torr under a pulse width of 400 ns. The temperature of the spiral load, as measured by a thermocouple attached to the exterior of the load, increased from 22.8 \({}^{\circ}\)C to 27.9 \({}^{\circ}\)C over the testing period. When the load was tested with the 1000 ns pulse width, heating increased significantly.
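The stronger heating at longer pulse widths is consistent with the duty cycle: the time-averaged power scales linearly with pulse width and repetition rate. A quick sanity check using the operating points quoted above (assuming the peak power stays at 8.1 MW for the 1000 ns case, which the text does not state explicitly):

```python
def average_power_w(peak_w: float, pulse_width_s: float, rep_rate_hz: float) -> float:
    """Time-averaged power of a pulsed RF source: peak power times duty cycle."""
    duty_cycle = pulse_width_s * rep_rate_hz
    return peak_w * duty_cycle

# 8.1 MW peak at a 20 Hz repetition rate:
p_700ns = average_power_w(8.1e6, 700e-9, 20.0)    # about 113.4 W average
p_1000ns = average_power_w(8.1e6, 1000e-9, 20.0)  # about 162.0 W average
```

The jump from roughly 113 W to 162 W of average dissipation is in line with the markedly higher exterior temperatures observed at the 1000 ns pulse width.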
A peak temperature of 43.5 \({}^{\circ}\)C was present on the exterior of the load and the vacuum pressure rose from the nominal \(3\times 10^{-7}\) Torr to \(1.3\times 10^{-6}\) Torr, causing testing at the 1000 ns pulse width to be paused. As the load was only passively cooled by the ambient atmosphere, a cooling fan was added, which brought the peak temperature down to 34.9 \({}^{\circ}\)C and stabilized the pressure at \(5.6\times 10^{-7}\) Torr.

## IV Discussion

The results show the feasibility of utilizing additive manufacturing to produce high-power C-band load structures. Using simulations in the initial development of the spiral load assisted in the ability to develop an operational solution. Two of the concerns in comparing simulations to the manufactured model are the effect of slight misalignment (\(<\)0.1 mm) of the two halves and the exact location and height of the surface roughness. The slight misalignment was due to slightly different shrinkage rates in the two halves, leading to a discontinuity in the sidewalls. The surface roughness of the interior of the load can be approximately simulated, but some regions of the load have a rougher surface than others, depending on the angle at which they were placed within the DMLS machine. While some of these issues were present, they did not reduce performance and may have actually increased the performance of the load. This is mainly due to the additional resistance and surface area that the surface roughness provides. By utilizing additive manufacturing over traditional machining practices, the surface imperfections inherent in the manufacturing process led to increased performance. As can be seen in the results, even after cleaning of the load, the S\({}_{11}\) at the design frequency is lower than the expected value based on the simulations performed.
While the load was able to terminate a peak power of 8.1 MW, testing on the load had to be stopped due to concerns over the stability of the vacuum pressure and the temperature on the external surface of the load. The increase in exterior heat by the end of testing suggested that a fairly significant amount of heat was being produced on the interior walls of the spiral load, especially in the outer turns. This signifies that much of the power loss is concentrated in the first two or three turns of the spiral load. While it is known that the additional turns increased the overall performance, this concentration in surface loss led to uneven heating in the load and a reduction in the longevity of the load's performance as the vacuum pressure suffered. Future iterations of this design will need to include an integrated cooling system, an optimized waveguide shape to even out the surface loss concentrations, or both. These additions would allow higher powers to be terminated, thus expanding the applications of such processes.

Fig. 13: Image showing the bench setup of the spiral load when tested under vacuum conditions.

Fig. 14: Plot of the S\({}_{11}\) curve during the cold test performed under vacuum conditions. The bandwidth is reduced due to the narrower bandwidth of the RF window.

Fig. 15: Screen capture of oscilloscope during testing with 8.1 MW power input at a 20 Hz rep rate and 700 ns pulse width.

This research demonstrates the use of additive manufacturing techniques for the development of C-band load structures. The ability of additive manufacturing processes to adapt easily to the design needs of the finalized structure allows greater flexibility in the manufacturing process. With the basic spiral load having proven its ability to perform at high power, the design can be advanced and further optimized for the specific application of the load.
The results have shown promise in the future use of this technique in the C\({}^{3}\) concept.
arXiv:2303.04845
Smoothed Analysis of Sequential Probability Assignment
Alankrita Bhatt, Nika Haghtalab, Abhishek Shetty
2023-03-08T19:25:57Z
http://arxiv.org/abs/2303.04845v1
# Smoothed Analysis of Sequential Probability Assignment

###### Abstract

We initiate the study of smoothed analysis for the sequential probability assignment problem with contexts. We study information-theoretically optimal minmax rates as well as a framework for algorithmic reduction involving the _maximum likelihood estimator_ oracle. Our approach establishes a general-purpose reduction from minimax rates for sequential probability assignment for smoothed adversaries to minimax rates for transductive learning. This leads to optimal (logarithmic) fast rates for parametric classes and classes with finite VC dimension. On the algorithmic front, we develop an algorithm that efficiently taps into the MLE oracle, for general classes of functions. We show that under general conditions this algorithmic approach yields sublinear regret.

## 1 Introduction

Sequential probability assignment -- also known as online learning under the logarithmic loss -- is a fundamental problem with far-reaching impact on information theory, statistics, finance, optimization, and sequential decision making [Rissanen, 1983, 1984, Cover, 1991, Feder et al., 1992, Xie and Barron, 1997, Merhav and Feder, 1998, Xie and Barron, 2000, Yang and Barron, 1999, Jiao et al., 2013, Orabona and Pal, 2016, Foster et al., 2018]. In recent years, methods for incorporating contexts or side information into sequential probability assignment have gained much attention [Rakhlin and Sridharan, 2015, Fogel and Feder, 2017, 2018, Foster et al., 2018, Bhatt and Kim, 2021, Bilodeau et al., 2021, Wu et al., 2022a], in part due to their newly forged connection to sequential decision making applications, the contextual bandit problem, and learning in Markov Decision Processes (MDPs) (see e.g. [Foster and Krishnamurthy, 2021] and [Foster et al., 2021a]).
In this setting, a forecaster who has access to historical data \(x_{1:t-1},y_{1:t-1}\) consisting of contexts \(x_{\tau}\) (e.g., day \(\tau\)'s meteorological information) and the outcomes \(y_{\tau}\in\{0,1\}\) (e.g., whether \(\tau\) was a rainy day) wishes to predict \(y_{t}\) given a new context \(x_{t}\). The forecaster uses a _probability assignment_\(p_{t}\) to estimate the probability of the \(y_{t}=1\) outcome and incurs the logarithmic loss, i.e., \(-\log p_{t}(y_{t})\), which rewards the forecaster for having assigned high probability to the realized outcome. The goal of the forecaster is to suffer low _regret_ against a chosen reference class of predictors.

A large body of prior work on sequential probability assignment with contexts has focused on settings where contexts are presented i.i.d. from an unknown distribution (see [Fogel and Feder, 2017, Bhatt and Kim, 2021, Bilodeau et al., 2021, Wu et al., 2022a] and the references within); this problem is also referred to as conditional density estimation. In these cases, sequential probability assignment is known to enjoy small regret for several reference classes such as Vapnik-Chervonenkis (VC) classes. On the other hand, attempts to consider context distributions that evolve unpredictably and adversarially have faced strong impossibility results even for simple reference classes of predictors. For example, for the reference class of simple one-dimensional thresholds assigning \(p_{t}=\theta_{0}\mathds{1}\{x_{t}\leq a\}+\theta_{1}\mathds{1}\{x_{t}>a\}\) for \(a\in[0,1]\), regret is bounded by \(O(\log T)\) in the i.i.d. case (Fogel and Feder, 2017) but is lower bounded by \(\Omega(T)\) when the sequence of contexts is chosen adversarially (folklore, e.g. Littlestone (1988)).
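For intuition about what \(O(\log T)\) regret under the log loss looks like, consider the context-free special case: competing against the best fixed Bernoulli predictor. The classical Krichevsky-Trofimov (add-1/2) estimator achieves regret at most roughly \(\frac{1}{2}\log T\) plus a constant on every sequence. A minimal sketch (standard background, not an algorithm from this paper; the function names are ours):

```python
import math

def kt_log_loss(bits):
    """Cumulative log loss (in nats) of the Krichevsky-Trofimov estimator,
    which predicts P(next bit = 1) = (n1 + 1/2) / (n0 + n1 + 1)."""
    n0 = n1 = 0
    loss = 0.0
    for b in bits:
        p1 = (n1 + 0.5) / (n0 + n1 + 1.0)
        loss += -math.log(p1 if b == 1 else 1.0 - p1)
        n1 += b
        n0 += 1 - b
    return loss

def best_constant_log_loss(bits):
    """Log loss of the best fixed Bernoulli parameter chosen in hindsight."""
    t, n1 = len(bits), sum(bits)
    n0 = t - n1
    return sum(-n * math.log(n / t) for n in (n0, n1) if n > 0)

bits = [1, 0] * 500   # T = 1000, adversarial-looking but balanced sequence
regret = kt_log_loss(bits) - best_constant_log_loss(bits)
# regret stays near (1/2) log T on every sequence, far below the Omega(T) barrier
```

The point of the paper is that such logarithmic rates can be recovered even with contexts, provided the context distributions are smooth.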
In the face of the increasing need to adapt to evolving contexts in modern applications, these impossibility results indicate that new models of adversarial behavior must be considered for obtaining rigorous guarantees that guide the design of sequential probability assignment in practical applications. In recent years, _smoothed analysis_ of adaptive adversaries (Haghtalab et al., 2021; Rakhlin et al., 2011) has emerged as a framework for going beyond worst-case adversaries while making minimal assumptions about the adaptive process that generates a sequence. In this setting, contexts are chosen from an evolving sequence of so-called \(\sigma\)-smooth distributions, whose density is bounded above by \(1/\sigma\) times that of a base measure (such as the uniform distribution). Remarkably, these methods, established by Haghtalab et al. (2021) for 0-1 loss and extended to regression by Haghtalab et al. (2022), Block et al. (2022), have established performance guarantees for the sequential prediction problem that match the optimal performance in the i.i.d. setting. This raises the question as to whether the _sequential probability assignment_ problem may similarly enjoy improved minmax regret bounds for smoothed adaptive sequences.

Beyond minmax rates, an important feature of probability assignment and its analogue, density estimation, is the availability of fundamental and natural estimation techniques such as _maximum likelihood estimation (MLE)_. For i.i.d. sequences, under general conditions, MLE is known to be optimal asymptotically and often serves as a starting point for designing more sophisticated estimators. Going beyond i.i.d. sequences, we ask whether MLE can be made to achieve good statistical behavior on adaptive sequences.
More generally, the algorithmic perspective is increasingly important for the sequential probability assignment problem and its applications to contextual bandits and reinforcement learning, where algorithm design is as fundamental a consideration as minmax rates (Agarwal et al., 2014; Simchi-Levi and Xu, 2022; Foster and Rakhlin, 2020; Langford et al., 2007; Foster et al., 2021). In this space, _oracle-efficient_ sequential decision making algorithms that repurpose existing offline algorithmic routines have received special interest (Kalai and Vempala, 2005; Dudik et al., 2020; Wang et al., 2022; Kakade et al., 2007; Simchi-Levi and Xu, 2022). Here again, recent progress on smoothed analysis for sequential prediction with 0-1 loss and regression loss (Haghtalab et al., 2021; Block et al., 2022) has shown promise in bridging the computational and information-theoretic gaps between what is obtainable in the i.i.d. case and for smoothed adaptive sequences.

In this paper, we initiate the study of smoothed analysis for sequential probability assignment and seek to understand fundamental information-theoretic limits on the _minmax regret_ as well as design _natural and oracle-efficient algorithms_ for this problem. Additionally, we investigate whether, in the smoothed analysis setting, maximum likelihood estimation can efficiently address sequential probability assignment while achieving small regret. To the best of our knowledge, our work is the first to consider oracle-efficient algorithms (and particularly the MLE) for the sequential probability assignment problem.

### Main results

**Reduction to transductive learning.** Our first main result is a reduction from regret bounds against a smoothed adversary to regret bounds against (a generalized version of) a transductive adversary.
That is, we show that the minimax regret in the smoothed analysis setting is upper bounded by the minimax regret in the setting where a set of contexts is provided to the learner and the adversary is constrained to picking the contexts from this set. For \(\mathcal{F}\), a class of hypotheses mapping contexts to \([0,1]\), let us define the minmax regret in the transductive case over \(T\) time steps, when a context set of size \(M\) is provided to the learner, to be \(\underline{\mathcal{R}}_{T}^{M}(\mathcal{F})\). We establish in Theorem 3.1 that for all \(\sigma\)-smooth sequences the minmax regret \(\mathcal{R}_{T}(\mathcal{F},\sigma)\) satisfies, for any \(k>1\), \[\mathcal{R}_{T}(\mathcal{F},\sigma)\leq\underline{\mathcal{R}}_{T}^{kT}( \mathcal{F})+T^{2}(1-\sigma)^{k}.\] Furthermore, in Theorem 3.4 we upper bound \(\underline{\mathcal{R}}_{T}^{kT}(\mathcal{F})\) by connecting the worst-case adversarial regret in this setting to the scale-sensitive VC dimension of \(\mathcal{F}\), which is a prototypical offline complexity measure of the class. Our results obtain a logarithmic dependence on \(1/\sigma\). In particular, in Corollary 3.4.1, we show that for VC classes (and parametric classes) the regret is bounded by \(\mathcal{R}_{T}(\mathcal{F},\sigma)\leq O\left(d\log\left(\frac{T}{\sigma} \right)\right)\), where \(d\) is the VC dimension of class \(\mathcal{F}\).

**Efficient Reduction from Sequential Probability Assignment to MLE.** Our second contribution is initiating the study of oracle-efficient algorithm design for sequential probability assignment. In particular, for small alphabet size, we design a natural algorithm (Algorithm 1) that efficiently uses an MLE oracle and achieves sublinear regret in the smoothed setting. Our Theorem 4.1 gives a general regret bound in terms of the statistical complexity of the class \(\mathcal{F}\) and the smoothness parameter \(\sigma\). For VC classes, this achieves a regret rate of \(T^{4/5}\sqrt{\frac{d}{\sigma}}\).
To the best of our knowledge, this is the first _oracle-efficient_ algorithm and analysis of follow-the-perturbed-leader style algorithms for the logarithmic loss.

**Probability assignment for VC classes.** For VC classes \(\mathcal{F}\), we explicitly construct sequential probability assignments and establish their regret guarantees in the smoothed setting. That is, we construct a probability assignment based on a Bayesian mixture over \(\mathcal{F}\) that satisfies \(\mathcal{R}_{T}(\mathcal{F},\sigma)\leq Cd\log\left(\frac{T}{\sigma}\right)\) where \(d\) is the VC dimension of class \(\mathcal{F}\). While this approach is not oracle-efficient, it indeed achieves a regret bound with optimal dependence on \(T\) and \(\sigma\). This motivates a natural direction for future work as to whether such mixture-based methods can be implemented oracle-efficiently or if there is a tradeoff between the regret and the computational complexity of sequential probability assignment.

## 2 Preliminaries

Let \(\mathcal{X}\) be a set of _contexts_ and \(\mathcal{Y}=\{0,1\}\). The problem being studied entails a sequential game where at each timestep \(t\), based on the history of contexts \(x_{1:t}:=(x_{1},\ldots,x_{t})\) where \(x_{i}\in\mathcal{X}\) and associated bits \(y_{1:t-1}\in\{0,1\}^{t-1}\), the player must assign a probability \(q(\cdot|x_{1:t},y_{1:t-1})\) to what the upcoming bit \(y_{t}\) will be. Once the bit \(y_{t}\) is revealed (possibly in an adversarial fashion), the player incurs loss \(-\log q(y_{t}|x_{1:t},y_{1:t-1})\) and the game proceeds to the next step.
For a _hypothesis class_\(\mathcal{F}\subset\{\mathcal{X}\to[0,1]\}\), the associated regret for a probability assignment strategy \(\mathscr{Q}=\{q(\cdot|x_{1:t},y_{1:t-1})\}_{t=1}^{T}\) for a fixed \(x_{1:T},y_{1:T}\) is \[\mathcal{R}_{T}(\mathcal{F},x_{1:T},y_{1:T},\mathscr{Q})=\sum_{t=1}^{T}\log \frac{1}{q(y_{t}|x_{1:t},y_{1:t-1})}-\inf_{f\in\mathcal{F}}\sum_{t=1}^{T}\log \frac{1}{p_{f}(y_{t}|x_{t})} \tag{1}\] where \(p_{f}(1|x_{t})=f(x_{t})\); i.e., the function \(f\) assigns probability \(\operatorname{Bern}(f(x_{t}))\) to the upcoming bit given the context \(x_{t}\).

Our statistical results apply to a general loss function \(\ell\) and general actions of the learner \(a_{t}\). For a set of inputs \(\left\{\left(x_{i},y_{i}\right)\right\}_{i=1}^{T}\) specified by the adversary and a set of actions \(\left\{a_{i}\right\}_{i=1}^{T}\) of the learner, regret is defined by \[\mathcal{R}_{T}(\mathcal{F},x_{1:T},y_{1:T},a_{1:T})=\sum_{t=1}^{T}\ell(a_{t}, (x_{t},y_{t}))-\inf_{f\in\mathcal{F}}\sum_{t=1}^{T}\ell(f(x_{t}),(x_{t},y_{t})),\] where for log-loss \(a_{t}=q(\cdot|x_{1:t},y_{1:t-1})\), so that the action is a probability mass function (pmf) over \(\left\{0,1\right\}\).

The regret in (1) is often studied under various adversary models, i.e., various probabilistic assumptions (or lack thereof) on the model generating \(x_{t}\) and \(y_{t}\). In this work, we consider worst-case \(y_{t}\) (in contrast to the _realizable_ setting where \(Y_{t}\sim\text{Bern}(f^{*}(x_{t}))\) for a fixed unknown \(f^{*}\in\mathcal{F}\)) and \(X_{t}\sim\mathcal{D}_{t}\), where the \(\mathcal{D}_{t}\)s form an adaptive sequence of smooth distributions.

**Definition 2.1** (Smooth distribution and adversary [15]).: Consider a fixed and known base distribution \(\mu\) on \(\mathcal{X}\) (such as the uniform distribution if \(\mathcal{X}\) supports it).
A distribution \(\mathcal{D}\) on \(\mathcal{X}\) is said to be \(\sigma\)-smooth if for all measurable sets \(A\subseteq\mathcal{X}\), \(\mathcal{D}(A)\leq\frac{\mu(A)}{\sigma}\). We denote the set of all \(\sigma\)-smooth distributions by \(\Delta_{\sigma}\left(\mu\right)\). An adversary, characterized by a joint distribution \(\mathscr{D}\) with \(X_{t}\sim\mathcal{D}_{t}\) (where \(\mathcal{D}_{t}\) may possibly depend on the history), is said to be a \(\sigma\)-smooth adaptive adversary if \(\mathcal{D}_{t}\in\Delta_{\sigma}\left(\mu\right)\) for all \(t\in\left\{1,\ldots,T\right\}\). The minmax regret for \(\sigma\)-smooth adaptive adversaries is then given by \[\mathcal{R}_{T}(\mathcal{F},\sigma)=\inf_{\mathscr{Q}}\sup_{\sigma\text{-smooth }\mathscr{D}}\mathbb{E}_{X_{1:T}\sim\mathscr{D}}\left[\max_{y_{1:T}}\mathcal{R}_{T}(\mathcal{F},X_{1:T},y_{1:T},\mathscr{Q})\right],\] where the infimum is over all probability assignment strategies \(\mathscr{Q}\) and the supremum is over all \(\sigma\)-smooth adaptive adversaries \(\mathscr{D}\).

We are particularly interested in how geometric properties of the function class \(\mathcal{F}\) affect \(\mathcal{R}_{T}(\mathcal{F},\sigma)\). There are several notions of covering numbers and combinatorial dimensions that quantify the "richness" and complexity of a class, but the _scale-sensitive VC dimension_ will be of particular interest to us and is invoked in our results.

**Definition 2.2** (Scale-sensitive VC dimension).: Let \(\mathcal{F}\) be a function class.
For any \(\alpha>0\) and points \(x_{1},\ldots,x_{m}\in\mathcal{X}\), we say that \(\mathcal{F}\) shatters the set \(x_{1},\ldots,x_{m}\) at scale \(\alpha\) if there exist \(s_{1},\ldots,s_{m}\in\mathbb{R}\) such that for each \(\epsilon\in\left\{-1,1\right\}^{m}\) there exists a function \(f\in\mathcal{F}\) such that \(\epsilon_{i}\left(f(x_{i})-s_{i}\right)\geq\frac{\alpha}{2}\) for all \(i\). The scale-sensitive VC dimension at scale \(\alpha\) of \(\mathcal{F}\), denoted by \(\text{VC}\left(\mathcal{F},\alpha\right)\), is defined as the largest \(m\) such that there is a set of \(m\) points \(x_{1},\ldots,x_{m}\in\mathcal{X}\) that \(\mathcal{F}\) shatters at scale \(\alpha\). The (traditional) VC dimension of a binary class \(\mathcal{F}\) is defined as \(\text{VC}\left(\mathcal{F}\right)=\lim_{\alpha\to 0^{+}}\text{VC}\left( \mathcal{F},\alpha\right)\).

Throughout, we use the following result of Haghtalab et al. (2021) about \(\sigma\)-smooth distributions. This result aids us in the reduction from smoothed learning to transductive learning.

**Theorem 2.1** (Coupling Lemma of Haghtalab et al. (2021)).: _Let \(\mathscr{D}_{\sigma}\) be an adaptive sequence of \(t\)\(\sigma\)-smooth distributions on \(\mathcal{X}\). There is a coupling \(\Pi\) such that_ \[(X_{1},Z_{1,1},\ldots,Z_{1,K},\ldots,X_{t},Z_{t,1},\ldots,Z_{t,K})\sim\Pi\] _satisfy that:_

1. \(X_{1},\ldots,X_{t}\) _is distributed according to_ \(\mathscr{D}_{\sigma}\)_,_
2. _For every_ \(j\leq t\)_,_ \(\{Z_{i,k}\}_{i\geq j,k\in[K]}\) _are uniformly and independently distributed on_ \(\mathcal{X}\)_, conditioned on_ \(X_{1},\ldots,X_{j-1}\)_._
3. _For any_ \(t\)_, with probability at least_ \(1-(1-\sigma)^{K}\)_,_ \(X_{t}\in\{Z_{t,k}\}_{k=1:K}\)_._

## 3 General reduction to transductive learning

In this section, we will consider the minimax regret for the smoothed online learning game with respect to the loss function\({}^{1}\)\(\ell\) against a general class of functions \(\mathcal{F}\).
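To make Definition 2.1 and the coupling lemma concrete, here is a discrete sketch (our own illustration, not the construction used in the paper). For a pmf over finitely many outcomes, checking \(\sigma\)-smoothness on singletons suffices, and a coupling can be realized by rejection sampling: draw \(K\) candidates from the base measure and accept a candidate for the smooth draw with probability \(\sigma\,\mathcal{D}(z)/\mu(z)\leq 1\), so all \(K\) candidates are rejected with probability \((1-\sigma)^{K}\):

```python
import random

def smoothness(pmf, base):
    """Largest sigma for which pmf is sigma-smooth w.r.t. base.  For discrete
    distributions checking singletons suffices: D(A) <= mu(A)/sigma for all A
    follows by summing the pointwise inequality."""
    return min(base[x] / pmf[x] for x in pmf if pmf[x] > 0)

def coupled_draw(pmf, base, sigma, k, rng):
    """One step of a rejection-sampling coupling: returns (x, candidates) where
    x ~ pmf and candidates are k i.i.d. draws from base; x lands among the
    candidates with probability at least 1 - (1 - sigma)**k."""
    xs, ws = list(base), list(base.values())
    candidates = rng.choices(xs, weights=ws, k=k)
    for z in candidates:
        if rng.random() < sigma * pmf[z] / base[z]:   # accepted => z ~ pmf
            return z, candidates
    fallback = rng.choices(list(pmf), weights=list(pmf.values()), k=1)[0]
    return fallback, candidates

pmf = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
base = {x: 0.25 for x in pmf}           # uniform base measure
sigma = smoothness(pmf, base)           # = 0.25 / 0.4 = 0.625
rng = random.Random(0)
trials = 2000
hits = sum(x in zs for x, zs in (coupled_draw(pmf, base, sigma, 8, rng) for _ in range(trials)))
capture_rate = hits / trials            # should be at least 1 - (1 - 0.625)**8
```

With \(\sigma=0.625\) and \(K=8\), the failure bound \((1-\sigma)^{K}\approx 4\times 10^{-4}\), so essentially every smooth draw is captured by the candidate set, which is exactly what the reduction to transductive learning exploits.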
In Section 3.1, we will show that the minimax regret can be reduced to the minimax regret for a version of transductive learning with respect to the same loss function and class of functions. In Section 3.2, we give general upper bounds for the transductive setting. There is a subtle but important difference between reductions that directly involve regret and recent efforts (such as Haghtalab et al. (2021), Block et al. (2022), Haghtalab et al. (2022)) using reductions between proxies of regret, such as covering numbers and sequential complexities. This is particularly important for the log loss, since its complexity is not captured by covering numbers. We discuss this point further in Section 3.2.2.

Footnote 1: The reduction works for general loss functions with the property that the worst-case regret for horizon \(T\) is bounded by \(T\), but one can think of \(\ell\) as the log-loss throughout this section for concreteness.

### 3.1 Regret-to-regret Reduction

We work with a general loss function \(\ell\) and general actions of the learner \(a_{t}\). We note that we can write the minmax value of the smoothed setting in extensive form as \[\mathcal{R}_{T}(\mathcal{F},\sigma)=\sup_{\mathcal{D}_{1}\in \Delta_{\sigma}(\mu)}\mathbb{E}_{X_{1}\sim\mathcal{D}_{1}}\inf_{a_{1}}\sup_{y _{1}} \sup_{\mathcal{D}_{2}\in\Delta_{\sigma}(\mu)}\mathbb{E}_{X_{2}\sim \mathcal{D}_{2}}\inf_{a_{2}}\sup_{y_{2}}\ldots \tag{2}\] \[\ldots\sup_{\mathcal{D}_{T}\in\Delta_{\sigma}(\mu)}\mathbb{E}_{X _{T}\sim\mathcal{D}_{T}}\inf_{a_{T}}\sup_{y_{T}}\mathcal{R}(\mathcal{F},X_{1: T},y_{1:T},a_{1:T}).\] In order to bound this, we consider a generalization of the notion of online learning that is referred to as transductive learning. In this setting, at the start of the interaction the adversary chooses a set of contexts \(X=\{X_{i}\}_{i=1}^{M}\) for some \(M\geq T\) and provides this to the player.
The game proceeds as before, with the adversary picking \((x_{t},y_{t})\) at time \(t\) and the learner picking an action \(a_{t}\) and suffering a loss \(\ell(a_{t},(x_{t},y_{t}))\). However, the adversary is now constrained to pick \(x_{t}\in X\) at all times \(t\). We can then define the minmax regret indexed by \(X\) as \[\underline{\mathcal{R}}_{T}(\mathcal{F},X):=\left[\max_{x_{1}\in X }\inf_{a_{1}}\sup_{y_{1}}\ldots\max_{x_{T}\in X}\inf_{a_{T}}\sup_{y_{T}} \mathcal{R}(\mathcal{F},x_{1:T},y_{1:T},a_{1:T})\right].\] Furthermore, define the worst-case transductive learning regret for sets of size \(M\) as \(\underline{\mathcal{R}}_{T}^{M}(\mathcal{F})=\max_{X\subseteq\mathcal{X},|X|= M}\underline{\mathcal{R}}_{T}(\mathcal{F},X).\) In the following theorem, we show that the regret against \(\sigma\)-smoothed adversaries is bounded by the regret in the transductive learning setting when the set of contexts is drawn from the base distribution \(\mu\).

**Theorem 3.1**.: _Let \(\mathcal{F}\) be any class of functions from \(\mathcal{X}\) to \(\mathbb{R}\) and let \(\sigma\in(0,1]\). Then, for any \(T\) and \(k\), we have_ \[\mathcal{R}_{T}(\mathcal{F},\sigma)\leq\underline{\mathcal{R}}_{T}^{kT}( \mathcal{F})+T^{2}(1-\sigma)^{k}.\]

Proof.: In order to obtain an upper bound on \(\mathcal{R}_{T}(\mathcal{F},\sigma)\) in terms of \(\underline{\mathcal{R}}_{T}^{kT}(\mathcal{F})\) for some \(k\), we will consider (2) and proceed inductively. The main idea is to note that since \(\mathcal{D}_{i}\) is \(\sigma\)-smoothed, conditioned on the history thus far, we can invoke the coupling lemma given in Theorem 2.1. For the sake of illustration, first consider the simple case of \(T=1\). Let \(X_{1},Z_{1},\ldots,Z_{k}\) denote the coupling alluded to in Theorem 2.1. Recall that \(X_{1}\sim\mathcal{D}_{1}\) and \(Z_{1:k}\sim\mu^{k}\).
Defining the event \(E_{1}:=\{X_{1}\in Z_{1:k}\}\), we have \[\mathcal{R}_{1}(\mathcal{F},\mathscr{D})=\mathbb{E}_{X_{1}\sim \mathcal{D}_{1}}\inf_{a_{1}}\sup_{y_{1}}\mathcal{R}_{1}(\mathcal{F},X_{1},y_{1},a_{1})\] \[=\mathbb{E}_{X_{1},Z_{1:k}}\left[\inf_{a_{1}}\sup_{y_{1}}\mathcal{R}_ {1}(\mathcal{F},X_{1},y_{1},a_{1})\right]\] \[=\mathbb{E}_{X_{1},Z_{1:k}}\left[\mathds{1}\{E_{1}\}\inf_{a_{1}} \sup_{y_{1}}\mathcal{R}_{1}(\mathcal{F},X_{1},y_{1},a_{1})\right]\] \[\qquad\qquad+\mathbb{E}_{X_{1},Z_{1:k}}\left[\mathds{1}\{E_{1}^{C }\}\inf_{a_{1}}\sup_{y_{1}}\mathcal{R}_{1}(\mathcal{F},X_{1},y_{1},a_{1})\right]\] \[\leq\mathbb{E}_{X_{1},Z_{1:k}}\left[\mathds{1}\{E_{1}\}\inf_{a_{1 }}\sup_{y_{1}}\mathcal{R}_{1}(\mathcal{F},X_{1},y_{1},a_{1})\right]+\mathbb{P }(E_{1}^{C}) \tag{3}\] \[\leq\mathbb{E}_{Z_{1:k}}\left[\max_{X_{1}\in Z_{1:k}}\inf_{a_{1}} \sup_{y_{1}}\mathcal{R}_{1}(\mathcal{F},X_{1},y_{1},a_{1})\right]+(1-\sigma)^ {k}\] (4) \[\leq\underline{\mathcal{R}}_{1}^{k}(\mathcal{F})+(1-\sigma)^{k}, \tag{5}\] where (3) uses that \(\inf_{a_{1}}\sup_{y_{1}}\mathcal{R}_{1}(\mathcal{F},X_{1},y_{1},a_{1})\leq 1\) (see Footnote 2), (4) follows by the coupling lemma, and (5) follows from the definition of transductive learning regret.

Footnote 2: Note that this holds for the log-loss by using the trivial strategy of a uniform probability assignment at each step.

The next step is to generalize this to arbitrary \(T\). The key aspect that makes this possible is that for all \(t\leq T\) we have \(\mathcal{D}_{t}\in\Delta_{\sigma}\left(\mu\right)\), even conditioned on the past, allowing us to apply the coupling lemma. Furthermore, we need that \(\mathcal{R}_{T}\leq T\) for arbitrary sequences, which is indeed guaranteed for reasonable losses such as the log-loss, as noted above. We defer the full proof to Appendix B.

Theorem 3.1 shows that we can reduce the problem of evaluating the minimax regret for smoothed adversaries to evaluating the minimax regret for transductive learning.
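The bound in Theorem 3.1 leaves \(k\) free; a quick numeric sketch of how large \(k\) must be for the slack term \(T^{2}(1-\sigma)^{k}\) to become negligible (the helper names are ours):

```python
import math

def slack_term(T: int, sigma: float, k: int) -> float:
    """Additive slack T^2 (1 - sigma)^k from Theorem 3.1."""
    return T * T * (1.0 - sigma) ** k

def k_for_unit_slack(T: int, sigma: float) -> int:
    """Since (1 - sigma)^k <= exp(-k * sigma), taking k = ceil(2 ln(T) / sigma)
    drives the slack below 1, at the price of a context set of size kT."""
    return math.ceil(2.0 * math.log(T) / sigma)

T, sigma = 10_000, 0.1
k = k_for_unit_slack(T, sigma)   # 185 candidate draws per round suffice here
```

This is the source of the logarithmic dependence on \(1/\sigma\) discussed next: the transductive context set only needs size \(kT=O(T\log T/\sigma)\), so regret bounds that are logarithmic in the set size pick up only a \(\log(1/\sigma)\) term.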
Note that the second term satisfies \((1-\sigma)^{k}\leq e^{-k\sigma}\), and thus, in order to get bounds that are sublinear, one needs to consider \(k=c\log T/\sigma\) for an appropriate absolute constant \(c\). As we will see in the next section, this leads to a logarithmic dependence on \(\sigma^{-1}\). Moreover, by Theorem 3.1 and since \(\mathbb{E}_{X\sim\mu}\,\underline{\mathcal{R}}_{T}(\mathcal{F},X)\leq\mathcal{R}_{T}( \mathcal{F},\sigma)\), we can see that the transductive learning regret exactly captures the smoothed regret up to \(\operatorname{polylog}\left(\frac{T}{\sigma}\right)\) factors.

### 3.2 Bounds for Transductive Learning

In this section, we discuss ways to upper bound the transductive learning regret \(\underline{\mathcal{R}}_{T}^{M}(\mathcal{F})\) so as to achieve bounds on \(\mathcal{R}_{T}(\mathcal{F},\sigma)\) via Theorem 3.1.

#### 3.2.1 Using Covering Numbers

One of the approaches common in online learning is to characterize the regret in terms of geometric properties (such as covering numbers) of the function class \(\mathcal{F}\). The notion of covering required varies depending on the loss function and the stochastic properties of the data--typically, completely adversarial problems require stronger notions of sequential coverings (Rakhlin et al., 2015a, b), while for stochastic problems usually weaker _offline coverings_ suffice. In our smoothed case, we show that the offline complexity notion of scale-sensitive VC dimension as defined in Definition 2.2 is adequate. Similar ideas were considered for the case of regression and convex Lipschitz losses in Haghtalab et al. (2022), Block et al. (2022).

Let us first define the notion of approximation according to which a cover will be constructed; we will consider a pointwise approximation. This notion is similar to the notion of global sequential covering in Wu et al. (2022).

**Definition 3.1**.: Let \(\mathcal{F}\) be a function class.
A set of functions \(\tilde{\mathcal{F}}\) is said to be an \(\epsilon\)-covering of \(\mathcal{F}\) if for any \(f\in\mathcal{F}\) there exists \(g\in\tilde{\mathcal{F}}\) such that \(\sup_{x\in\mathcal{X}}|f(x)-g(x)|\leq\epsilon\). We will use \(\mathcal{N}\left(\mathcal{F},\epsilon\right)\) to denote the size of the minimal \(\epsilon\)-covering of \(\mathcal{F}\).

Note that while the metric in Definition 3.1 is quite stringent, using this cover in the transductive learning case requires us to only consider function classes with _bounded domain size_. We capture this using the following theorem.

**Theorem 3.2** (Upper bound on transductive learning).: _Let \(\mathcal{F}\) be a function class and \(\epsilon>0\). Then,_ \[\underline{\mathcal{R}}_{T}^{kT}(\mathcal{F})\leq\inf_{\epsilon}\left\{\sup_{ Z\subset\mathcal{X},|Z|=kT}\log\mathcal{N}\left(\mathcal{F}|_{Z}, \epsilon\right)+2\epsilon T\right\},\] _where \(\mathcal{F}|_{Z}\) is the projection of hypothesis class \(\mathcal{F}\) on the set \(Z\)._

The proof of this theorem follows from relating transductive learning to the worst-case sequential prediction on a finite set of points, using the formalism presented in Wu et al. (2022). The proof is deferred to Appendix C.

Next, we recall that the covering number \(\mathcal{N}(\mathcal{F},\epsilon)\) is bounded as a function of the scale-sensitive VC dimension of the class \(\mathcal{F}\) and the number of points in the domain.

**Theorem 3.3** (Rudelson and Vershynin (2006)).: _There exist universal constants \(c,C\) such that for all \(\alpha>0\), any function class \(\mathcal{F}\) defined on a finite set \(\mathcal{X}\), and \(\epsilon>0\), we have_ \[\log\mathcal{N}(\mathcal{F},\epsilon)\leq C\cdot\mathrm{VC}(\mathcal{F},c \alpha\epsilon)\log^{1+\alpha}\left(\frac{C|\mathcal{X}|}{\mathrm{VC}(\mathcal{ F},c\epsilon)\epsilon}\right).\]

Finally, putting together Theorem 3.1, Theorem 3.2 and Theorem 3.3, we get the following.
**Theorem 3.4** (Minimax smoothed regret and scale-sensitive VC dimension).: \[\mathcal{R}_{T}(\mathcal{F},\sigma)\leq\inf_{k,\alpha,\epsilon>0}\left\{C \cdot\mathrm{VC}(\mathcal{F},c\alpha\epsilon)\log^{1+\alpha}\left(\frac{CkT}{ \mathrm{VC}(\mathcal{F},c\epsilon)\epsilon}\right)+2\epsilon T+T^{2}\left(1- \sigma\right)^{k}\right\}.\] We can instantiate the bound in Theorem 3.4 in terms of \(T\) and \(\sigma\) for two particularly interesting cases: when \(\mathrm{VC}(\mathcal{F},\epsilon)\) scales as \(d\log\left(1/\epsilon\right)\) (often referred to as parametric classes) and when \(\mathrm{VC}(\mathcal{F},\epsilon)\) scales as \(\epsilon^{-p}\) (often referred to as nonparametric classes). A canonical example of the former are _VC classes_; for a class with VC dimension \(d\), \(\mathrm{VC}(\mathcal{F}^{\mathrm{VC}},\epsilon)=Cd\log\left(\frac{1}{\epsilon}\right)\) (see for example (Vershynin, 2018, Theorem 8.3.18)). A canonical example of the latter are functions of bounded variation, \(\mathcal{F}^{\mathrm{BV}}\) which have \(\mathrm{VC}(\mathcal{F}^{\mathrm{BV}},\epsilon)=\frac{C}{\epsilon}\)(see for example (Musayeva, 2020, Bartlett et al., 1997)). This class is known to have unbounded sequential covering numbers (Rakhlin et al., 2010) and therefore is not learnable with a worst-case adversary--this can be seen as a simple consequence of the fact that \(\mathcal{F}^{\mathrm{BV}}\) contains all one-dimensional thresholds. 
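To make Definition 3.1 and the domain-size dependence in Theorems 3.2 and 3.3 concrete, consider the class of one-dimensional thresholds restricted to a finite set \(Z\). For any \(\epsilon<1/2\), no single function can \(\epsilon\)-approximate two distinct \(0/1\) projections in sup norm, so the distinct projections themselves form the minimal pointwise cover and \(\log\mathcal{N}(\mathcal{F}|_{Z},\epsilon)=\log(|Z|+1)\). The sketch below is our own illustration (the function names are not from the paper):

```python
import math

def threshold_cover_size(Z, eps):
    """Minimal pointwise eps-cover (Definition 3.1) of the threshold class
    {x -> 1[x >= theta]} restricted to the finite set Z, for eps < 1/2:
    it equals the number of distinct 0/1 projections on Z."""
    assert eps < 0.5
    pts = sorted(set(Z))
    thetas = pts + [pts[-1] + 1.0]       # one threshold per distinct projection
    patterns = {tuple(1 if x >= t else 0 for x in pts) for t in thetas}
    return len(patterns)

Z = [0.1 * i for i in range(50)]
N = threshold_cover_size(Z, eps=0.25)
print(N, math.log(N))  # 51 projections: log N grows like log|Z|, matching VC dimension 1
```

Plugged into Theorem 3.2 with \(|Z|=kT\), this recovers the \(d\log(kT/\epsilon)\)-type growth that Theorem 3.3 predicts for VC classes.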
**Corollary 3.4.1** (Rates for parametric and nonparametric classes).: _If \(\operatorname{VC}(\mathcal{F},\epsilon)=d\log\left(1/\epsilon\right)\), then for a large enough \(T\)_ \[\mathcal{R}_{T}(\mathcal{F},\sigma)\leq O\left(d\cdot\operatorname{poly}\log \left(\frac{T}{\sigma}\right)\right).\] _If \(\operatorname{VC}(\mathcal{F},\epsilon)=\epsilon^{-p}\), then_ \[\mathcal{R}_{T}(\mathcal{F},\sigma)\leq O\left(T^{\frac{p}{p+1}}\cdot \operatorname{poly}\log\left(\frac{T}{\sigma}\right)\right).\] In particular, note that Corollary 3.4.1 shows that for VC classes \(\mathcal{R}_{T}(\mathcal{F}^{\operatorname{VC}},\sigma)=\widetilde{O}\left(d \cdot\log\left(\frac{T}{\sigma}\right)\right)\) (tight; see also the concurrent work (Wu et al., 2023) for a similar bound) and for functions of bounded variation \(\mathcal{R}_{T}(\mathcal{F}^{\operatorname{BV}},\sigma)=\widetilde{O}\left( \sqrt{T}\log\left(\frac{1}{\sigma}\right)\right)\); note that the minmax rates for the worst-case adversary scale as \(\Omega(T)\) in both these cases. The above bound may be loose for general nonparametric classes, and should be improvable using a multiscale (chaining) version of Theorem 3.4; we do not pursue this here. Though the above results give satisfactory bounds in the minimax sense for many classes \(\mathcal{F}\) of interest, it is useful to consider explicit constructions of probability assignment rules. For the case of finite VC dimension, we give an explicit probability assignment rule by considering a discretization of the class and using a mixture probability assignment rule. In particular, this strategy (denoted by \(\mathscr{Q}^{\operatorname{VC}}\)) yields optimal regret \(\mathcal{R}_{T}(\mathcal{F}^{\operatorname{VC}},\sigma,\mathscr{Q}^{ \operatorname{VC}})\leq O\left(d\log\left(\frac{T}{\sigma}\right)\right)\). For the formal statements, proofs and detailed discussion see Appendix D.
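The discretize-and-mix construction behind \(\mathscr{Q}^{\operatorname{VC}}\) rests on a classical fact: under the log-loss, a uniform-prior Bayesian mixture over any finite set of \(N\) prediction rules incurs cumulative loss at most \(\log N\) more than the best rule in the set, so discretizing \(\mathcal{F}\) at scale \(\epsilon\) costs \(\log\mathcal{N}(\mathcal{F},\epsilon)+O(\epsilon T)\). A minimal sketch of the mixture step (our own code and names, not the paper's; the expert construction is an arbitrary example):

```python
import math
import random

def mixture_assignment(experts, xs, ys):
    """Uniform-prior Bayesian mixture over a finite expert set under log-loss.

    Each expert maps x to a predicted P(y = 1 | x) in (0, 1).  Returns the
    mixture's cumulative log-loss and the best single expert's; the classical
    guarantee is mix_loss <= best_loss + log(len(experts)), with no other
    assumptions on the data.
    """
    log_w = [0.0] * len(experts)                 # cumulative log-likelihoods
    mix_loss = 0.0
    for x, y in zip(xs, ys):
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]   # unnormalized posterior weights
        p1 = sum(wi * f(x) for wi, f in zip(w, experts)) / sum(w)
        mix_loss -= math.log(p1 if y == 1 else 1.0 - p1)
        for i, f in enumerate(experts):
            log_w[i] += math.log(f(x) if y == 1 else 1.0 - f(x))
    return mix_loss, -max(log_w)

random.seed(0)
experts = [lambda x, t=t: 0.9 if x >= t else 0.1 for t in (0.2, 0.5, 0.8)]
xs = [random.random() for _ in range(200)]
ys = [1 if x >= 0.5 else 0 for x in xs]
L_mix, L_best = mixture_assignment(experts, xs, ys)
assert L_mix <= L_best + math.log(len(experts)) + 1e-9
```

The guarantee is exact because the mixture's cumulative likelihood telescopes to the average of the experts' likelihoods, which is at least \(1/N\) times the best one.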
#### 3.2.2 Examples without Covering numbers This reduction approach to characterizing minmax regret in the log-loss is interesting since all previous approaches have used covering numbers of some kind--either sequential covering numbers or stronger notions of global covering. However, in stark contrast to the \(0/1\) loss and several other loss functions (Rakhlin et al., 2015), covering numbers cannot capture the minmax regret for the log-loss, at least in the adversarial case. Consider the following class of functions on context set \(\mathcal{X}=\mathbb{B}_{2}\) (where \(\mathbb{B}_{2}\) denotes the unit \(\ell_{2}\) Euclidean ball) \[\mathcal{F}^{\operatorname{Lin}}:=\left\{x\mapsto\frac{\langle x,w\rangle+1}{ 2}\bigg{|}w\in\mathbb{B}_{2}\right\}.\] For this class, Rakhlin and Sridharan (2015) construct a follow-the-regularized leader (FTRL) based algorithm achieving regret \(O(\sqrt{T})\). However, Bilodeau et al. (2020) show an upper bound on the regret in terms of sequential covering numbers which is not improvable in general--this shows that sequential covering numbers are not adequate to capture the minmax regret rates for the log-loss. Wu et al. (2022) further consolidate this by considering the following class, closely resembling \(\mathcal{F}^{\operatorname{Lin}}\), \[\mathcal{F}^{\operatorname{AbsLin}}:=\left\{x\mapsto\left|\langle x,w\rangle \right|\bigg{|}w\in\mathbb{B}_{2}\right\}.\] Wu et al. (2022, Example 2, Theorem 6) establish that the minmax regret for \(\mathcal{F}^{\operatorname{AbsLin}}\) is \(\widetilde{\Theta}(T^{2/3})\), demonstrating the surprising fact that by a simple linear transformation of the hypothesis class (which does not change its covering number) one can obtain minmax rates that differ by a polynomial factor! On the other hand, our reduction-based approach bypasses the need for using any covering based arguments and therefore would lead to tight (at least up to \(\operatorname{poly}\log(T/\sigma)\)) rates. 
We remark that exact characterizations of the minmax regret with log-loss (often referred to as the minmax redundancy in the information theory literature) in the no-context (adversarial) case are most often obtained by studying the so-called _stochastic complexity_ of the class \(\mathcal{F}\)[Rissanen, 1996]. This can be extended to (worst-case) transductive learning with contexts \(x_{1},\ldots,x_{T}\); in this case the minmax optimal regret for a fixed horizon is achieved by the normalized maximum likelihood (NML) probability assignment [Shtar'kov, 1987], and can be expressed as \[\underline{\mathcal{R}}_{T}^{T}(\mathcal{F})=\max_{x_{1:T}}\log\left(\sum_{y_{ 1:T}\in\{0,1\}^{T}}\max_{f\in\mathcal{F}}\prod_{t=1}^{T}p_{f}(y_{t}|x_{t}) \right).\] This expression has been evaluated previously for online logistic regression [Jacquet et al., 2021] and more general hypothesis classes [Wu et al., 2022b]. It is an intriguing question to understand what properties of \(\mathcal{F}\) the stochastic complexity depends on, given that the above examples illustrate that covering numbers do not capture it. Our reduction provides a technique to use such fine-grained understanding of the regret to directly lift the bounds to the more general smoothed adversary setting.

## 4 Oracle-Efficient Smoothed Sequential Probability Assignment

In the previous section, we took a purely statistical perspective on the minimax value of the sequential probability assignment problem for smoothed adversaries. In this section, we focus on an algorithmic perspective and design an algorithm that can be implemented efficiently using calls to an MLE oracle. We will focus on the setting where the base measure is the uniform measure on the input space \(\mathcal{X}\) and the label space is \(\mathcal{Y}=\{0,1\}\). In this setting, we are given access to an oracle OPT which, given a data set \(S=\{x_{i},y_{i}\}_{i=1}^{m}\), outputs a hypothesis that minimizes the loss on \(S\).
That is, \[\mathrm{OPT}\left(S\right)=\operatorname*{argmin}_{h\in\mathcal{F}}\frac{1}{m }\sum_{i=1}^{m}\ell\left(h,(x_{i},y_{i})\right).\] In the context of the logarithmic loss, this corresponds to maximum likelihood estimation. Most of the analysis holds for a general loss function \(\ell\) (with the regret scaling with appropriate bounds on the values and derivatives of the loss), but for clarity one can think of \(\ell\) as the log-loss. In particular, Algorithm 1 is written for the log-loss. The main framework we work in is the follow-the-perturbed-leader (FTPL) framework. Here, our algorithm uses the oracle on a data set consisting of the historical samples and a set of hallucinated samples. The hallucinated samples are intended to "stabilize" the predictions of the algorithm. This gives us a probability assignment \(\underline{\mathscr{Q}}^{\mathrm{FTPL}}=\{q^{\mathrm{FTPL}}(\cdot|x_{1:t},y _{1:t-1})\}_{t=1}^{T}\). In order to state the regret bound, we need the following notions. For any class \(\mathcal{F}\), define the truncated class \(\mathcal{F}_{\alpha}\) as \(\mathcal{F}_{\alpha}=\left\{f_{\alpha}:f_{\alpha}(x)=\frac{f(x)+\alpha}{1+2 \alpha}\text{ where }f\in\mathcal{F}\right\}.\) We will also need the notion of Rademacher complexity \(\mathrm{Rad}\left(\mathcal{F},T\right)=\sup_{X\subset\mathcal{X}:|X|=T} \mathbb{E}_{\epsilon}\left[\sup_{f\in\mathcal{F}}\frac{1}{T}\sum_{x\in X} \epsilon_{x}f\left(x\right)\right].\) **Theorem 4.1** (Main Regret Bound).: _For any hypothesis class \(\mathcal{F}\) and parameters \(n,\alpha\), the regret of Algorithm 1 for \(\sigma\)-smoothed adversaries is bounded as_ \[\mathcal{R}_{T}(\mathcal{F},\sigma,\underline{\mathscr{Q}}^{\mathrm{FTPL}}) \leq n\log\left(\frac{1}{\alpha}\right)+\alpha T+T\sqrt{\log\left(\frac{1}{ \alpha}\right)\cdot\frac{1}{\sigma n}}\] \[+T\cdot\inf_{m\leq n}\left\{\frac{1}{\alpha}\mathrm{Rad}\left(\mathcal{F}_{ \alpha},n/m\right)+\frac{n\left(1-\sigma\right)^{m}\log\left(1/\alpha\right)}{m
}+e^{-n/8}\right\}.\] We will instantiate this bound for the case when the class \(\mathcal{F}\) has bounded VC dimension. For such classes, it is known that the Rademacher complexity is bounded. We state this in the following corollary. **Corollary 4.1.1**.: _Let \(\mathcal{F}\) be a hypothesis class such that the Rademacher complexity is bounded as \(\mathrm{Rad}\left(\mathcal{F}_{\alpha},T\right)=cT^{-\omega}\); then we have \(\mathcal{R}_{T}(\mathcal{F},\sigma,\mathscr{Q}^{\mathrm{FTPL}})\leq T^{\frac{ 2}{2+\omega}}\sqrt{\frac{1}{\sigma}}\cdot\mathrm{poly}\log\left(\frac{Td}{ \sigma}\right)\)._ Note that, in particular, for VC classes \(\mathcal{R}_{T}\left(\mathcal{F}^{\mathrm{VC}},\sigma\right)\) scales as \(T^{4/5}\). Improving this to achieve the minimax rate discussed in Corollary 3.4.1 is an interesting open question. _Remark_.: A slightly improved regret scaling as \(T^{3/4}\) can be achieved by assuming access to an oracle that can optimize a mixed objective function involving the log-loss and a signed sum of the functions in the class. However, such oracles do not have a natural interpretation in terms of maximum likelihood estimation.

### Analysis

The main challenge in designing algorithms in the follow-the-perturbed-leader framework is designing the distribution of the hallucinated samples so as to balance the tradeoff between the "stability" of the algorithm, i.e. how little the algorithm changes its prediction from time step to time step, and the "perturbation", i.e. how much the addition of the hallucinated samples moves the algorithm's prediction away from the best hypothesis on the historical samples. This is captured by the following lemma. **Lemma 4.2** (Follow the Perturbed Leader bound).: _Let \(\ell\) be a convex loss function, and let \(\mathcal{F}\) be a hypothesis class.
Let \(\mathcal{D}_{t}\) denote the distribution of the adversary at time \(t\) and let \(\mathcal{Q}_{t}\) denote the distribution of the hypothesis \(h_{t}\) output by Algorithm 1. Then the regret of Algorithm 1 (where we use \(\widetilde{s}_{i}=(\widetilde{x}_{i},\widetilde{y}_{i})\) to denote a hallucinated data point)3 is bounded by_ Footnote 3: With some abuse of notation, we consider \(\mathcal{D}_{t}\) to be over \(\mathcal{X}\times\{0,1\}\). \[\sum_{t=1}^{T}\underbrace{\mathop{\mathbb{E}}_{s_{t}\sim\mathcal{D}_{t}}\left( \mathop{\mathbb{E}}_{h_{t}\sim\mathcal{Q}_{t}}[\ell(h_{t},s_{t})]-\mathop{ \mathbb{E}}_{h_{t+1}\sim\mathcal{Q}_{t+1}}[\ell(h_{t+1},s_{t})]\right)}_{ \text{Stability}}+\underbrace{\mathbb{E}\left[\sum_{i=1}^{n}\left(\ell(h_{1}, \widetilde{s}_{i})-\ell(h^{*},\widetilde{s}_{i})\right)\right]}_{\text{ Perturbation}},\] _where \(h^{*}=\operatorname{argmin}_{h\in\mathcal{F}_{\alpha}}\sum_{t=1}^{T}\ell(h,s_ {t})\)._ We provide a proof in Appendix E for completeness. Given this decomposition of the regret, we need to handle both terms carefully. To appreciate the tradeoff, note that as we increase the number of hallucinated examples, the perturbation term generally increases while the stability term generally decreases. First, let us focus on the stability term, which is the harder of the two. The main tool is a generalization of the decomposition of the stability term introduced in Haghtalab et al. (2022) that applies even when the losses are unbounded, as is the case with the log-loss. The idea is to decompose the stability term into the distance between the distribution of the average prediction at the next time step and the distribution at the current time step, as captured by the \(\chi^{2}\) distance, and a term that captures how different the predictions of the algorithm are when the sample is resampled from the same distribution. The proof can be found in Appendix F.
Footnote 4: For the particular use in our analysis, a simpler version of the lemma similar to Haghtalab et al. (2022) suffices, but we prove a general version since we believe it is useful for providing improved regret bounds for the problem. **Lemma 4.3** ( \(\chi^{2}\) + Generalization \(\Rightarrow\) Stability).: _Let \(\mathcal{Q}_{t}\) denote the learner's distribution over \(\mathcal{F}\) at round \(t\), \(\mathcal{D}_{t}\) be the adversary's distribution at time \(t\) (given the history \(s_{1},\cdots,s_{t-1}\)), \(s_{t}\sim\mathcal{D}_{t}\) be the realized adversarial instance at time \(t\), and \(s^{\prime}_{t}\) be an independent copy \(s^{\prime}_{t}\sim\mathcal{D}_{t}\). Let \(R^{(t+1)}\) refer to the randomness used by the algorithm in round \(t+1\). Then,_ \[\underset{s_{t}\sim\mathcal{D}_{t}}{\mathbb{E}}\left(\underset{h_ {t}\sim\mathcal{Q}_{t}}{\mathbb{E}}[\ell(h_{t},s_{t})]-\underset{h_{t+1}\sim \mathcal{Q}_{t+1}}{\mathbb{E}}[\ell(h_{t+1},s_{t})]\right)\] \[\qquad\leq\sqrt{\frac{1}{2}\chi^{2}(\underset{s_{t}\sim\mathcal{ D}_{t}}{\mathbb{E}}[\mathcal{Q}_{t+1}],\mathcal{Q}_{t})}\cdot\log\left(\frac{1}{ \alpha}\right)+\underset{s_{t},s^{\prime}_{t}\sim\mathcal{D}_{t};R^{(t+1)}}{ \mathbb{E}}[\ell(h_{t+1},s^{\prime}_{t})-\ell(h_{t+1},s_{t})].\] Given this lemma, we move on to bounding the \(\chi^{2}\) divergence between the distribution of the average prediction at the next time step and the distribution at the current time step. This is done using the Ingster method for bounding the divergence of mixtures. We include a proof in Appendix G for completeness. **Lemma 4.4** (Bound on \(\chi^{2}\)).: \(\chi^{2}\left(\mathbb{E}_{s_{t}\sim\mathcal{D}_{t}}[\mathcal{Q}_{t+1}], \mathcal{Q}_{t}\right)\leq\frac{2}{\sigma n}\)_._ Next, we move on to the second term in Lemma 4.3.
Note that this term involves the difference between the loss of the hypothesis output at time \(t+1\) evaluated on two independent points \(s_{t}\) and \(s^{\prime}_{t}\) drawn from \(\mathcal{D}_{t}\). The main idea for bounding it is to use a stronger version of the coupling lemma (Theorem 2.1) which allows us to extract subsequences of points sampled according to smooth distributions from iid samples from the base measure. This allows us to relate the required generalization bound to the Rademacher complexity of the class composed with the loss. Using the truncation and the contraction principle, we get the desired bound. The proof can be found in Appendix H. **Lemma 4.5** (Generalization).: _Let \(h_{t+1}\) denote the hypothesis output by Algorithm 1 at time \(t+1\). Then, for any \(m\leq n\), we have_ \[\operatorname*{\mathbb{E}}_{s_{t},s^{\prime}_{t}\sim\mathcal{D}_{t};R^{(t+1)}} \left[\ell(h_{t+1},s^{\prime}_{t})-\ell(h_{t+1},s_{t})\right]\leq\frac{1}{ \alpha}\mathrm{Rad}\left(\mathcal{F}_{\alpha},n/m\right)+\frac{n\left(1- \sigma\right)^{m}\log\left(1/\alpha\right)}{m}+e^{-n/8}.\] The final term to bound is the perturbation term. To bound it, note that we set the truncation parameter \(\alpha\) so that the loss of the predictions made by our algorithm is bounded; consequently, the perturbation term is bounded by \(n\log\frac{1}{\alpha}\), see Appendix I. Theorem 4.1 follows by combining the above results.

## 5 Conclusions and Open Problems

In this paper, we initiated the study of sequential probability assignment with smoothed adversaries. We characterize the minimax regret in terms of the minimax regret for transductive learning and use this to provide tight regret bounds, e.g., for VC classes. Furthermore, we initiate the study of oracle efficiency in this setting and show that sublinear regret can be achieved for general classes. Our work motivates several directions for future work.
An interesting direction is whether the optimal \(O(\log(T))\) regret is achievable for some classes, such as VC classes, using oracle-efficient algorithms. More generally, are there computational barriers to obtaining fast rates in prediction with log-loss in this setting? ## 6 Acknowledgments This work was supported in part by the National Science Foundation under grant CCF-2145898, a C3.AI Digital Transformation Institute grant, and Berkeley AI Research Commons grants. This work was partially done while authors were visitors at the Simons Institute for the Theory of Computing.
---

arXiv: 2302.13976

Title: A Counterexample to the Lévy Flight Foraging Hypothesis in the Narrow Capture Framework

Abstract: The Lévy flight foraging hypothesis asserts that biological organisms have evolved to employ (truncated) Lévy flight searches due to such strategies being more efficient than those based on Brownian motion. However, we provide here a concrete two-dimensional counterexample in which Brownian search is more efficient. In fact, we show that the efficiency of Lévy searches worsens the farther the Lévy flight tail index deviates from the Brownian limit. Our counterexample is based on the framework of the classic narrow capture problem in which a random search is performed for a small target within a confined search domain. Our results are obtained via three avenues: Monte Carlo simulations of the discrete search processes, finite difference solutions and a matched asymptotic analysis of the elliptic (pseudo)-differential equations of the corresponding continuum limits.

Authors: J. C. Tzou, Leo Tzou

Published: 2023-02-23T10:40:36Z

Link: http://arxiv.org/abs/2302.13976v3
# Challenging the Levy flight foraging hypothesis - a joint Monte Carlo and numerical PDE approach ###### Abstract. For a Levy process on the flat torus \(\mathbb{T}^{2}\) with power law jump length distribution \(\sim|x|^{-2-2\alpha}\) for \(0<\alpha<1\), Monte Carlo and finite difference methods for inverting the fractional Laplacian are employed to confirm recently obtained leading order analytic results for the mean stopping time \(u_{\epsilon}\) to a circular target of radius \(0<\epsilon\ll 1\). The Monte Carlo simulations of the Levy process rely on a rejection sampling algorithm to sample from a power law distribution, while the finite difference method numerically solves the nonlocal exterior problem \((-\Delta)^{\alpha}u_{\epsilon}=-1\) with \(u_{\epsilon}=0\) inside the target. Our results confirm that the mean stopping time indeed scales as \(O(\epsilon^{2\alpha-2})\), in stark contrast to the well-known \(O(|\log\epsilon|)\) scaling of the Brownian narrow escape time. For a sufficiently small target size, this difference in scaling implies that a Levy search strategy may in fact be (significantly) slower than a Brownian search strategy. 1 Footnote 1: School of Mathematical and Physical Sciences, Macquarie University, Sydney, NSW, Australia; [email protected]. 2 Footnote 2: Korteweg-de Vries Institute for Mathematics, University of Amsterdam, Amsterdam, Netherlands; [email protected]. **Keywords:** narrow capture, fractional Laplacian, exterior problem, Monte Carlo simulation ## 1. Introduction It is a widely held belief that random search algorithms using Levy flights can find a target faster than using Brownian motions [26, 30]. This so called "Levy flight foraging hypothesis", which forms the basis of many biological models [29, 5] as well as numerical search algorithms [30, 15, 33, 32, 13], is now facing challenge on multiple fronts [22, 17], leading to a dialogue taking place on Phys. Rev. Lett. [6, 18]. 
Recent analytic calculation of the mean first passage time (MFPT) asymptotic [7, 21] for small stationary targets taking place on a broad scope of geometric settings (e.g. sphere and flat torus) were the first to give a rigorous mathematical account to this debate. The purpose of this article is to provide a numerical realization of [7, 21] in the special case of the 2 dimensional torus via two different computational approaches: Monte Carlo simulation in Section 2 and solving a (pseudo)-differential equation in Section 3. Let us recall the setting of [7] in the special case of the 2D flat torus \(\mathbb{T}^{2}\): Let \((X_{t})_{t\geq 0}\) be an isotropic Levy process (see [2]) defined on the flat torus \(\mathbb{T}^{2}\) with Levy measure on each tangent space \(T_{p}\mathbb{T}^{2}\) given by the power law \(d\nu(v)=C(n,\alpha)|v|^{-n-2\alpha}dv\) where \(C(n,\alpha):=\frac{4^{\alpha}\Gamma(1+\alpha)}{\pi|\Gamma(-\alpha)|}\). Here we take \(\alpha\in(0,1)\) so that the "inverse square law" studied in [17] corresponds to \(\alpha=1/2\) in our setting. For \(p_{0}\in\mathbb{T}^{2}\) and \(\epsilon>0\) we define the expected first passage time \(u_{\epsilon}(p)\) for processes starting at each point \(p\in\mathbb{T}^{2}\) by \[u_{\epsilon}(p):=\mathbb{E}(\mathcal{T}_{\epsilon}(X_{\cdot})\mid X_{0}=p)\] where \(\mathcal{T}_{\epsilon}(X_{\cdot}):=\inf\{t\geq 0\mid X_{t}\in\overline{B_{ \epsilon}(p_{0})}\}\) and \(B_{\epsilon}(p_{0})\) is the ball of radius \(\epsilon>0\) centred at \(p_{0}\). 
We have the following asymptotic expansion for \(u_{\epsilon}(p)\). **Theorem 1.1** (Thm 1.10 [7] with correct constant).: _At every point \(p\neq p_{0}\) we have the following expansion as \(\epsilon\to 0\):_ \[u_{\epsilon}(p)\sim\epsilon^{2\alpha-2}\frac{\Gamma(1-\alpha)(1-\alpha)}{4^{ \alpha}|\Gamma(\alpha)|\sin((1-\alpha)\pi)}\] Similarly, if \((Y_{t})_{t\geq 0}\) is the Brownian motion with \(-\Delta\) as its infinitesimal generator (here we take the convention that \(\Delta\) has non-negative spectrum), we can define the expected first passage time \(v_{\epsilon}(p)\) analogously. In this case we have **Theorem 1.2** (Thm 1.1 [21]).: _At all \(p\neq p_{0}\) we have the expansion as \(\epsilon\to 0\):_ \[v_{\epsilon}(p)\sim-\frac{1}{2\pi}\log\epsilon.\] We will numerically verify the expansion appearing in Theorem 1.1 for \(\alpha=1/2\) and do the same for Theorem 1.2, which is the Brownian motion case. This will confirm the comparison between [7] and [21], showing that for small stationary targets, Brownian motion search is the faster strategy.

## 2. Monte Carlo Simulation For Levy Flight on \(\mathbb{T}^{2}\)

### Discrete Process on Lattice

We briefly describe the approximate discrete process \((X_{t}^{disc})\) which we will use to perform Monte Carlo simulations verifying the theoretical result of Theorem 1.1. Following the idea of [28], for \(N\in\mathbb{N}\) large, we set \(h=1/N\). We partition \(\mathbb{T}^{2}\) into a lattice grid which is our discrete state space where the process \((X_{t}^{disc})\) will take its values: \[h\left(\mathbb{Z}^{2}/N\right):=\{(hn,hm)\ \mathrm{mod}1\mid m,n\in\mathbb{Z}\}\] Let \(\mathcal{K}(k)\) be a probability mass function on \(\mathbb{Z}^{2}\) defined by \(\mathcal{K}(k)=C_{\alpha}|k|^{-2-2\alpha}\) for \(k\neq 0\) and set \(\mathcal{K}(0)=0\), where \(C_{\alpha}\) is chosen such that \[\sum_{k\in\mathbb{Z}^{2},\ k\neq 0}\mathcal{K}(k)=1.
\tag{2.1}\] In an \(h\)-dependent unit of time \[\tau:=D_{\alpha}h^{2\alpha} \tag{2.2}\] (the constant \(D_{\alpha}\) is chosen later) the process \((X_{t}^{disc})_{t\in\tau\mathbb{N}_{0}}\) at time \(t+\tau\) is given by \(X_{t+\tau}^{disc}=\exp_{X_{t}^{disc}}(hk)\) where \(k\) is a \(\mathbb{Z}^{2}\)-valued random variable whose probability mass function is given by \(\mathcal{K}(\cdot)\). Recall that for any \(x\in\mathbb{T}^{2}\) the exponential map \(\exp_{x}:T_{x}\mathbb{T}^{2}\to\mathbb{T}^{2}\) is given by \(\exp_{x}(v)=\gamma_{x,v}(1)\) where \(\gamma_{x,v}(\cdot)\) is the geodesic starting at \(x\) with initial velocity \(v\in T_{x}\mathbb{T}^{2}\). This simplifies in the case of our lattice on \(\mathbb{T}^{2}\) as follows: if \(x\in h\left(\mathbb{Z}^{2}/N\right)\), \[\exp_{x}(hk)=x+hk\ \mathrm{mod}\ 1\in h\left(\mathbb{Z}^{2}/N\right).\] Observe that due to homogeneity we have that \[\frac{\mathcal{K}(k)}{\tau}=h^{2}D_{\alpha}^{-1}\mathcal{K}(hk)\,. \tag{2.3}\] We would like that in certain limits the process becomes one whose infinitesimal generator is the fractional Laplacian \(-(\Delta)^{\alpha}\) so that we can compare the expected passage time of this process to its Brownian counterpart described by the regular Laplacian \(-\Delta\). Motivated by this, we set \[D_{\alpha}:=C_{\alpha}\frac{\pi|\Gamma(-\alpha)|}{4^{\alpha}\Gamma(1+\alpha)}\,. \tag{2.4}\] Let \(B_{\epsilon}:=\{x\in\mathbb{T}^{2}\mid\mathrm{dist}_{\mathbb{T}^{2}}(x,(1/2,1/ 2))\leq\epsilon\}\) and for a given process \((X_{t}^{disc})_{t\in\tau\mathbb{N}_{0}}\), set \[\mathcal{T}_{\epsilon}^{disc}(X_{\cdot}^{disc}):=\min(t\mid X_{t}^{disc}\in B_ {\epsilon}). \tag{2.5}\] Let \(U(x,t)=\mathbb{P}(X_{t}=x)\).
Following the calculation of [28] (but keeping track of constants) we get, using (2.3), \[\frac{U(x,t+\tau)-U(x,t)}{\tau}=h^{2}D_{\alpha}^{-1}\sum_{k\in\mathbb{Z}^{2}} \mathcal{K}(hk)\left(U(\Pi(x+hk),t)-U(x,t)\right).\] The right side is a Riemann sum approximation of an integral with partition size \(h>0\). So taking \(h\to 0\) (and therefore \(\tau\to 0\) by (2.2)) we get, heuristically, \[U_{t}(x,t)=D_{\alpha}^{-1}C_{\alpha}\int_{T_{x}\mathbb{T}^{2}}\frac{U(\exp_{ x}(y),t)-U(x,t)}{|y|^{2+2\alpha}}\,dT_{x}(y)\] By our choice of \(D_{\alpha}\) in (2.4) the above equation becomes exactly \[U_{t}=\mathcal{A}U\] where \(\mathcal{A}\) is the infinitesimal generator introduced by [2] with Levy measure \[\nu(A)=\frac{4^{\alpha}\Gamma(1+\alpha)}{\pi|\Gamma(-\alpha)|}\int_{A}\frac{dy }{|y|^{2+2\alpha}}.\] It was shown in Thm 1.4 of [7] that \(\mathcal{A}=-(\Delta)^{\alpha}\) on \(\mathbb{T}^{2}\). So the discrete process we described approximates a continuous Levy process whose infinitesimal generator is the fractional Laplacian on \(\mathbb{T}^{2}\).

### Sampling Algorithm

To implement the Monte Carlo simulation for the discrete process described in Subsection 2.1 we need a way to sample from the distribution \(\mathcal{K}\). We will use rejection sampling to do this. First observe that \[\mathcal{K}(k)\leq\frac{C_{\alpha}}{\tilde{C}_{\alpha}}\tilde{\mathcal{K}}(k) \tag{2.6}\] where \(\tilde{\mathcal{K}}(k)=\frac{\tilde{C}_{\alpha}}{|k|_{\infty}^{2+2\alpha}}\) and \(|\cdot|_{\infty}\) is the \(\ell^{\infty}\) norm on \(\mathbb{R}^{2}\). The estimate (2.6) is quite tight, so \(\tilde{\mathcal{K}}\) is a good proposal distribution for rejection sampling. It turns out to be easy to sample from the distribution \(\tilde{\mathcal{K}}\), which we now describe. The distribution \(\tilde{\mathcal{K}}(k)\) depends purely on the \(\ell^{\infty}\) norm of the random variable \(k\in\mathbb{Z}^{2}\).
As such we observe for each fixed \(\hat{k}\in\mathbb{Z}^{2}\), a random variable \(k\in\mathbb{Z}^{2}\) satisfies \[\mathbb{P}(k=\hat{k})=\mathbb{P}(|k|_{\infty}=|\hat{k}|_{\infty})\mathbb{P}( k=\hat{k}\ |\ |k|_{\infty}=|\hat{k}|_{\infty})\] where \(\mathbb{P}(k=\hat{k}\ |\ |k|_{\infty}=|\hat{k}|_{\infty})\) is uniformly distributed amongst the \(8|\hat{k}|_{\infty}\) points having \(\ell^{\infty}\) norm \(|\hat{k}|_{\infty}\). For each \(n\in\mathbb{N}\), using the explicit form of \(\tilde{\mathcal{K}}\) and the fact that there are \(8n\) points on \(\mathbb{Z}^{2}\) having \(\ell^{\infty}\) norm \(n\), we see that \[\mathbb{P}(|k|_{\infty}=n)=\tilde{C}_{\alpha}8n/n^{2+2\alpha}=\tilde{C}_{\alpha}8/n^{1 +2\alpha} \tag{2.7}\] In the special case when \(\alpha=1/2\) we then have that \[1=8\tilde{C}_{1/2}\sum_{n=1}^{\infty}\frac{1}{n^{2}}=\frac{4\pi^{2}}{3} \tilde{C}_{1/2}.\] That is, \(\tilde{C}_{1/2}=\frac{3}{4\pi^{2}}\). Inserting this back into (2.7) we get, for each fixed \(n\in\mathbb{N}\), \[\mathbb{P}(|k|_{\infty}=n)=\frac{6}{\pi^{2}}\frac{1}{n^{2}} \tag{2.8}\] for \(\alpha=1/2\). We can sample from this distribution using inversion sampling for discrete distributions.

_Rejection Sampling Algorithm for Distribution \(\mathcal{K}\) (\(\alpha=1/2\)):_

1. Sample \(n\in\{1,\ldots,10000\}\) from (2.8) using inversion sampling.
2. For this \(n\in\mathbb{N}\), sample \(k\in\mathbb{Z}^{2}\) uniformly from the \(8n\) points on \(\mathbb{Z}^{2}\) having \(\ell^{\infty}\) norm \(n\).
3. For this \(k\in\mathbb{Z}^{2}\), sample \(r\in\left(0,\frac{C_{1/2}}{\tilde{C}_{1/2}}\tilde{\mathcal{K}}(k)\right)\) uniformly. If \(r\leq\mathcal{K}(k)\), accept this \(k\in\mathbb{Z}^{2}\). If not, reject and repeat.

Note that because the estimate (2.6) is tight, this algorithm rarely rejects. In fact, numerical experiment shows that it accepts \(\approx 69\%\) of the time.
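In code, the algorithm simplifies further: both \(\mathcal{K}\) and the proposal \(\tilde{\mathcal{K}}\) are pure power laws, so the constants in step 3 cancel and the acceptance probability reduces to \((|k|_{\infty}/|k|_{2})^{3}\) for \(\alpha=1/2\). A sketch of one possible implementation (our own, with the truncation at \(n\leq 10000\) taken from step 1):

```python
import bisect
import math
import random

N_MAX = 10000  # truncation of the l^inf norm, as in step 1 of the algorithm

# Inversion-sampling table for P(|k|_inf = n) proportional to 1/n^2  (eq. (2.8))
_cdf, _acc = [], 0.0
for n in range(1, N_MAX + 1):
    _acc += 1.0 / n ** 2
    _cdf.append(_acc)
_cdf = [c / _acc for c in _cdf]

def sample_jump(rng=random):
    """One jump k in Z^2 (excluding the origin) from K(k) ~ |k|_2^(-3), alpha = 1/2.

    Proposal: draw |k|_inf = n by inversion sampling, then a uniform point among
    the 8n lattice points with that norm; accept with probability (n/|k|_2)^3,
    since the power-law constants cancel in the ratio K(k) / ((C/C~) K~(k)).
    """
    while True:
        n = bisect.bisect_left(_cdf, rng.random()) + 1
        a, b = n, rng.randrange(-n, n)       # 2n positions along one edge
        for _ in range(rng.randrange(4)):    # a quarter turn picks the edge:
            a, b = -b, a                     # 4 * 2n = 8n points, no repeats
        if rng.random() <= (n / math.hypot(a, b)) ** 3:
            return a, b

# One step of the lattice process of Subsection 2.1: x -> x + h*k mod 1
random.seed(1)
h = 0.001
x, k = (0.25, 0.25), sample_jump()
x = ((x[0] + h * k[0]) % 1.0, (x[1] + h * k[1]) % 1.0)
```

Repeating the step until \(x\) lands in \(B_{\epsilon}(0)\), accumulating \(\tau\) per step and averaging over runs, implements (2.9).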
### Finding Expected Stopping Time via Monte Carlo

We are now ready to implement the Monte Carlo simulation for computing \[u_{\epsilon}^{disc}(hn,hm):=\mathbb{E}(\mathcal{T}_{\epsilon}^{disc}(X_{ \cdot}^{disc})\ |\ X_{0}^{disc}=(hn,hm))\] for the discrete approximate process \((X_{t}^{disc})_{t\in\tau\mathbb{N}_{0}}\) outlined in Subsection 2.1 with initial position \((nh,mh)\). We choose \(\alpha=1/2\), \(N=1000\) (so that \(h=0.001\)). For all \(\epsilon>0\) let the target be \(B_{\epsilon}(0):=\{x\in\mathbb{T}^{2}\ |\ \mathrm{dist}_{\mathbb{T}^{2}}(x,(0,0))\leq\epsilon\}\).

#### Monte Carlo Simulation For Expected Stopping Time

Set \(T=0\) and \(x=(hn,hm)\). Repeat the following until \(x\in B_{\epsilon}(0)\):

1. Sample \(k\in\mathbb{Z}^{2}\) using the _Rejection Sampling Algorithm for Distribution \(\mathcal{K}\)_.
2. \(x=\Pi(x+hk)\), where \(\Pi:\mathbb{R}^{2}\to\mathbb{T}^{2}\) is the projection from the universal cover.
3. \(T=T+\tau\), where \(\tau\) is as in (2.2).

Each run of the above generates a stopping time \(T\). After taking \(K\) runs and generating stopping times \(T_{1},T_{2},\ldots,T_{K}\) we calculate \[u_{\epsilon}^{disc}(hn,hm)\approx\frac{T_{1}+T_{2}+\cdots+T_{K}}{K} \tag{2.9}\] for large \(K\in\mathbb{N}\). In §4, we compare this Monte Carlo result against the leading order result of Theorem 1.1 as well as those computed numerically by solving the elliptic (pseudo)-differential equation for the mean stopping time. We discuss a simple finite difference method for obtaining the latter in the next section.

## 3. Numerically solving the Elliptic (Pseudo)-Differential Equation

Consider \((X_{t})_{t\geq 0}\), the discrete process in §2.1, and let \(u_{\epsilon}(p):=\mathbb{E}(\mathcal{T}_{\epsilon}\mid X_{0}=p)\) denote the mean stopping time of this process.
Since this discrete process approximates a continuous Levy process whose infinitesimal generator is the fractional Laplacian \(\mathcal{A}\), we have by Proposition 4.1 of [7] that \(u_{\epsilon}(p)\) satisfies the integral equation \[\mathcal{A}u_{\epsilon}=-1\ \text{on}\ \mathbb{T}^{2}\setminus\overline{B_{ \epsilon}(p_{0})},\ \ u_{\epsilon}=0\ \text{on}\ B_{\epsilon}(p_{0}) \tag{3.1}\] where \(\mathcal{A}\) is the operator \[\mathcal{A}u(p)=\frac{4^{\alpha}\Gamma(1+\alpha)}{\pi|\Gamma(-\alpha)|}\left( p.v.\int_{T_{p}\mathbb{T}^{2}}\frac{u\left((p+v)/\mathbb{Z}^{2}\right)-u(p)}{|v|^{2+2 \alpha}}dT_{p}(v)\right). \tag{3.2}\] In (3.2) we identify each vector \(v\in T_{p}\mathbb{T}^{2}\) with an element of \(\mathbb{R}^{2}\) by using the canonical coordinate system on \(\mathbb{T}^{2}=\mathbb{R}^{2}/\mathbb{Z}^{2}\). We note that, by Proposition 5.2 of [7], the solution of (3.1) exists and is unique. Similarly, it was shown in Appendix A of [21] that \(v_{\epsilon}\) satisfies the elliptic boundary value problem \[\Delta v_{\epsilon}=-1\ \text{on}\ \mathbb{T}^{2}\setminus\overline{B_{ \epsilon}(p_{0})},\ \ v_{\epsilon}=0\ \text{on}\ \partial B_{\epsilon}(p_{0}) \tag{3.3}\] While (3.3) is a standard elliptic boundary value problem, equation (3.1) is a pseudodifferential equation involving a nonlocal pseudodifferential operator \(\mathcal{A}\) with exterior conditions on all of \(B_{\epsilon}(p_{0})\).

### Lifting to the Universal Cover

To numerically implement the operator \(\mathcal{A}\) defined in (3.2) we need a way to find the lengths of all geodesics joining two points on \(\mathbb{T}^{2}\). For the purpose of future work we propose a method which can be generalized to surfaces of any genus. To this end, let \(\Pi:\mathbb{R}^{2}\to\mathbb{R}^{2}/\mathbb{Z}^{2}\cong\mathbb{T}^{2}\) be the covering map and let \(\text{Deck}(\Pi)\cong\mathbb{Z}^{2}\) be the group of deck transformations acting on \(\mathbb{R}^{2}\).
Observe that \(\Pi\) is a local isometry between the canonical metric on \(\mathbb{R}^{2}\) and the flat torus \(\mathbb{T}^{2}\). Denoting \(\ell(\gamma)\) to be the length of a curve segment on the flat torus \(\mathbb{T}^{2}\), we then have that for any two points \(p_{1},p_{2}\in\mathbb{T}^{2}\) with lifts \(P_{j}\in\Pi^{-1}(p_{j})\), \[\{\ell(\gamma_{p_{1},p_{2}})\mid\gamma_{p_{1},p_{2}}\ \text{geodesic segment with end points}\ p_{1},p_{2}\}=\{|P_{1}-\alpha\cdot P_{2}|\mid\alpha\in\text{Deck}(\Pi)\} \tag{3.4}\] where the group action \(\alpha\cdot\) can be represented by \(\alpha\cdot P=P+\alpha\). In our numerical implementation we will approximate the infinite group \(\text{Deck}(\Pi)\) by the finite subset \[\text{Deck}(\Pi)\approx\text{Deck}_{l}(\Pi):=\{\alpha\in\text{Deck}(\Pi)\mid|\alpha|_{\infty}\leq l\} \tag{3.5}\] for some large \(l\in\mathbb{N}\).

### Numerical implementation

We now briefly outline a straightforward finite difference scheme for numerically approximating the solution to (3.1) on the periodic cell \((x_{1},x_{2})\in\mathbb{T}^{2}\) with the target of radius \(\epsilon\) centered at the point \(\mathbf{p}_{0}=(1/2,1/2)\). We begin by dividing \(\mathbb{T}^{2}\) into a grid of \((N+1)\times(N+1)\) lattice points with uniform spacing \(h\times h\), where \(h\equiv 1/N\). By periodicity, the computational grid is then the uniformly-spaced \(N\times N\) lattice on the domain \([0,1-h]\times[0,1-h]\). We now approximate (3.1) as a discrete \(N^{2}\times N^{2}\) system of linear equations \(A\mathbf{u}=\mathbf{b}\), where \(A\) is an \(N^{2}\times N^{2}\) matrix, and \(\mathbf{b}\) and \(\mathbf{u}\) are \(N^{2}\times 1\) vectors, with the latter containing the entries whose values approximate \(u_{\epsilon}\) at each of the \(N^{2}\) lattice points on the computational grid.
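For illustration (not the authors' code), the set of geodesic lengths in (3.4), truncated as in (3.5), can be evaluated directly by taking the lifts \(P_{1},P_{2}\) to be the representatives of \(p_{1},p_{2}\) in the unit cell:

```python
import math

def geodesic_lengths(p1, p2, l=2):
    """All lengths |P1 - (P2 + (m, n))| over the truncated deck group Deck_l(Pi),
    cf. (3.4)-(3.5); p1, p2 are representatives in the unit cell [0, 1)^2."""
    return sorted(
        math.hypot(p1[0] - (p2[0] + m), p1[1] - (p2[1] + n))
        for m in range(-l, l + 1)
        for n in range(-l, l + 1)
    )

def torus_distance(p1, p2, l=2):
    """dist_{T^2}(p1, p2): the shortest of the geodesic lengths."""
    return geodesic_lengths(p1, p2, l)[0]
```

Sorting is only a convenience; the finite difference scheme below needs the whole (truncated) list of lengths, while the shortest one recovers the torus distance.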
Let these lattice points be indexed by \(i\), \(i=1,\ldots,N^{2}\), and let \(\mathbf{u}_{i}\), the \(i\)-th entry of the vector, denote the approximation of \(u_{\epsilon}\) at the \(i\)-th lattice point with location \(\mathbf{x}_{i}\). Let us now populate the matrix \(A\) and the vector \(\mathbf{b}\). For all \(i\), \(i=1,\ldots,N^{2}\), such that \(|\mathbf{x}_{i}-\mathbf{p}_{0}|\leq\epsilon\), we have that the corresponding entry \(\mathbf{b}_{i}=0\) and \(A_{ii}=1\). Here, \(\mathbf{b}_{i}\) and \(A_{ii}\) denote the \(i\)-th and \((i,i)\)-th entry of \(\mathbf{b}\) and \(A\), respectively. This encodes the exterior condition \(u_{\epsilon}=0\) on \(B_{\epsilon}(p_{0})\) of (3.1). For all other integers \(i\) such that \(|\mathbf{x}_{i}-\mathbf{p}_{0}|>\epsilon\), we set \(\mathbf{b}_{i}=-1\), encoding the right-hand side of (3.1), and \[A_{ij}=C(2,\alpha)\sum\frac{h^{2}}{\ell(\gamma_{\mathbf{x}_{i},\mathbf{x}_{j}})^{3}}\,,\qquad j\neq i\,,\] where, since \(\alpha=1/2\), the exponent is \(2+2\alpha=3\), and the sum is taken over all possible geodesics \(\gamma_{\mathbf{x}_{i},\mathbf{x}_{j}}\) joining \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). Using (3.4) and (3.5) we see that \[A_{ij}\approx C(2,\alpha)\sum_{m=-l}^{l}\sum_{n=-l}^{l}\frac{h^{2}}{\left|\mathbf{x}_{i}-\mathbf{x}_{j}^{(m,n)}\right|^{3}}\,,\qquad j\neq i\,, \tag{3.6}\] where \(l\in\mathbb{N}\) is some positive integer large enough that the solution \(\mathbf{u}\) is deemed to be sufficiently insensitive to \(l\), while \(\mathbf{x}_{j}^{(m,n)}:=(m,n)\cdot\mathbf{x}_{j}=\mathbf{x}_{j}+(m,n)\) for \((m,n)\in\mathrm{Deck}_{l}(\Pi)\). Finally, on the diagonal entries of such rows (i.e., rows \(i\) such that \(|\mathbf{x}_{i}-\mathbf{p}_{0}|>\epsilon\)), we set \[A_{ii}=-\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N^{2}}A_{ij}\,. \tag{3.7}\] That is, for all \(i=1,\ldots,N^{2}\) such that \(|\mathbf{x}_{i}-\mathbf{p}_{0}|>\epsilon\), (3.7) guarantees that the \(i\)-th row of \(A\) sums to zero.
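A minimal sketch of this assembly, a plain NumPy illustration under the conventions above rather than the authors' code, is given below; note that on exterior rows the right-hand side entries are set to \(\mathbf{b}_{i}=-1\) so that \(A\mathbf{u}=\mathbf{b}\) encodes \(\mathcal{A}u_{\epsilon}=-1\):

```python
import math
import numpy as np

def assemble_system(N=20, eps=0.1, alpha=0.5, l=2, p0=(0.5, 0.5)):
    """Assemble A u = b discretizing (3.1) on the N x N periodic grid,
    following (3.6) for off-diagonal entries and (3.7) for the diagonal."""
    h = 1.0 / N
    # C(2, alpha): the constant in front of the operator (3.2)
    C = 4.0**alpha * math.gamma(1 + alpha) / (math.pi * abs(math.gamma(-alpha)))
    pts = [(h * i, h * j) for i in range(N) for j in range(N)]
    A = np.zeros((N * N, N * N))
    b = np.zeros(N * N)
    for i, xi in enumerate(pts):
        if math.hypot(xi[0] - p0[0], xi[1] - p0[1]) <= eps:
            A[i, i] = 1.0   # exterior condition: u = 0 on B_eps(p0)
            continue
        b[i] = -1.0         # right-hand side of A u = -1
        for j, xj in enumerate(pts):
            if j == i:
                continue
            # Eq. (3.6): sum over images of x_j under the truncated deck group
            A[i, j] = C * sum(
                h**2 / math.hypot(xi[0] - xj[0] - m, xi[1] - xj[1] - n) ** (2 + 2 * alpha)
                for m in range(-l, l + 1)
                for n in range(-l, l + 1)
            )
        A[i, i] = -A[i].sum()  # Eq. (3.7): each exterior row sums to zero
    return A, b
```

Calling `np.linalg.solve(A, b)` then yields grid values approximating \(u_{\epsilon}\).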
This stipulation is motivated by the fact that constant functions lie in the nullspace of the infinitesimal generator \(\mathcal{A}\). We emphasize, however, that the matrix \(A\) encodes the exterior problem (3.1), which includes the exterior condition \(u_{\epsilon}=0\) on \(B_{\epsilon}(p_{0})\), for which a solution exists and is unique, and not the infinitesimal generator \(\mathcal{A}\) itself. The matrix \(A\) is therefore expected to be nonsingular; the system \(A\mathbf{u}=\mathbf{b}\) can then be solved for \(\mathbf{u}\) using any standard method. We now make two remarks on the computational expense of obtaining mean stopping time results numerically and via Monte Carlo simulations. First, regarding the numerical scheme, the number of rows of \(A\) whose entries require the computation of (3.6) is \(\mathcal{O}(N^{2})\), where \(N^{2}\) is the total number of lattice points. For each such row \(i\), i.e., whenever \(|\mathbf{x}_{i}-\mathbf{p}_{0}|>\epsilon\), the distances from \(\mathbf{x}_{i}\) to all other \(N^{2}-1\) lattice points and their images in \((2l+1)^{2}\) periodic cells under the action of \(\mathrm{Deck}_{l}(\Pi)\) (see (3.5)) must be computed. The computational expense of the above numerical scheme is then \(\mathcal{O}(N^{4})\). While the number of required computations can be reduced by exploiting the symmetry of this particular problem along with the fact that \(A\) is symmetric, the \(\mathcal{O}(N^{4})\) scaling may quickly become prohibitively expensive as \(\epsilon\) is reduced, the reduction of which necessitates a corresponding increase in \(N\). The scaling of this scheme with \(\epsilon\) highlights the need for a more efficient numerical scheme and, perhaps even more so, the need for analytic results such as the leading order result of Theorem 1.1 computed in [7].
The leading order result, however, provides no information on how the mean stopping time \(u_{\epsilon}\) depends on the starting position of the Lévy flight. Capturing this higher order effect may require the development of a hybrid analytic-numerical method analogous to those developed by Ward et al. in [31] in 1993 for singularly perturbed elliptic problems, in order to formulate a smooth problem for correction terms that is independent of \(\epsilon\). This work is currently in progress. Secondly, the scaling with \(\epsilon\) of the variance of the stopping time necessitates a large number of Monte Carlo runs before a reasonable convergence in mean stopping times can be observed. It is well-known that the second moment \(w_{\epsilon}(p)\) of the stopping time for a Brownian process satisfies \(\Delta w_{\epsilon}=-2v_{\epsilon}\) on \(\mathbb{T}^{2}\setminus\overline{B_{\epsilon}(p_{0})}\) with \(w_{\epsilon}=0\) on \(\partial B_{\epsilon}(p_{0})\)[24, 19, 12, 16]. Here, \(v_{\epsilon}(p)\) is the first moment of the stopping time of the Brownian process satisfying (3.3). Analogously, for the second moment \(s_{\epsilon}(p)\) of the stopping time of the Lévy process on \(\mathbb{T}^{2}\), we have \[\mathcal{A}s_{\epsilon}=-2u_{\epsilon}\text{ on }\mathbb{T}^{2}\setminus\overline{B_{\epsilon}(p_{0})},\ \ s_{\epsilon}=0\text{ on }B_{\epsilon}(p_{0})\,, \tag{3.8}\] where in (3.8), \(u_{\epsilon}(p)\) is the first moment of the stopping time of the Lévy process satisfying (3.1). In contrast to the Brownian case where the second moment has been computed using matched asymptotic methods (see, e.g., [12, 19, 3, 16]), no analytic results are currently available for the solution to (3.8).
However, the same numerical method used to solve (3.1) may of course be applied to solve (3.8), and doing so suggests that the variance of the stopping time, given by \(V(p):=s_{\epsilon}(p)-[u_{\epsilon}(p)]^{2}\), may grow faster in \(1/\epsilon\) than does \(u_{\epsilon}\) itself. This is shown in Fig. 1, where, for \(\alpha=1/2\), we plot the spatial average of the variance versus \(\epsilon\) on a log-scale. For this \(\alpha\), we have that \(V(p)\sim c\epsilon^{\beta}\) for some \(O(1)\) constant \(c\), where \(\beta\sim-2.4\). We recall that \(u_{\epsilon}(p)\sim(4\epsilon)^{-1}\) when \(\alpha=1/2\). The faster growth of the variance with \(1/\epsilon\) for \(\alpha=1/2\) is in contrast to the Brownian case, where both the mean and the variance of the stopping time scale as \(O(|\log\epsilon|)\)[19]. It also implies that, for small \(\epsilon\), many iterations may be needed for Monte Carlo simulations to converge to a reliable value for the mean stopping time. For example, suppose \(\epsilon=0.03\) and we wish to compute the mean stopping time accurate to the first decimal place with \(95\%\) confidence. From Fig. 1, the variance of the distribution is approximately \(70\). By the central limit theorem, the number of Monte Carlo iterations needed is approximately \((1.96/0.05)^{2}\times 70\approx 1.1\times 10^{5}\). Such a number of iterations may require several days of computing time on a standard desktop computer. Halving \(\epsilon\) would require a more than fivefold increase in the number of samples to achieve the same confidence interval. The steep computational cost of using the Monte Carlo method to determine \(u_{\epsilon}(p)\) at even just a _single point_ again underscores the importance of analytic results for the solution of (3.1) in other geometries.

## 4. Results

In this section, we confirm the scaling with \(\epsilon\) of the mean stopping time given in Theorem 1.1 using both the Monte Carlo method of §2.1 and the numerical method of §3.2.
We also show a concrete example for which the mean stopping time of a Brownian process governed by infinitesimal generator \(-\Delta\) is indeed shorter than that of a Lévy process with infinitesimal generator \((-\Delta)^{\alpha}\), showing that a Lévy search strategy may not always be more efficient than a Brownian search. While we show only the \(\alpha=1/2\) case, similar results (not shown) were obtained for other values of \(\alpha\in(0,1)\). For \(\alpha=1/2\) with target of radius \(\epsilon=0.03\) centered at \((1/2,1/2)\), we plot in Fig. 2(a) the finite difference solution for \(u_{\epsilon}\) on \(\mathbb{T}^{2}\). In Figs. 2(b) and 2(c), we plot in red and black two cross-sections of \(u_{\epsilon}\) as indicated by the curves of the same color in Fig. 2(a). Notice that, as expected, the solution achieves a zero normal derivative on the boundary of the cell, which consists of points of symmetry on the universal cover. In addition, we plot the corresponding cross-sections for the Brownian mean stopping time, the solution to (3.3). Note that the cross-sections of \(u_{\epsilon}\) are plotted on the left vertical axis and those of \(v_{\epsilon}\) on the right. The spatial average of \(u_{\epsilon}\) is approximately \(8.57\) while that of \(v_{\epsilon}\) is approximately \(0.357\), indicating that an average search conducted via the Lévy process with \(\alpha=1/2\) for a small target will be significantly longer in comparison. Finally, in Fig. 3(a), we corroborate the scaling of the leading order behavior \(u_{\epsilon}\sim 1/(4\epsilon)\) when \(\alpha=1/2\) as stated in Theorem 1.1, using the Monte Carlo method of §2.3 and the numerical method of §3.2. On a log-log scale, we plot \(u_{\epsilon}(0.1,0.1)\) versus \(\epsilon\) computed using both methods, together with the theoretical leading order prediction of \(1/(4\epsilon)\). Excellent agreement is observed, especially when \(\epsilon\) is sufficiently small.
The numerical solution loses accuracy when \(\epsilon=0.02\) due to the inability to resolve the small target size while still being computationally feasible. For \(\epsilon\) values near \(0.1\), both the numerical and Monte Carlo results agree rather well; it is the leading order result that loses accuracy when \(\epsilon\) is relatively large. We note that the quantity \(u_{\epsilon}(0.1,0.1)\) is plotted instead of its spatial average due to the expense of computing the latter using Monte Carlo iterations. The constant leading order behavior of \(u_{\epsilon}(p)\), however, being the same for all \(p\in\mathbb{T}^{2}\), is still confirmed.

Figure 1. For \(\alpha=1/2\) with target of radius \(\epsilon=0.03\) centered at \((1/2,1/2)\), we show the scaling of the spatial average of the variance (black open circles), \(\int_{\mathbb{T}^{2}}V(p)dp\), obtained by first numerically solving (3.8) for \(s_{\epsilon}(p)\), then defining the variance as \(V(p):=s_{\epsilon}(p)-[u_{\epsilon}(p)]^{2}\). The red line, given by \(\epsilon^{-2.4}/63\), is a heuristic fit to the numerical data, used to illustrate that \(V(p)\) scales as a power law in \(\epsilon\) that is steeper than that of the first moment \(u_{\epsilon}(p)\), which scales as \(\epsilon^{-1}\). For comparison, both the mean and variance of the stopping time in the Brownian case scale as \(O(|\log\epsilon|)\).

In contrast to the \(1/(4\epsilon)\) scaling of the Lévy flight mean stopping time when \(\alpha=1/2\), we plot in Fig. 3b the quantity \(\exp(-2\pi v_{\epsilon}(0.1,0.1))\), where \(v_{\epsilon}(0.1,0.1)\) is the mean stopping time for a Brownian search starting from \((0.1,0.1)\). Since \(v_{\epsilon}\sim-(2\pi)^{-1}\log\epsilon\) to leading order, the quantity \(\exp(-2\pi v_{\epsilon}(0.1,0.1))\) ought to be a linear function of \(\epsilon\) whose slope can be easily computed using matched asymptotic and Green's function methods (see, e.g., [23]).
This linear behavior of \(\exp(-2\pi v_{\epsilon}(0.1,0.1))\) is shown in Fig. 3b. The values of \(v_{\epsilon}(0.1,0.1)\) in Fig. 3b range from \(\sim 0.21\) at \(\epsilon=0.1\) to \(\sim 0.464\) at \(\epsilon=0.02\), all of which are smaller than the smallest value of \(u_{\epsilon}(0.1,0.1)\) in Fig. 3a, showing that the Brownian search is more efficient on average. We note that we have forgone Monte Carlo simulations of the Brownian walk, as its relationship to the infinitesimal generator \(-\Delta\) is elementary.

## 5. Discussion

Through both Monte Carlo simulations and direct numerical solutions of the limiting nonlocal exterior problem (3.1), we have verified that the average search time of a Lévy process with infinitesimal generator \((-\Delta)^{\alpha}\) for \(0<\alpha<1\) on the flat torus \(\mathbb{T}^{2}\), with a small circular target of radius \(0<\epsilon\ll 1\) centered at \((1/2,1/2)\), scales as \(O(\epsilon^{2\alpha-2})\). While we have only shown results for the case of \(\alpha=1/2\), similar results (not shown) were observed for other \(\alpha\in(0,1)\). By comparing to average stopping times of the Brownian walk with infinitesimal generator \(-\Delta\) on the same domain, we have shown a concrete example of a search for a small target on \(\mathbb{T}^{2}\) for which the Brownian strategy is more efficient than the Lévy strategy. We now discuss some open problems, several of which are already in progress. While \(\mathbb{T}^{2}\) with a single target is a very simple domain on which to perform this comparison, it would be interesting to consider more complex domains. For example, a finite domain with reflecting boundaries, perhaps containing small reflecting obstacles, may present challenges from both a modeling and an analytic perspective.
For the former, it would require a model that correctly handles how a Lévy flight particle interacts with reflecting obstacles and boundaries, while for the latter, one would need to formulate boundary conditions that respect the model. A domain featuring non-constant curvature would present computational challenges: geodesics would need to be computed for both the Monte Carlo algorithm as well as the finite difference method for discretizing the corresponding infinitesimal generator (see [7]). This would add to the already significant computational cost. The sphere, on the other hand, has simple geodesics and may be a good candidate for a follow-up study, especially considering the interesting predictions made in [7] regarding the divergence of the mean stopping time from the mean when the starting point is the point antipodal to the center of the target (see Theorem 1.1 part (iii)). Another domain feature that we have not considered is the inclusion of more than one target. The multiple-target problem has been considered at length for the regular Laplacian \(-\Delta\) on flat 2- and 3-dimensional geometries using matched asymptotic methods (see, e.g., [12, 20, 23, 9, 11, 5, 4, 14]).

Figure 2. For \(\alpha=1/2\) with target of radius \(\epsilon=0.03\) centered at \((1/2,1/2)\), we show in (a) the numerical solution for \(u_{\epsilon}\) of (3.1) using the finite difference scheme of §3.2. The red and black lines indicate the contours plotted in red and black in (b) and (c), respectively. In (b) and (c), in blue, we plot the corresponding contours of the numerical solution \(v_{\epsilon}\) of (3.3) (not shown). Note that \(u_{\epsilon}\) (\(v_{\epsilon}\)) is plotted on the left (right) vertical axis. The spatial average of \(u_{\epsilon}\) is approximately 8.57 while that of \(v_{\epsilon}\) is approximately 0.357, indicating that an average search conducted via the Lévy process with \(\alpha=1/2\) for a small target will be significantly longer in comparison.
2306.01885
Multifunctionality in a Connectome-Based Reservoir Computer
Multifunctionality describes the capacity for a neural network to perform multiple mutually exclusive tasks without altering its network connections; and is an emerging area of interest in the reservoir computing machine learning paradigm. Multifunctionality has been observed in the brains of humans and other animals: particularly, in the lateral horn of the fruit fly. In this work, we transplant the connectome of the fruit fly lateral horn to a reservoir computer (RC), and investigate the extent to which this 'fruit fly RC' (FFRC) exhibits multifunctionality using the 'seeing double' problem as a benchmark test. We furthermore explore the dynamics of how this FFRC achieves multifunctionality while varying the network's spectral radius. Compared to the widely-used Erdös-Rényi Reservoir Computer (ERRC), we report that the FFRC exhibits a greater capacity for multifunctionality; is multifunctional across a broader hyperparameter range; and solves the seeing double problem far beyond the previously observed spectral radius limit, wherein the ERRC's dynamics become chaotic.
Jacob Morra, Andrew Flynn, Andreas Amann, Mark Daley
2023-06-02T19:37:38Z
http://arxiv.org/abs/2306.01885v1
# Multifunctionality in a Connectome-Based Reservoir Computer

###### Abstract

Multifunctionality describes the capacity for a neural network to perform multiple mutually exclusive tasks without altering its network connections; and is an emerging area of interest in the reservoir computing machine learning paradigm. Multifunctionality has been observed in the brains of humans and other animals: particularly, in the lateral horn of the fruit fly. In this work, we transplant the connectome of the fruit fly lateral horn to a reservoir computer (RC), and investigate the extent to which this 'fruit fly RC' (FFRC) exhibits multifunctionality using the 'seeing double' problem as a benchmark test. We furthermore explore the dynamics of how this FFRC achieves multifunctionality while varying the network's spectral radius. Compared to the widely-used Erdos-Renyi Reservoir Computer (ERRC), we report that the FFRC exhibits a greater capacity for multifunctionality; is multifunctional across a broader hyperparameter range; and solves the seeing double problem far beyond the previously observed spectral radius limit, wherein the ERRC's dynamics become chaotic.

fruit fly, reservoir computing, dynamical systems, chaos, connectome, brain-inspired machine learning

## I Introduction

In the pursuit of developing artificially intelligent systems, there is much to be gained from integrating further physiological features of biological neural networks (BNNs) into machine learning (ML) environments. Inspired by the ability of certain BNNs to exhibit 'multifunctionality', in this paper we investigate whether a corresponding connectome maintains multifunctionality when transplanted to the reservoir computing ML paradigm. _Multifunctionality_ describes the ability of a neural network to perform multiple tasks without changing any network connections [1].
This neurological phenomenon was first translated from biological to artificial neural networks (ANNs) using a 'reservoir computer' (RC) in [2]. An RC is a _dynamical system_ which can be realised as an ANN. What distinguishes the RC amongst other ML approaches is that network weights are trained at the _readout layer only_ in order to solve a given task. Multifunctional RCs, in particular, have the capacity to reconstruct a coexistence of chaotic attractors from: a multistable system [2]; two different systems [3]; and multiple copies of a chaotic attractor from the same system [4]; all without re-training the weights or re-tuning hyperparameters. In the life sciences, multifunctionality is well-documented. For example, in the land snail, the subesophageal ganglion complex has been shown to act as a controller for maintaining homeostasis of respiratory, genital, and cardiorenal functions [5]; in the turtle spinal cord, multifunctionality is present in the interneurons, which contribute to the forward swimming, flexion, and scratching reflexes [6] (analogous interneurons are also present in the mammalian central nervous system [7]). In the fruit fly brain, there is evidence supporting multifunctionality in multiple regions of interest (ROIs): in the wing, for example, where neurons of the ventral nerve cord are multifunctional for both song and flight [8]; and in the lateral horn, where sleep and odour aversion coexist [9], and where olfactory and visual stimuli are mediated simultaneously [10]. In this paper, we investigate whether a fruit fly connectome-based RC (FFRC) exhibits multifunctionality as observed in its biological counterpart neural network; specifically, by capturing the _approximate_ (see Sec. III-A2) network topology of the lateral horn ROI from the hemibrain dataset [11] and applying its RC analogue (as in [12, 13]) to the 'seeing double' problem.
The _seeing double_ benchmark test for exploring the limits of multifunctionality was first introduced in [3] and further explored in [14]; in this test, the RC is tasked with having to reconstruct a coexistence of two circular orbits (\(\mathcal{C}_{A}\) and \(\mathcal{C}_{B}\)) which rotate in opposing directions (see Sec. II-B). The rest of the paper is outlined as follows: In Sec. II we introduce the particular RC formulation of interest, outline how it is trained to become multifunctional, and describe the specifics of the seeing double problem. Thereafter in Sec. III, we describe each of the RC network topologies used, detail how they are constructed, and also propose four experiments which aim to shed some light on the differences between how each achieves multifunctionality (across varying hyperparameters of interest). Furthermore in Sec. IV we discuss the results of all experiments. Finally, in Sec. V, we summarize the major experimental findings, describe the project limitations and caveats, and highlight future directions for investigation. ## II Background ### _Reservoir Computing_ We use the continuous-time RC formulation devised in [15] which was shown in [2, 3, 4, 14] to be capable of achieving multifunctionality and is expressed in the following equation: \[\dot{\mathbf{r}}(t)=\gamma\left[-\mathbf{r}(t)+\tanh\left(\ \mathbf{M}\mathbf{r}(t)+ \sigma\mathbf{W}_{in}\mathbf{u}(t)\ \right)\right]. \tag{1}\] Here \(\mathbf{r}(t)\in\mathbb{R}^{N}\) is the state of the RC at a given time \(t\) and \(N\) is the number of neurons in the network. \(\gamma\) is the decay-rate parameter. \(\mathbf{M}\in\mathbb{R}^{N\times N}\) is the adjacency matrix describing the internal layer of the RC. 
\(\sigma\) is the input strength parameter and \(\mathbf{W}_{in}\in\mathbb{R}^{N\times D}\) is the input matrix (constructed as in [15]); multiplied together, \(\sigma\mathbf{W}_{in}\) represents the weight given to the \(D\)-dimensional input, \(\mathbf{u}(t)\in\mathbb{R}^{D}\), as it is projected into the RC. This input is taken from the particular attractor or time series that one would like to either reconstruct or make future predictions of. Solutions of Eq. (1) are computed using the \(4^{th}\) order Runge-Kutta method with time step \(\tau=0.01\). In our numerical experiments we consider two different variations of \(\mathbf{M}\). The first \(\mathbf{M}\) is constructed with an Erdos-Renyi topology where each of the non-zero elements is then replaced with a random number between \(-1\) and \(1\); the matrix is subsequently scaled to a specific spectral radius, \(\rho\). The second \(\mathbf{M}\) is constructed from the right lateral horn ROI from the hemibrain connectome [11]: further details on this construction are outlined in Sec. III-A2. The spectral radius, \(\rho\), for each \(\mathbf{M}\) is a key parameter involved in the training: in particular, \(\rho\) is associated with the RC's memory as it is used to tune the weight the RC places on its own internal dynamics. To train the RC in Eq. (1), the system is first driven by the input \(\mathbf{u}(t)\) from \(t=0\) to time \(t=t_{listen}\) in order to remove any dependency which \(\mathbf{r}(t)\) has on its initial condition \(\mathbf{r}(0)=\left(0,0,\ldots,0\right)^{T}=\mathbf{0}^{T}\). The training data is then generated by driving the RC with \(\mathbf{u}(t)\) from \(t=t_{listen}\) to \(t=t_{train}\). A suitable readout layer needs to be calculated in order to train the RC and replace the training input signal, \(\mathbf{u}(t)\), in Eq. (1) with a post-processing function, \(\hat{\psi}\left(\cdot\right)\).
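To make the driven dynamics of Eq. (1) and the stated RK4 integration concrete, here is a minimal NumPy sketch (an illustration, not the authors' code); for simplicity the input \(\mathbf{u}(t)\) is assumed to be held fixed over each step of size \(\tau=0.01\):

```python
import numpy as np

def rc_derivative(r, u, M, W_in, gamma=5.0, sigma=0.2):
    """Right-hand side of Eq. (1): r' = gamma * (-r + tanh(M r + sigma W_in u))."""
    return gamma * (-r + np.tanh(M @ r + sigma * (W_in @ u)))

def rk4_step(r, u, M, W_in, tau=0.01, **kw):
    """One 4th-order Runge-Kutta step of size tau, with u held fixed over the step."""
    k1 = rc_derivative(r, u, M, W_in, **kw)
    k2 = rc_derivative(r + 0.5 * tau * k1, u, M, W_in, **kw)
    k3 = rc_derivative(r + 0.5 * tau * k2, u, M, W_in, **kw)
    k4 = rc_derivative(r + tau * k3, u, M, W_in, **kw)
    return r + (tau / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

Repeatedly applying `rk4_step` while feeding in \(\mathbf{u}(t)\) generates the listening and training phases described above.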
If the training is successful then we say that \[\hat{\psi}\left(\mathbf{r}(t)\right)=\hat{\mathbf{u}}(t)\approx\mathbf{u}(t),\ \ \text{for}\ t>t_{train}, \tag{2}\] where \(\hat{\mathbf{u}}(t)\) denotes the predicted time-series. This layer 'closes the loop' of the nonautonomous system in Eq. (1) and provides a map from the \(N\)-dimensional state space of the RC, \(\mathbb{S}\), to the \(D\)-dimensional 'prediction state space', \(\mathbb{P}\). In this work \(\hat{\psi}\left(\mathbf{r}(t)\right)=\mathbf{W}_{out}\mathbf{q}(\mathbf{r}(t))\), where \(\mathbf{W}_{out}\) is the readout matrix, and we use \(\mathbf{q}(\mathbf{r}(t))\) to break the symmetry in Eq. (1) using the 'squaring technique' described by \[\mathbf{q}(\mathbf{r}(t))=\left(\begin{array}{cc}\mathbf{r}(t),&\mathbf{r}^{2}(t)\end{array}\right)^{T}. \tag{3}\] We calculate \(\mathbf{W}_{out}\) using the ridge regression approach, \[\mathbf{W}_{out}=\mathbf{Y}\mathbf{X}^{T}\left(\mathbf{X}\mathbf{X}^{T}+\beta\,\mathbf{I}\right)^{-1}, \tag{4}\] where \[\mathbf{X}\!\!=\!\!\left[\begin{pmatrix}\mathbf{r}(t_{listen})\\ \mathbf{r}^{2}(t_{listen})\end{pmatrix}\!\!\begin{pmatrix}\mathbf{r}(t_{listen}+\tau)\\ \mathbf{r}^{2}(t_{listen}+\tau)\end{pmatrix}\!\!\cdots\!\begin{pmatrix}\mathbf{r}(t_{train})\\ \mathbf{r}^{2}(t_{train})\end{pmatrix}\right] \tag{5}\] is the matrix of the RC's responses to the input data, while the input data itself is represented as \[\mathbf{Y}=\left[\begin{array}{cc}\mathbf{u}(t_{listen})&\mathbf{u}(t_{listen}+\tau)&\cdots&\mathbf{u}(t_{train})\end{array}\right]. \tag{6}\] In Eq. (4), \(\beta\) is the regularization parameter which is used to help prevent overfitting. \(\mathbf{I}\) is the identity matrix.
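The readout of Eqs. (3)-(5) amounts to a few lines of NumPy; the following is a sketch rather than the authors' code, with `R` collecting the states \(\mathbf{r}(t)\) column-wise from \(t_{listen}\) to \(t_{train}\) and `Y` the corresponding inputs:

```python
import numpy as np

def readout_features(R):
    """Eq. (3)/(5): stack the response matrix R (shape N x T) over its square."""
    return np.vstack([R, R**2])

def ridge_readout(R, Y, beta=0.01):
    """Eq. (4): W_out = Y X^T (X X^T + beta I)^{-1}."""
    X = readout_features(R)
    return Y @ X.T @ np.linalg.inv(X @ X.T + beta * np.eye(X.shape[0]))
```

`W_out @ readout_features(R)` then gives the predicted time series \(\hat{\mathbf{u}}(t)\) of Eq. (2) on the training window.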
We write the 'predicting RC' as the following autonomous dynamical system: \[\dot{\hat{\mathbf{r}}}(t)\!=\!\gamma\!\left[-\,\hat{\mathbf{r}}(t)\!+\!\tanh\!\left(\mathbf{M}\hat{\mathbf{r}}(t)\!+\!\sigma\mathbf{W}_{in}\mathbf{W}_{out}^{(1)}\mathbf{q}(\hat{\mathbf{r}}(t))\right)\right]\!, \tag{7}\] where \(\hat{\mathbf{r}}\) denotes the state of the predicting RC at time \(t\) and \(\hat{\mathbf{r}}(0)=\mathbf{r}(t_{train})\). For the case of multifunctionality, Eq. (1) is driven by two different input signals, \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\), that describe trajectories on two attractors \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\). Following the above steps, this produces two corresponding RC response data matrices, \(\mathbf{X}_{1}\) and \(\mathbf{X}_{2}\), along with the corresponding input data matrices, \(\mathbf{Y}_{1}\) and \(\mathbf{Y}_{2}\). These \(\mathbf{X}_{1}\) and \(\mathbf{X}_{2}\), and \(\mathbf{Y}_{1}\) and \(\mathbf{Y}_{2}\) are 'blended' together according to the _blending technique_ featured in [2]. The resulting blended matrices are used to solve for \(\mathbf{W}_{out}\) according to Eq. (4). The predicting RC in this case is described as in Eq. (7); if multifunctionality is achieved, once Eq. (7) is initialised with either \(\hat{\mathbf{r}}_{1}(0)\) or \(\hat{\mathbf{r}}_{2}(0)\) then the predicting RC will reconstruct the dynamics of either \(\mathcal{A}_{1}\) or \(\mathcal{A}_{2}\).

### _The 'Seeing Double' task_

The _seeing double_ task was first introduced in [3] as a benchmark task to compare how different RCs achieve multifunctionality. An illustration of the basic setup which is used in this numerical experiment is provided in Fig. 1. For this task we consider training the RC in Eq. (1) to reconstruct a coexistence of attractors which describe trajectories on two (partially or completely) overlapping circular orbits, \(\mathcal{C}_{A}\) and \(\mathcal{C}_{B}\), that rotate in opposite directions (see Fig. 1).
The input data to train the RC in Eq. (1) is generated by \[u(t)=\left(\begin{array}{c}x(t)\\ y(t)\end{array}\right)=\left(\begin{array}{c}s_{x}\ \cos(t)+x_{cen}\\ s_{y}\ \sin(t)+y_{cen}\end{array}\right)\!. \tag{8}\] In this paper, to train the RC in Eq. (1) to become multifunctional, \(s_{x}\) and \(s_{y}\) are assigned as \(s_{x}=s_{y}=5\) to create \(\mathcal{C}_{A}\) and thus the training input signal \(\mathbf{u}_{1}\); moreover, for \(\mathcal{C}_{B}\) we use \(s_{x}=-5\) and \(s_{y}=5\) to produce the corresponding \(\mathbf{u}_{2}\). In this case, the radius (\(s\)) of both \(\mathcal{C}_{A}\) and \(\mathcal{C}_{B}\) is equal to \(5\). Formally, the RC achieves multifunctionality in this instance if, after training on the input time series from Eq. (8), it reconstructs a coexistence of attractors in \(\mathbb{S}\) such that the dynamics in \(\mathbb{P}\) resembles trajectories on both \(\mathcal{C}_{A}\) and \(\mathcal{C}_{B}\).

Fig. 1: Illustration of the fundamentals of the seeing double problem.

In practice we determine whether the 'reconstructed attractors' in \(\mathbb{P}\), that is, \(\hat{\mathcal{C}}_{A}\) and \(\hat{\mathcal{C}}_{B}\), follow the correct directional arcs and satisfy a 'roundness' condition below \(0.25\); this threshold was determined empirically in [3, 14], and roundness is defined as the difference in radii between the largest and smallest circular trajectories which inscribe a particular circular orbit. While the seeing double problem may at first seem a relatively simple problem (i.e., in comparison to reconstructing chaotic attractors), it is the overlap between \(\mathcal{C}_{A}\) and \(\mathcal{C}_{B}\) that makes this task difficult for the RC to solve. For instance, when the RC approaches a junction where the circular trajectories intersect, it must use its _memory_ of the previous time steps in order to continue on its correct trajectory.
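The training signals of Eq. (8) and the roundness check can be sketched as follows (a minimal illustration, not the authors' code; the roundness computed here is the max-min spread of radii along a trajectory, used as a stand-in for the inscribing-circles definition):

```python
import numpy as np

def circle_signal(t, s_x, s_y, x_cen=0.0, y_cen=0.0):
    """Eq. (8): u(t) = (s_x cos t + x_cen, s_y sin t + y_cen)."""
    return np.array([s_x * np.cos(t) + x_cen, s_y * np.sin(t) + y_cen])

def seeing_double_inputs(t):
    """u1 on C_A (s_x = s_y = 5) and u2 on C_B (s_x = -5, s_y = 5);
    fully overlapping since x_cen = y_cen = 0."""
    return circle_signal(t, 5.0, 5.0), circle_signal(t, -5.0, 5.0)

def roundness(xy):
    """Spread of radii along a trajectory xy of shape (2, T); the
    multifunctionality criterion requires this to be below 0.25."""
    r = np.hypot(xy[0], xy[1])
    return float(r.max() - r.min())
```

Note that \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) share the same \(y\)-component and have opposite \(x\)-components, which is exactly what makes the two orbits counter-rotating copies of the same circle.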
In [3, 14], the critical role of \(\rho\) is revealed: in particular, it is found that when \(\mathcal{C}_{A}\) and \(\mathcal{C}_{B}\) are completely overlapping, i.e. when \(x_{cen}=y_{cen}=0\), then multifunctionality is achieved for a small range of relatively large values of \(\rho\). On the other hand, if \(\rho\) is too large, then multifunctionality is lost. To provide a comparison, in this paper we confine our study to this extreme scenario where \(x_{cen}=y_{cen}=0\).

## III Methods

### _Model Pipeline_

#### III-A1 The Erdos-Renyi Reservoir Computer (ERRC)

As mentioned in Sec. II-A, we use a weighted Erdos-Renyi topology with a sparsity of \(0.05\). Weights are drawn randomly from \([-1,1]\) to construct an adjacency matrix \(\textbf{M}_{ER}\) of size \(N=500\). We vary the spectral radius, \(\rho\), of the resulting \(\textbf{M}_{ER}\) from 0 to 2.0 for Experiments 1-2 and from 0 to 1.8 for Experiment 3, as outlined in Secs. III-B1 to III-B3. We also explore the ERRC's dynamics at larger \(\rho\) values (up to \(\rho=2.2\)) in Experiment 4.

#### III-A2 The Fruit Fly Reservoir Computer (FFRC)

The "Fruit Fly RC" (FFRC) is derived from the _hemibrain_[11]: Using Neuprint [16], we access the hemibrain API with a provided key, and run a set of Cypher queries on the Neo4j graph database to select all neurons in the right lateral horn ROI which are _connected_ to all others (i.e. in the same ROI). We define two neurons as being "connected" if they are synaptic partners and have a sufficiently-large number of shared synaptic sites; here we select a tolerance of 50 synapses in order to reduce the size and complexity of the model, i.e. to make the training pipeline more efficient. After querying the hemibrain to collect neuron connection data, we construct a NetworkX graph which retains all synaptic partners and their _weights_ (the number of synaptic sites); finally, from this graph we construct our adjacency matrix \(\textbf{M}_{FF}\) of size \(N=426\).
We interpolate the weight values into \([-1,1]\). Following a common approach (as in [17] and [18]), we also diagonalize the matrix. It is important to stress that \(\textbf{M}_{FF}\) is unchanging: it has a _fixed structure_, which is based on the hemibrain connectome data, whereas the ERRC by contrast is randomly initialised.

### _Experiments_

In each of the experiments listed below, we set \(t_{listen}=6T\) and \(t_{train}=15T\), where \(T\) is the period of rotation on each orbit \(\mathcal{C}\). We also assign \(\sigma=0.2\) and \(\beta=0.01\).

#### III-B1 Experiment 1 - Multifunctionality trials

We conduct 50 sets of 100 trials; in each set, the ERRC and FFRC are both independently evaluated on the seeing double problem for previously-found optimal hyperparameter values (\(\rho=1.4\), \(\gamma=5.0\)) - see [3]. \(\textbf{W}_{in}\) is randomly initialised on each simulation for both RC setups. While the FFRC, as previously mentioned, uses a fixed **M** structure, the corresponding \(\textbf{M}_{ER}\) for the ERRC is re-initialized for each trial. In a given set of trials, we count all instances where multifunctionality is achieved according to the criteria in Sec. II-B and as outlined in [3, 14].

#### III-B2 Experiment 2 - Varying \(\rho\) and \(\gamma\)

As previously observed in [3], depending on the choice of both \(\rho\) and \(\gamma\), these parameters can have a profound impact on the multifunctionality of RCs on the seeing double problem. Here, as in [3], we aim to determine the regions in the \((\rho,\gamma)\)-plane where multifunctionality is achieved for both the FFRC and ERRC. We conduct one set of 100 multifunctionality trials (as in Experiment 1) for each \((\rho,\gamma)\) combination with \(\rho,\gamma\in[0,2.0]\times[5,95]\).
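The interpolation of synaptic-site counts into \([-1,1]\) is not fully specified above; a plausible min-max sketch (our own assumption, not necessarily the authors' exact mapping) is:

```python
import numpy as np

def interpolate_weights(synapse_counts, lo=-1.0, hi=1.0):
    """Linearly map raw synaptic-site counts onto [lo, hi] (min-max rescaling).
    This mapping is an assumption for illustration."""
    w = np.asarray(synapse_counts, dtype=float)
    return lo + (hi - lo) * (w - w.min()) / (w.max() - w.min())

counts = np.array([50, 120, 300, 75])   # toy synaptic-site counts (>= the 50-synapse tolerance)
weights = interpolate_weights(counts)   # smallest count -> -1, largest -> +1
```

Any monotone rescaling would preserve the relative importance of strongly connected neuron pairs; the min-max form is simply the most common choice.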
#### III-B3 Experiment 3 - Comparing RC activations

As the RC's predicted time series is a trained linear combination of RC activation states, it is reasonable to analyze the activity of each neuron in the cases of multifunctionality and non-multifunctionality, for both RC models. Following this intuition, we compute the number of _unique local maxima_ - on the interval from \(t=t_{train}\) to \(t=27T\) - for each neuron (\(\hat{r}_{(i)}\)) in both RC setups. We then construct a _heat map_ (see Fig. 5) which shows the count for each neuron versus \(\rho\) for a multifunctional and a _non-multifunctional_ - i.e. in cases where only one orbit is reconstructed or neither - instance of the FFRC and ERRC models, respectively.

#### III-B4 Experiment 4 - Exploring seeing double dynamics

Finally, we aim to shed some light on the differences between how the FF and ER RCs solve the seeing double problem, which follows the bifurcation analysis presented in [14] - here we explore the changes in the dynamics of the FF and ER RCs in \(\mathbb{P}\) for changes in \(\rho\). More specifically, we track the evolution of predictions in the \([\rho,x,y]\)-space and illustrate how \(\hat{\mathcal{C}}_{A}\) and \(\hat{\mathcal{C}}_{B}\) come into existence in cases where both the FF and ER RCs achieve multifunctionality. The influence of 'untrained attractors' - attractors which exist in \(\mathbb{P}\) but were not present during the training (like in [2, 14]) - is also examined here.

## IV Results and Discussion

Note: we recommend the use of 'Adobe Reader' or 'Chrome PDF Viewer' to view the figures in this section.

#### IV-B1 Experiment 1

Comparing the instances where multifunctionality (MF) occurs in the FF and ER RCs on the seeing double problem across 50 sets of 100 trials (see Fig. 2), we observe that the FFRC achieves an average multifunctionality count of 6.46 out of 100; conversely, the ERRC scores 4.66 out of 100 on average.
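The Wilcoxon rank-sum comparison used to assess the difference between the two distributions of MF counts can be sketched with only the standard library; the counts here are hypothetical placeholders, not the data behind Fig. 2, and the normal approximation stands in for a library implementation.

```python
from math import erf, sqrt

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Ties receive average ranks; no tie correction (a simple sketch)."""
    values = list(a) + list(b)
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):                      # assign average ranks to ties
        j = i
        while j < len(order) and values[order[j]] == values[order[i]]:
            j += 1
        for k in range(i, j):                  # ranks i+1 .. j share their mean
            ranks[order[k]] = (i + j + 1) / 2.0
        i = j
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                        # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p

# Hypothetical per-set MF counts (illustrative only, not the Fig. 2 data).
ff_counts = [6, 7, 6, 8, 5, 7, 6, 7, 8, 6]
er_counts = [4, 5, 4, 3, 5, 4, 5, 4, 3, 5]
z, p = rank_sum_test(ff_counts, er_counts)     # p well below 0.05 here
```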
The distribution of FFRC scores is neither positively nor negatively skewed, whereas the ERRC distribution is negatively skewed. Differences observed between the distributions of multifunctionality instances are significant: here \(p=0.000035<0.05\) for the Wilcoxon rank-sum test.

#### IV-B2 Experiment 2

In Fig. 3 we provide multifunctionality counts (out of 100 trials, respectively) for the ERRC on the seeing double problem across the \([\gamma,\rho]\)-plane of \([5,95]\times[0,2.0]\). We report an approximate window of multifunctionality in the \([\gamma,\,\rho]\) range of \([5,35]\times[1.25,1.5]\). Maximum multifunctionality occurs at \([\gamma,\rho]=[15,1.5]\) - here we observe a count of 7 out of 100 trials. As previously observed in [3], multifunctional capacity falls as \(\rho\) continues to increase. At \(\rho=2.0\), for example, we observe no evidence of multifunctionality for any of the reported \(\gamma\) values. For the FFRC (see Fig. 4) we find the greatest occurrence of multifunctionality at \([\gamma,\rho]=[15,1.5]\) and also \([15,2.0]\), where the reported count is 10 instances out of 100 trials. The hyperparameter region where multifunctionality occurs is approximately \([\gamma,\rho]\in[5,75]\times[1.5,2.0]\), a much larger window than that found for the ERRC. Moreover, the frequency of multifunctionality across all sets of 100 trials is greater overall. Importantly, the FFRC is also capable of exhibiting multifunctionality at large \(\rho\) values (the ERRC is not), which is particularly interesting as this is where we observe - i.e. for \(\gamma=15\) - _maximum multifunctionality_.

#### IV-B3 Experiment 3

Looking at the activity profiles of all neurons \(\hat{r}_{(i)}\) in the ERRC and FFRC (Fig. 5), we first compare the population dynamics of RCs exhibiting multifunctionality (MF) and non-multifunctionality. For the FFRC activations, we observe that the number of unique local maxima is "high" (above 40) in more neurons during MF.
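The per-neuron unique local maxima count used in Experiment 3 can be sketched as follows; the rounding tolerance used to decide when two peak values count as "the same" is our own assumption.

```python
import numpy as np

def unique_local_maxima(x, decimals=6):
    """Count the distinct local-maximum values of a 1-D time series,
    rounding to `decimals` so numerically identical peaks merge."""
    x = np.asarray(x, dtype=float)
    interior = x[1:-1]
    peaks = interior[(interior > x[:-2]) & (interior > x[2:])]
    return len(np.unique(np.round(peaks, decimals)))

t = np.linspace(0.0, 4.0 * np.pi, 2001)
n_sine = unique_local_maxima(np.sin(t))   # two peaks over 4*pi, one shared value
```

A periodic activation trace repeats the same peak value, so its count stays low; a trace whose peaks keep changing height (e.g. quasi-periodic or chaotic activity) accumulates many distinct maxima, which is the "richness" being visualized in Fig. 5.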
We speculate - as in [19] - that a higher proportion of reservoir activation neurons with many unique local maxima would indicate a greater _richness_ of reservoir activation curves to draw a set of predictions from. Between the MF and non-MF ERRC heat maps, we observe a slightly larger population of neurons with "high" (above 60) unique local maxima; however, the differences are less pronounced. Between the ERRC and FFRC models, we see that more neurons are involved in the ERRC predictions overall in both MF and non-MF cases. Moreover, in the ERRC there are neurons with a higher magnitude of unique local maxima (roughly 70 in the ERRC versus 45 in the FFRC). This follows intuitively: a small subset of neurons in the fly brain act as "information highways for multisensory integration" [20]; conversely, randomly-weighted neurons have arbitrary importance, and thus we would expect that an effective output prediction would rely on a broad sampling of activations in order to match a ground truth signal. Finally, across all figures in Fig. 5, we note that increasing the spectral radius \(\rho\) also increases the proportion of neurons which have a high number of unique local maxima.

#### IV-B4 Experiment 4

We now explore the prediction dynamics of the ERRC and FFRC in the respective \(\mathbb{P}\) for increasing \(\rho\). For the ERRC (see Fig. 6(a)), like in [14], at small \(\rho\) values we find that four anti-symmetric fixed points exist which subsequently bifurcate into two distinct limit cycles (\(LC\)) at \(\rho\approx 0.55\), whose dynamics are shown above the bifurcation diagram for \(\rho=0.6\). \(LC_{1}\) can no longer be tracked for \(\rho>0.62\), while \(LC_{2}\) remains stable and exists up to \(\rho=1.65\). Coexisting with \(LC_{2}\) here is the reconstruction of \(\mathcal{C}_{B}\), which first appears as a torus at \(\rho=0.76\) and begins to more closely resemble \(\mathcal{C}_{B}\) from \(\rho=0.78\) to \(\rho=1.25\).
The state of the ERRC subsequently tends to \(LC_{3}\), which is born at \(\rho=0.91\); this is initially a torus before it becomes a limit cycle - as highlighted by the plots in \(\mathbb{P}\) at \(\rho=0.92,1.15\) (above the bifurcation diagram in Fig. 6(a)). \(LC_{3}\) can no longer be tracked for \(\rho>1.69\). At \(\rho=1.64\), \(\hat{\mathcal{C}}_{B}\) is reborn; however, for \(\rho>1.83\) and beyond, \(\hat{\mathcal{C}}_{B}\) becomes chaotic, where it remains indefinitely (or for as far as it can be tracked with reasonable accuracy). The ERRC is shown to exhibit _multifunctionality_ - wherein \(\hat{\mathcal{C}}_{A}\) is found to coexist with \(\hat{\mathcal{C}}_{B}\) - for the intervals \([1.17,1.25]\) and \([1.71,1.76]\). An additional limit cycle, \(LC_{4}\), also briefly appears here, for \(\rho=[1.69,1.71]\), and its dynamics for \(\rho=1.7\) are shown above the bifurcation diagram.

Fig. 2: A raincloud plot illustrating FF versus ER RC multifunctionality (MF) across 50 sets of 100 trials on the seeing double problem.

Fig. 3: ERRC counts of multifunctionality in the \([\gamma,\rho]\)-plane.

Fig. 4: FFRC counts of multifunctionality in the \([\gamma,\rho]\)-plane.

In Fig. 6(b) we illustrate the prediction dynamics of the FFRC on the seeing double problem. We find that, as in Fig. 6(a), at small \(\rho\) values, four anti-symmetric fixed points exist. However, as we continue to track the changes in these fixed points, the differences between how the FF and ER RCs solve the seeing double problem emerge. We find that for \(FP_{3}\) and \(FP_{4}\) there is a small region of hysteresis with an additional branch of stable fixed points - \(FP_{31}\) and \(FP_{41}\), respectively. For \(\rho>0.7\), \(FP_{31}\) and \(FP_{41}\) can no longer be tracked and the state of the RC tends to \(FP_{1}\) and \(FP_{2}\), respectively.
At \(\rho=0.73\) there is a bifurcation from these fixed points to a limit cycle, \(LC_{1}\) (potentially a SNIPER/homoclinic bifurcation based on the characteristics of \(LC_{1}\)). For \(\rho>0.84\), \(LC_{1}\) can no longer be tracked and the state of the FFRC tends to \(\hat{\mathcal{C}}_{A}\), which exists up to \(\rho=2.08\). By tracking \(\hat{\mathcal{C}}_{A}\) as \(\rho\) decreases, we find that it goes through several bouts of torus bifurcations, initially appearing as a period-3 limit cycle at \(\rho=0.75\), as highlighted by the plot above the bifurcation diagram in Fig. 6(b) for \(\rho=0.76\). We also show, for \(\rho=0.79\), that there exist two antisymmetric limit cycles \(LC_{2}\) and \(LC_{3}\) for \(\rho=[0.77,0.80]\). We see in Fig. 6(b) that the FFRC achieves _multifunctionality_ for \(\rho=[1.29,2.08]\); even though \(\hat{\mathcal{C}}_{B}\) comes into existence at \(\rho=1.27\), it is only properly reconstructed (according to the criteria in Sec. II-B) at \(\rho=1.29\), and coexists with \(\hat{\mathcal{C}}_{A}\) until \(\rho=2.08\). We also note that another limit cycle (\(LC_{4}\)) exists while the FFRC is multifunctional. It is suggested that additional limit cycles exist in \(\mathbb{P}\) as \(\rho\) continues to increase. Comparing the findings for the ERRC and FFRC, what is perhaps most interesting is that we observe clear evidence of multifunctionality for a much broader range of \(\rho\) values - from \(\rho=1.29\) to \(2.08\), totalling a multifunctional \(\rho\)-interval of \(0.79\), which is much larger than that of the ERRC (\(\approx 0.13\)). Moreover, compared to the ERRC - which becomes chaotic at large \(\rho\) values - the FFRC prediction dynamics persist (without succumbing to chaos) long past this \(\rho\) limit.

## V Conclusion

### _Summary_

In this paper we find that the FFRC outperforms the ERRC on the seeing double problem in **three ways**: 1.
A higher frequency of achieving multifunctionality for \(\rho=1.4\), \(\gamma=5\) (Experiment 1). 2. A broader window of \([\rho,\gamma]\)-space where multifunctionality is present, and a larger magnitude of multifunctionality overall (Experiment 2). 3. Prediction dynamics of the FFRC persist as non-chaotic, circular trajectories well beyond the observed \(\rho\) threshold in the ERRC (Experiment 4). These findings suggest that fruit fly brain structure - relative to an arbitrary, random topology - possesses a _greater capacity for multifunctionality_, and is _more robust_ to MF-related parameter fluctuations. Interestingly, the FFRC appears to mimic analogous abilities observed in its biological counterpart [8]. Fig. 5 suggests that the FFRC takes advantage of a smaller population of neurons with higher importance - i.e. for a given \(\rho\) value there are only a small number of neurons which 'fire' with a large number of unique local maxima. In comparison, we see here that for the ERRC there is a larger proportion of highly activated neurons for a given \(\rho\).

Fig. 5: Unique local maxima counts for each \(i^{th}\) neuron vs. \(\rho\) in a case where multifunctionality (MF) is and is not achieved for the FFRC in (a) and (b) and the ERRC in (c) and (d).

### _Limitations_

We acknowledge that our FFRC adjacency matrix, \(\mathbf{M}_{FF}\), is a _translation_ of its corresponding connectome ROI, which captures broad lateral horn connectivity, but does not include all synapses - i.e. due to applying a threshold (see Sec. III). As a structural translation only, the FFRC also fails to capture _functional_ aspects (e.g. the action of neurotransmitters). Detailed structural elements are also absent, such as neuron morphologies (we use point neurons). One could therefore argue that this structurally-inspired model is only providing a "taste" of the potential capacity of the fly brain for multifunctionality.
### _Future Work_

We will continue to analyse the nonlinear and chaotic dynamics of the FFRC and ERRC in order to further explore their limits of multifunctionality. We will also seek to determine (as in [3]) the impact of additional model factors - such as the input matrix designs (\(\mathbf{W}_{in}\)) - on multifunctionality. We aim to transplant other animal connectome-based networks, such as [21, 22], to RC setups to test whether these networks can also be exploited in a machine learning context.
2305.09751
Measuring ancient technological complexity and its cognitive implications using Petri nets
We implement a method from computer sciences to address a challenge in Paleolithic archaeology: how to infer cognition differences from material culture. Archaeological material culture is linked to cognition: more complex ancient technologies are assumed to have required complex cognition. We present an application of Petri net analysis to compare Neanderthal tar production technologies and tie the results to cognitive requirements. We applied three complexity metrics, each relying on their own unique definitions of complexity, to the modelled production sequences. Based on the results, we suggest that Neanderthal working memory requirements may have been similar to human preferences regarding working memory use today. This method also enables us to distinguish the high-order cognitive functions combining traits like planning, inhibitory control, and learnings that were likely required by different ancient technological processes. The Petri net approach can contribute to our understanding of technology and cognitive evolution as it can be used on different materials and technologies, across time and species.
Sebastian Fajardo, Paul R. B. Kozowyk, Geeske H. J. Langejans
2023-05-16T18:56:36Z
http://arxiv.org/abs/2305.09751v1
# Measuring ancient technological complexity and its cognitive implications using Petri nets

###### Abstract

We implement a method from computer sciences to address a challenge in Paleolithic archaeology: how to infer cognition differences from material culture. Archaeological material culture is linked to cognition: more complex ancient technologies are assumed to have required complex cognition. We present an application of Petri net analysis to compare Neanderthal tar production technologies and tie the results to cognitive requirements. We applied three complexity metrics, each relying on their own unique definitions of complexity, to the modelled production sequences. Based on the results, we suggest that Neanderthal working memory requirements may have been similar to human preferences regarding working memory use today. This method also enables us to distinguish the high-order cognitive functions combining traits like planning, inhibitory control, and learnings that were likely required by different ancient technological processes. The Petri net approach can contribute to our understanding of technology and cognitive evolution as it can be used on different materials and technologies, across time and species.

## 1 Introduction

Human origins and the evolution of cognition are intricately tied to the use of technology (Lombard et al. (2019); Nowell et al. (2010); Overmann et al. (2019); Roebroeks et al. (2016); Wadley (2010); Wynn et al. (2004)). The development of complex technologies over the last 3.3 million years provides a mirror to the cognitive developments that underpin behavioral changes. Generally, the processes of production and use of archaeological objects are first (modelled and/or experimentally) reconstructed and then interpreted using concepts such as cognitive load, learning, reflectiveness, working memory, extended thought, and action sequences (Goldenberg et al. (2009); Hodgson (2015); Lombard et al. (2012); Stout et al. (2014)).
Oversimplifying one of the main hypotheses, it could be said that a more complex mind can give rise to more complex technologies, and thus that we can reverse engineer cognition from technology and material culture. However, the link between the complexity of technologies and cognition remains qualitative, restricting systematic comparisons of different technological behaviors and their cognitive requirements. Tar production, an example of complex technology, often features in discussions about Neanderthal and modern human technological and cognitive capabilities (Koller et al. (2001); Niekus et al. (2019); Roebroeks et al. (2016); Schmidt et al. (2022)). However, the exact complexity of birch tar technology is debated, as there are multiple ways to make tar without fireproof containers (Kozowyk et al. (2020); Schmidt et al. (2019); Schmidt (2021)). Recent experiments show that birch tar can be produced with simple methods (Schmidt et al. (2019)). However, none of the reconstructed methods have been systematically studied for their complexity with definitions for what is considered simple or complex that can contribute to the current technology and cognition debates. Condensation, the simplest method, does require fewer materials, and the production process consists of fewer unique steps than other techniques, but the implications of these criteria on complexity/cognition are unspecified. In this paper we take a step back in the debate and explore a method to overcome these two problems of a) measuring technological complexity, and b) linking technology to cognition. We use Petri net modelling (Fajardo et al. (2022)) to compare the complexity of Neanderthal birch tar production methods in terms of the cognitive requirements of their technological behaviors. The measurement and comparison of different technological processes is often challenging due to the uniqueness of interrelations between cognitive processes and technological behaviors.
Various measures have been used in the past, such as counting techno-units in tool kits, steps in behavioral sequences, procedural units in the production of a specific tool, number of decisions, and distance between a need and the satisfaction of that need (Kline, Boyd (2010); Kozowyk et al. (2017); Lombard et al. (2019); Muller et al. (2017); Oswalt (1987); Perreault et al. (2013)). These measurements are generally unique for specific tools and materials. In this paper we use Petri nets as a tool for expanding measurements of technological processes. Petri nets can model and study the causality and execution of events, including sequential, concurrent and parallel execution (Fajardo et al. (2022)). This allows the identification of differences in the way information is processed to obtain or use a product. Additionally, we argue that with Petri nets, different definitions for complexity can be applied and compared for the same production process, exposing related behavioral and cognitive implications. Our approach moves away from focusing solely on verifying one particular dimension (definition) of technological complexity. Instead, we consider the differences between production methods in light of the type of solution a method represents. Previous studies suggest that the production of Paleolithic or Stone Age adhesives requires the maker to be able to: a) access multiple pieces of information at the same time while executing the process; b) avoid errors and correct problems throughout the production process; c) understand and abstract information about the materials, product templates, and the process itself before starting to make adhesives (Lombard, Hogberg (2021); Wadley (2010); Wynn et al. (2016)). Accessing information at the same time (a) is closely related to working memory, which is one key cognitive feature that extends over the evolution of the human mind (Coolidge, Wynn (2005)).
The requirements of working memory can be identified by the number of interactions between elements in a process; the more interaction, the higher the cognitive load (Figl, Laue (2011); Wang et al. (2020)). Further, the reliance on working memory and higher cognitive functions (Raduntz (2020)) can be identified as the number of features retained in memory during natural behavior (Draschkow et al. (2021)). Conscious attention and working memory also appear to be closely linked. For example, working memory serves to focus attention and make decisions (Coolidge, Wynn (2009)). An important component in working memory, described as 'executive attention' (Kane, Engle (2002)), also functions to maintain neural stimuli that are relevant to reaching the end goal of a task in the face of interference-rich contexts (Coolidge, Wynn (2009)). Multiple permutations of events increase the likelihood of errors (b) and complexity. In a production process, different permutations create more possible paths to obtain the final product and increase the probability of deadlocks, wasting time and resources (Park et al. (2021)). Complexity is also increased with more choices, because individuals may be unaware of all available choices, or what the best choices are to obtain the desired product. For these reasons, a sequential production process is also easier to execute than one with several paths to reach the end of the workflow (Ranganathan, Campbell (2007)). Implementing strategic planning (Fragaszy et al. (2003)) and inhibitory control (Shenoy, Yu (2011)) may reduce the likelihood of errors produced by different permutations and choices and help to solve complex problems. Process structures with more elements and relations are harder to understand (c) because more information needs to be processed. 
Since information presented in a simple arrangement is easier to understand than if the same information is presented in an elaborate structure, both the amount of information and the types of structures affect the structural complexity of a process. This in turn affects process understandability, which provides an indication of how much information is embedded in the process (Dikici et al. (2018)). Previous studies have identified that understanding how to produce a technology was an important aspect in the acquisition, transmission, and production of Paleolithic technologies (see for example Nonaka et al. (2010); Stout et al. (2014); but see also Lombard (2015); Pargeter et al. (2020)). The complexity of the production of prehistoric artifacts can be measured using the requirements introduced above in combination with computational models (Hoffecker (2018)). To do this, we used Petri nets to model Paleolithic tar production processes. Petri nets are a modelling language with underlying mathematical semantics (Fajardo et al. (2022); Reisig (2013)). These nets are used to study systems that may show concurrent agents or events and components that operate independently with occasional resource sharing or synchronization. We used workflow nets (van der Aalst (1998)), a class of Petri net, to model and measure the complexity of resulting models using three pre-existing metrics: a) the density metric, which takes into account the interconnectedness between events and resources (Mendling (2016)), and can be related to working memory; b) the extended cyclomatic metric (Lassen, van der Aalst (2009)), which concerns the likelihood of errors throughout the process, and the potential need for planning and inhibition control; and c) the structuredness metric, which relates to the effort to understand abstract information about the materials, product templates and the process itself, and thus to learning (Lassen, van der Aalst (2009)).
We model and measure three experimental techniques of birch bark tar production known from the literature: condensation (Schmidt et al. (2019)), pit roll, and raised structure (Kozowyk et al. (2017)). Currently, the condensation method is interpreted to be the simplest of the three, and the raised structure the most complex. These methods cover the widest range of potential tar making techniques and represent our current knowledge about aceramic tar production, both in terms of yield, time invested, number of production steps, and materials required. The Petri net models of these production processes and the results of the complexity metrics allow us to present a multidimensional comparison of the complexity of an ancient technology. With these cognitively related metrics, we can also show that, contrary to current ideas (Kochiyama et al. (2018); Wynn, Coolidge (2004)), working memory, inhibitory control, and planning are all cognitive requirements involved in the aceramic production of tar.

## 2 Methods

Author2 and Author3 recreated the tar production processes (Kozowyk et al. (2017); Schmidt et al. (2019)) in field experiments. Non-participant observation was implemented to record technical behaviors during the process of each experiment (Czarniawska (2018)). Author1 observed and recorded the experiments and the actions of Author2 and Author3 with a video camera, and time stamped all events during the coding phase. After the experiments, Author2 was asked to describe the workflow of the experiments from the executant's perspective. These different sources were integrated to produce a comprehensive list of the actions, events, conditions and sequences that occurred during the workflow.

### Petri nets

Petri nets are directed bipartite graphs with three basic elements: places, transitions, and arcs (e.g. Figure 1). Places in a Petri net represent states, conditions, or resources that need to be met or available before an event can be carried out.
They can also represent the result of an event, i.e., a new state or condition of a resource. Places cannot be directly connected with one another. Transitions in a Petri net represent events that change conditions or states of resources. They can be enabled or disabled depending on the availability of the required resources or the fulfilment of the necessary conditions. Two transitions cannot be directly connected. A transition may occur when all the conditions and resources of its input places are available. The firing of a transition is instantaneous and the choice of which transition to fire when several transitions are enabled at the same time is random. When several transitions are enabled, these transitions may occur concurrently (Fajardo et al. (2022); Reisig (2013)). Arcs in a Petri net represent the relations between places and transitions. They indicate the resources or conditions required for a transition to be enabled or the resources or conditions resulting from a transition. Arcs can be either input arcs, output arcs, or both, depending on their direction and function. By combining these three elements, Petri nets provide a graphical representation of the behavior of a system. It is important to note that Petri nets are not just a graphical representation of a system, but can also be used to analyze the behavior of the system. In a Petri net the availability of resources or the fulfillment of conditions is graphically represented using black dots or numbers called tokens. They are used to track the flow of information and resources in the system. Tokens are placed in the input places of a transition to indicate that the required resources or conditions are available for the transition to be enabled. When a transition is fired, tokens are removed from the input places and produced in the output places, representing the new state or condition of the resources. 
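The token game just described can be made concrete with a minimal sketch of the Figure 1 net; the encoding (a transition mapped to its sets of input and output places) and all names are our own.

```python
# Figure 1 net: transition -> (set of input places, set of output places).
NET = {
    "t1": ({"p1"}, {"p2", "p3"}),
    "t2": ({"p2"}, {"p4"}),
    "t3": ({"p3"}, {"p5"}),
    "t4": ({"p4", "p5"}, {"p6"}),
}

def enabled(marking, t):
    """A transition is enabled when every input place holds a token."""
    return all(marking.get(p, 0) >= 1 for p in NET[t][0])

def fire(marking, t):
    """Consume one token from each input place, produce one in each output place."""
    assert enabled(marking, t), f"{t} is not enabled"
    m = dict(marking)
    for p in NET[t][0]:
        m[p] -= 1
    for p in NET[t][1]:
        m[p] = m.get(p, 0) + 1
    return m

m = {"p1": 1}                        # initial marking: one token in p1
for t in ["t1", "t2", "t3", "t4"]:   # one admissible firing sequence
    m = fire(m, t)
# m now marks p6 with one token and every other place with zero
```

Firing t3 before t2 reaches the same final marking, which is the concurrency the Figure 1 reachability graph illustrates.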
The distribution of tokens in different places at any given moment represents the current state of the system and provides insight into the behavior of the system over time. The states of the process are represented by the distribution of tokens over places. These states are also called markings in Petri nets. In workflow nets, arcs have a weight of one because places correspond to conditions that can be validated as true or false. Processes modeled as workflow nets start with a token marking one unique input place and should always be able to end with a token in a different unique output place, with all the other places being empty. In workflow nets, transitions can always occur by following the appropriate route in the workflow and they do not create infinite loops (van der Aalst (1998); Lassen, van der Aalst (2009)).

Figure 1: A workflow Petri net representation and reachability graph to illustrate dynamics. The Petri net consists of six places labeled p1 to p6, four transitions labeled t1 to t4, and arcs connecting them. All arcs and transitions have the same function that returns 1, meaning that any transition can fire as long as its input places are marked with tokens. Each subfigure (oval) represents a reachable state (marking) and together with the directed edges (arrows) represent the reachability graph of the Petri net. In the initial reachable state of the Petri net (a), only place p1 has one token, and all other places are empty. When t1 fires, the marking of the Petri net changes and places p2 and p3 have one token each (b). Then transitions t2 and t3 can occur in any order, including in parallel, to produce tokens in p4 and p5, respectively, as shown in (c) and (d). After places p4 and p5 are marked with one token each (e), then transition t4 may occur and produce a token in p6 to reach the final reachable state of the Petri net (f).

### Modelling approach

The tar production models relied on a set of assumptions.
First, we focused on the intrinsic variability of tar production processes, rather than the way environmental settings determine the availability of resources. Therefore, we assumed that cultural, social and environmental restrictions related to resource availability did not play a role in the workings of the processes. Second, we considered that resources, tools, and time required for activities were available and did not represent behavioral constraints. Third, models were developed as action oriented, and people executing actions were excluded from the models. For the models presented here, we defined the atomic units as actions or events that changed the location or modified the physical properties of resources. In the tar production models, places represented either the presence/absence of a material, or whether an action occurred. The models ran a single process instance. This enabled the use of pre-existent complexity metrics and controlled the effects of parameters such as the required amount of tar. Assuming infinite resources and time, any of the tar production techniques can be repeated as many times as needed to obtain a specific amount of tar, but these repetitions do not change the workings of each process. For example, for the condensation method, we modelled the events from when one piece of bark was burned, until the tar produced by that piece of bark was stored. However, in practice, this method involved burning several pieces of bark and extracting and storing tar repeatedly in the same way to obtain more tar. We restricted the maximum number of tokens that each place can hold to one. This restriction is commonly used in workflow nets to ensure that time complexity was not a practical constraint for the calculation of the metrics (Lassen, van der Aalst (2009)). Finally, causal logic determined the control flow of the models. 
When actions needed to be executed for a certain amount of time, the beginning and end were represented by start- and end-activities, with a place in between denoting 'in progress'. The Petri nets were saved as pnml files, an XML-based interchange format for Petri nets, using Snoopy 2 version 1.22 (Heiner et al. (2012)). To analyze the Petri net models, we calculated the three metrics using ProM Tools release 6.10 developed by the Process Mining Group at Eindhoven University of Technology (van der Aalst et al. (2007); ProM (2010)). We imported pnml files produced in Snoopy to ProM 6.10 using the plug-in PNML Petri net files. We used the plug-in Petri-net Metrics in ProM 6.10 for calculating the metrics. Reachability graphs were also calculated using ProM 6.10.

### Metrics

Density metric. The density metric measures the degree of connection between actions and conditions in the process (Mendling (2016)). It can be used as a proxy for how much information a maker has to access when several actions can be executed at the same time or when several conditions are needed to execute a given action in the process. This metric was originally formulated to characterize networks and later adapted to measure the density of connections in workflow notations, including workflow nets (Mendling (2016)). The density metric in a workflow net calculates the ratio of existing arcs to the maximum number of possible arcs. The maximum number of arcs is calculated by multiplying the total number of places and transitions and then multiplying the result by two. A high ratio of existing arcs means that conditions and actions are more interconnected, and therefore the amount of information required to change states of the process is higher during behavior. For instance, in Figure 1, the Petri net example has six places and four transitions, which implies that the maximum number of arcs possible is 48. The actual number of arcs present in the Petri net is 10.
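As a quick illustration, the density calculation just described can be sketched in a few lines of Python (a hypothetical helper for the worked example, not the ProM implementation):

```python
def workflow_density(num_places, num_transitions, num_arcs):
    """Ratio of existing arcs to the maximum possible number of arcs.

    In a Petri net every arc connects a place to a transition or a
    transition to a place, so at most 2 * |P| * |T| arcs can exist.
    """
    return num_arcs / (2 * num_places * num_transitions)

# Figure 1 example: 6 places, 4 transitions, 10 arcs -> 10/48.
print(round(workflow_density(6, 4, 10), 3))  # 0.208
```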
As a result, the density metric value for this Petri net is calculated to be 0.208.

Extended Cyclomatic metric. The extended cyclomatic metric measures the number of possible paths in which a product can be obtained given the structure of each production method (Lassen, van der Aalst (2009)). We use it as a proxy for the likelihood of reassessments and errors throughout the process. The extended cyclomatic metric is calculated with the reachability graph of a Petri net model. A reachability graph calculates the reachable states of a Petri net, that is, all of the different moments in which the production system may be observed before obtaining the product (Fajardo et al. (2022)). The calculation of the extended cyclomatic metric includes the number of strongly connected components of each reachability graph. A Petri net's reachability graph represents each reachable state of the system being modeled as a vertex. All states (vertices) form the state space of the system. Transitions that occur between states are represented by directed edges (arcs) between vertices. A strongly connected component is a maximal set of reachable states, where each reachable state can be reached from any other state in the component. The minimal strongly connected component in a reachability graph is represented by a single reachable state. The extended cyclomatic metric is calculated by subtracting the total number of reachable states from the total number of directed edges and adding the number of strongly connected components to the result (Lassen, van der Aalst (2009)). Practically, this means that low values in the extended cyclomatic metric imply fewer possible paths to reach the product, and lower chances of producing errors. High values mean that several different paths exist, which translates to more possible errors during the process and higher behavioral complexity.
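To make the definition concrete, the sketch below (hypothetical code; the pre- and post-sets of the four transitions are read off Figure 1) enumerates the reachable markings of the Figure 1 net, builds its reachability graph, and computes the metric as edges minus vertices plus strongly connected components:

```python
# Pre- and post-sets of the Figure 1 transitions (assumed from the figure).
PRE = {"t1": {"p1"}, "t2": {"p2"}, "t3": {"p3"}, "t4": {"p4", "p5"}}
POST = {"t1": {"p2", "p3"}, "t2": {"p4"}, "t3": {"p5"}, "t4": {"p6"}}

def reachability_graph(initial):
    """Explore all markings of a 1-bounded net by exhaustive firing."""
    states, edges, todo = {initial}, [], [initial]
    while todo:
        m = todo.pop()
        for t in PRE:
            if PRE[t] <= m:                      # transition t is enabled
                m2 = frozenset((m - PRE[t]) | POST[t])
                edges.append((m, m2))
                if m2 not in states:
                    states.add(m2)
                    todo.append(m2)
    return states, edges

def scc_count(states, edges):
    """Number of strongly connected components (Kosaraju's algorithm)."""
    succ = {s: [] for s in states}
    pred = {s: [] for s in states}
    for a, b in edges:
        succ[a].append(b)
        pred[b].append(a)

    def dfs(v, adj, seen, order):
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                dfs(w, adj, seen, order)
        order.append(v)

    seen, order = set(), []
    for v in states:
        if v not in seen:
            dfs(v, succ, seen, order)
    seen, count = set(), 0
    for v in reversed(order):
        if v not in seen:
            count += 1
            dfs(v, pred, seen, [])
    return count

states, edges = reachability_graph(frozenset({"p1"}))
ecc = len(edges) - len(states) + scc_count(states, edges)
print(len(states), len(edges), ecc)  # 6 states, 6 edges, metric value 6
```

Because this net is acyclic, every marking is its own strongly connected component, so the metric equals the number of states, matching the worked example in the text.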
In Figure 1, there are six vertices (a, b, c, d, e, f), six directed edges (thick arrows), and six strongly connected components, because all reachable states are minimal strongly connected components. As a result, the extended cyclomatic metric value for the Petri net in Figure 1 is six.

Structuredness metric. The structuredness metric measures the effort required to understand the information behind the process (Lassen, van der Aalst (2009)). We use the structuredness metric to deconstruct the Petri net models into components similar to programming constructs such as sequences, selections and iterations (van der Aalst et al. (2003); Dikici et al. (2018)). These constructs are given a weight based on their structural complexity, which is a reported factor that reduces the comprehension of conceptual models (Lassen, van der Aalst (2009); Winter et al. (2020)). In Petri nets, these constructs are sets of places, transitions and arcs. Constructs with a smaller number of elements allow the information about the process to be transmitted more easily and more accurately. This makes it easier to store information about components for future use, meaning the process is easier to learn. The algorithm to calculate the metric searches for seven types of components: (1) sequence, (2) choice, (3) while, (4) marked graph, (5) state machine, (6) well-structured, and (7) unstructured (van der Aalst et al. (2003); Reisig (2013)). A sequence is an event that is enabled unconditionally after the completion of another event. A choice is a point in the process where an event of several possibilities is chosen. A while is an event that can be repeated. A marked graph is a process where there are no choices between events, but events can occur concurrently. A state machine is a process that changes states after the occurrence of an event, which can be in conflict with other events, but these and other events cannot occur concurrently.
A well-structured component is a process where choice and synchronization between events are separated in the process and there are no cycles. An unstructured component is a minimal process that does not fall within any of the types of components above. The structuredness metric is calculated using a function that weighs each component following the order above. First, the algorithm searches for sequences, the least complex components, which receive the lowest weight, and it finishes with unstructured components, which receive the highest weight. Next, a weight is calculated for the identified component, which is folded into a single transition. Then, the algorithm continues to search, weight, and fold components in the Petri net using the same priority function. This procedure is conducted until a single transition connected to the initial and final place remains. The weight of the last transition represents all the weights of the identified components. The metric gives higher weights to components containing embedded components, based on the assumption that components made of other nested components are more complicated and therefore harder to understand. A low metric score means a process has low complexity because the amount of information embedded in its structure is small. A high value indicates that the amount of information is large and it will require more effort to understand the structure of the process. For instance, to calculate the structuredness metric of the Petri net in Figure 1, we apply the algorithm by Lassen and van der Aalst (Lassen, van der Aalst (2009)). This Petri net is only a marked graph, and its weight function is determined by doubling the number of transitions and multiplying it with the diff function, which measures how evenly split and merge points are matched within the component.
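The weighting just described for a marked-graph component reduces to a one-line helper (a hypothetical sketch; the computation of the diff value itself is specified in Lassen and van der Aalst (2009) and is taken as a given input here):

```python
def marked_graph_weight(num_transitions, diff):
    """Structuredness weight of a marked-graph component: twice the
    number of transitions, multiplied by the diff value (which measures
    how evenly split and merge points are matched in the component)."""
    return 2 * num_transitions * diff

# Figure 1 net: four transitions and a diff value of one.
print(marked_graph_weight(4, 1))  # 8
```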
A higher diff value indicates that unevenness in a workflow net can cause imbalances in the flow and limit the firing of certain transitions that require multiple tokens from different places. The marked graph in Figure 1 has four transitions and a diff value of one, making the structuredness metric value equal to eight. Refer to Lassen and van der Aalst (Lassen, van der Aalst (2009)) for further details on the weight functions and algorithm.

## 3 Results

A comparison of the three workflow nets showed that the condensation and pit roll methods have fewer conditions (places), events (transitions) and relations (arcs) than the raised structure method (Figure 2, Figure 3). The condensation model has the smallest number for all elements (Figure 2A, Figure 3). The pit roll model is slightly larger in each element category (Figure 2B, Figure 3), and the raised structure model is the largest in all categories (Figure 2C, Figure 3). In each net, places represent approximately 24%, transitions 24%, and arcs 52% of all elements. These results suggested that the models systematically represent the observed process behaviors with little ambiguity in different types of dependency relations.

### The condensation method relies heavily on working memory capacity

The condensation model scored the highest in the density metric (value = 0.082), followed by the pit roll model (value = 0.076), and then the raised structure model (value = 0.043; Figure 4A). The density value of the condensation model can be explained by the information processing peak during the use of fire. Considering the number of possible connections between transitions and places, and the size of the model, the density metric shows that this peak is more demanding than those in the other models. The use of fire is modelled with the transition 'Light bark', and the place 'p6'. These elements are connected with other transitions and places of the model by three and four arcs, respectively (Figure 4A).
The arc density is also higher during and after using fire in the condensation method than in the pit roll and raised structure methods. Nine arcs are located before the transition 'Light bark', and 21 arcs occur after in the condensation model.

Figure 2: Initial state of Petri net models for the three tar production methods: condensation (a); pit roll (b); raised structure (c). Places (circles) represent the conditions or resources before or after an event. Transitions (rectangles) represent events that change the local states of the system. Arcs (arrows) are directed and form logical connections between places and transitions. They indicate the flow of the system and the causal relations between places and transitions. Tokens inside places (black dots) represent the availability of resources or the fulfillment of conditions.

The pit roll model scored the second highest density of the three models (value = 0.075). When compared with the condensation model, this value results from a larger number of conditions to be fulfilled without a strong increment in the arc density. The pit roll model included only two more arcs and two more places than the condensation model. Three transitions and three places are each connected with more than two arcs (Figure 4C). Nineteen arcs in the pit roll model are located before the start of the use of fire, represented by the transition 'Place embers', and the other 13 arcs appear after. This net structure shows that the arc density and the number of conditions are higher in preparations before using fire than during the use of fire. The raised structure model showed the lowest density (value = 0.043). There are four places and three transitions in the Petri net connected with more than two arcs with other elements (Figure 4C). However, the raised structure model almost doubles the number of places, transitions, and arcs compared with the other two models. This reduced the weight that the multiple arcs have in the density.
When comparing the net structures of the raised structure and condensation models, the raised structure model shows arcs that are more uniformly distributed between the preparations before using fire and the actions involved in the use of fire. Twenty-seven arcs are located before the use of fire, represented by the transition 'Light dome', and 29 arcs appear after. The raised structure model is therefore balanced in terms of the actions associated with the preparations before using fire and the actions during and after the use of fire.

### The pit roll and raised structure methods require control actions to reduce errors

The condensation model scored the lowest in the extended cyclomatic metric (value = 13), followed by the pit roll model (value = 31), and then the raised structure model (value = 38; Figure 4B). A comparison of the number (Figure 6) and distribution (Figure 5) of directed edges, vertices and strongly connected components in the reachability graphs provides insights into the factors generating the differences in this metric. The low score of the condensation model (value = 13), and the low number of reachable states, are explained by the number of transitions that can be executed concurrently and the small number of sequential actions. Thirty-eight percent (N=5) of the total reachable states occur during the preparations before the use of fire. The only two transitions ('Tear bark' and 'Place rock') that may occur concurrently at any given marking are part of the preparations. The potential for concurrent actions in the condensation model is the lowest of the three models. The largest strongly connected component in the condensation model has four reachable states, representing 30% of the total. This is the highest percentage for any strongly connected component in the three models. The largest strongly connected component in the condensation model occurs after the transition 'Light bark' (Figure 5A).
One repetitive action ('Hold bark') occurs as a self-loop during the use of fire.

Figure 3: Number of Petri net elements contained in each tar production model.

Figure 4: Complexity metrics values for the models of tar production: Density (a), Extended Cyclomatic (b) and Structuredness (c).

Figure 5: Reachability graphs showing states (vertices) and transitions (numbered edges) for each tar production model. (a) Condensation (1. Start, 2. Tear bark, 3. Place rock, 4. Light bark, 5. Grab, 6. Reignite, 7. Place lit bark, 8. Bark moves, 9. Bark extinguishes, 10. Start condense, 11. Hold bark, 12. Stop condense, 13. Scrape, 14. Store); (b) pit roll (1. Start, 2. Clean soil, 3. Make cup, 4. Make roll, 5. Dig, 6. Place cup, 7. Place roll, 8. Place embers, 9. Fan embers, 10. Cool pit, 11. Dig roll & cup, 12. Reheat roll, 13. Collect tar, 14. Store); (c) raised structure (1. Start, 2. Make roll, 3. Dig pit, 4. Make cup, 5. Place cup, 6. Place net, 7. Place pebbles, 8. Place roll, 9. Make dome, 10. Fix dome, 11. Place firewood, 12. Light dome, 13. Add firewood, 14. Fix dome, 15. Fire stops, 16. Cool dome, 17. Open dome, 18. Dome fumes, 19. Close dome, 20. Fuming stops, 21. Remove dome, 22. Remove roll, 23. Remove net & pebbles, 24. Remove cup, 25. Collect tar, 26. Store).

The pit roll model scored the second highest for the extended cyclomatic metric (value = 31). Compared with the condensation model, the reachability graph of the pit roll model has almost double the number of reachable states, and twice the number of edges and strongly connected components (Figure 5B; Figure 6). Seventy-six percent (N=16) of the total reachable states occur during the preparations for the use of fire, where a maximum of three transitions in any combination from the set 'Make cup', 'Clean soil', 'Dig pit', 'Place cup' and 'Make roll' can occur concurrently (Figure 2B).
The maximum number of possible concurrent transitions in the pit roll model suggests that concurrent actions are more important in the pit roll method than in the condensation method. The largest strongly connected component in the reachability graph of the pit roll model has two reachable states and represents 9.5% of the total. One repetitive action ('Fan') occurs as a self-loop during the use of fire. The raised structure model scored the highest for the extended cyclomatic metric (value = 38). The raised structure model is the longest of the three models and also shows the largest number of sequences (Figure 5C; Figure 6). This model also includes transitions with the potential to be executed concurrently before the use of fire, and repetition of actions during the use of fire. Fifty-three percent (N=18) of the total reachable states occur during the preparations for the use of fire. A maximum of three of the transitions 'Make cup', 'Place net', 'Place pebbles', 'Dig pit', 'Place cup', and 'Make roll' can occur concurrently in any combination (Figure 2C). The largest strongly connected component in the reachability graph of the raised structure model has four reachable states and represents 13% of the total. This strongly connected component, generated by a cycle of actions during the opening of the structure, is twice the size of that from the pit roll model and has the same number of reachable states as that of the condensation model. Three other actions ('Fix dome'; 'Add firewood'; 'Fix dome 2') are repeated as self-loops, separated by sequences of actions during the use of fire.

### The raised structure contains more embedded information than the condensation or pit roll methods

The condensation model scored the lowest in the structuredness metric (value = 38), followed by the pit roll (value = 102), and then the raised structure (value = 132; Figure 4C).
The metric values indicate that the raised structure model requires more planning and the acquisition of more knowledge about the production process than the pit roll and condensation models do. All three Petri net models showed sequences, whiles, marked graphs, and state machines (Figure 7), organized in a three-tier hierarchical structure. Sequences are found in the deepest tier inside the marked graph of the pit roll and raised structure models, but they are also found embedded in the state machine components in the second tier in all nets. Sequences are the most dominant component, representing 68% of all components matched. The raised structure model shows the highest number of sequences (N = 12) and the pit roll model the lowest (N = 3).

Figure 6: Number of elements in the reachability graphs of the tar production models.

The while components appear in the second tier of the deconstructed nets, embedded within the state machine components. Each model contained one while component. The marked graph components were found with different sizes in the second tier of the deconstructed nets. The marked graphs relate to the preparations before the use of fire. The marked graph of the condensation model is the smallest, collapsing four places and four transitions. The marked graph of the pit roll model collapses eight places and seven transitions, and has one embedded sequence. The raised structure model has a marked graph that collapses nine places and has two embedded sequences. The top tier of the hierarchical structure of the decomposed nets is a state machine component. The sequences, whiles and marked graphs are embedded in this tier. The top tier shows different sizes for each model, with macro transitions representing the embedded sequences, whiles, and marked graphs described above. At the top tier of the deconstructed net, the condensation model had a total of four places, four macro transitions, and one transition.
The pit roll model showed four places, one macro transition, and three transitions. Finally, the raised structure model showed five places, four macro transitions, and three transitions.

## 4 Discussion

The experiments, the Petri net models, and the metrics show that the condensation method relies on working memory use. The complexity metrics also show that the raised structure method relies on cognitive functions that combine the use of different cognitive processes, such as working memory, planning, and learning. Based on the structure of the Petri net models and the density metric, the condensation method imposes the most intense cognitive load on working memory of all three methods (Figure 6). The values of the density metric show that actions and resources are more interconnected in the condensation model. Having more actions and resources interacting at the same time requires accessing more information at a given time. The cognitive load in the condensation method is generated by the attention required to maintain the pieces of lit bark before and during the tar condensing on the cobble. The results for the pit roll model indicate that it requires less attention than the condensation model and that the most interconnected elements and actions occur during the preparations before using fire. Working memory is less intensely used in the pit roll and raised structure methods because peaks in information processing are smoother compared with the condensation model. In the raised structure method, actions are less interconnected and more distributed through the entire process, so the focus of the working memory resources can be on fewer actions at a time. Working memory and allocation of attention are two modern human cognitive resources that allow us to process and navigate through large amounts of information (Lieder, Griffiths (2020)).
Figure 7: Number of constructs found with the structuredness metric algorithm in the condensation, pit roll and raised structure models.

The lower likelihood of errors shown in the cyclomatic metric suggests that the condensation method is a simpler solution because fewer possible paths exist to obtain the tar product. This means that the variability in the way the process works, and thus the underlying behavior, is less complex than in the pit roll and raised structure methods. The pit roll model is similar to the raised structure in that it shows more possible paths and a higher likelihood of errors than the condensation model. The potential for concurrency generates most of the behavioral complexity of the pit roll method. The results show that the possible paths to reach the final product and the possibility of errors of the raised structure model are generated by combinations of concurrent activities, actions executed as cycles, and repetitive actions executed as self-loops. The raised structure has the largest number of reachable states, paths, and possible errors to reach the final product. The condensation method is the only method where fewer reachable states are present before fire than after (Figure 2 and Figure 5). The results show that the behavioral complexity in the condensation model after the use of fire is associated with repetitive actions that may occur as cycles or self-loops. The potential for concurrency and the largest number of reachable states of the pit roll and raised structure methods occur before fire, suggesting that the technical behavior during preparation is more cognitively demanding. The use of fire is unique among actions in tar production.
No possible concurrent actions occur after the use of fire in any of the three models, meaning that fire use is a synchronization event where all tasks that can be conducted concurrently come together, and attention shifts from being divided among multiple possible actions to being focused on one action at a time. This is done to finish the process and obtain the product. The need for synchronization of material flows obtained via concurrent events is a feature that appears in human-made systems, for example, in ceramic production systems that mix clays to obtain pottery with specific properties (Costin (2000)). The models and the structuredness metric suggest that the condensation method is easier to learn than the two other methods. The embedded components of the condensation model, and especially its marked graph, are smaller, suggesting that the amount of information required to execute the process is also smaller. The condensation model scored the lowest in the structuredness metric, despite having more transitions in its state machine component than the pit roll model. The marked graph of the pit roll model is the second largest and has more actions with potential for concurrent activities than the condensation model, making the score of the pit roll model the second highest in the structuredness metric. The raised structure model scored the highest in the structuredness metric because it has more than twice the number of sequences in the workflow and its marked graph and state machine components are larger than those of the other two models. This suggests that the raised structure method requires more effort to understand the elements in the production process because it shows a more elaborated and larger process structure than the pit roll and condensation models. The results also indicate that the pit roll and raised structure models have at least three times more embedded information in their process structures than the condensation model.
The raised structure model shows the largest amount of information in its structure, meaning that greater understanding and planning are needed to complete the process. The experimental observations further support this and showed that the pit roll and the raised structure methods required more planning for their execution. The differences in understandability of the three methods show that reasoning and planning, common cognitive processes used by modern humans (Callaway et al. (2022); Lieder, Griffiths (2020)), may have been involved in some of the tar production methods available to Neanderthals. Currently, we do not know what production methods Neanderthals used. Recent studies show evidence of other production processes like plant cooking (Kabukcu et al. (2022)) and fire use (MacDonald et al. (2021)) with multiple steps and components. This evidence of complex behaviors supports the possibility that the 50,000-year-old Dutch Zandmotor tar could have been produced with methods similar to the raised structure method (Niekus et al. (2019)). We show that irrespective of the methods being used, prehistoric tar making may have required aspects of cognition analogous to those of contemporary modern humans. The results of our study lend further support to the hypothesis that Neanderthals and modern humans may have had similar working memory capacities and likely employed them in comparable ways (Ambrose (2010); Haidle (2010); but see also Kochiyama et al. (2018); Wynn, Coolidge (2004)). This is observed in the number of features that must be kept in working memory in each of the three methods. The working memory capacity for contemporary humans is estimated at around four items (Cowan (2001)), but recent studies show that working memory usage maintains two to three features simultaneously (Draschkow et al. (2021)). A peak of three retained features in memory occurs in the condensation method.
This is related to three conditions: the location of the bark against the rock, the state of the lit bark, and the moment when condensation starts. The attention required to monitor these three conditions creates the relations between resources and activities that give the condensation method its high density metric value. If the production processes were executed in one event without interruption, then a maximum of two concurrent activities in the condensation method, and three activities in the pit roll and raised structure methods, require the makers to store these activities in their working memory to avoid repetition. This is also required for components of composite tools that are produced asynchronously (Hoffecker (2018); Hoffecker, Hoffecker (2018)). We suggest that the working memory requirements for ancient tar technology were comparable to the use of working memory by modern humans today. These results are consistent with evidence from other cognitively complex Neanderthal behaviors such as deep cave activity (Jaubert et al. (2016)), cave painting (Hoffmann et al. (2018)), use of jewelry and body painting (Bednarik (2001)), and deliberate burial (Rendu et al. (2014)) identified over the last decades. The three production processes all contain choices that require inhibitory control. To ensure that the process will produce tar when its execution terminates, makers are required to control their desire to obtain tar. All the models ensure termination of the production processes, and the cyclomatic metric evaluates every possible combination of actions in the modeled production processes. However, in real world situations, we cannot ensure that a production process will yield the desired product if it terminates, because not all events will occur successfully every time. Interference, random events, or urgency in obtaining the product may prevent makers from fetching tar, even if all steps in the process are executed.
For example, opening the dome of the raised structure without giving it enough time to reach high temperatures increases the possibility of not obtaining tar even if all the steps in the process are executed. Therefore, self-control is required in tar production. This type of self-control in the production of material culture is argued to date back to the Early Stone Age and to be a prerequisite for any tool-making and extended problem-solution distance behaviors (Haidle (2010); Kohler (1925); Lombard et al. (2019)). For example, in stone-tool making (Pargeter et al. (2019)), and specifically in the production of symmetrical Acheulean handaxes (Green, Spikins (2020)), self-control is required to invest time to acquire the required stone knapping skills. The modelling approach and the metrics presented here can be helpful to test such hypotheses in the future. If the information required to produce a technology is acquired cumulatively (Caldwell (2020); Wadley (2021); but see also Vaesen, Houkes (2021)), processes with low understandability (i.e., more information), such as the raised structure, are likely to emerge later in the development of a technology. Conversely, processes with high understandability are likely to be more prevalent during the emergence of a technology. In this light, the condensation method could have been discovered first (Schmidt et al. (2019)). It relies on materials directly available in the environment, and the process of tar formation can be directly observed in an open fire. This technique may even qualify as a latent solution (Schmidt et al. (2019); Tennie et al. (2017, 2016)): presented under the right circumstances to an individual, it would not require teaching. For Neanderthal tar making via condensation, these circumstances must have included sufficient working memory, access to birch bark, a suitable rock, a tool for scraping, and fire. The other production methods have more embedded information, making them more difficult to learn.
They are, therefore, unlikely to be latent solutions. It is more likely that, if used, the technological know-how of the pit roll and raised structure techniques was transmitted culturally (Tomasello et al. (1993)). These two production methods also rely on greater planning depth and inhibition ability, and the integration of working memory with other cognitive processes. Our study and method are not without limitations. Here we studied a single technology using three possible production methods, and in reality Neanderthals may have used more or different methods (Koch, Schmidt (2022); Kozowyk et al. (2017); Pomstra, Meijer (2010)). The results are by no means a complete representation of the complexity of the Neanderthal technological world; this would require modelling of the production of bone, wood, and stone tools, and other technologies like fire (Adler et al. (2014); Aranguren et al. (2018); Leder et al. (2021); Schlanger (1996); Sorensen et al. (2018); Sorensen et al. (2013)). In future work, all the possible production methods of multiple technologies from ancient technological systems should be compared to illuminate the trends in technological and cognitive solutions available in the past. In addition, some of the tar production techniques are scalable, and we did not model for that here. The models and metrics have limitations for studying the scalability of the production process or the effects of resource availability on process complexity. Models scaling up the production will influence the results of the metrics because the number of connections between actions and resources, and the amount of information embedded in the structure of the models, will depend on how much the processes are scaled up. Other classes of Petri nets, such as place/transition nets, are more adequate to explore these problems (Fajardo et al. (2022)).
In addition, with Petri nets we model a reconstructed version of the past; missing details in this reconstruction may influence measured outcomes. For some technologies, like refitted lithic technologies, much data may be present, whereas for others, like basketry, there is heavy reliance on experimental archaeology. The metrics used here are designed for modern human cognition, and one can question their relevance to ancient cognition. In this study, we work from the assumption that both Neanderthals and modern humans share evolutionary traits, and we considered the metrics suitable. Neanderthals are increasingly argued to have technological and cognitive capabilities comparable to modern humans (MacDonald et al. (2021); Pitarch Marti et al. (2021); Roebroeks, Soressi (2016); Villa, Roebroeks (2014)). We have shown it is possible to derive the cognitive requirements of technologies, making such arguments concrete and measurable. Petri nets and their derived measures are unique because they allow us to also compare complexity across different materials and technologies. Petri nets are also a promising tool to study older technologies where inferences are harder to make compared to the Middle Paleolithic, shedding light on the cognitive requirements of our earliest tool-making ancestors.

## 5 Conclusion

The method presented here links the process complexity of ancient technologies to cognitive processes. At face value, the condensation method relies on working memory. The other two methods rely much less on working memory, but are more demanding in terms of knowledge density, and require cultural learning, self-control, and planning. The Petri net approach to measuring complexity presented here has proven to be useful for the systematic analysis and comparison of ancient technologies and their production systems.
Independent of which proposed tar production methods were used in the past, the Petri net models and complexity metrics suggest that Neanderthals probably relied on several cognitive traits that archaeologists often associate with modern behavior. Our multi-angled approach to the embedded information in technological behaviors does justice to the different and idiosyncratic forms of technological complexity and cognitive traits hominins may have had. Future studies can extend this implementation to other technological behaviors to improve our understanding of human and technological evolution.

## 6 Acknowledgments

We thank Alessandro Aleo for his collaboration in conducting the experiments. We also thank the educational archaeological site Masamuda in Vlaardingen (the Netherlands) for the generous use of their space for the experiments. This research was supported as part of the Ancient Adhesives project, funded by the European Research Council ([https://erc.europa.eu/](https://erc.europa.eu/)) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 804151 (grant holder GHJL). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
2303.12799
Time Series as Images: Vision Transformer for Irregularly Sampled Time Series
Irregularly sampled time series are increasingly prevalent, particularly in medical domains. While various specialized methods have been developed to handle these irregularities, effectively modeling their complex dynamics and pronounced sparsity remains a challenge. This paper introduces a novel perspective by converting irregularly sampled time series into line graph images, then utilizing powerful pre-trained vision transformers for time series classification in the same way as image classification. This method not only largely simplifies specialized algorithm designs but also presents the potential to serve as a universal framework for time series modeling. Remarkably, despite its simplicity, our approach outperforms state-of-the-art specialized algorithms on several popular healthcare and human activity datasets. Especially in the rigorous leave-sensors-out setting where a portion of variables is omitted during testing, our method exhibits strong robustness against varying degrees of missing observations, achieving an impressive improvement of 42.8% in absolute F1 score points over leading specialized baselines even with half the variables masked. Code and data are available at https://github.com/Leezekun/ViTST
Zekun Li, Shiyang Li, Xifeng Yan
2023-03-01T22:42:44Z
http://arxiv.org/abs/2303.12799v2
# Time Series as Images: Vision Transformer for Irregularly Sampled Time Series

###### Abstract

Irregularly sampled time series are becoming increasingly prevalent in various domains, especially in medical applications. Although different highly-customized methods have been proposed to tackle irregularity, how to effectively model their complicated dynamics and high sparsity is still an open problem. This paper studies the problem from a whole new perspective: transforming irregularly sampled time series into line graph images and adapting powerful vision transformers to perform time series classification in the same way as image classification. Our approach largely simplifies algorithm designs without assuming prior knowledge and can be potentially extended as a general-purpose framework. Despite its simplicity, we show that it substantially outperforms state-of-the-art specialized algorithms on several popular healthcare and human activity datasets. Especially in the challenging leave-sensors-out setting where a subset of variables is masked during testing, the performance improvement is up to 54.0% in absolute F1 score points. Our code and data are available at [https://github.com/Leezekun/ViTST](https://github.com/Leezekun/ViTST).

## 1 Introduction

Time series data are ubiquitous in a wide range of domains, including healthcare, finance, traffic, and climate science. With the advances in deep learning architectures such as LSTM (Graves, 2012), Temporal Convolutional Network (TCN) (Lea et al., 2017), and Transformer (Vaswani et al., 2017), numerous algorithms have been developed for time series modeling. However, these methods typically assume fully observed data points at regular intervals and fixed-size numerical inputs. They cannot deal with irregularly sampled ones, i.e., sequences of samples with irregular intervals between their observation times.
To tackle this challenge, highly specialized models were developed, which require a considerable amount of prior knowledge in the model architecture choice and design (Marlin et al., 2012; Lipton et al., 2016; Che et al., 2018; Horn et al., 2020; Zhang et al., 2022; Shukla & Marlin, 2020; Zhang et al., 2022). The recently emerging transformer-based vision models, most notably Vision Transformers (Dosovitskiy et al., 2020)1, have demonstrated strong performance on various vision tasks such as image classification and object detection. In this paper, we raise a simple question: _Since vision transformers have exceeded humans in various image recognition tasks, can they "visually" capture temporal patterns in the visualized time series data?_ To answer this question, we explore the following minimalist approach: transform the irregularly sampled multivariate time series into line graphs (Fig. 1), arrange these line graphs into a standard RGB image, and train a vision transformer to perceive the image and perform the classification task. We dub this approach **ViTST**, short for **V**ision **T**ime **S**eries **T**ransformer.

Footnote 1: In this paper, we refer to vision transformers as a type of vision models based on Transformer, including ViT (Dosovitskiy et al., 2020), Swin Transformer (Liu et al., 2021), and DeiT (Touvron et al., 2021), to name a few.

The line graph images could encode two kinds of informative patterns in multivariate time series: (1) the temporal dynamics of each variable in its corresponding line graph; and (2) the correlation of variables across different line graphs. We assume that vision transformers can capture pattern (1) by modeling local patch interactions within a single time series line graph and pattern (2) from global patch interactions across different line graphs.
Experimental results demonstrate that our approach ViTST outperforms previous state-of-the-art (SOTA) results by 2.4 and 1.2 AUROC points (%) on two irregularly sampled healthcare datasets, P19 (Reyna et al., 2019) and P12 (Goldberger et al., 2000), and by 7.6 accuracy and 6.8 F1 score points (%) on a human activity dataset, PAM (Reiss & Stricker, 2012). Our approach also shows superior robustness to missing observations. It improves on the prior work by up to 54.0% in absolute F1-score points in the challenging leave-sensors-out setting, where a part of the sensors (variables) in the test set are masked. We also evaluate our approach ViTST on regular time series data, where it still attains excellent results compared with SOTA algorithms designed for regular time series modeling, demonstrating its generality, since algorithms for regularly sampled time series usually do not work well on irregularly sampled data and vice versa. In summary, the contributions of this work are three-fold: (1) We propose a simple but effective approach for multivariate irregularly sampled time series classification. The simplicity contrasts with the state-of-the-art performance it has achieved. (2) The proposed approach attains excellent results on both irregular and regular time series data, and can be potentially extended as a general-purpose framework for time series modeling. (3) It opens up a new direction and might encourage the utilization of fast-evolving and well-studied computer vision techniques in the time series domain, such as better model architectures (Liu et al., 2022), data augmentation (Shorten and Khoshgoftaar, 2019), interpretability (Chefer et al., 2021), and self-supervised pre-training (He et al., 2022), to name a few.

## 2 Related work

**Irregularly Sampled Time Series.** An irregularly sampled time series is a sequence of observations with irregular intervals between observation times. In a multivariate setting, different variables within the same time series may not align.
Such characteristics have posed a significant challenge to standard time series modeling methods, which typically assume fully observed and regularly sampled data points. A common approach to handle irregular sampling is to convert continuous-time observations into fixed time intervals (Marlin et al., 2012; Lipton et al., 2016). To incorporate the dynamics between observations, GRU-D (Che et al., 2018) decays the hidden states based on gated recurrent units (GRU) (Chung et al., 2014), taking as input both the values and the times of observations. (Pham et al., 2017) modified the forget gate of LSTM (Graves, 2012) to better account for irregularity. Similarly, (Yoon et al., 2017) proposed an approach based on multi-directional RNNs, which can capture the inter- and intra-stream patterns. Besides the recurrent and differential equation-based model architectures, recent work has explored attention-based models. Transformer (Vaswani et al., 2017) is naturally able to handle arbitrary sequences of observations. ATTAIN (Zhang, 2019) incorporates an attention mechanism with LSTM to model the time irregularity between observations. SeFT (Horn et al., 2020) maps the irregular time series into a set of observations based on differentiable set functions and utilizes an attention mechanism for classification. mTAND (Shukla and Marlin, 2020) presented a multi-time attention network, which learns continuous-time embeddings coupled with a multi-time attention mechanism to deal with continuous-time inputs. UTDE (Zhang et al., 2022) integrated embeddings from mTAND and classical imputed time series with learnable gates to exploit their respective advantages in tackling complex temporal patterns. Raindrop (Zhang et al., 2022) modeled irregularly sampled time series as graphs and utilized graph neural networks (Kipf and Welling, 2016; Hamilton et al., 2017) to model the relationships between different variables. Overall, these methods are all highly specialized for irregular time series.
In this work, we explore a simple and general vision transformer-based approach for irregularly sampled time series modeling without using dedicated model architecture modifications.

**Numerical Time Series Modeling Methods with Transformer.** Transformers possess superior abilities to capture long-range dependencies in sequential data, making them appealing for time series modeling (Li et al., 2019). A surge of transformer-based methods have been proposed and successfully applied to various time series modeling tasks, such as forecasting (Li et al., 2019; Zhou et al., 2021; Wu et al., 2021; Zhou et al., 2022), classification (Zerveas et al., 2021), and anomaly detection (Xu et al., 2021). These methods are usually designed for regular time series settings, where they view multivariate numerical values at the same timestamp as a unit and model temporal interactions across different units. A recent work (Nie et al., 2022), on the other hand, proposes to segment each univariate time series into a sequence of sub-series and model their interactions independently.

**Time Series as Other Modalities.** The recently emerging pre-trained transformer-based models, initially proposed in the Natural Language Processing (NLP) field, have since come to monopolize the state-of-the-art performance across various downstream tasks in the NLP and Computer Vision (CV) fields. For example, the pre-trained language model BERT (Kenton and Toutanova, 2019) and GPTs (Radford et al., 2018, 2019; Brown et al., 2020) can be adapted to various NLP tasks. Some non-language tasks can also be solved by these pre-trained transformer-based language models by transforming them into language sentence prompts (Dinh et al., 2022). A recent work (Xue and Salim) tried to represent time series in natural language and utilize pre-trained language models to forecast.
However, such a method has difficulties modeling long-range multivariate time series, as they usually involve tens of thousands of numerical values, which cannot be fitted into the language models (512/1024 max tokens for most LMs). In addition, it is hard to express the informative irregularity of time series in natural language sentences. By contrast, we transform numerical time series data into images and utilize pre-trained transformer-based vision models to perform time series modeling, which does not have these issues. Note that some prior studies tried to transform time series into Gramian field (Wang and Oates, 2015), recurrence plot (Hatami et al., 2018; Tripathy and Acharya, 2018), and Markov transition field (Wang and Oates, 2015) images and utilize CNNs to perform classification. However, these methods are not domain-agnostic and require domain knowledge in designing specialized imaging methods. By contrast, we simply transform time series into line graph RGB images without assuming prior knowledge.

## 3 Approach

As illustrated in Fig. 1, ViTST consists of two steps: (1) transform the multivariate time series into a concatenated line graph image; (2) utilize a vision transformer as an image classifier for the classification task. To begin with, we present some basic notations and the problem formulation.

**Notation.** Let \(\mathcal{D}=\{(\mathcal{S}_{i},y_{i})|i=1,\cdots,N\}\) denote a time series dataset containing \(N\) samples. Every data sample is associated with a label \(y_{i}\in\{1,\cdots,C\}\), where \(C\) is the number of classes. Each multivariate time series \(\mathcal{S}_{i}\) consists of observations of at most \(D\) variables (some variables might have no observations). The observations for each variable \(d\) are given by a sequence of tuples of observed time and value \([(t_{1}^{d},v_{1}^{d}),(t_{2}^{d},v_{2}^{d}),\cdots,(t_{n_{d}}^{d},v_{n_{d}}^{d})]\), where \(n_{d}\) is the number of observations for variable \(d\).
If the successive intervals among the observation times \([t_{1}^{d},t_{2}^{d},\cdots,t_{n_{d}}^{d}]\) differ within the same variable or across variables/samples, \(\mathcal{S}_{i}\) is an irregularly sampled time series. Otherwise, it is a regular time series.

**Problem Formulation.** Given the dataset \(\mathcal{D}=\{(\mathcal{S}_{i},y_{i})|i=1,\cdots,N\}\) containing \(N\) multivariate time series, we aim to predict the label \(\hat{y}_{i}\in\{1,\cdots,C\}\) for each time series \(\mathcal{S}_{i}\). There are mainly two components in our framework: (1) a function that transforms the time series \(\mathcal{S}_{i}\) into an image \(\mathrm{x}_{i}\); (2) a vision transformer that serves as an image classifier, taking the line graph image \(\mathrm{x}_{i}\) as input and predicting the corresponding label \(\hat{y}_{i}\).

### Time Series to Image Transformation

**Time Series Line Graph.** The time series line graph is a widely-used data visualization method to illustrate temporal data points at successive intervals. Each point on the line graph corresponds to an observation with an observed time and value. The horizontal axis is used to plot timestamps, and the vertical axis is used to plot values. Straight lines connect the points on the graph in the order of time, so missing-value interpolation is done automatically. We use markers "\(\star\)" to indicate the observed data points on the line. As the scale of different variables varies greatly, we plot the observations of each variable in an individual line graph, as shown in Fig. 1. The scales of each line graph \(\mathrm{g}_{i,d}\) are kept the same across different time series \(\mathcal{S}_{i}\). Different colors are used for each line graph to distinguish them. We experimentally found that the tick labels and other components in the line graph figure are unnecessary, as the position of an observation in a line graph indicates the relative magnitude of the observed time and value.
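The notation and the irregularity condition above translate directly into a simple data structure; a minimal sketch of one sample and an irregularity check (names and layout are our own assumptions, not the released code):

```python
from typing import Dict, List, Tuple

# One multivariate sample S_i: variable name -> [(t_1, v_1), ..., (t_nd, v_nd)]
Series = Dict[str, List[Tuple[float, float]]]

def is_irregular(sample: Series) -> bool:
    """True if the intervals between successive observation times differ
    within any variable (a sufficient condition for irregular sampling)."""
    for observations in sample.values():
        times = [t for t, _ in observations]
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(set(gaps)) > 1:
            return True
    return False

sample: Series = {
    "HR":   [(0.0, 86.0), (0.5, 88.0), (3.0, 90.0)],  # uneven gaps
    "Temp": [(0.0, 37.2), (2.5, 37.8)],               # sparse
}
```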
**Image Creation.** Given a set of time series line graphs \(\mathcal{G}_{i}=\{\mathrm{g}_{1},\mathrm{g}_{2},\cdots,\mathrm{g}_{D}\}\) for time series \(\mathcal{S}_{i}\), we place them in a single image \(\mathrm{x}_{i}\) using a pre-defined grid layout, in which the line graph of each variable is in a grid cell. Similar to (Fan et al., 2021), we experimentally found that a compact layout (_i.e._, a square grid) leads to consistently good performance. Specifically, given the \(D\) time series line graphs for a time series \(\mathcal{S}_{i}\), we place them in a grid whose size is \(l\times l\) when \(l\times(l-1)<D\leq l\times l\), and \(l\times(l+1)\) when \(l\times l<D\leq l\times(l+1)\). For example, there are 34, 36, and 17 variables in the P19, P12, and PAM datasets, respectively. The default grid layouts are thus \(6\times 6\), \(6\times 6\), and \(4\times 5\). If the grid is not full, the cells at the end of the grid are left blank (see Fig. 5 for examples of created images from these three datasets). More details are provided in Appendix A.

Figure 1: An illustration of our approach ViTST. The example is from a healthcare dataset P12 (Goldberger et al., 2000), which provides the irregularly sampled observations of 36 variables for patients (we only show 4 variables here for simplicity). Each column in the table is an observation of a variable, with the observed time and value. We plot separate line graphs for each variable and arrange them into an image, which is then fed into the vision transformer to perform the classification task.

### Vision Transformers for Time Series Modeling

Given the image \(\mathrm{x}_{i}\) transformed from time series \(\mathcal{S}_{i}\), we leverage an image classifier to perceive the image and perform the classification task. The time series patterns in a line graph image involve both local (_i.e._, the temporal dynamics of a single variable in a line graph) and global (_i.e._, the correlation among variables across different line graphs) contexts.
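The grid-sizing rule above is explicit enough to write down directly; a small helper (our own illustrative code, not from the released implementation):

```python
import math

def grid_shape(num_variables: int) -> tuple:
    """Grid (rows, cols) for D line graphs: l x l when
    l*(l-1) < D <= l*l, and l x (l+1) when l*l < D <= l*(l+1)."""
    l = math.isqrt(num_variables)
    if l * l == num_variables:
        return (l, l)
    if num_variables <= l * (l + 1):
        return (l, l + 1)
    return (l + 1, l + 1)

CELL = 64  # pixels per grid cell, as stated in the implementation details

def image_size(num_variables: int) -> tuple:
    """Pixel size (height, width) of the assembled image."""
    rows, cols = grid_shape(num_variables)
    return (rows * CELL, cols * CELL)
```

With the \(64\times 64\) cell size used in the paper, `image_size` reproduces the reported \(384\times 384\) (P19/P12) and \(256\times 320\) (PAM) image sizes.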
To better capture these patterns, we choose the recently developed vision transformers. Unlike the predominant CNNs, vision transformers have been shown to carry much less image-specific inductive bias and stronger abilities to capture local and global dependencies (Dosovitskiy et al., 2020; Liu et al., 2021).

**Preliminary.** Vision Transformer (ViT) (Dosovitskiy et al., 2020) was originally adapted from NLP. An image is split into fixed-size patches, each linearly embedded and augmented with position embeddings. The resulting sequence of vectors is fed into a standard Transformer encoder consisting of a stack of multi-head self-attention (MSA) modules and MLPs to obtain patch representations. An extra classification token is added to the sequence to perform classification or other tasks. ViT models _global_ inter-unit interactions between each pair of patches, which faces efficiency issues when dealing with high-resolution images. Swin Transformer (Liu et al., 2021), on the other hand, has a hierarchical architecture that contains multi-level feature maps and computes self-attention locally within non-overlapping windows, significantly reducing the computation complexity and improving the recognition performance. We thus use Swin Transformer as the default backbone vision model if not specified. Note that any other vision model can be applied under this framework. Specifically, Swin Transformer constructs the hierarchical representation starting from small-sized patches in earlier layers to capture the fine-grained _local_ information and gradually merging neighboring patches in deeper layers to model _global_ coarse-grained information. As illustrated in Fig. 2, the self-attention is calculated within each non-overlapping window in the W-MSA block. When the sliding window is within a single line graph for variable \(d\), the local intra-variable interactions and temporal dynamics of the variable \(d\) are captured.
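The non-overlapping window partition (and the cyclic shift used by the shifted-window variant) can be sketched in NumPy. This illustrates only the partitioning, not the attention computation; the \(w/2\) shift follows the Swin Transformer paper, while array shapes and function names are our own:

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) feature map into non-overlapping (w, w, C) windows."""
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, w, w, C)

def cyclic_shift(x, w):
    """Roll the map by w//2 in both spatial directions so that the next
    partition mixes patches from neighbouring (former) windows."""
    return np.roll(x, shift=(-(w // 2), -(w // 2)), axis=(0, 1))

# An 8x8 grid of 3-dim patch features, partitioned into four 4x4 windows:
feat = np.arange(8 * 8 * 3).reshape(8, 8, 3).astype(float)
wins = window_partition(feat, 4)
shifted = window_partition(cyclic_shift(feat, 4), 4)
```

Attention computed over `wins` corresponds to W-MSA; attention over `shifted` corresponds to SW-MSA, whose windows now contain patches that previously sat in different windows (and hence, in ViTST, in different line graphs).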
The shifted window block SW-MSA enables the connection of different windows. After shifting, the window spans across different line graphs. Mathematically, the consecutive Swin Transformer blocks are calculated as: \[\hat{\mathbf{z}}^{l} =\text{W-MSA}\left(\text{LN}\left(\mathbf{z}^{l-1}\right)\right)+\mathbf{z}^{l-1},\] \[\mathbf{z}^{l} =\text{MLP}\left(\text{LN}\left(\hat{\mathbf{z}}^{l}\right)\right)+\hat{\mathbf{z}}^{l},\] \[\hat{\mathbf{z}}^{l+1} =\text{SW-MSA}\left(\text{LN}\left(\mathbf{z}^{l}\right)\right)+\mathbf{z}^{l},\] \[\mathbf{z}^{l+1} =\text{MLP}\left(\text{LN}\left(\hat{\mathbf{z}}^{l+1}\right)\right)+\hat{\mathbf{z}}^{l+1}, \tag{1}\] where \(\hat{\mathbf{z}}^{l}\) and \(\mathbf{z}^{l}\) denote the output features of the (S)W-MSA module and the MLP module for block \(l\), respectively; LN stands for layer normalization (Ba et al., 2016). After multiple stages of blocks, the global interactions among the patches from all the line graphs can be modeled, and thus the correlation between different variables is learned.

**Inference.** We use the vision transformers to predict the labels of time series in the same way as image classification. The outputs of the Swin Transformer blocks at the final stage are used as the patch representations, upon which a flatten layer with a linear head is applied to obtain the prediction \(\hat{y}_{i}\). As for ViT, the representation of the additional classification token at the final layer is used for prediction. We use the cross-entropy loss when fine-tuning the model on the classification task.

## 4 Experiments

### Experimental Setup

**Datasets and Metrics.** We conduct experiments using three popular datasets in healthcare and human activity, as shown in Table 1. The P19 dataset (Reyna et al., 2019) contains information from 38,803 patients, including 34 sensor variables and a binary label indicating sepsis.
The P12 dataset (Goldberger et al., 2000) consists of data from 11,988 patients, including 36 sensor variables and a binary label indicating survival during hospitalization. The PAM dataset (Reiss and Stricker, 2012) includes 5,333 samples from 8 different human activities, with 17 sensor variables provided for each sample. We used the processed data provided by Raindrop (Zhang et al., 2022)2. More details are given in Appendix B.1. We employed the same data splits, as provided, for all comparison baselines. The evaluation metrics were consistent across all experiments: the Area Under a ROC Curve (AUROC) and Area Under the Precision-Recall Curve (AUPRC) for the imbalanced datasets P12 and P19. For the balanced PAM dataset, we reported Accuracy, Precision, Recall, and F1 score. All the results are reported as %.

Figure 2: Illustration of the shifted window approach of Swin Transformer. The self-attention is calculated within each window (grey box). When the window is within a single line graph, the local interactions are captured. After shifting, the window contains patches from different line graphs, and thus the global cross-variable interactions are modeled.

**Implementation.** We use the Matplotlib package to draw the line graphs and save them as standard RGB images. The grid layouts of data in the P19, P12, and PAM datasets are \(6\times 6\), \(6\times 6\), and \(4\times 5\). We set the size of each grid cell (line graph) as \(64\times 64\), and thus the image sizes are \(384\times 384\), \(384\times 384\), and \(256\times 320\), respectively. One can also directly set the image size to any size, regardless of the grid cell size. We use the checkpoint of Swin Transformer pre-trained on the ImageNet-21K dataset3. The default patch size and window size are \(4\) and \(7\), respectively.
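As a side note, the AUROC reported for the binary P12/P19 tasks can be computed via the rank-sum (Mann-Whitney) identity; a minimal pure-Python sketch that assumes no tied scores (library implementations such as scikit-learn handle ties with average ranks):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity.
    Assumes binary labels in {0, 1} and no tied scores."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    # Sum of 1-based ranks (ascending by score) of the positive examples:
    rank_sum = sum(rank + 1 for rank, i in enumerate(order) if labels[i] == 1)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```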
Footnote 3: [https://huggingface.co/microsoft/swin-base-patch4-window-7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window-7-224-in22k)

**Training.** We apply the cutout (DeVries and Taylor, 2017) augmentation method on the input images from the P12 and P19 datasets during training to avoid over-fitting caused by upsampling. Specifically, 16 square regions of size \(16\times 16\) are randomly masked in each image. The models are trained using A6000 GPUs with 48G memory. As the P12 and P19 datasets are highly imbalanced, we upsample the minority class to the same size as the majority class. We fine-tune Swin Transformer for 2 and 4 epochs on the upsampled P19 and P12 datasets, respectively, and for 20 epochs on the PAM dataset. The batch sizes are 48 for P19 and P12 and 72 for PAM. The learning rate is 2e-5.

**Incorporating static features.** The P12 and P19 datasets provide patients' demographics, such as weight, height, and ICU type. This static information does not change over time and can be well described in natural language. To incorporate it into our framework, we transform it into natural language sentences via a template and utilize a text encoder, RoBERTa-base (Liu et al., 2019), to encode them. The obtained text embedding is concatenated with the image embeddings obtained from the vision transformer to perform classification. Note that the static features are also provided to all the baselines we compare.

### Main Results

**Comparison to state-of-the-art.** We compare our approach with several state-of-the-art methods specialized for irregularly sampled time series: Transformer (Vaswani et al., 2017), which replaces the missing values with 0; Trans-mean (Transformer with an imputation method that replaces a missing value with the average observed value of the variable); GRU-D (Che et al., 2018); SeFT (Horn et al., 2020); mTAND (Shukla and Marlin, 2020); IP-Net (Shukla and Marlin, 2018); and Raindrop (Zhang et al., 2022).
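The cutout step described above (16 random \(16\times 16\) squares zeroed per image) is straightforward to re-implement; a NumPy sketch of the idea (our own code, not the authors' or the original DeVries and Taylor implementation):

```python
import numpy as np

def cutout(img, n_holes=16, size=16, rng=None):
    """Zero out `n_holes` random `size` x `size` squares in an (H, W, C)
    image, acting as a regularizer against over-fitting."""
    if rng is None:
        rng = np.random.default_rng()
    out = img.copy()
    H, W = img.shape[:2]
    for _ in range(n_holes):
        y = rng.integers(0, H - size + 1)
        x = rng.integers(0, W - size + 1)
        out[y:y + size, x:x + size] = 0.0
    return out

# Example on a dummy 384x384 RGB image (the P12/P19 image size):
img = np.ones((384, 384, 3), dtype=np.float32)
aug = cutout(img, rng=np.random.default_rng(0))
```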
Besides, two methods initially designed for forecasting tasks are also compared, including DGM\({}^{2}\)-O (Wu et al., 2021) and MTGNN (Wu et al., 2020). The implementations and hyperparameter settings of these baselines all follow Raindrop (Zhang et al., 2022): the batch size is 128, and all the compared models are trained for 20 epochs. As the P12 and P19 datasets are highly imbalanced, it is ensured that each batch is balanced with half negative and half positive samples. The performances are averaged over 5 different data splits, which are kept the same across all the compared methods.

| Datasets | #Samples | #Variables | #Avg. obs. | #Classes | Static info | Imbalanced | Missing ratio |
| --- | --- | --- | --- | --- | --- | --- | --- |
| P19 | 38,803 | 34 | 401 | 2 | True | True | 94.9% |
| P12 | 11,988 | 36 | 233 | 2 | True | True | 88.4% |
| PAM | 5,333 | 17 | 4,048 | 8 | False | False | 60.0% |

Table 1: Statistics of the irregularly sampled time series datasets (Zhang et al., 2022). “#Avg. obs.” denotes the average number of observations for each sample. “Static info” indicates if the time series sample is associated with static attributes (_e.g._, genders).

| Methods | P19 AUROC | P19 AUPRC | P12 AUROC | P12 AUPRC | PAM Accuracy | PAM Precision | PAM Recall | PAM F1 score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Transformer | 80.7 ± 3.8 | 42.7 ± 7.7 | 83.3 ± 0.7 | 47.9 ± 3.6 | 83.5 ± 1.5 | 84.8 ± 1.5 | 86.0 ± 1.2 | 85.0 ± 1.3 |
| Trans-mean | 83.7 ± 1.8 | 45.8 ± 3.2 | 82.6 ± 2.0 | 46.3 ± 4.0 | 83.7 ± 2.3 | 84.9 ± 2.6 | 86.4 ± 2.1 | 85.1 ± 2.4 |
| GRU-D | 83.9 ± 1.7 | 46.9 ± 2.1 | 81.9 ± 2.1 | 46.1 ± 4.7 | 83.3 ± 1.6 | 84.6 ± 1.2 | 85.2 ± 1.6 | 84.8 ± 1.2 |
| SeFT | 81.2 ± 2.3 | 41.9 ± 3.1 | 73.9 ± 2.5 | 31.1 ± 4.1 | 67.1 ± 2.2 | 70.0 ± 2.4 | 68.2 ± 1.5 | 68.5 ± 1.8 |
| mTAND | 84.4 ± 1.3 | 50.6 ± 2.0 | 84.2 ± 0.8 | 48.2 ± 3.4 | 74.6 ± 4.3 | 74.3 ± 4.0 | 79.5 ± 2.8 | 76.8 ± 3.4 |
| IP-Net | 84.6 ± 1.3 | 38.1 ± 3.7 | 82.6 ± 1.4 | 47.6 ± 3.1 | 74.3 ± 3.8 | 75.6 ± 2.1 | 77.9 ± 2.2 | 76.6 ± 2.8 |
| DGM\({}^{2}\)-O | 86.7 ± 3.4 | 44.7 ± 11.7 | 84.4 ± 1.6 | 47.3 ± 3.6 | 82.4 ± 2.3 | 85.2 ± 1.2 | 83.9 ± 2.3 | 84.3 ± 1.8 |
| MTGNN | 81.9 ± 6.2 | 39.9 ± 8.9 | 74.4 ± 6.7 | 35.5 ± 6.0 | 83.4 ± 1.9 | 85.2 ± 1.7 | 86.1 ± 1.9 | 85.9 ± 2.4 |
| Raindrop | 87.0 ± 2.3 | 51.8 ± 5.5 | 82.8 ± 1.7 | 44.0 ± 3.0 | 88.5 ± 1.5 | 89.9 ± 1.5 | 89.9 ± 1.5 | 89.8 ± 1.0 |
| **ViTST** | **89.4** ± 1.9 | **52.8** ± 3.8 | **85.6** ± 1.1 | **49.8** ± 2.5 | **96.1** ± 0.7 | **96.8** ± 1.1 | **96.5** ± 0.7 | **96.6** ± 0.9 |

Table 2: Comparison with the baseline methods on the irregularly sampled time series classification task. **Bold** indicates the best performer, while underline represents the second best. Results are reported as %.

As seen from Table 2, our approach substantially outperforms the specialized state-of-the-art algorithms on all three datasets. On the P19 and P12 datasets, ViTST improves the state-of-the-art results by 2.4 and 1.2 AUROC points (%), respectively. On the PAM dataset, the improvement is even more significant: 7.6 points in Accuracy, 6.9 points in Precision, 6.6 points in Recall, and 6.8 points in F1 score (%). The larger improvement on the PAM dataset might be due to its lower missing ratio (60.0%) compared with P19 (94.9%) and P12 (88.4%), meaning that there are more observed values to better recover the fully observed line graphs and reflect patterns (see Fig. 5 for created images from these three datasets).

**Leaving-sensors-out Settings.** We further evaluate the models' performance in more challenging leave-sensors-out settings, where the observations of a subset of sensors (variables) are masked during testing. This setting simulates real-world scenarios where some sensors fail or become unreachable. Following (Zhang et al., 2022), we experiment with two setups on the PAM dataset: (1) _leave-fixed-sensors-out_, which drops a fixed set of sensors across all the samples and compared methods; and (2) _leave-random-sensors-out_, which drops the sensors randomly. Only the observations in the validation and test sets are dropped; the training set is kept unchanged. For a fair comparison, we dropped the same set of sensors in the _leave-fixed-sensors-out_ setting as in (Zhang et al., 2022). The results are presented in Fig. 3, from which we observe that our approach consistently achieves the best performance and outperforms the second-best by a large margin.
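The leave-random-sensors-out setup amounts to emptying a random subset of variables per sample; a sketch with an assumed data layout (in the paper, only validation and test observations are dropped):

```python
import random

def mask_sensors(sample, missing_ratio, seed=0):
    """Empty the observation lists of a random subset of sensors
    (variables), as in the leave-random-sensors-out evaluation."""
    variables = sorted(sample)
    k = round(missing_ratio * len(variables))
    dropped = set(random.Random(seed).sample(variables, k))
    return {v: ([] if v in dropped else obs) for v, obs in sample.items()}

# 17 sensors as in PAM; mask roughly half of them:
sample = {f"sensor_{d}": [(0.0, 1.0), (1.5, 2.0)] for d in range(17)}
masked = mask_sensors(sample, missing_ratio=0.5)
```

In the image domain this masking simply yields blank line graphs in the corresponding grid cells, which is one intuition for why the approach degrades gracefully as variables go missing.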
With the missing ratio ranging from 10% to 50%, the performance improvement over the previous best model becomes increasingly significant. When half of the variables are dropped, our approach can still achieve acceptable performance, exceeding the best-performing baseline by up to 50.2% in Accuracy, 40.7% in Precision, 59.6% in Recall, and 54.0% in F1 score, which suggests the robustness of our approach to missing observations in time series.

Figure 3: Performance in leave-**fixed**-sensors-out and leave-**random**-sensors-out settings on PAM dataset. The x-axis is the "missing ratio", which denotes the ratio of masked variables. Results are reported as %. Detailed numbers are provided in Table 12 in Appendix B.4.

### Ablation Studies

To evaluate the effectiveness of our proposed design, we conduct ablation studies to answer the following questions.

**How do backbone vision models affect the performance?** We first tested the performance of different backbone vision models under our framework. We tried another popular vision transformer, ViT. For a fair comparison with Swin Transformer, we use the checkpoint pre-trained on the ImageNet-21k dataset. We also tested a CNN-based model, ResNet. In addition, we report the performance of Swin Transformer trained from scratch and of Raindrop for comparison. The results are presented in Fig. 4. The pre-trained ViT performs similarly to Swin Transformer. Both outperform the previous state-of-the-art method Raindrop, which suggests the effectiveness of our proposed framework that utilizes vision transformers for time series modeling. However, the CNN-based ResNet achieves much worse performance than the transformer-based Swin Transformer and ViT, showing that our framework's superior performance derives not only from the idea of casting time series classification to image classification but also from the strong image recognition ability of vision transformers.
Swin Transformer trained from scratch underperforms its pre-trained counterpart by a large margin, which shows that knowledge obtained from pre-training on natural images could contribute to recognizing patterns in synthetic time series line graph images. It also reveals an advantage of our proposed framework: pre-trained vision models can be easily leveraged for time series modeling.

**How to create the time series line graph images?** As mentioned in Section 3.1, there are several key designs in drawing the line graphs for irregularly sampled multivariate time series and creating the images: (1) the linear _interpolation_, _i.e._, linking the consecutive observed data points on the line graphs; (2) _markers_ for observed data points to distinguish them from the "interpolated" ones on the line graph; (3) variable-specific line _colors_ to distinguish different line graphs. We conducted ablation studies to test their effectiveness. The results are presented in Table 3. We can see that the performance decreases without any of these designs. However, the performance drops from removing interpolation and markers are not as significant as that from removing variable-specific line colors, which is reasonable, as color is the most observable attribute of the line graph images and most directly distinguishes different line graphs. We defer more details on time series line graph image creation to Appendix A.

**What patterns does ViTST capture?** To understand what patterns ViTST captures in the time series line graph images, we present the averaged attention map of a ViTST with ViT as the backbone model in Fig. 5. The model learns to attend to the lines instead of the whitespace. In addition, we observe that the model correctly focuses on observed data points (dots) and changing slopes on the lines, which carry the observation and trend information. Some flat line graphs, which do not reflect many dynamic patterns, receive less attention.
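To make the three design choices concrete, here is a minimal numpy rasterization sketch (our own illustration, not the paper's plotting pipeline; the palette and 64×64 resolution are placeholders). It linearly interpolates between observations, thickens observed points into markers, and draws each variable in its own color:

```python
import numpy as np

# Hypothetical palette: one distinct RGB color per variable.
PALETTE = [(255, 0, 0), (0, 128, 0), (0, 0, 255)]

def rasterize(times, values, n_sensors, H=64, W=64):
    """Rasterize irregularly sampled series into one RGB line-graph image.

    times, values: lists (one per sensor) of sorted observation times
    in [0, 1] and observed values in [0, 1].
    """
    img = np.full((H, W, 3), 255, np.uint8)          # white canvas
    for k in range(n_sensors):
        t, v = np.asarray(times[k]), np.asarray(values[k])
        if t.size == 0:
            continue
        # (1) linear interpolation between consecutive observations
        grid = np.linspace(t.min(), t.max(), W)
        line = np.interp(grid, t, v)
        cols = np.clip((grid * (W - 1)).astype(int), 0, W - 1)
        rows = np.clip(((1 - line) * (H - 1)).astype(int), 0, H - 1)
        color = PALETTE[k % len(PALETTE)]            # (3) per-variable color
        img[rows, cols] = color
        # (2) markers: thicken the pixels at truly observed points
        oc = np.clip((t * (W - 1)).astype(int), 0, W - 1)
        orr = np.clip(((1 - v) * (H - 1)).astype(int), 0, H - 1)
        for r, c in zip(orr, oc):
            img[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2] = color
    return img

img = rasterize([[0.0, 0.5, 1.0], [0.2, 0.9]],
                [[0.1, 0.8, 0.3], [0.5, 0.5]], n_sensors=2)
```

Dropping any of the three steps (skipping the interpolation, the marker-thickening loop, or using one shared color) reproduces the corresponding ablation in Table 3.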
### Regular Time Series Classification

The advantage of our approach is that it can be used to model any shape of time series, whether it is regular or not. We thus also tested the performance of our approach on regular time series data. We selected seven representative regular multivariate time series datasets from the UEA Time Series Classification Archive (Bagnall et al., 2018), which have diverse characteristics in terms of the number of classes, variables, and time series length. We follow (Zerveas et al., 2021) to use these baselines for comparison: DTW\({}_{D}\), which stands for dimension-Dependent DTW, combined with dilation-CNN (Franceschi et al., 2019), LSTM (Graves, 2012), XGBoost (Chen & Guestrin, 2016), Rocket (Dempster et al., 2020), and a transformer-based TST (Zerveas et al., 2021), which operates on fully observed numerical time series data. The performance comparisons are shown in Table 4. Our approach performs consistently well on these seven datasets with different characteristics. Its average accuracy is second best and close to that of the best-performing baseline method, TST. By contrast, on the irregularly sampled time series datasets, ViTST outperforms Transformer (Trans-mean), as discussed in Section 4.2. It should be noted that most of the algorithms on regular and irregular time series are studied separately and could not handle the other type of time series well. However, our approach achieves promising results on both regular and irregular time series, showing its superiority in generality and effectiveness.

Figure 4: Performance of different backbone vision models and the state-of-the-art model Raindrop on P19, P12, and PAM datasets. Results are reported as %. Detailed numbers are provided in Table 13 in Appendix B.4.

\begin{table} \begin{tabular}{l|c c|c c|c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{P19} & \multicolumn{2}{c|}{P12} & \multicolumn{4}{c}{PAM} \\ \cline{2-9} & AUROC & AUPRC & AUROC & AUPRC & Accuracy & Precision & Recall & F1 score \\ \hline ViTST & 89.4 \(\pm\) 1.9 & 52.8 \(\pm\) 3.8 & 85.6 \(\pm\) 1.1 & 49.8 \(\pm\) 2.5 & 96.1 \(\pm\) 0.7 & 96.8 \(\pm\) 1.1 & 96.5 \(\pm\) 0.7 & 96.6 \(\pm\) 0.9 \\ \hline w/o interpolation & 87.5 \(\pm\) 1.5 & 51.2 \(\pm\) 3.6 & 84.1 \(\pm\) 1.4 & 48.3 \(\pm\) 3.5 & 96.0 \(\pm\) 1.1 & 96.8 \(\pm\) 0.9 & 96.4 \(\pm\) 0.9 & 96.6 \(\pm\) 0.9 \\ w/o markers & 88.3 \(\pm\) 1.6 & 51.0 \(\pm\) 2.4 & 84.8 \(\pm\) 1.3 & 48.7 \(\pm\) 3.8 & 94.1 \(\pm\) 0.9 & 95.1 \(\pm\) 0.7 & 94.8 \(\pm\) 1.1 & 94.9 \(\pm\) 0.8 \\ w/o colors & 85.3 \(\pm\) 0.8 & 48.5 \(\pm\) 2.1 & 83.9 \(\pm\) 1.1 & 46.5 \(\pm\) 3.2 & 92.9 \(\pm\) 1.9 & 94.9 \(\pm\) 1.2 & 93.6 \(\pm\) 1.5 & 94.1 \(\pm\) 1.5 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation studies on different designs when drawing the time series line graphs. Results are reported as %.
### Self-supervised Learning

Self-supervised learning has become a popular approach to learning representations from unlabelled data, which can benefit various downstream tasks. Masked language modeling (MLM) (Kenton and Toutanova, 2019) and masked image modeling (MIM) (Xie et al., 2022; He et al., 2022) have been the dominant self-supervised approaches in the NLP and CV domains. We also perform a preliminary exploration of _masked image modeling_ on time series line graph images: we mask a portion of patches in the line graph images and train the model to recover them. As shown in Fig. 6, we randomly mask several columns of the line graphs in the images. In this way, we can ensure that some regions containing line graphs are masked, instead of only empty places. A one-layer prediction head is applied on the vision transformer encoder to reconstruct the pixels of masked patches with an \(\ell_{1}\) loss, _i.e._, predicting the missing parts of the line graphs. With self-supervised pre-training on the largest dataset, P19, with 38803 samples, our approach further improves AUPRC from 52.8 (\(\pm\) 3.8) to 53.8 (\(\pm\) 3.2). However, AUROC (%) slightly dropped from 89.4 (\(\pm\) 1.9) to 88.9 (\(\pm\) 2.1), which is within one standard deviation. More details are given in Appendix B.3. Note that we did not perform an extensive hyperparameter search in these preliminary explorations. We believe this direction is worth further exploration, which we leave for future work.

## 5 Conclusion

In this paper, we introduced a new perspective for multivariate time series modeling by transforming them into images, which enables the use of powerful vision transformers.
This approach is simple and general since any type of time series can be transformed into line graph images and handled. Despite its simplicity, our approach demonstrates strong performance against highly specialized state-of-the-art methods on several popular datasets and shows robustness to missing observations. We also evaluate our approach on regular time series and witness promising results. We believe our approach can be potentially extended as a general-purpose framework for various time series tasks and encourage the reuse of fast-evolving computer vision techniques in the time series modeling domain.

\begin{table} \begin{tabular}{l c c c c c|c} \hline \hline Datasets & DTW\({}_{D}\) & LSTM & XGBoost & Rocket & TST & ViTST \\ \hline EC & 0.323 & 0.323 & 0.437 & 0.452 & 0.326 & **0.456** \\ UW & 0.903 & 0.412 & 0.759 & **0.944** & 0.913 & 0.862 \\ SCP1 & 0.775 & 0.689 & 0.846 & 0.908 & **0.922** & 0.898 \\ SCP2 & 0.539 & 0.466 & 0.489 & 0.533 & **0.604** & 0.561 \\ JV & 0.949 & 0.797 & 0.865 & 0.962 & **0.997** & 0.946 \\ SAD & 0.963 & 0.319 & 0.696 & 0.712 & **0.998** & 0.985 \\ HB & 0.717 & 0.722 & 0.732 & 0.756 & **0.776** & 0.766 \\ \hline Avg. & 0.738 & 0.533 & 0.689 & 0.703 & **0.791** & 0.782 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparison on regular time series datasets. **Bold** indicates the best performer, while underline represents the second best.

Figure 5: Illustration of the averaged attention map of ViTST on three images from P19, P12, and PAM datasets, respectively. Left: input images. Right: attention maps.

Figure 6: Illustration of the masking area with a mask ratio of 0.5 in our masked image modeling exploration on time series line graph images. The model is trained to reconstruct the masking areas.
2301.04782
Resource Theory of Imaginarity: New Distributed Scenarios
The resource theory of imaginarity studies the operational value of imaginary parts in quantum states, operations, and measurements. Here we introduce and study the distillation and conversion of imaginarity in a distributed scenario. This arises naturally in bipartite systems where both parties work together to generate the maximum possible imaginarity on one of the subsystems. We give exact solutions to this problem for general qubit states and pure states of arbitrary dimension. We present a scenario that demonstrates the operational advantage of imaginarity: the discrimination of quantum channels without the aid of an ancillary system. We then link this scenario to LOCC discrimination of bipartite states. We experimentally demonstrate the relevant assisted distillation protocol, and show the usefulness of imaginarity in the aforementioned two tasks.
Kang-Da Wu, Tulja Varun Kondra, Carlo Maria Scandolo, Swapan Rana, Guo-Yong Xiang, Chuan-Feng Li, Guang-Can Guo, Alexander Streltsov
2023-01-12T02:05:08Z
http://arxiv.org/abs/2301.04782v1
# Resource Theory of Imaginarity: New Distributed Scenarios ###### Abstract The resource theory of imaginarity studies the operational value of imaginary parts in quantum states, operations, and measurements. Here we introduce and study the distillation and conversion of imaginarity in distributed scenario. This arises naturally in bipartite systems where both parties work together to generate the maximum possible imaginarity on one of the subsystems. We give exact solutions to this problem for general qubit states and pure states of arbitrary dimension. We present a scenario that demonstrates the operational advantage of imaginarity: the discrimination of quantum channels without the aid of an ancillary system. We then link this scenario to LOCC discrimination of bipartite states. We experimentally demonstrate the relevant assisted distillation protocol, and show the usefulness of imaginarity in the aforementioned two tasks. ## I Introduction Standard quantum theory describes physical reality with complex states, operators, and Hilbert spaces. However, there have always been lots of questions on the role of complex numbers since the early days of quantum physics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. Recently the necessity and usefulness of the imaginary part of quantum mechanics has received significant attention [16; 17; 18; 19; 20; 21; 22; 23]. Today, quantum mechanics with imaginary numbers seems to be the most successful theory to describe the microscopic world. These research contributions have shown that complex quantum mechanics is fundamentally different from the corresponding real version in many aspects [2; 10; 13; 15; 19; 24; 25; 26; 27; 28], revealing that the imaginary part is not only necessary for the formulation of quantum theory but also plays an important role in many quantum information tasks [9; 11; 29]. 
The development of quantum information science over the last two decades has led to a reassessment of quantum properties, such as entanglement [30; 31] and coherence [32; 33], as resources, which led to the development of quantitative theories that captured these phenomena in a mathematically rigorous fashion [34; 35]. Nevertheless, imaginarity had not been studied in this framework until the last few years [16; 18; 20; 21]. In this setting, imaginarity is regarded as a valuable resource that cannot be generated or increased under a restricted class of operations known as _real operations_ (RO). Quantum states whose density matrices (in a fixed basis) contain imaginary parts are viewed as resource states, and thus cannot be created freely by RO. In this Letter, we study the resource theory of imaginarity in distributed scenarios. (At least) two parties, Alice (A) and Bob (B), are involved, who share a bipartite state \(\rho^{AB}\). In this setting, imaginarity is considered a resource only in Bob's system, while Alice can perform arbitrary quantum operations on her system. The duo is further allowed to communicate classically with one another. Overall, we refer to the allowed set of operations in this protocol as _Local Quantum-Real Operations and Classical Communication (LQRCC)_, borrowing the notion from the theory of entanglement [30] and quantum coherence [32]. This framework leads to a variety of problems, which we address and solve in this Letter. In particular, we consider assisted imaginarity distillation, where Alice assists Bob in extracting local imaginarity. If only one-way classical communication is used, we provide a solution of this problem for arbitrary two-qubit states. We also study assisted state conversion, where the goal is to obtain a specific target state on Bob's side. We solve this problem for any target state, if Alice and Bob share a pure state initially.
Furthermore, we study the role of imaginarity in ancilla-free channel discrimination, showing two real channels that are perfectly distinguishable in the ancilla-free scenario once we allow imaginarity, but become completely indistinguishable if we have access only to real states and real measurements. Additionally, we prove how this task is related to LOCC (Local Operations and Classical Communication) discrimination of quantum states, specifically to the LOCC discrimination of their normalized Choi matrices. Finally, we experimentally implement the above protocols in a quantum photonic setup, performing the proof of principle experiment testing the usefulness of imaginarity in such quantum tasks. Our work opens new avenues towards both theoretical and experimental exploration of imaginarity as a quantum resource. ## Resource theory of imaginarity The starting point of our work is the resource theory of imaginarity, introduced very recently in Refs. [16; 18; 20]. The free states in imaginarity theory are identified as _real_ states, which are real density matrices in a given basis \(\{|j\rangle\}\). The set of all real states is denoted by \(\mathcal{R}\), which can be described by \(\mathcal{R}=\{\rho:\langle j|\rho|k\rangle\in\mathbb{R}\,\,\text{for all}\,j,k\}\). A quantum operation specified by Kraus operators \(\{K_{j}\}\) satisfying \(\sum_{j}K_{j}^{\dagger}K_{j}=\mathbbm{1}\), is considered to be free, i.e., _real_, if it contains only real elements in the chosen basis: \(\langle m|K_{j}|n\rangle\in\mathbb{R}\,\,\text{for all}\,\,j,m,n\)[16; 18]. It is known that the set RO coincides with the set of _completely non-imaginarity creating operations_[16]. Moreover, RO coincides with the set of operations which have a _real dilation_[16]. The golden unit, i.e. the maximally resourceful state, is the same in any Hilbert space, regardless of its dimension. 
In particular, the maximally imaginary states are the two eigenstates of the Pauli matrix \(\sigma_{y}\), \[|\hat{\pm}\rangle=\frac{(\,|0\rangle\pm i\,|1\rangle\,)}{\sqrt{2}}. \tag{1}\] One maximally imaginary qubit is referred to as an _imbit_ in the following. Within the framework of quantum resource distillation [35; 36; 37; 38], general quantum states can be used for single-shot or asymptotic distillation of imbits via ROs. In the single-shot regime, the answer was already given in Refs. [18; 20]. In particular, the fidelity of imaginarity \(F_{\text{I}}\), which quantifies the maximum achievable fidelity between a state \(\rho\) and the imbit, \[F_{\text{I}}\left(\rho\right)=\max_{\Lambda}F\left(\,\Lambda\left[\rho\right],|\hat{+}\rangle\langle\hat{+}|\right), \tag{2}\] was used as the figure of merit for single-shot distillation, where \(F\left(\rho,\sigma\right)=\left[\text{Tr}\left(\sqrt{\sigma}\rho\sqrt{\sigma}\right)^{\frac{1}{2}}\right]^{2}\). The exact value of the fidelity of imaginarity for general \(\rho\) was shown to be equal to \[F_{\text{I}}\left(\rho\right)=\frac{1+\mathcal{I}_{R}\left(\rho\right)}{2}, \tag{3}\] where \(\mathcal{I}_{R}\left(\rho\right)=\min_{\tau}\{s\geq 0:\left(\rho+s\tau\right)/\left(1+s\right)\in\mathcal{R}\}\) is the robustness of imaginarity [18]. When we consider the asymptotic setting, for large \(n\), the fidelity of imaginarity converges exponentially to \(1\) (for any non-real state). The exponent, for large \(n\), is given by \(-\log\left(\text{Tr}\sqrt{\rho\rho^{T}}\right)\). For real states, the fidelity of imaginarity is independent of \(n\), and is \(1/2\)[39]. Details of the proof can be found in the Appendix. One of the key motivations for us to study the resource of imaginarity is that we can simulate arbitrary operations or measurements with one imbit at hand, even if all devices allow only real ones in our lab, as we show explicitly in the Appendix.
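As a quick numerical illustration of Eq. (3): for a single qubit the minimal real admixture only needs to cancel the \(\sigma_{y}\) Bloch component, giving \(\mathcal{I}_{R}(\rho)=|r_{y}|=2|\mathrm{Im}\,\rho_{01}|\) (a standard single-qubit simplification we use here; the helper names below are ours):

```python
import numpy as np

def robustness_of_imaginarity_qubit(rho):
    # For a qubit, the minimal admixture making (rho + s*tau)/(1+s) real
    # cancels the sigma_y Bloch component: I_R(rho) = |r_y| = 2*|Im rho_{01}|.
    return 2 * abs(rho[0, 1].imag)

def fidelity_of_imaginarity_qubit(rho):
    # Eq. (3): F_I(rho) = (1 + I_R(rho)) / 2.
    return (1 + robustness_of_imaginarity_qubit(rho)) / 2

ket = np.array([1, 1j]) / np.sqrt(2)          # the imbit of Eq. (1)
imbit = np.outer(ket, ket.conj())

real_state = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
```

The imbit attains the maximal value \(F_{\text{I}}=1\), while any real qubit state gives \(F_{\text{I}}=1/2\), matching the statements above.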
In entanglement theory, one maximally entangled qubit state (ebit) has a clear operational meaning: it can be used to teleport the state of an unknown qubit deterministically to a remote lab. In imaginarity theory, if all the devices are restricted to implement ROs, e.g., we have only half-wave plates in an optical setup [18; 20], we can still prepare arbitrary states or implement arbitrary measurements if we get one imbit at hand. We refer to the Appendix for more details. ## II Bipartite imaginarity theory The results studied so far concern imaginarity as a resource in a single physical system. We now extend our considerations to the bipartite setting. As mentioned earlier, the task involves a bipartite state \(\rho^{AB}\) shared by Alice and Bob, and the goal is to maximize imaginarity on Bob's side under LQRCC. If both parties are restricted to real operations, the corresponding set is called local real operations and classical communication (LRCC) [40]. It is clear that via LQRCC it is possible to create only states of the form \[\rho_{\mathrm{QR}}=\sum_{j}p_{j}\,\rho_{j}^{A}\otimes\sigma_{j}^{B}, \tag{4}\] where \(\rho_{j}^{A}\) is an arbitrary state on Alice's side, and \(\sigma_{j}^{B}\) is a real state on Bob's side. States of this form will be called _Quantum-Real (QR)_. In the Appendix, we show that the Choi matrices corresponding to LQRCC are "invariant" under partial transpose over Bob (Bob is restricted to real operations). This also holds for more general LQRCC maps, which are trace non-increasing (similar to SLOCC in entanglement theory). Using this, we now show that, for an arbitrary initial state \(\rho_{AB}\) and a target pure state \(|\psi_{A^{\prime}B^{\prime}}\rangle\), the optimal achievable fidelity for a given probability of success \(p\) (given by \(F_{p}\)) can be upper bounded by an SDP.
**Theorem 1**.: _The achievable fidelity for a given probability of success, \(F_{p}(\rho_{AB}\xrightarrow{LQRCC}|\psi_{A^{\prime}B^{\prime}}\rangle)\), of transforming \(\rho_{AB}\) into \(|\psi_{A^{\prime}B^{\prime}}\rangle\) via LQRCC operations can be upper bounded by the following semidefinite programme. Maximise:_ \[\frac{1}{p}\,\text{Tr}\left(X_{ABA^{\prime}B^{\prime}}\,\rho_{AB}^{T}\otimes|\psi_{A^{\prime}B^{\prime}}\rangle\langle\psi_{A^{\prime}B^{\prime}}|\right) \tag{5}\] _under the constraints,_ \[X_{ABA^{\prime}B^{\prime}}\geq 0,\;X_{ABA^{\prime}B^{\prime}}^{T_{BB^{\prime}}}=X_{ABA^{\prime}B^{\prime}},\;\text{Tr}_{A^{\prime}B^{\prime}}\,X_{ABA^{\prime}B^{\prime}}\leq\mathbbm{1}_{AB},\,\,\text{and}\] \[\text{Tr}\left(X_{ABA^{\prime}B^{\prime}}\,\rho_{AB}^{T}\otimes\mathbbm{1}_{A^{\prime}B^{\prime}}\right)=p. \tag{6}\] In the case of LRCC operations, one has to add an additional constraint, given by \(X_{ABA^{\prime}B^{\prime}}^{T_{AA^{\prime}}}=X_{ABA^{\prime}B^{\prime}}\). For the details of the proof, please refer to the Appendix. In the special case when the target state is a local pure state of Bob, \(|\psi_{B^{\prime}}\rangle\), one can replace \(|\psi_{A^{\prime}B^{\prime}}\rangle\) by \(|0\rangle\otimes|\psi_{B^{\prime}}\rangle\) in the objective function. ## III Assisted imaginarity distillation Having extended the theory of imaginarity to multipartite systems, we are now ready to present assisted imaginarity distillation. In this task, Alice and Bob aim to extract imaginarity on Bob's side by applying LQRCC operations, which is in analogy to assisted entanglement distillation [41; 42; 43] and assisted distillation of quantum coherence [44]. We assume that Alice and Bob share an arbitrary mixed state \(\rho^{AB}\), and the process is performed on a single copy of the state and only one-way classical communication from Alice to Bob is used.
If Alice performs a general measurement \(\left\{M_{j}^{A}\right\}\) on her side, the probability \(p_{j}\) and the corresponding post-measurement state of Bob \(\rho_{j}^{B}\) are given respectively by \(p_{j}=\mathrm{Tr}\left[\left(M_{j}^{A}\otimes\mathbbm{1}^{B}\right)\rho^{AB}\right]\) and \(\rho_{j}^{B}=(1/p_{j})\,\mathrm{Tr}_{A}\left[\left(M_{j}^{A}\otimes\mathbbm{1}^{B}\right)\rho^{AB}\right]\). As a figure of merit we now introduce the _assisted fidelity of imaginarity_, quantifying the maximal single-shot fidelity between Bob's final state and the maximally imaginary state \(\left|\hat{+}\right\rangle\): \[F_{a}\left(\rho^{AB}\right)=\max_{\left\{M_{j}^{A},\Lambda_{j}\right\}}\sum_{j}p_{j}F\left(\Lambda_{j}\left[\rho_{j}^{B}\right],\,\left|\hat{+}\right\rangle\!\left\langle\hat{+}\right|\right). \tag{7}\] The maximum is taken over all POVMs on Alice's side, and all real operations \(\Lambda_{j}\) on Bob's side. For two-qubit states, we can derive the exact analytic expression. Consider a two-qubit state \(\rho^{AB}\), which can be written as \(\rho=\left(\mathbbm{1}_{4}+\mathbf{a}\cdot\mathbf{\sigma}\otimes\mathbbm{1}+\mathbbm{1}\otimes\mathbf{b}\cdot\mathbf{\sigma}+\sum_{kl}E_{kl}\sigma_{k}\otimes\sigma_{l}\right)/4\), where the \(\sigma_{k}\)'s are Pauli matrices, \(\mathbf{a}=\left(a_{1},a_{2},a_{3}\right)\) and \(\mathbf{b}=\left(b_{1},b_{2},b_{3}\right)\) describe the local Bloch vectors of Alice and Bob, respectively, and \(E_{kl}=\mathrm{Tr}\left(\sigma_{k}\otimes\sigma_{l}\rho\right)\). Equipped with these tools, we are now ready to give a closed expression for the assisted fidelity of imaginarity for all two-qubit states. **Theorem 2**.: _For any two-qubit state \(\rho^{AB}\) the assisted fidelity of imaginarity is given by_ \[F_{a}\left(\rho^{AB}\right)=\frac{1}{2}\left(1+\max\left\{\left|b_{2}\right|,\left|\mathbf{s}\right|\right\}\right).
\tag{8}\] _where the vector \(\mathbf{s}=\left(E_{12},E_{22},E_{32}\right)\)._ The proof is presented in the Appendix. We will now extend our results to stochastic state transformations, where the goal is to achieve a transformation with the maximum possible probability. To this end, we introduce the _geometric measure of imaginarity_ and the _concurrence of imaginarity_, presented in Refs. [40; 45] respectively as \[\mathcal{I}_{g}\left(\rho\right) =\frac{1-\sqrt{F\left(\rho,\rho^{T}\right)}}{2}, \tag{9a}\] \[\mathcal{I}_{c}\left(\rho\right) =\max\left\{0,\lambda_{1}-\sum_{j>1}\lambda_{j}\right\}, \tag{9b}\] where \(\left\{\lambda_{1},\lambda_{2},\ldots\right\}\) are the eigenvalues (in decreasing order) of \(\left(\sqrt{\rho}\rho^{T}\sqrt{\rho}\right)^{\frac{1}{2}}\). With this in place, we now extend this scenario to the bipartite regime where we will show how Alice can assist Bob (\(\rho^{B}\)) to get the target state \(\sigma^{B}\) with optimal probability. Now we use the following parameterization: \(\sin^{2}\alpha=\left[1-\mathcal{I}_{c}\left(\rho^{B}\right)\right]/2\) and \(\sin^{2}\beta=\mathcal{I}_{g}\left(\sigma^{B}\right)\) with \(\alpha,\,\beta\in(0,\frac{\pi}{2})\). **Lemma 3**.: _For any bipartite pure state \(\psi^{AB}\), the optimal probability of Bob preparing a local state \(\sigma^{B}\), getting assistance from Alice, is given by_ \[P\left(\psi^{AB}\rightarrow\sigma^{B}\right)=\min\left\{\frac{\sin^{2}\alpha} {\sin^{2}\beta},1\right\}. \tag{10}\] The proof of Lemma 3 is presented in the Appendix. In Ref. [40] the authors provided tight continuity bounds for the geometric measure. Using these bounds, along with Lemma 3, we can provide an analytical expression for the optimal probability of Bob preparing a local state with an allowed error, with assistance from Alice. Similarly, we can also find a closed expression for the optimal achievable fidelity, for a given probability of success. The following theorem collects these results. 
**Theorem 4**.: _For any bipartite pure state \(\psi^{AB}\), the optimal probability \(P_{f}\) of Bob preparing a local state \(\sigma^{B}\), with a fidelity \(f\) via assistance from Alice, is given by_ \[P_{f}\left(\psi^{AB}\rightarrow\sigma^{B}\right)=\begin{cases}1&\text{for } \alpha-\beta+\gamma\geq 0\\ \frac{\sin^{2}\alpha}{\sin^{2}\left(\beta-\gamma\right)}&\text{otherwise}\end{cases} \tag{11}\] _where \(\gamma=\cos^{-1}\sqrt{f}\)._ _The optimal achievable fidelity for a given probability of success \(p\), can be expressed as:_ \[F_{p}\left(\psi^{AB}\rightarrow\sigma^{B}\right)=\begin{cases}1&\text{for }\ p\leq\frac{\sin^{2}\alpha}{\sin^{2}\beta}\\ \cos^{2}\left[\beta-\sin^{-1}\!\left(\frac{\sin\alpha}{\sqrt{p}}\right)\right] &\text{otherwise}.\end{cases} \tag{12}\] Details of the proof for the above theorem can be found in the Appendix. _Imaginarity in channel discrimination_--We will now discuss the role of imaginarity in channel discrimination. Specifically, here we focus on the variant of channel discrimination which we call _ancilla-free_, in that it does not involve an ancillary system (cf. Refs. [46; 47]). It can be regarded as a game, where one has access to a "black box" with the promise that it implements a quantum channel \(\Lambda_{j}\) with probability \(p_{j}\). The goal of the game is to guess \(\Lambda_{j}\) by choosing optimal initial state \(\rho\) and positive operator-valued measure (POVM) \(\left\{M_{j}\right\}\), which is used to distinguish the \(\Lambda_{j}\left(\rho\right)\)'s. Theoretically, the probability of guessing the channel \(\Lambda_{j}\) correctly is given as \[p_{\text{succ}}\left(\rho,\left\{p_{j},\Lambda_{j}\right\},\left\{M_{j} \right\}\right)=\sum_{j}p_{j}\,\mathrm{Tr}\left[M_{j}\Lambda_{j}\left(\rho \right)\right]. \tag{13}\] Recently, it has been shown that _any_ quantum resource has an operational advantage in the channel discrimination task [46; 47], namely a resource state \(\rho\) (i.e. 
a quantum state that is not free) outperforms any free \(\sigma\) in a specific channel discrimination task. Now we put the above protocol into imaginarity theory by considering the task of discrimination of real channels. To see an advantage, we need imaginarity both in the probe state and in the measurement, since, as we show in the Appendix, this task is equivalent to LOCC discrimination of their corresponding normalized Choi states, in which we need imaginarity in the measurements of both particles. To better illustrate this idea, we will provide an example of two real channels that cannot be distinguished in the ancilla-free scenario by using only real states and measurements, but they become instead perfectly distinguishable once we have access to imaginarity for states and measurements. To this end, let us consider two real qubit channels prepared with equal probability: \[\begin{split}\mathcal{N}\,:&\rho\mapsto\frac{1}{ 2}\left(\rho+\sigma_{x}\,\sigma_{z}\,\rho\,\sigma_{z}\,\sigma_{x}\,\right),\\ \mathcal{M}\,:&\rho\mapsto\frac{1}{2}\left(\, \sigma_{x}\rho\,\sigma_{x}+\sigma_{z}\rho\,\sigma_{z}\,\right),\end{split} \tag{14}\] where \(\sigma_{x}\) and \(\sigma_{z}\) are Pauli matrices. If we input a real state \(\rho\) into either of these two channels, they will produce exactly the same output \(\mathbbm{1}/2\), thus we cannot distinguish them better than making a random guess, even if we allowed imaginarity in our measurements. On the other hand, if imaginarity is forbidden in measurements, no matter how we choose the probe state (even if it is non-real), we still cannot distinguish them at all, because the only way to discriminate between the outputs of the two channels would be to perform a measurement associated with the \(\sigma_{y}\) Pauli matrix.
Indeed, if the probe state has an off-diagonal entry \(\rho_{01}\) with non-zero imaginary part, wherever the output of \(\mathcal{N}\) has \(\mathrm{Im}\,\rho_{01}\), the output of \(\mathcal{M}\) will show \(-\mathrm{Im}\,\rho_{01}\) in its place. Only if we implement a projective measurement of \(\sigma_{y}\) can we perfectly distinguish these two channels. Therefore, the only way to achieve a success probability better than random guessing is to introduce imaginarity into both the initial state \(\rho\) and the measurement. It is worth noting that the same two channels \(\mathcal{N}\) and \(\mathcal{M}\) become perfectly distinguishable even with no imaginarity in the probe state and in the measurement if we remove the requirement of ancilla-free discrimination. If we allow an ancilla \(R\), we need to consider a bipartite input state \(\rho^{RA}\) and a bipartite POVM \(\left\{M_{1}^{RA},M_{2}^{RA}\right\}\), with success probability \[p_{\mathrm{succ}}\left(\rho,\left\{\frac{1}{2},\Lambda_{j}\right\},\left\{M_{ j}\right\}\right)=\frac{1}{2}\sum_{j=1}^{2}\mathrm{Tr}\left[M_{j}^{RA}\left(\mathcal{I}^{R} \otimes\Lambda_{j}\right)\left(\rho^{RA}\right)\right], \tag{15}\] where \(\Lambda_{1}=\mathcal{N}\) and \(\Lambda_{2}=\mathcal{M}\). Now, let us take \(\rho^{RA}=\phi^{+}=|\phi^{+}\rangle\!\langle\phi^{+}|\), with \(|\phi^{+}\rangle=\frac{1}{\sqrt{2}}\left(|00\rangle+|11\rangle\right)\).
If we feed \(\phi^{+}\) to both channels, we get \[\begin{split}\mathcal{I}\otimes\mathcal{N}\left(\phi^{+}\right) &=\frac{1}{2}\left(|\phi^{+}\rangle\!\langle\phi^{+}|+|\psi^{-} \rangle\!\langle\psi^{-}|\right),\\ \mathcal{I}\otimes\mathcal{M}\left(\phi^{+}\right)& =\frac{1}{2}\left(|\phi^{-}\rangle\!\langle\phi^{-}|+|\psi^{+} \rangle\!\langle\psi^{+}|\right),\end{split} \tag{16}\] where \(|\phi^{-}\rangle=\frac{1}{\sqrt{2}}\left(|00\rangle-|11\rangle\right)\), \(|\psi^{+}\rangle=\frac{1}{\sqrt{2}}\left(|01\rangle+|10\rangle\right)\), and \(|\psi^{-}\rangle=\frac{1}{\sqrt{2}}\left(|01\rangle-|10\rangle\right)\). As noted in Ref. [18], these two output states can be perfectly distinguished by the real POVM \(\left\{M_{1},M_{2}\right\}\), where \[\begin{split}& M_{1}=|\hat{+}\rangle\!\langle\hat{+}|\otimes|\hat{-}\rangle\!\langle\hat{-}|+|\hat{-}\rangle\!\langle\hat{-}|\otimes|\hat{+}\rangle\!\langle\hat{+}|,\\ & M_{2}=|\hat{+}\rangle\!\langle\hat{+}|\otimes|\hat{+}\rangle\!\langle\hat{+}|+|\hat{-}\rangle\!\langle\hat{-}|\otimes|\hat{-}\rangle\!\langle\hat{-}|.\end{split} \tag{17}\] This shows that the two real channels can be distinguished perfectly with the aid of an ancilla, only using real states and real measurements. ## III Experiments We experimentally implement the aforementioned assisted imaginarity distillation and channel discrimination protocols. The whole experimental setup is illustrated in Fig. 1, and consists of three modules: module \(\mathbf{A}\) enables us to prepare a two-qubit entangled state via a spontaneous parametric down-conversion (SPDC) process: \[|\psi\rangle^{AB}=a\,|00\rangle+b\,|11\rangle, \tag{18}\] with arbitrary \(a\) and \(b\) satisfying \(|a|^{2}+|b|^{2}=1\), which can be tuned by changing the angles of the 404 nm HWP and QWP. Note that we have conventionally set \(|0\rangle:=|H\rangle\) and \(|1\rangle:=|V\rangle\).
Module \(\mathbf{B}\) utilizes an unbalanced Mach-Zehnder interferometer together with module \(\mathbf{A}\) to prepare a class of Werner states: \[\rho^{AB}=p\,|\phi^{+}\rangle\!\langle\phi^{+}|+(1-p)\,\frac{\mathbb{1}}{4}, \tag{19}\] where \(p\) denotes the purity of the two-qubit state. Module \(\mathbf{B}\) also allows us to implement single-qubit channels in the ancilla-free scenario. Module \(\mathbf{C}\) allows us to perform quantum-state tomography (QST) to identify the final two-qubit polarization-encoded states concerned, or perform assisted imaginarity distillation by performing a local measurement on Alice's photons and identifying the exact amount of imaginarity by QST of Bob's state. Moreover, this module allows us to implement channel discrimination by performing a local measurement on the polarization state of a single photon when the other is used as a trigger. We refer to the Appendix for more details. Figure 1: **Experimental setup**. The whole experimental setup is divided into three modules: \(\mathbf{A}\) Entangled source, \(\mathbf{B}\) state preparation & channel implementation, and \(\mathbf{C}\) discrimination & tomography. The optical components include: QP, quartz plate; SPD, single photon detectors; BS, beamsplitters; AA, adjustable aperture; PBS, polarizing beamsplitter; QWP, quarter-wave plate; HWP, half-wave plate. We then perform proof-of-principle experiments of the one-shot assisted imaginarity distillation and the ancilla-free channel discrimination tasks. Results are shown in Figs. 2 and 3, respectively. For assisted imaginarity distillation, we experimentally prepare two classes of two-qubit states. The first class of states is as in Eq. (18). Theoretically, the upper bound for single-shot assisted imaginarity distillation can be calculated from Theorem 2 as \(F_{1}\left(\left|\psi\right\rangle^{AB}\right)=2\left|ab\right|\). From Fig.
2(a), we can see that the experimentally obtained average imaginarity after assistance (blue disks) is approximately equal to the experimentally obtained upper bound (red disks) within reasonable experimental imperfections. The second class of states are generated as Werner states in Eq. (19). Theoretically, the maximum average fidelity of imaginarity after assistance is calculated as \(F_{1}(\rho^{AB})=p\). Fig. 2(b) details the relevant experimental results. From both results we see that the experimentally obtained average fidelity of imaginarity and the upper bound obtained from two-qubit state tomography agree well with the theoretical predictions. We then show the usefulness of imaginarity in channel discrimination for various discrimination tasks. Fig. 3 details these results for two discrimination tasks. The first discrimination task involves two channels given by \[\begin{split}&\mathcal{M}\left(\rho,\,p\right)=p\rho+\left(1-p \right)\sigma_{x}\,\sigma_{z}\,\rho\,\sigma_{z}\,\sigma_{x},\\ &\mathcal{N}\left(\rho\right)=\frac{1}{2}\left(\sigma_{x}\rho\, \sigma_{x}+\sigma_{z}\rho\,\sigma_{z}\right).\end{split} \tag{20}\] Note that both channels preserve real density matrices. The experimental results of this discrimination task are shown in Fig. 3(a). If we can use imaginarity in measurements and initial states, we can perfectly distinguish the two channels [orange disks in Fig. 3(a)]. However, if we allow only real density matrices as initial states or real measurement operators, we get a theoretical optimal guessing probability of \(1/2+\left|2p-1\right|/4\) for the ancilla-free channel discrimination. Experimental data are in agreement with the theoretical predictions [see green disks in Fig. 3(a)]. Here we note that the two channels are exactly the same as in Eqs. (14) when \(p=1/2\).
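Both the perfect discrimination with imaginarity and the \(1/2+\left|2p-1\right|/4\) bound for real probes can be checked numerically from the Helstrom bound \(p_{\mathrm{succ}}=\frac{1}{2}+\frac{1}{4}\left\|\mathcal{M}(\rho)-\mathcal{N}(\rho)\right\|_{1}\); for a real probe the optimal (Helstrom) measurement is itself real, so no generality is lost by optimizing over all measurements. The following sketch is our own illustration, not part of the experimental analysis, and assumes only the channel definitions in Eq. (20):

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
I = np.eye(2)

def bloch_state(nx, ny, nz):
    """Qubit density matrix with Bloch vector (nx, ny, nz)."""
    return 0.5 * (I + nx * X + ny * Y + nz * Z)

def chan_M(rho, p):
    """Eq. (20): M(rho, p) = p rho + (1-p) (sigma_x sigma_z) rho (sigma_z sigma_x)."""
    K = X @ Z
    return p * rho + (1 - p) * K @ rho @ K.conj().T

def chan_N(rho):
    """Eq. (20): N(rho) = (sigma_x rho sigma_x + sigma_z rho sigma_z) / 2."""
    return 0.5 * (X @ rho @ X + Z @ rho @ Z)

def helstrom(p, rho):
    """Optimal guessing probability for equiprobable M vs N on probe rho."""
    diff = chan_M(rho, p) - chan_N(rho)
    return 0.5 + 0.25 * np.abs(np.linalg.eigvalsh(diff)).sum()

for p in (0.0, 0.25, 0.5, 0.9):
    # Imaginary probe (sigma_y eigenstate): perfect discrimination for every p.
    assert np.isclose(helstrom(p, bloch_state(0, 1, 0)), 1.0)
    # Real probes live in the x-z plane of the Bloch ball; grid-search them.
    best_real = max(
        helstrom(p, bloch_state(np.cos(t), 0, np.sin(t)))
        for t in np.linspace(0, 2 * np.pi, 361)
    )
    assert np.isclose(best_real, 0.5 + abs(2 * p - 1) / 4)
print("formulas verified")
```

Since \(\sigma_{x}\sigma_{z}\rho\sigma_{z}\sigma_{x}=\sigma_{y}\rho\sigma_{y}\), both channels erase the \(x\) and \(z\) Bloch components identically and differ only in how they treat the \(y\) component, which is why the check above reproduces the stated optima.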
For the second discrimination task, we consider \[\begin{split}&\mathcal{M}\left(\rho,\,w\right)=w\rho+\left(1-w \right)\frac{\mathbb{1}}{2},\\ &\mathcal{N}\left(\rho\right)=\frac{1}{2}\left(\sigma_{x}\rho\, \sigma_{x}+\sigma_{z}\rho\,\sigma_{z}\right).\end{split} \tag{21}\] The results are shown in Fig. 3(b). If non-real states and measurement operators are allowed, then we get a theoretical optimal distinguishing probability of \(3/4+w/4\), which is plotted as the upper orange line in Fig. 3(b). The relevant experimentally obtained distinguishing probabilities are shown as orange disks. If imaginarity is prohibited in this task, then the optimal distinguishing probability reads \(1/2+w/4\), and is plotted as the lower green line, together with experimental values represented by green disks. We can draw a similar conclusion to the first discrimination task. ## IV Discussion The results presented above are mainly based on the new set of LQRCC operations which was introduced and studied in this article. We considered assisted imaginarity distillation in this setting, and completely solved the problem for general two-qubit states. Moreover, we discussed the task of single-shot assisted imaginarity distillation for arbitrary pure states in higher dimensions. The usefulness of imaginarity in channel discrimination is both theoretically and experimentally shown for a class of real channels. There are in fact many scenarios of practical relevance where the task of assisted imaginarity distillation can play a central role. For instance, think of a remote or inaccessible system on which imaginarity is needed as a resource (e.g., in the task of local discrimination of quantum states): our results give optimal prescriptions to inject such imaginarity into the remote target by acting on an ancilla. The results provide insight into both the operational characterization as well as the mathematical formalism of the resource theory of imaginarity, contributing to a better understanding of this fundamental resource. Figure 2: **Experimental results for assisted imaginarity distillation**. (a) Initial pure states \(\left|\psi\right\rangle^{AB}=a|00\rangle+b|11\rangle\); (b) initial Werner states \(\rho^{AB}=p\,|\phi^{+}\rangle\!\langle\phi^{+}|+(1-p)\,\mathbb{1}/4\). In both experiments, red disks represent the calculated fidelity of imaginarity by assistance using Theorem 2 for experimentally reconstructed two-qubit states, and blue disks represent the actually obtained average fidelity of imaginarity in experiments using the optimal measurement on Alice's system. The work at the University of Science and Technology of China is supported by the National Key Research and Development Program of China (No. 2018YFA0306400), the National Natural Science Foundation of China (Grants Nos. 12134014, 12104439, 61905234, 11974335, 11574291, and 11774334), the Key Research Program of Frontier Sciences, CAS (Grant No. QYZDYSSW-SLH003), USTC Research Funds of the Double First-Class Initiative (Grant No. YD2030002007) and the Fundamental Research Funds for the Central Universities (Grant No. WK2470000035, WK2030000063). The work in Poland was supported by the National Science Centre, Poland, within the QuantERA II Programme (No 2021/03/Y/ST2/00178, acronym ExTRaQT) that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733 and the "Quantum Optical Technologies" project, carried out within the International Research Agendas programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund. CMS acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grant "The power of quantum resources" RGPIN-2022-03025 and the Discovery Launch Supplement DGECR-2022-00119.
2310.03974
On rainbow thresholds
Resolving a recent problem of Bell, Frieze, and Marbach, we establish both the threshold result of Frankston--Kahn--Narayanan--Park, and its strengthening by Spiro, in the rainbow setting. This has applications to the thresholds for rainbow structures in random graphs where each edge is given a uniformly random color from a set of given colors.
Jie Han, Xiaofan Yuan
2023-10-06T02:13:37Z
http://arxiv.org/abs/2310.03974v2
# On rainbow thresholds ###### Abstract. Resolving a recent problem of Bell, Frieze, and Marbach, we establish the threshold result of Frankston-Kahn-Narayanan-Park in the rainbow setting. JH is partially supported by Natural Science Foundation of China (12371341). ## 1. Introduction Threshold functions are central to the study of random discrete structures. In the last few years, we have witnessed celebrated breakthroughs, e.g. by Frankston, Kahn, Narayanan, and Park [9], and Park and Pham [14]. The goal of this note is to extend this study to the rainbow setting, which has been a focus for a series of papers [2, 5, 6, 10], and formulated recently in [4]. ### Thresholds We start with some notation following [9, 17]. Given a finite set \(X\), a family \(\mathcal{F}\subseteq 2^{X}\) is called _increasing_ if \(B\supseteq A\in\mathcal{F}\Rightarrow B\in\mathcal{F}\). For a given \(X\) and \(p\in[0,1]\), \(\mu_{p}\) is the product measure on \(2^{X}\) given by \(\mu_{p}(S):=p^{|S|}(1-p)^{|X\setminus S|}\) for any \(S\subseteq X\). For an increasing \(\mathcal{F}\), the _threshold_\(p_{\mathrm{c}}(\mathcal{F})\) is the unique \(p\) for which \(\mu_{p}(\mathcal{F})=\frac{1}{2}\). We say \(\mathcal{F}\) is \(p\)_-small_ if there is a \(\mathcal{G}\subseteq 2^{X}\) such that \(\mathcal{F}\subseteq\langle\mathcal{G}\rangle:=\{T:\exists S\in\mathcal{G},S \subseteq T\}\) and \(\sum_{S\in\mathcal{G}}p^{|S|}\leq\frac{1}{2}\). Then \(q(\mathcal{F}):=\max\{p:\mathcal{F}\text{ is $p$-small}\}\), which is defined as the _expectation-threshold of_\(\mathcal{F}\). Let \(\ell(\mathcal{F})\) be the maximum size of minimal members of \(\mathcal{F}\). The following result was conjectured by Kahn and Kalai [11], and resolved by Park and Pham [14] recently. 
**Theorem 1**.: _[_14_]_ _There exists a constant \(K\) such that for any finite \(X\) and increasing family \(\mathcal{F}\subseteq 2^{X}\), we have_ \[q(\mathcal{F})\leq p_{c}(\mathcal{F})\leq Kq(\mathcal{F})\log\ell(\mathcal{F}).\] This improves on another recent result of Frankston, Kahn, Narayanan, and Park [9], who showed a similar inequality resolving a conjecture of Talagrand [17], that \[p_{c}(\mathcal{F})\leq Kq_{f}(\mathcal{F})\log\ell(\mathcal{F}) \tag{1}\] (\(q_{f}\) will be defined later). ### Rainbow Thresholds The goal of this note is to establish rainbow versions of these results, as recently asked by Bell, Frieze, and Marbach [4], who also made a first attempt. A motivating question for thresholds in the rainbow setting could be the following (e.g. for random graphs). **Problem 1**.: _Determine the optimal \(p=p(n)\) such that if the edges of \(G(n,p)\) are randomly colored using \(c\geq n\) colors, then the resulting graph contains a rainbow colored Hamilton cycle._ In particular, Problem 1 has been resolved by Bal and Frieze [2], while previous results had to use more colors. See also [8] for a more accurate bound on \(p\). To introduce this problem in the abstract setting we first define our (random) object and its threshold. Given an increasing family \(\mathcal{F}\), let \(\mathcal{H}_{\mathcal{F}}\) be the family of minimal elements of \(\mathcal{F}\). We note that since \(\mathcal{F}=\langle\mathcal{H}_{\mathcal{F}}\rangle\), \(\mathcal{F}\) and \(\mathcal{H}_{\mathcal{F}}\) are uniquely determined by each other. Take an integer \(k\geq\ell(\mathcal{F})\) and let \([k]:=\{1,2,\ldots,k\}\). Our random object will be a randomly colored binomial random subset of \(X\).
Color each element of \(X\) independently and uniformly at random from the color set \([k]\), and let \(X_{p}\) be the random subset of \(X\) in which each element is retained independently with probability \(p\). Call a set _rainbow_ if its elements receive pairwise distinct colors. For an increasing family \(\mathcal{F}\), let \(\mu^{\prime}_{p}(\mathcal{F})\) denote the probability that \(X_{p}\) contains a rainbow edge of \(\mathcal{H}_{\mathcal{F}}\). We show in Section 2 that \(\mu^{\prime}_{p}(\mathcal{F})\) is strictly increasing in \(p\), so we may define the _rainbow threshold_ \(p_{c}^{k}(\mathcal{F})\) as the unique \(p\) for which \(\mu^{\prime}_{p}(\mathcal{F})=\frac{1}{2}\). We also recall Talagrand's fractional relaxation of the expectation-threshold: \(\mathcal{F}\) is _weakly \(p\)-small_ if there is \(g:2^{X}\to[0,1]\) such that \(\sum_{S\subseteq T}g(S)\geq 1\) for every \(T\in\mathcal{F}\) and \(\sum_{S\subseteq X}g(S)p^{|S|}\leq\frac{1}{2}\), and the _fractional expectation-threshold_ is \(q_{f}(\mathcal{F}):=\max\{p:\mathcal{F}\text{ is weakly $p$-small}\}\). Our main result is a rainbow analogue of (1). **Theorem 2**.: _There is an absolute constant \(K\) such that for any finite \(X\), any increasing family \(\mathcal{F}\subseteq 2^{X}\) and any integer \(k\geq\ell(\mathcal{F})\), we have_ \[q_{f}(\mathcal{F})\leq p_{c}^{k}(\mathcal{F})\leq Kq_{f}(\mathcal{F})\log\ell(\mathcal{F}).\] ### Spreadness A (multi-)hypergraph \(\mathcal{H}\) on the vertex set \(X\) is _\(r\)-bounded_ if every edge of \(\mathcal{H}\) has size at most \(r\), and _\(\kappa\)-spread_ if \(|\mathcal{H}\cap\langle S\rangle|\leq\kappa^{-|S|}|\mathcal{H}|\) for every \(S\subseteq X\). Similarly, a probability measure \(\nu\) on \(2^{X}\) is _\(q\)-spread_ if \(\nu(\langle S\rangle)\leq q^{|S|}\) for every \(S\subseteq X\).
Talagrand [17] observed that for increasing \(\mathcal{F}\), \(q\geq q_{f}(\mathcal{F})\) implies that there is a \((2q)\)-spread measure on \(\mathcal{F}\), and therefore, for applications, it usually _suffices_ to study/verify the spreadness condition. Now we are ready to state an application-friendly version of our main result, phrased in the spreadness condition. Note that its original ("uncolored") version was proved in [9]. **Theorem 3**.: _There is an absolute constant \(C>0\) such that the following holds. Let \(\mathcal{H}\) be an \(\ell\)-bounded \(\kappa\)-spread (multi-)hypergraph. Let \(X=V(\mathcal{H})\) be randomly colored from \([k]\), where \(k\geq\ell\). If_ \[p\geq\frac{C\log\ell}{\kappa},\] _then with probability \(1-o_{\ell\to\infty}(1)\), \(X_{p}\) contains a rainbow edge of \(\mathcal{H}\)._ Bell, Frieze and Marbach [4] proved Theorem 3 with an additional assumption that \(\kappa=\Omega(\ell)\) and conjectured that it can be removed. Quick consequences of Theorem 3 include rainbow thresholds for Hamilton cycles in random (hyper)graphs and bounded degree spanning trees in random graphs. These results are already covered by the result of [4], and the first one was established earlier by Bal and Frieze [2] and Dudek, English and Frieze [6]. We also provide new applications of Theorem 3 in Section 6. ### Transversal versions By the above observation of Talagrand and routine computations to push the error probabilities (see [9]), Theorem 3 implies the upper bound in Theorem 2. To prove Theorem 3, we first prove the following transversal version and then couple these two models following an ingenious coupling idea of McDiarmid [13], see other applications by Ferber [7] and Ferber-Krivelevich [8]. The transversal variant was already mentioned in [4].
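To make the spreadness condition concrete, here is a small brute-force computation (our own illustration, not from the paper): for the hypergraph \(\mathcal{H}\) of perfect matchings of the complete graph \(K_{2m}\), the best constant in the bound \(|\mathcal{H}\cap\langle S\rangle|\leq\kappa^{-|S|}|\mathcal{H}|\) can be found by checking all sets \(S\) contained in some edge of \(\mathcal{H}\) (other \(S\) satisfy \(|\mathcal{H}\cap\langle S\rangle|=0\) and impose no constraint).

```python
from itertools import combinations

def perfect_matchings(n):
    """All perfect matchings of K_n (n even), as frozensets of edges (a, b), a < b."""
    def rec(rem):
        if not rem:
            yield frozenset()
            return
        a = rem[0]
        for b in rem[1:]:
            rest = tuple(v for v in rem if v not in (a, b))
            for m in rec(rest):
                yield m | {(a, b)}
    return list(rec(tuple(range(n))))

def spread(H):
    """Largest kappa with |H ∩ <S>| <= kappa^{-|S|} |H| for all nonempty S
    contained in at least one edge of H."""
    kappa = float("inf")
    for E in H:
        for s in range(1, len(E) + 1):
            for S in combinations(sorted(E), s):
                deg = sum(1 for F in H if set(S) <= F)  # |H ∩ <S>|
                kappa = min(kappa, (len(H) / deg) ** (1.0 / s))
    return kappa

H4 = perfect_matchings(4)   # 3 matchings, each with 2 edges
H6 = perfect_matchings(6)   # 15 matchings, each with 3 edges
print(len(H4), round(spread(H4), 4))  # 3 1.7321  (= sqrt(3))
print(len(H6), round(spread(H6), 4))
```

For \(K_{4}\) the binding constraint comes from a whole matching \(S\) (\(|S|=2\), degree \(1\)), giving \(\kappa=3^{1/2}\); such exhaustive checks are of course only feasible for toy instances, while applications establish spreadness by direct counting arguments.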
**Theorem 4**.: _There is an absolute constant \(C>0\) such that the following holds. Let \(\mathcal{H}\) be an \(r\)-bounded, \(\kappa\)-spread (multi-)hypergraph. Let \(X=V(\mathcal{H})\) and \(k\geq r\), and let \(X^{\prime}=X\times[k]\). If_ \[p\geq\frac{C\log r}{\kappa},\] _then with probability \(1-o_{r\to\infty}(1)\), \(X^{\prime}_{p/k}\) contains a set \(S^{\prime}=\{(x_{1},i_{1}),(x_{2},i_{2}),\ldots,(x_{t},i_{t})\}\), where \(\{x_{1},x_{2},\ldots,x_{t}\}\) is an edge of \(\mathcal{H}\) and \(i_{1},i_{2},\ldots,i_{t}\) are distinct numbers in \([k]\)._ Theorem 4 is called a _transversal_ version because we can naturally consider a family of hypergraphs \(\mathscr{F}=\{\mathcal{F}_{1},\mathcal{F}_{2},\ldots,\mathcal{F}_{k}\}\), where \(V(\mathcal{F}_{i})=X\times\{i\}\), and \(\{(x_{1},i),(x_{2},i),\ldots,(x_{t},i)\}\in\mathcal{F}_{i}\) if and only if \(\{x_{1},x_{2},\ldots,x_{t}\}\in\mathcal{H}\). Then our target objects are the edges that contain at most one vertex from each family. In the special case where \(k=r\) and \(\mathcal{H}\) is \(r\)-uniform, the target objects are the transversals in \(\mathscr{F}\), that is, each contains exactly one vertex from each family. The rest of this paper is organized as follows. In Section 2 we show that \(\mu^{\prime}_{p}\) is strictly increasing for increasing families when \(p\in(0,1)\), justifying our definition of the rainbow threshold. We then prove Theorem 4 and Theorem 3 in Sections 3 and 4, respectively. The proof of Theorem 2 is given in Section 5. We give some applications of our results in Section 6 and conclude with some further directions in Section 7. ## 2. Well-definedness of the rainbow threshold **Lemma 5**.: _Let \(X\) be a finite set and let \(X^{\prime}=X\times[k]\), where \(k\) is a positive integer.
For \(p\in(0,1)\), let \(\mu^{\prime}_{p}(S)\) be the discrete probability measure on \(2^{X^{\prime}}\) defined point-wise by_ \[\mu^{\prime}_{p}(S):=\begin{cases}(p/k)^{t}(1-p)^{|X|-t}&\text{ if all }x_{1},\ldots,x_{t}\text{ are distinct,}\\ 0&\text{ otherwise,}\end{cases}\] _for \(S=\{(x_{1},i_{1}),\ldots,(x_{t},i_{t})\}\subseteq X^{\prime}\). Let \(\mathcal{F}^{\prime}\subseteq 2^{X^{\prime}}\) be an increasing family, and assume \(\mathcal{F}^{\prime}\neq\emptyset,\ 2^{X^{\prime}}\). Then \(\mu^{\prime}_{p}(\mathcal{F}^{\prime})\) is strictly increasing as \(p\) increases._ Proof.: We will show it for a more general measure which is no longer symmetric among the elements in \(X\), and therefore we can prove it by considering the 'local' contribution of each element in \(X\). Let \(n=|X|\), and we may assume \(X=[n]\). Let \(p_{1},p_{2},\ldots,p_{n}\in(0,1)\). We define a probability measure \(\mu_{\mathbf{p}}=\mu_{p_{1},p_{2},\ldots,p_{n}}\) point-wise by, for \(S=\{(x_{1},i_{1}),\ldots,(x_{t},i_{t})\}\subseteq X^{\prime}\), \[\mu_{\mathbf{p}}(S):=\begin{cases}\frac{1}{k^{t}}\prod_{i\in\{x_{1},\ldots,x_{t }\}}p_{i}\prod_{j\in X\setminus\{x_{1},\ldots,x_{t}\}}(1-p_{j})&\text{ if all $x_{1},\ldots,x_{t}$ are distinct,}\\ 0&\text{ otherwise.}\end{cases}\] This generalizes our original model by allowing each element of the randomly colored \(X\) its own retention probability, and \(\mu_{\mathbf{p}}=\mu_{p}^{\prime}\) when \(p_{1}=p_{2}=\cdots=p_{n}=p\). Let \(p_{1}<p_{1}^{\prime}<1\). Let \(\mu\) denote \(\mu_{\mathbf{p}}\) and let \(\mu^{+}\) denote \(\mu_{p_{1}^{\prime},p_{2},\ldots,p_{n}}\), i.e., replacing \(p_{1}\) by \(p_{1}^{\prime}\) in the above formula. Next we will show that \(\mu^{+}(\mathcal{F}^{\prime})\geq\mu(\mathcal{F}^{\prime})\). Note that \(\mathcal{F}^{\prime}\) is increasing. For any \(S\in\mathcal{F}^{\prime}\) such that '1' never shows up as the first coordinate of the members of \(S\), let \(S_{i}:=\{(1,i)\}\cup S\) for \(i\in[k]\).
Then \(S_{1},\ldots,S_{k}\) are all in \(\mathcal{F}^{\prime}\) and they are distinct. That is, we can define a mapping \(\phi:\mathcal{F}^{\prime}\cap 2^{(X\setminus\{1\})\times[k]}\to\binom{ \mathcal{F}^{\prime}}{k}\), where \(\phi(S)=\{S_{1},\ldots,S_{k}\}\). We claim that \(\mu(\{S\}\cup\phi(S))=\mu^{+}(\{S\}\cup\phi(S))\), for each \(S\in\mathcal{F}^{\prime}\cap 2^{(X\setminus\{1\})\times[k]}\). That is, as \(p_{1}\) increases, the difference is canceled out within the tuple \((S,S_{1},\ldots,S_{k})\). Indeed, for \(S=\{(x_{1},i_{1}),\ldots,(x_{t},i_{t})\}\subseteq(X\setminus\{1\})\times[k]\), \[\mu(\{S\}\cup\phi(S))\] \[= \frac{1}{k^{t}}\prod_{i\in\{x_{1},\ldots,x_{t}\}}p_{i}\prod_{j\in X \setminus\{x_{1},\ldots,x_{t}\}}(1-p_{j})+\sum_{l=1}^{k}\frac{1}{k^{t+1}}\prod _{i\in\{1,x_{1},\ldots,x_{t}\}}p_{i}\prod_{j\in X\setminus\{1,x_{1},\ldots,x_ {t}\}}(1-p_{j})\] \[= \left(\frac{1}{k^{t}}\prod_{i\in\{x_{1},\ldots,x_{t}\}}p_{i}\prod _{j\in X\setminus\{1,x_{1},\ldots,x_{t}\}}(1-p_{j})\right)((1-p_{1})+p_{1})\] \[= \left(\frac{1}{k^{t}}\prod_{i\in\{x_{1},\ldots,x_{t}\}}p_{i}\prod _{j\in X\setminus\{1,x_{1},\ldots,x_{t}\}}(1-p_{j})\right)\left((1-p_{1}^{ \prime})+p_{1}^{\prime}\right)\] \[= \mu^{+}(\{S\}\cup\phi(S)).\] Clearly, \(\phi(S)\) and \(\phi(S^{\prime})\) are disjoint subsets of \(\mathcal{F}^{\prime}\), for any \(S\neq S^{\prime}\), and are disjoint from \(2^{(X\setminus\{1\})\times[k]}\). Let \[\mathcal{D}:=\mathcal{F}^{\prime}\setminus\bigcup_{S\in\mathcal{F}^{\prime} \cap 2^{(X\setminus\{1\})\times[k]}}(\{S\}\cup\phi(S)).\] Then \(\mathcal{D}\) (if not empty) and \(\{S\}\cup\phi(S)\) for all \(S\in\mathcal{F}^{\prime}\cap 2^{(X\setminus\{1\})\times[k]}\) form a partition of \(\mathcal{F}^{\prime}\), and each \(T\in\mathcal{D}\) has a member with '1' as its first coordinate. 
Hence, if \(\mathcal{D}\neq\emptyset\), then for each \(T=\{(1,i_{1}),(x_{2},i_{2}),\ldots,(x_{t},i_{t})\}\in\mathcal{D}\), \[\mu^{+}(T)-\mu(T) \tag{3}\] \[= \frac{1}{k^{t}}\cdot p_{1}^{\prime}\prod_{i\in\{x_{2},\ldots,x_{t }\}}p_{i}\prod_{j\in X\setminus\{1,x_{2},\ldots,x_{t}\}}(1-p_{j})-\frac{1}{k^{t} }\prod_{i\in\{1,x_{2},\ldots,x_{t}\}}p_{i}\prod_{j\in X\setminus\{1,x_{2}, \ldots,x_{t}\}}(1-p_{j})\] \[= \frac{1}{k^{t}}\cdot(p_{1}^{\prime}-p_{1})\prod_{i\in\{x_{2}, \ldots,x_{t}\}}p_{i}\prod_{j\in X\setminus\{1,x_{2},\ldots,x_{t}\}}(1-p_{j})>0,\] as \(p_{i}\in(0,1)\) for all \(i\). Now we are ready to look at the difference between \(\mu^{+}\) and \(\mu\) on the entire family \(\mathcal{F}^{\prime}\). \[\begin{split}&\mu^{+}(\mathcal{F}^{\prime})-\mu(\mathcal{F}^{ \prime})\\ =&\sum_{S\in\mathcal{F}^{\prime}\cap 2^{(X\setminus \{1\})\times[k]}}\left(\mu^{+}(\{S\}\cup\phi(S))-\mu(\{S\}\cup\phi(S))\right)+ \sum_{T\in\mathcal{D}}\left(\mu^{+}(T)-\mu(T)\right)\\ =&\sum_{T\in\mathcal{D}}\left(\mu^{+}(T)-\mu(T) \right)\geq 0,\end{split} \tag{4}\] and by (3), equality holds only if \(\mathcal{D}=\emptyset\). Thus by symmetry among \(X\), (4) shows that \(\mu_{p_{1},\ldots,p_{n}}(\mathcal{F}^{\prime})\) increases as \(p_{i}\) increases for each \(i\in[n]\); and hence, for any \(0<p<p^{\prime}<1\), \[\mu^{\prime}_{p^{\prime}}(\mathcal{F}^{\prime})=\mu_{p^{\prime},\ldots,p^{ \prime}}(\mathcal{F}^{\prime})\geq\mu_{p^{\prime},\ldots,p^{\prime},p}( \mathcal{F}^{\prime})\geq\cdots\geq\mu_{p^{\prime},p,\ldots,p}(\mathcal{F}^{ \prime})\geq\mu_{p,\ldots,p}(\mathcal{F}^{\prime})=\mu^{\prime}_{p}(\mathcal{ F}^{\prime}).\] This shows the monotonicity, and it is left to prove the strictness. Since \(\mathcal{F}^{\prime}\neq\emptyset\), it has a minimal element; and since \(\mathcal{F}^{\prime}\neq 2^{X^{\prime}}\), \(\emptyset\) is not its minimal element.
Thus, without loss of generality, we may assume '\(1\)' is in a minimal element \(M\in\mathcal{F}^{\prime}\); then \(M\) is in the set \(\mathcal{D}\) at the iteration of increasing \(p_{1}\) to \(p^{\prime}_{1}\). Thus by (3) and (4), \[\mu^{\prime}_{p^{\prime}}(\mathcal{F}^{\prime})\geq\mu_{p^{\prime},p,\ldots,p} (\mathcal{F}^{\prime})>\mu_{p,\ldots,p}(\mathcal{F}^{\prime})=\mu^{\prime}_{p} (\mathcal{F}^{\prime}).\] Hence, \(\mu^{\prime}_{p}(\mathcal{F}^{\prime})\) is strictly increasing in \(p\). ## 3. Proof of Theorem 4 Write \((k)_{r}:=k(k-1)\cdots(k-r+1)\) for \(k,r\in\mathbb{N}\). The proof of Theorem 4 uses the fact that the rainbow colorings of a hyperedge \(E\) are evenly distributed, that is, there are exactly \((k)_{|E|}\) such colorings given \(k\) colors to use. We use this to reduce the problem to its uncolored version. Note that a similar idea was observed (and used) in [4]. We need the following result from [9]. **Theorem 6**.: _[_9_, Theorem 1.6]_ _There is an absolute constant \(K>0\) such that for any \(r\)-bounded, \(\kappa\)-spread multi-hypergraph \(\mathcal{H}\) on \(X\), a uniformly random \(((K\kappa^{-1}\log r)|X|)\)-element subset of \(X\) belongs to \(\langle\mathcal{H}\rangle\) with probability \(1-o_{r\to\infty}(1)\)._ For an edge \(E\) in a multi-hypergraph \(\mathcal{H}\), let \(m_{\mathcal{H}}(E)\) be its multiplicity, namely, the number of times it appears in \(\mathcal{H}\).
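The counting fact just mentioned, that a hyperedge \(E\) admits exactly \((k)_{|E|}\) rainbow colorings from the palette \([k]\), is easy to confirm by exhaustive enumeration; the following toy check is ours, not from the paper:

```python
from itertools import product

def falling(k, r):
    """Falling factorial (k)_r = k (k-1) ... (k-r+1)."""
    out = 1
    for i in range(r):
        out *= k - i
    return out

def rainbow_colorings(edge_size, k):
    """Count assignments of colors from [k] to an edge's vertices
    in which all colors are pairwise distinct (i.e., rainbow)."""
    return sum(
        1 for c in product(range(k), repeat=edge_size) if len(set(c)) == edge_size
    )

for t, k in [(2, 5), (3, 5), (4, 6)]:
    assert rainbow_colorings(t, k) == falling(k, t)
print("ok")
```

This uniformity over edges is exactly what lets the proof below pass between \(\mathcal{H}\), \(\mathcal{H}^{\prime}\) and the multi-hypergraph \(\mathcal{H}^{\prime\prime}\) by pure counting.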
Proof of Theorem 4.: Let \(\mathcal{H}^{\prime}\) be the hypergraph on \(X^{\prime}\) where \(S^{\prime}=\{(x_{1},i_{1}),(x_{2},i_{2}),\ldots,(x_{t},i_{t})\}\) is an edge in \(\mathcal{H}^{\prime}\) if and only if \(\{x_{1},x_{2},\ldots,x_{t}\}\) is an edge in \(\mathcal{H}\) and \(i_{1},i_{2},\ldots,i_{t}\) are distinct numbers in \([k]\), and the multiplicity of \(S^{\prime}\) in \(\mathcal{H}^{\prime}\) is equal to the multiplicity of \(\{x_{1},x_{2},\ldots,x_{t}\}\) in \(\mathcal{H}\), that is, \(m_{\mathcal{H}^{\prime}}(S^{\prime})=m_{\mathcal{H}}(\{x_{1},x_{2},\ldots,x_{t }\})\). We define an auxiliary multi-hypergraph \(\mathcal{H}^{\prime\prime}\) by the following: for each \(S^{\prime}\) in \(\mathcal{H}^{\prime}\), we include \((k-|S^{\prime}|)_{r-t}\) copies of \(S^{\prime}\) in \(\mathcal{H}^{\prime\prime}\). That is, \(\mathcal{H}^{\prime\prime}\) and \(\mathcal{H}^{\prime}\) have the same set of members (edges), and for every \(S^{\prime}\in\mathcal{H}^{\prime\prime}\), \(m_{\mathcal{H}^{\prime\prime}}(S^{\prime})=(k-|S^{\prime}|)_{r-t}m_{\mathcal{H} ^{\prime}}(S^{\prime})\). Note that for each edge \(\{x_{1},x_{2},\ldots,x_{t}\}\) in \(\mathcal{H}\), there are \((k)_{t}\) choices for \(i_{1},i_{2},\ldots,i_{t}\) such that \(S^{\prime}=\{(x_{1},i_{1}),(x_{2},i_{2}),\ldots,(x_{t},i_{t})\}\) is an edge in \(\mathcal{H}^{\prime}\). Thus each edge \(\{x_{1},x_{2},\ldots,x_{t}\}\) in \(\mathcal{H}\) corresponds to \((k)_{t}\cdot(k-t)_{r-t}=(k)_{r}\) edges in \(\mathcal{H}^{\prime\prime}\). Hence we have \(|\mathcal{H}^{\prime\prime}|=(k)_{r}|\mathcal{H}|\). Let \(S=\{(x_{1},i_{1}),(x_{2},i_{2}),\ldots,(x_{s},i_{s})\}\) be a subset of \(X^{\prime}\). We may assume that \(x_{1},x_{2},\ldots,x_{s}\) are distinct in \(X\) and \(i_{1},i_{2},\ldots,i_{s}\) are distinct in \([k]\), as otherwise \(\mathcal{H}^{\prime\prime}\cap\langle S\rangle=\emptyset\). 
Let \[S\subseteq T=\{(x_{1},i_{1}),(x_{2},i_{2}),\ldots,(x_{s},i_{s}),(x_{s+1},i_{s+1} ),\ldots,(x_{t},i_{t})\}\subseteq X^{\prime}\] and note that \(T\in\mathcal{H}^{\prime\prime}\) if and only if \(T^{*}=\{x_{1},x_{2},\ldots,x_{t}\}\) is an edge of \(\mathcal{H}\) and \(\{i_{1},i_{2},\ldots,i_{t}\}\) consists of \(t\) distinct numbers in \([k]\). So, for each edge \(T^{*}\) in \(\mathcal{H}\), there are \((k-s)_{t-s}\) choices for \(\{i_{s+1},\ldots,i_{t}\}\), each of which corresponds to \((k-t)_{r-t}\) (same) edges. Then each \(\mathcal{H}\cap\langle\{x_{1},x_{2},\ldots,x_{s}\}\rangle\) corresponds to \((k-s)_{t-s}\cdot(k-t)_{r-t}=(k-s)_{r-s}\) edges in \(\mathcal{H}^{\prime\prime}\cap\langle S\rangle\). Thus we have \[|\mathcal{H}^{\prime\prime}\cap\langle S\rangle|=\sum_{\{x_{1},x_{2},\ldots,x_{ s}\}\subseteq T^{*}\in\mathcal{H}}(k-s)_{r-s}\leq\frac{|\mathcal{H}|}{\kappa^{s}}(k-s) _{r-s}=\frac{|\mathcal{H}^{\prime\prime}|}{\kappa^{s}(k)_{s}}\leq\frac{e^{s}| \mathcal{H}^{\prime\prime}|}{(k\kappa)^{s}},\] where we used that \((k)_{s}\geq(k/e)^{s}\). Hence, \(\mathcal{H}^{\prime\prime}\) is \((k\kappa/e)\)-spread. Let \(K\) be the constant from Theorem 6. Let \(C=2eK\) and let \[p\geq\frac{C\log r}{\kappa}=\frac{(C/e)\log r}{\kappa/e}=\frac{2K\log r}{ \kappa/e}.\] Then by Theorem 6, a uniformly random \((p|X^{\prime}|/2k)\)-element subset of \(X^{\prime}\) contains an edge in \(\mathcal{H}^{\prime\prime}\) with probability \(1-o_{r\to\infty}(1)\). Standard concentration arguments give that \(|X^{\prime}_{p}|\geq p|X^{\prime}|/2k\) holds with probability \(1-o_{|X|\to\infty}(1)\). Conditioning on this, we can take a random subset of size exactly \(p|X^{\prime}|/2k\). 
Since, when conditioning on the size of the outcome, the binomial distribution reduces to the hypergeometric distribution, we obtain that with probability \(1-o_{r\to\infty}(1)\), \(X^{\prime}_{p}\) contains a desired edge in \(\mathcal{H}^{\prime\prime}\) (and hence in \(\mathcal{H}^{\prime}\) as well). ## 4. Proof of Theorem 3 We prove the following result that couples our two models, the random coloring model and the transversal model, following an ingenious idea of McDiarmid [13]. **Lemma 7**.: _Let \(X\) be a finite set and let \(\mathcal{H}\) be an \(r\)-bounded hypergraph on \(X\). For \(k\geq r\), let \(X^{\prime}=X\times[k]\). Let \(\mathcal{H}^{\prime}\) denote the family consisting of all possible rainbow copies of \(\mathcal{H}\) using the color set \([k]\). Let \(p\in(0,1)\) and \(p^{\prime}=p/k\). Consider the following two events:_ * _Event T:_ \(X^{\prime}_{p^{\prime}}\) _contains an edge of_ \(\mathcal{H}^{\prime}\)_._ * _Event C: Given a random coloring on_ \(X\) _where the elements of_ \(X\) _are independently and uniformly colored from a set_ \(\mathcal{C}=[k]\)_. Then_ \(X_{p}\) _contains a rainbow edge of_ \(\mathcal{H}\)_._ _Then \(\mathbb{P}[T]\leq\mathbb{P}[C]\)._ Proof.: We define intermediate random samplings "between" those two events. Let \(n=|X|\), and we may assume \(X=[n]\). We define the random sets \(X^{0},X^{1},\ldots,X^{n}\) by the following: In \(X^{i}\), for each \(j\leq i\), we include \((j,s)\) with probability \(p\), where \(s\) is a uniformly chosen color in \([k]\) for \(j\) as in Event C; for each \(j>i\), we include \((j,s)\) in \(X^{i}\) with probability \(p^{\prime}\), independently for all colors \(s\) in \([k]\). Then, in particular, \(X^{0}=X^{\prime}_{p^{\prime}}\) as in Event T, and \(X^{n}\) contains an edge of \(\mathcal{H}^{\prime}\) if and only if Event C occurs.
Now it suffices to show \[\mathbb{P}[X^{i}\text{ contains an edge of }\mathcal{H}^{\prime}]\geq\mathbb{P}[X^ {i-1}\text{ contains an edge of }\mathcal{H}^{\prime}]\] for all \(i\in[n]\). Note that \(X^{i}\) and \(X^{i-1}\) only differ at the pairs with \(i\) as the first coordinate. We can divide into the following three cases: * \(X^{i-1}\setminus\{(i,s):s\in[k]\}\) contains an edge of \(\mathcal{H}^{\prime}\) (i.e., there exists such an edge not using vertex \(i\)) * \(X^{i-1}\cup\{(i,s):s\in[k]\}\) does not contain an edge of \(\mathcal{H}^{\prime}\) (i.e., there does not exist such an edge of \(\mathcal{H}^{\prime}\) even with all colors for vertex \(i\) available.) * Not in the case of (a) or (b). That is, \(X^{i-1}\) contains an edge of \(\mathcal{H}^{\prime}\) or not depending on the occurrence of the second coordinate (the color) associated with vertex \(i\). We consider conditional probabilities. Note that in case (a), \(X^{i-1}\) contains an edge of \(\mathcal{H}^{\prime}\), and that edge also shows up in \(X^{i}\). Thus \[\mathbb{P}[X^{i}\text{ contains an edge of }\mathcal{H}^{\prime}\ |\ case(a)]= \mathbb{P}[X^{i-1}\text{ contains an edge of }\mathcal{H}^{\prime}\ |\ case(a)]=1.\] Similarly, \[\mathbb{P}[X^{i}\text{ contains an edge of }\mathcal{H}^{\prime}\ |\ case(b)]= \mathbb{P}[X^{i-1}\text{ contains an edge of }\mathcal{H}^{\prime}\ |\ case(b)]=0.\] In case (c), for an arbitrary instance of \(X^{i-1}\setminus\{(i,s):s\in[k]\}\), say \(\tilde{X}\), we define \[D=D_{\tilde{X}}:=\left\{c\in[k]:\tilde{X}\cup\{(i,c)\}\text{ contains an edge of }\mathcal{H}^{\prime}\right\},\] that is, the set of colors for \(i\) that completes an edge of \(\mathcal{H}^{\prime}\). Note that \(D\neq\emptyset\) as \(k\geq r\) and we are not in case (b). Then \[\mathbb{P}\left[X^{i-1}\text{ contains an edge of }\mathcal{H}^{\prime}\ \Big{|}\ \tilde{X}\right]=1-(1-p^{\prime})^{|D|},\] that is, at least one color in \(D\) occurs for \(i\). 
On the other hand, \[\mathbb{P}\left[X^{i}\text{ contains an edge of }\mathcal{H}^{\prime}\ \Big{|}\ \tilde{X}\right]=p\cdot\frac{|D|}{k},\] that is, the assigned color for \(i\) is in \(D\). Note that \(p^{\prime}=p/k\), \(|D|\leq k\), and \(p<1\) imply \[p\cdot\frac{|D|}{k}-\left(1-(1-p^{\prime})^{|D|}\right) =p\cdot\frac{|D|}{k}-1+(1-p/k)^{|D|}\] \[=\sum_{j=1}^{\infty}\left((p/k)^{2j}\binom{|D|}{2j}-(p/k)^{2j+1} \binom{|D|}{2j+1}\right).\] Since \(p|D|/k<1\), we have \[(p/k)^{2j}\binom{|D|}{2j}-(p/k)^{2j+1}\binom{|D|}{2j+1}\geq(p/k)^{2j}\binom{|D |}{2j}\left(1-\frac{p|D|}{k(2j+1)}\right)>0.\] Thus we have for any instance \(\tilde{X}\) in case (c), \[\mathbb{P}\left[X^{i}\text{ contains an edge of }\mathcal{H}^{\prime}\ \big{|}\ \tilde{X}\right]\geq\ \mathbb{P}\left[X^{i-1}\text{ contains an edge of }\mathcal{H}^{\prime}\ \big{|}\ \tilde{X}\right],\] giving that \[\mathbb{P}[X^{i}\text{ contains an edge of }\mathcal{H}^{\prime}\ |\ case(c)]\geq \mathbb{P}[X^{i-1}\text{ contains an edge of }\mathcal{H}^{\prime}\ |\ case(c)].\] Therefore, we have \[\mathbb{P}[X^{i}\text{ contains an edge of }\mathcal{H}^{\prime}]\geq \mathbb{P}[X^{i-1}\text{ contains an edge of }\mathcal{H}^{\prime}],\] and inductively on \(i\), we conclude that \(\mathbb{P}[T]\leq\mathbb{P}[C]\). Now we are ready to prove Theorem 3. Proof of Theorem 3.: Take \(C\) from Theorem 4. Let \(p\geq C\log\ell/\kappa\) and \(p^{\prime}=p/k\). Theorem 4 applied with \(r=\ell\) says that \(X^{\prime}_{p^{\prime}}\) contains an edge of \(\mathcal{H}\) with probability \(1-o_{\ell\to\infty}(1)\). Hence by Lemma 7, if we color \(X\) uniformly at random with \([k]\) and take each \(x\in X\) with probability \(p\) independently, then with probability \(1-o_{\ell\to\infty}(1)\) the outcome contains a rainbow edge of \(\mathcal{H}\). ## 5. Proof of Theorem 2 Recall that the second inequality is given by Theorem 3 combined with the observation of Talagrand. So it remains to prove the first inequality. 
Note that it suffices to show \(q_{f}(\mathcal{F})\leq p_{c}(\mathcal{F})\leq p_{c}^{k}(\mathcal{F})\) where the first inequality is known. For a small \(\delta>0\), take a small \(\varepsilon>0\) such that \(\mu_{p_{c}(\mathcal{F})-\delta}(\mathcal{F})\leq 1/2-\varepsilon\). Then given an integer \(k\geq\ell(\mathcal{F})\), let \(a\in\mathbb{N}\) be sufficiently large such that if we color \(X\) with \(ak\) colors, then with probability at least \(1-\varepsilon\) all members of \(X\) receive distinct colors (therefore any edge is automatically "rainbow"). Since we can first choose a random \(k\)-coloring of \(X\) and then randomly split each color class into \(a\) further ones uniformly at random, we have \(p_{c}^{ak}(\mathcal{F})\leq p_{c}^{k}(\mathcal{F})\). Now suppose we randomly color \(X\) using \(ak\) colors and then choose a binomial random subset of \(X\) with probability \(p=p_{c}(\mathcal{F})-\delta\). If all members of \(X\) receive distinct colors, then we know that the probability that we obtain a (rainbow) edge of \(\mathcal{F}\) is at most \(1/2-\varepsilon\). Thus, this process produces a rainbow edge of \(\mathcal{F}\) with probability at most \((1/2-\varepsilon)(1-\varepsilon)+\varepsilon=1/2-(\varepsilon/2-\varepsilon^{2 })<1/2\), that is, \(p_{c}(\mathcal{F})-\delta<p_{c}^{ak}(\mathcal{F})\leq p_{c}^{k}(\mathcal{F})\). Taking \(\delta\to 0\) shows that \(p_{c}(\mathcal{F})\leq p_{c}^{k}(\mathcal{F})\). ## 6. Applications Here we collect some quick implications by combining Theorem 3 and some known spreadness computations. The following remark is useful in our applications. _Remark_.: Suppose \(\nu\) is a \(\kappa^{-1}\)-spread measure on \(2^{X}\) supported on \(\mathcal{H}\). Since \(\mathbb{Q}\) is dense in \(\mathbb{R}\), we may assume that \(\nu\) takes values in \(\mathbb{Q}\) (up to an arbitrarily small error term in the spread value, which could be taken care of by adjusting the constants). 
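The two quantitative steps of this argument can be made concrete with toy numbers; `a_needed` below is a hypothetical helper (the union-bound choice of \(a\)), not a quantity from the paper:

```python
# Illustrative arithmetic for the two quantitative steps above (toy numbers).

# Step 1: with a*k colors and |X| = n ground-set elements, a union bound gives
#   P[some two elements share a color] <= n*(n-1) / (2*a*k),
# so any integer a >= n*(n-1)/(2*k*eps) makes all colors distinct w.p. >= 1 - eps.
def a_needed(n, k, eps):
    return n * (n - 1) / (2 * k * eps)

# Step 2: the final bookkeeping,
#   (1/2 - eps)*(1 - eps) + eps = 1/2 - (eps/2 - eps**2) < 1/2 for 0 < eps < 1/2.
def final_bound(eps):
    return (0.5 - eps) * (1 - eps) + eps

for eps in (0.01, 0.1, 0.3):
    assert abs(final_bound(eps) - (0.5 - (eps / 2 - eps**2))) < 1e-12
    assert final_bound(eps) < 0.5
```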
We also remark that the assumption that \(\mathcal{H}\) is \(\kappa\)-spread can be replaced by the existence of a \(\kappa^{-1}\)-spread probability measure (distribution) on \(\mathcal{H}\): one can duplicate edges in \(\mathcal{H}\) so that \(\nu\) corresponds to the uniform distribution on the new edge set.

### Rainbow spanning trees

The authors of [15] proved the following result for bounded degree spanning trees. **Lemma 8** (Lemmas 7.2 & 7.3 in [15]).: _For every \(\Delta\in\mathbb{N}\) and \(\delta>0\) there exists \(C_{0}=C_{0}(\Delta,\delta)>0\) such that the following holds for sufficiently large integer \(n\). For every \(n\)-vertex graph \(G\) with \(\delta(G)\geq(1/2+\delta)n\) and every tree \(T\) on \(n\) vertices with \(\Delta(T)\leq\Delta\) there exists a \((C_{0}/n)\)-spread distribution on graph embeddings of \(T\) into \(G\)._ Combining the above result with Theorem 3, one can derive the following result on the threshold for a randomly colored binomial random subgraph of a given graph with large minimum degree to contain a rainbow copy of a given bounded-degree spanning tree. **Theorem 9**.: _For every \(\Delta\in\mathbb{N}\) and \(\delta>0\) there exists \(C=C(\Delta,\delta)>0\) such that the following holds for sufficiently large integer \(n\). Suppose that \(G\) is an \(n\)-vertex graph satisfying \(\delta(G)\geq(1/2+\delta)n\) and \(T\) is an \(n\)-vertex tree with \(\Delta(T)\leq\Delta\). Suppose the edges of \(G\) are randomly colored with \(q\geq n-1\) colors. Then with probability \(1-o_{n\to\infty}(1)\), \(G_{C\log n/n}\) contains a rainbow copy of \(T\), where \(G_{p}\) denotes the spanning subgraph of \(G\) where each edge of \(G\) is retained with probability \(p\)._ Proof.: We may assume \(n\) is sufficiently large. Then by Lemma 8, there exists a \((C_{0}/n)\)-spread distribution on graph embeddings of \(T\) into \(G\).
Let \(\mathcal{H}\) be a hypergraph whose vertices are the edges of \(G\) and edges are the graph embeddings of \(T\) into \(G\). Then \(\mathcal{H}\) is \((n-1)\)-uniform and has a \((C_{0}/n)\)-spread distribution. By Theorem 3, there exists a constant \(C_{1}\) such that if \[p\geq C_{1}\log(n-1)\frac{C_{0}}{n},\] then with probability \(1-o_{n\to\infty}(1)\), \(X_{p}\) contains a rainbow edge of \(\mathcal{H}\). Let \(C=C_{1}C_{0}\), and then \(C\log n/n>C_{1}\log(n-1)\frac{C_{0}}{n}\). Therefore, with probability \(1-o_{n\to\infty}(1)\), \(G_{C\log n/n}\) contains a rainbow copy of \(T\). ### Rainbow matching in hypergraphs All hypergraphs considered in this subsection are simple, i.e., no repeats allowed. Let \(\mathcal{H}\) be a \(k\)-uniform hypergraph with vertex set \(V\). For any \(T\subseteq V\), we use \(d_{\mathcal{H}}(T)\) to denote the _degree_ of \(T\) in \(\mathcal{H}\), i.e., the number of edges of \(\mathcal{H}\) containing \(T\). For a positive integer \(\ell\) such that \(1\leq\ell<k\), define \(\delta_{\ell}(\mathcal{H}):=\min\left\{d_{\mathcal{H}}(T):T\in\binom{V}{\ell}\right\}\) to be the minimum \(\ell\)_-degree_ of \(\mathcal{H}\). For integers \(\ell,k,n\) satisfying \(1\leq\ell<k\) and \(n\in k\mathbb{N}\), let \(t(n,k,\ell)\) be the smallest \(d\) such that every \(n\)-vertex \(k\)-uniform hypergraph with \(\delta_{\ell}(\mathcal{H})\geq d\) contains a perfect matching. Define the \(\ell\)_-degree (Dirac) threshold_ for perfect matchings in \(k\)-uniform hypergraphs to be \[\delta_{\ell,k}^{+}:=\lim_{k\mid n,n\to\infty}\frac{t(n,k,\ell)}{\binom{n}{k- \ell}}.\] In the independent works of [12, Theorem 1.5] and [15], the authors showed that there is a spread distribution on the perfect matchings of the \(k\)-uniform hypergraphs satisfying the Dirac condition. **Lemma 10**.: _[_12, 15_]_ _Let \(k,\ell\) be integers such that \(1\leq\ell<k\) and let \(\varepsilon>0\). 
There exists \(C_{0}=C_{0}(\ell,k,\varepsilon)\) such that the following holds for sufficiently large integer \(n\). Let \(\mathcal{H}\) be an \(n\)-vertex \(k\)-uniform hypergraph such that \(k|n\) and \(\delta_{\ell}(\mathcal{H})\geq(\delta_{\ell,k}^{+}+\varepsilon)\binom{n}{k-\ell}\). Then there exists a probability measure on the set of perfect matchings in \(\mathcal{H}\) which is \((C_{0}/n^{k-1})\)-spread._ With Theorem 3, we have the following corollary. The proof follows the lines in the above subsection and is omitted. **Corollary 11**.: _Let \(k,\ell\) be integers such that \(1\leq\ell<k\) and let \(\varepsilon>0\). There exists \(C=C(\ell,k,\varepsilon)\) such that the following holds for sufficiently large integer \(n\). Let \(\mathcal{H}\) be an \(n\)-vertex \(k\)-uniform hypergraph such that \(n\in k\mathbb{N}\) and \(\delta_{\ell}(\mathcal{H})\geq(\delta_{\ell,k}^{+}+\varepsilon)\binom{n}{k-\ell}\). Suppose the edges of \(\mathcal{H}\) are randomly colored with \(q\) colors, where \(q\geq n/k\). Then with probability \(1-o_{n\to\infty}(1)\), \(\mathcal{H}_{C\log n/n^{k-1}}\) contains a rainbow perfect matching._ The sharp minimum \((k-1)\)-degree condition for the above results is proven in [12, Theorem 1.6]. Together with Theorem 3 this also gives the sharp minimum \((k-1)\)-degree condition for the robustness of rainbow perfect matchings. Here we omit the very similar statements. Finally, very recently in [3] Bell and Frieze studied the rainbow threshold for the \(r\)-th power of Hamilton cycles in random graphs, for \(r\geq 2\). They showed that \(n^{-1/r}\) is the threshold if the number of colors is at least \((1+o(1))rn\). Our result (Theorem 3) implies an upper bound of \(n^{-1/r}\log n\) on the threshold, but works at the optimal number of colors \(rn\).

## 7. Concluding Remarks

There are at least three possible further problems left after this paper.
First, it is natural to try to prove a rainbow version of Theorem 1, that is, to remove the extra logarithmic factor in (2). Following the proof idea of this note, it suffices to prove a "transversal version", which we did not manage to do. However, it would also follow from a conjecture of Talagrand [17, Problem 6.3], who suggested that \(q(\mathcal{F})\geq q_{f}(\mathcal{F})/K\) for some absolute constant \(K\) and every increasing \(\mathcal{F}\). Second, it is not clear to us how to obtain tight(er) results on rainbow structures when the term \(\log n\) can be reduced in the uncolored version. For example, for the \(K_{r}\)-factor case, the uncolored version can be resolved (see [9, Section 7]) by using a nice coupling result of Riordan [16] and converting the problem to the threshold for perfect matchings in random \(r\)-uniform hypergraphs. However, since we have colored edges rather than \(r\)-tuples, it is not clear to us how to transfer our problem using Riordan's coupling. The third problem is on the general transversal version of the problem. If the host graphs are just random (hyper)graphs, then we are just taking i.i.d. copies of random subgraphs of a complete graph (as the base graph) and consider transversal copies of our target subgraph. However, a more general version allows the base (hyper)graph to be different while our Theorem 4 is only applicable when the host graphs are the same. For instance, in a general transversal version of Theorem 9 (also Corollary 11), one can take \(G_{1},\ldots,G_{n-1}\) as \(n-1\) (not necessarily distinct and not necessarily the same) graphs each of which satisfies \(\delta(G_{i})\geq(1/2+\delta)n\), and consider a transversal spanning tree in the union of their sparsifications (that is, a spanning tree that contains exactly one edge from the sparsification of each \(G_{i}\)). See [1] for detailed discussions and results for the case of Hamilton cycles. 
## Acknowledgements The authors are indebted to Asaf Ferber and Huy Pham for their stimulating discussions at an earlier stage of this project.
2307.01574
Secondary gas in debris discs released following the decay of long-lived radioactive nuclides, catastrophic or resurfacing collisions
Kuiper-like belts of planetesimals orbiting stars other than the Sun are most commonly detected from the thermal emission of small dust produced in collisions. Emission from gas, most notably CO, highlights the cometary nature of these planetesimals. Here we present models for the release of gas from comet-like bodies in these belts, both due to their thermophysical evolution, most notably the decay of long-lived radioactive nuclides and collisional evolution, including catastrophic and gentler resurfacing collisions. We show that the rate of gas release is not proportional to the rate of dust release, if non-catastrophic collisions or thermal evolution dominate the release of CO gas. In this case, care must be taken when inferring the composition of comets. Non-catastrophic collisions dominate the gas production at earlier times than catastrophic collisions, depending on the properties of the planetesimal belt. We highlight the importance of the thermal evolution of comets, including crucially the decay of long-lived radioactive nuclides, as a source of CO gas around young (<50Myr) planetary systems, if large (10-100s kms) planetesimals are present.
Amy Bonsor, Mark C. Wyatt, Sebastian Marino, Björn J. R. Davidsson, Quentin Kral, Philippe Thebault
2023-07-04T09:02:39Z
http://arxiv.org/abs/2307.01574v2
# Secondary gas in debris discs released following the decay of long-lived radioactive nuclides, catastrophic or resurfacing collisions

###### Abstract

Kuiper-like belts of planetesimals orbiting stars other than the Sun are most commonly detected from the thermal emission of small dust produced in collisions. Emission from gas, most notably CO, highlights the cometary nature of these planetesimals. Here we present models for the release of gas from comet-like bodies in these belts, both due to their thermophysical evolution, most notably the decay of long-lived radioactive nuclides and collisional evolution, including catastrophic and gentler resurfacing collisions. We show that the rate of gas release is not proportional to the rate of dust release, if non-catastrophic collisions or thermal evolution dominate the release of CO gas. In this case, care must be taken when inferring the composition of comets. Non-catastrophic collisions dominate the gas production at earlier times than catastrophic collisions, depending on the properties of the planetesimal belt. We highlight the importance of the thermal evolution of comets, including crucially the decay of long-lived radioactive nuclides, as a source of CO gas around young (\(<50\)Myr) planetary systems, if large (10-100s kms) planetesimals are present.

keywords: circumstellar matter – comets: general – methods: numerical

## 1 Introduction

Belts of comets or asteroids, similar to the Kuiper belt, are detected around stars other than the Sun, from the thermal emission of small dust produced in collisions between the larger planetesimals (_e.g._ Wyatt, 2008). The volatile-rich, cometary nature of the planetesimals in many of these systems is witnessed by the detection of gaseous emission, most commonly CO (Hughes et al., 2018).
This gas has now been detected for tens of planetary systems around AFGM-type stars (_e.g._ Dent et al., 2014; Marino et al., 2016; Lieman-Sifiry et al., 2016; Moor et al., 2017; Kral et al., 2020). In most cases where the spatial distribution of the gas is resolved, it is associated with the position of the dust belts (_e.g._ Matra et al., 2017). Whilst some planetary systems with high masses of CO could be primordial, left-over from the proto-planetary disc phase (Kospal et al., 2013), a secondary origin can readily explain the CO in the many gas-poor debris systems (Kral et al., 2017), as well as in some systems with higher levels of CO (_e.g._ Kral et al., 2019). In other words, the observed gas is not hydrogen-rich gas leftover from the gas-rich protoplanetary disc, but hydrogen-poor gas released from planetesimals in gas-poor debris discs. The CO in this hydrogen-poor gas would only survive on short timescales (\(\sim 130\) yrs) due to UV photodissociation (Visser et al., 2009). Instead it is best explained by secondary gas released from the (icy) comets that also produce the dusty material observed in the infrared (Zuckerman and Song, 2012; Dent et al., 2014). Mechanisms suggested for the release of gas include UV desorption (Grigorieva et al., 2007), collisions (Czechowski and Mann, 2007), radiogenic heating (Davidsson, 2021) and heating from stellar irradiation (Kral et al., 2021). In the Solar System, activity triggered by the increased stellar radiation as comets are scattered close to the Sun leads to a cometary tail, with a wide range of species detected including H\({}_{2}\)O, CO\({}_{2}\), CO, CH\({}_{3}\)OH, H\({}_{2}\)CO, HCN, H\({}_{2}\)S and CS\({}_{2}\)(Cochran et al., 2015). Cometary tails are dominated by water, whilst a few show spectra dominated by hypervolatile species including N\({}_{2}\)(Biver et al., 2018, _e.g._ comet C/2016 R2)). 
The recent New Horizons fly-by of Arrokoth placed upper limits on the release of hyper-volatiles (CO, N\({}_{2}\) and CH\({}_{4}\)) (Gladstone et al., 2022), whilst CO is regularly detected in comets in the inner Solar System (Mumma and Charnley, 2011). If the Kuiper-belt is still releasing CO gas today, it is likely too small to be detectable even with in-situ missions like New Horizons (Kral et al., 2021). The survival of CO as CO ice in Solar System comets following exposure to stellar irradiation or the decay of long-lived radioactive nuclides including \({}^{40}\)K, \({}^{232}\)Th, \({}^{235}\)U and \({}^{238}\)U is unclear and the CO is potentially trapped in alternate reservoirs including amorphous water ice and CO\({}_{2}\) ice as suggested by Lisse et al. (2022); Davidsson (2021); Gasc et al. (2017); Prialnik et al. (1987) and references therein. This work considers the potential mechanisms that lead to the production of gas in exo-planetary systems, within debris, or cometary belts. The aim is to quantify the gas production rates, such that the conditions (timescales) on which the different mechanisms dominate can be considered. The early release of hyper-volatiles (CO) due to thermophysical evolution driven by heating from the decay of long-lived radioactive nuclides will be compared to the release of hypervolatiles from collisions, both catastrophic and non-catastrophic. Here we focus on the dramatic resurfacing collisions that occur more frequently than catastrophic collisions for those large planetary bodies that are held together by their gravitational strength. Many of the conclusions, however, apply to less violent cratering collisions. Whilst comets likely form during the gas disc phase and the evolution here plays an important role (_e.g._ Simon et al., 2022), this work starts with fully formed planetesimals in a gas-poor environment, akin to a fully formed planetary system. 
This work compares two models for the release of volatiles, the first due to collisions and the second due to radiogenic heating. Whilst in a realistic system both processes might act together, the purpose here is to compare the two processes. The work starts in §2 by considering collisions, firstly presenting the properties of the planetesimal belt (§2.1). This is followed by details of the collision model for the evolution of the planetesimal belt and the release of volatiles in §2.3, including mass conservation (§2.4) and the release of CO following collisions (§2.5). Results from the numerical model for the gas production from planetesimal belts due to collisions are presented in §3. This is followed by the presentation of a simple model for the volatile release due to radiogenic heating (§4), and results for the gas production are compared to those from collisions in §4.1. The validity of the model is discussed in §5.1, followed by a discussion of whether (and when) radiogenic heating or collisions dominate the gas production in debris belts (§5.2), the importance of resurfacing collisions, compared to catastrophic collisions (§5.3), how the observed gas production can provide insights regarding the size of the largest planetesimals present in debris belts (§5.4) and the composition of comets, as derived from gas detection (§5.5). The paper is concluded in §6.

## 2 Volatile release from collisions

The aim here is to quantify the potential release of volatiles, notably CO, in collisions, for comparison with the release from thermophysical evolution. This work considers that many exoplanetary systems have belts of comets or asteroids similar to the Solar System; the key difference being that these planetesimal belts may occur at any radial location and contain significantly more material, as consistent with observations of bright debris discs.
In this work, the term planetesimal will be used to refer to the small planetary bodies that are part of the planetesimal belt, regardless of their volatile content and size. Crucially, here we do not assume that the release of volatiles follows the dust evolution of a collisional planetesimal belt, but also consider that volatiles are released in both catastrophic and non-catastrophic collisions. Whilst cratering collisions may play a notable role in the release of CO gas (see discussion in §5.1), we focus here on resurfacing or shattering collisions, that is, the collisions that regularly occur to the largest planetesimals, where sufficient energy is imparted to shatter the body, but not to overcome self-gravity and disperse the fragments, leading to the formation of a rubble pile. Here we present a framework to follow the collisional evolution of a debris belt, alongside a potential model for how volatiles are released in collisions. Given uncertainties in exactly how volatiles are released in collisions, the model is presented in such a manner that it could be readily adapted to an updated model for the collisional production of gas. Full details of the variables used in this work can be found in Table 1.

### Properties of the planetesimal belt

The planetesimal belt is characterised by a range of properties, including its total initial mass in dust or solids, \(m_{\rm s,tot}(0)\), its radial location, \(r\), and width, \(dr\). The properties of the planetesimals themselves are characterised based on their masses, \(M_{k}\), a size distribution, with slope \(\alpha\), from a minimum, \(M_{\rm min}\), to a maximum, \(M_{\rm max}\), planetesimal mass, a density \(\rho_{k}\) and a CO content, or volatile mass fraction, \(f_{v,k}\). In this work, the volatile is considered to be CO, but the model could be applied to trace the evolution of any volatile, including water ice, and solids refers to the dusty component of comets.
In order to numerically follow the evolution of the belt, the mass in the planetesimal belt is distributed between bins, labelled by the mass in individual solid planetesimals in each \(k\)th bin, \(M_{\rm s,k}\). The planetesimals in each bin have a total mass which is the sum of the mass in dust or solids, \(M_{\rm s,k}\), and volatiles, \(M_{\rm v,k}\), such that \(M_{k}=M_{\rm s,k}+M_{\rm v,k}\). The bins are logarithmically spaced in \(M_{\rm s,k}\), and \(k=1\) labels the bin of largest mass. The logarithmic bin spacing \(\delta\) is defined such that \(1-\delta=\frac{M_{{\rm s},k+1}}{M_{{\rm s},k}}\). The initial number of planetesimals in each mass bin is assumed to scale as: \[n_{k}(M)=KM_{\rm s,k}^{-\alpha}, \tag{1}\] which is equivalent to the commonly used \(n(D)dD\propto D^{-\alpha^{\prime}}dD\), where \(\alpha=(\alpha^{\prime}+2)/3\) (see _e.g._ Wyatt et al., 2011; Dohnanyi, 1969), such that when \(\alpha^{\prime}=7/2\), \(\alpha=11/6\). Thus, the total mass of solid planetesimals in each mass bin is given by integrating the mass in solids across the bin between \(M_{\rm s,k}\) and \(M_{\rm s,k}(1-\delta)\) to give: \[m_{\rm s,k}(0)=KM_{\rm s,k}^{2-\alpha}, \tag{2}\] assuming \(\delta\ll 1\), where \(K=\frac{m_{\rm s,tot}(0)}{\Sigma_{i=1}^{i_{\rm max}}M_{{\rm s},i}^{2-\alpha}}\) and \(i_{\rm max}\) labels the bin of the smallest planetesimal present in the collisional cascade. The planetesimals in the \(k\)th bin have an average density, \(\rho_{k}\), and diameter, \(D_{k}\), where \(M_{k}=\frac{\pi\rho_{k}D_{k}^{3}}{6}\). The diameter of the planetesimal, \(D_{k}\), in each bin, as well as their average density, \(\rho_{k}\), can change as a function of time as the planetesimals lose volatiles, whilst the mass in solids, \(M_{s,k}\), remains constant. Thus, the volatile fraction of each planetesimal, \(f_{\rm v,k}\), also changes as a function of time and is denoted: \[f_{\rm v,k}=\frac{M_{\rm v,k}}{M_{\rm s,k}+M_{\rm v,k}}.
\tag{3}\] The average density is given by: \[\rho_{k}=\frac{1}{((1-f_{\rm v,k})/\rho_{s}+f_{\rm v,k}/\rho_{\rm v})}, \tag{4}\] where \(\rho_{s}\) is the density of the dust or solid component and \(\rho_{\rm v}\) is the density of the volatile component. The number of colliders in the \(k\)-th bin is given by the ratio of the total mass in solids to the mass of each planetesimal in solids, as the total diameter or mass in volatiles in the \(k\)-th bin may change, \(n_{k}=\frac{m_{s,k}}{M_{s,k}}\). ### Conditions for catastrophic/resurfacing collisions Catastrophic collisions are those with sufficient energy to disrupt a planetesimal, leaving no remnant larger than half the mass of the original planetesimal. The incident energy must be above the specific incident (impact) energy required to cause a catastrophic collision, or the dispersal threshold, \(Q_{D}^{*}\). A power-law form for the dispersal threshold is assumed, following work on collision outcomes by Benz & Asphaug (1999); Durda et al. (1998), such that: \[Q_{D}^{*}=Q_{a}D^{-a}+Q_{b}D^{b}, \tag{5}\] where \(a\) and \(b\) are both positive constants related to the planetesimal's material and gravitational strength, respectively. Following Wyatt et al. (2011), we take \(Q_{a}=620\) J kg\({}^{-1}\), \(a=0.3\), \(Q_{b}=5.6\times 10^{-3}\) J kg\({}^{-1}\), and \(b=1.5\), where \(D\) is in m, noting that these parameters are applicable to basalt rather than a mixture of ice and basalt. This is plotted in Fig. 1 as a function of the planetesimal's total diameter, \(D_{k}\) or mass, \(M_{k}\), assuming an average density of \(\rho_{k}=1\)g cm\({}^{-3}\). For small planetesimals, the dispersal threshold is dominated by the planetesimal's material strength, whereas for larger planetesimals, gravity dominates. Resurfacing collisions occur when there is sufficient energy to disrupt a planetesimal, but insufficient energy to disperse the fragments subsequently. 
Resurfacing collisions occur when the incident energy is above the specific incident energy required for shattering, \(Q_{S}^{*}\), but below the specific incident energy required for dispersion, \(Q_{D}^{*}\), where: \[Q_{S}^{*}=Q_{a}D^{-a}. \tag{6}\] This shattering threshold is plotted, alongside the dispersal threshold, on Fig. 1. The minimum in the dispersal threshold occurs for diameters of size \(D_{W}\), where \[D_{W}=\left(\frac{aQ_{a}}{bQ_{b}}\right)^{\frac{1}{(a+b)}}. \tag{7}\] Resurfacing collisions can occur for all sizes; however, the incident energy range for which \(Q_{D}^{*}>Q>Q_{S}^{*}\), the condition required for a resurfacing collision to occur, becomes vanishingly small for small diameter particles. In the numerical model a cut-off is introduced, such that no resurfacing collisions are assumed to occur if there are fewer than three mass bins between the minimum mass bin above which resurfacing collisions would occur, labelled \(i_{\rm min,r}\), and the minimum mass bin above which catastrophic collisions would occur, labelled \(i_{\rm min,c}\). In other words, no resurfacing collisions occur for planetesimals less massive than \(M_{\rm min,r}\). This avoids resurfacing collisions switching on and off around diameter bins of a few hundred kms (depending on the properties of the planetesimal belt). For these bodies, the smallest bodies that have sufficient energy to shatter a planetesimal of mass \(M_{\rm min,r}\) also have sufficient energy to catastrophically destroy it.

Figure 1: The dispersal threshold, \(Q_{D}^{*}\) (solid line) and shattering threshold, \(Q_{S}^{*}\) (dashed line), as a function of the planetesimal’s total diameter, \(D_{k}\), or mass, \(M_{k}\), calculated using the parameterised form of Eq. 5 and Eq. 6 from Benz & Asphaug (1999); Durda et al. (1998). The light-blue shaded region indicates those collisions that will be catastrophic, whilst the darker blue region those collisions that are shattering.
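As a numerical illustration of Eqs. 5-7 (a sketch using the Benz & Asphaug parameters quoted above, not the paper's code):

```python
# Dispersal (Eq. 5) and shattering (Eq. 6) thresholds with the basalt
# parameters quoted above (Benz & Asphaug 1999); D in metres, Q in J/kg.
Qa, a = 620.0, 0.3
Qb, b = 5.6e-3, 1.5

def Q_D_star(D):
    """Dispersal threshold Q_D* = Qa*D^-a + Qb*D^b."""
    return Qa * D ** (-a) + Qb * D ** b

def Q_S_star(D):
    """Shattering threshold Q_S* = Qa*D^-a."""
    return Qa * D ** (-a)

# Weakest size D_W (Eq. 7), where Q_D* is minimised.
D_W = (a * Qa / (b * Qb)) ** (1.0 / (a + b))
print(round(D_W))  # ~259 m for these parameters

# D_W is indeed a minimum, and Q_S* lies below Q_D* everywhere.
assert Q_D_star(D_W) < Q_D_star(0.5 * D_W)
assert Q_D_star(D_W) < Q_D_star(2.0 * D_W)
assert Q_S_star(D_W) < Q_D_star(D_W)
```

For these material parameters the weakest bodies are a few hundred metres in diameter, with strength dominating below \(D_{W}\) and gravity above it.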
Figure 2: A cartoon to illustrate how the movement of material following collisions is traced in the model presented in §2.3. +c refers to the mass gained from catastrophic collisions, -c to that lost from catastrophic collisions, shown by the black arrows, and -r to the mass lost from resurfacing collisions, shown by the red arrows.

### Collision Model

In order to model the collisional evolution of the planetesimal belt, we follow Wyatt et al. (2011), with the additional ability to trace both solids and volatiles. A numerical method is utilised. The total mass in each bin is traced at each timestep, \(\delta t\). A particle-in-a-box approach is used to model the rate of collisions between planetesimals in different mass bins, tracing both catastrophic and resurfacing collisions. Following each catastrophic collision, mass is redistributed amongst smaller fragments, until it eventually becomes sufficiently small to be blown out of the system by radiation pressure (see Fig. 2). In such a manner, the planetesimal belt is depleted in mass as a function of time. Going beyond Wyatt et al. (2011), the model presented here traces the volatile content of planetesimals separately to their refractory (solid) content. In other words, the total mass in solids in the \(k\)-th bin is given by \(m_{\rm s,k}\), whilst the total mass in volatiles is given by \(m_{\rm v,k}\). At each timestep, volatiles can be released to gas, and the total mass in gas is also traced, \(m_{\rm gas}\). A full list of the variables used in this model can be found in Table 1.

#### 2.3.1 Collision Rates

The rate of collisions between particles in the \(k\)-th bin, with particles in the \(i\)-th bin is determined by the cross-sectional area for collisions, \(\frac{\pi}{4}\left(D_{k}+D_{i}\right)^{2}\), the relative velocity, \(v_{\rm rel}\), and the volume through which the planetesimals are moving, \(V\).
Planetesimals in the \(k\)-th bin can only collide catastrophically with particles larger than a certain size, which corresponds to those planetesimals in size bins with indices at most \(i_{\rm ck}\), such that: \[R_{k}^{c}=\Sigma_{i=1}^{i_{\rm ck}}\frac{n_{i}}{4}{\left(D_{k}+D_{i}\right)^{2}}\frac{\pi v_{\rm rel}}{V}, \tag{8}\] where \(n_{i}\) is the number of colliders in the \(i\)-th bin and \(V\) is the volume through which the planetesimals are moving, given by (Sykes, 1990): \[V=8\pi r^{3}e\sin(I)\left(1+\frac{e^{2}}{3}\right), \tag{9}\] where \(r\) is the belt radius, \(e\) the average eccentricity and \(I\) the maximum inclination of particles. The index \(i_{ck}\) labels the bin containing planetesimals of mass, \(M_{\rm ck}\), the smallest bodies that can cause catastrophic collisions to planetesimals in the \(k\)-th bin, where: \[M_{ck}=\left(\frac{2Q_{D}^{*}}{v_{\rm rel}^{2}}\right)M_{k}. \tag{10}\] We note here that if the minimum mass that can catastrophically destroy planetesimals is larger than the mass of the planetesimals themselves, _i.e._\(M_{\rm ck}>M_{k}\), the premise of the model breaks down. This is because in such a simple model, the largest planetesimals would no longer evolve collisionally, whilst in reality they would lose mass due to e.g. cratering collisions. We note that the approximations break down for targets larger than 30km (\(D>30\)km) for a belt at 100au, with \(v_{\rm rel}=e\,v_{\rm k}\), where \(e=0.1\) and the model for \(Q_{D}^{*}\) used here.
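To make the scales concrete, the particle-in-a-box rates of Eqs. 8-10 can be sketched numerically. All numerical choices below (stellar mass, belt geometry, impactor population) are illustrative assumptions, not values taken from the paper; note that dimensional consistency requires \(r^{3}\) in the collision volume:

```python
import math

# Particle-in-a-box sketch of Eqs. 8-10 (illustrative numbers only).
G, M_sun, au, yr = 6.674e-11, 1.989e30, 1.496e11, 3.156e7

r, e, I = 100 * au, 0.1, 0.1          # belt radius (m), eccentricity, inclination (rad)
# Volume swept by the belt (cf. Eq. 9; r**3 is required for a volume):
V = 8 * math.pi * r**3 * e * math.sin(I) * (1 + e**2 / 3)

v_k = math.sqrt(G * M_sun / r)        # Keplerian speed at r (solar-mass star assumed)
v_rel = e * v_k                       # relative velocity, as in the text

def M_ck(M_k, Q_D_star):
    """Smallest impactor mass that catastrophically destroys a target of
    mass M_k with dispersal threshold Q_D_star (Eq. 10)."""
    return (2 * Q_D_star / v_rel**2) * M_k

def rate_term(n_i, D_k, D_i):
    """One term of the collision-rate sum (Eq. 8)."""
    return n_i * (math.pi / 4) * (D_k + D_i) ** 2 * v_rel / V

# e.g. a 10 km target amongst 1e12 impactors of 1 km:
R = rate_term(1e12, 1e4, 1e3)
t_coll = 1 / (R * yr)                 # collisional lifetime in years
```

With these toy numbers the collisional lifetime comes out in the hundreds of Myr to Gyr range, comparable to typical debris-disc ages.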
The rate of resurfacing collisions is calculated in a similar manner, with only those colliders too small to result in catastrophic collisions but sufficiently large to cause resurfacing collisions (\(i_{\rm ck}<i\leq i_{rk}\)) considered: \[R_{k}^{r}=\Sigma_{i=i_{\rm ck}+1}^{i_{rk}}\frac{n_{i}}{4}{\left(D_{k}+D_{i}\right)^{2}}\frac{\pi v_{\rm rel}}{V}, \tag{11}\] where \(i_{rk}\) labels the bin of mass \(M_{rk}\) that contains the smallest impactors that can cause resurfacing collisions, where: \[M_{rk}=\left(\frac{2Q_{\rm S}^{*}}{v_{\rm rel}^{2}}\right)M_{k}. \tag{12}\] The average lifetime of a planetesimal of diameter, \(D\), against catastrophic collisions can be calculated as \(t_{c}=\frac{1}{R_{k}^{c}}\) (Eq. 8), and similarly for resurfacing collisions as \(t_{r}=\frac{1}{R_{k}^{r}}\) (Eq. 11).

### Mass Conservation

The mass in solids in the collisional cascade is depleted with time as catastrophic collisions grind down the planetesimals into dust that is lost from the system. The mass in volatiles can be lost to gas from planetesimals of all sizes.
However, at every timestep the total mass is conserved, such that the rate of change in the \(k\)th bin of the total mass in solids (volatiles), \(\dot{m}_{{\rm s},k}\) (\(\dot{m}_{{\rm v},k}\)), is given by: \[\dot{m}_{{\rm s},k} = \dot{m}_{{\rm s},k}^{+c}-\dot{m}_{{\rm s},k}^{-c}, \tag{13}\] \[\dot{m}_{{\rm v},k} = \dot{m}_{{\rm v},k}^{+c}-\dot{m}_{{\rm v},k}^{-c}-\dot{m}_{{\rm v},k}^{-r}, \tag{14}\] \[\dot{m}_{\rm g} = \Sigma_{k=1}^{k_{\rm max}}(\dot{m}_{{\rm g},k}^{+c}+\dot{m}_{{\rm v},k}^{-r})+\Sigma_{k=k_{\rm max}+1}^{\infty}\dot{m}_{{\rm g},k}^{+c}, \tag{15}\] where \(\dot{m}_{{\rm s},k}^{-c}\) (\(\dot{m}_{{\rm v},k}^{-c}\)) is the rate at which the total mass in solids (or volatiles) in the \(k\)-th bin is lost to catastrophic collisions, \(\dot{m}_{{\rm s},k}^{+c}\) is the rate at which the mass in solids is gained from catastrophic collisions of larger bodies, \(\dot{m}_{{\rm v},k}^{+c}\) is the rate at which mass in volatiles is gained from catastrophic collisions of larger bodies, noting that this accounts for the volatile mass lost to gas, \(\dot{m}_{{\rm g},k}^{+c}\) is the mass lost to gas directly as the \(k\)-th bin gains mass in volatiles from catastrophic collisions in larger bins, and \(\dot{m}_{{\rm v},k}^{-r}\) is the rate of mass loss of volatiles to gas from resurfacing collisions. The mass in gas, \(m_{\rm g}\), is increased by mass loss from volatiles to gas in both catastrophic and resurfacing collisions, from the material received in the \(k\)th bin at a rate of \(\dot{m}_{{\rm g},k}^{+c}\) and \(\dot{m}_{{\rm v},k}^{-r}\), respectively. Additionally, some gas is produced when volatiles fragment directly to particles smaller than the minimum present in the collisional cascade, which leads to the additional term, \(\Sigma_{k=k_{\rm max}+1}^{\infty}\dot{m}_{{\rm g},k}^{+c}\). The mass loss rate for solids is given by \[\dot{m}_{{\rm s},k}^{-c}=m_{{\rm s},k}R_{k}^{c}, \tag{16}\]
The mass loss rate for solids is given by \[\dot{m}_{\rm s,k}^{-c}=m_{\rm s,k}R_{k}^{c}, \tag{16}\] whilst that for volatiles, exclusively due to catastrophic collisions, is given by \[\dot{m}_{\nu,k}^{-c}=m_{\nu,k}R_{k}^{c}. \tag{17}\] The rate of mass gain for solids is given by \[\dot{m}_{s,k}^{+c}=\Sigma_{i=1}^{i=k}F(k-i)\;\dot{m}_{s,i}^{-c}, \tag{18}\] where \(F(k-i)\) is the fraction of the mass leaving the \(i\)-th bin from collisions that goes into the \(k\)-th bin, or the redistribution function, which we assume to be scale independent. We assume that fragments produced in catastrophic collisions have a range of masses from the largest fragment, with mass \(\frac{M_{s,i}}{2}\), which falls in the bin labelled \(i_{lr}\), to the smallest body considered, which falls in the bin labelled \(i_{max}\) and which we assume to be much smaller than \(\frac{M_{s,i}}{2}\). This is a good approximation for collisions destroying bodies with \(D\gg\) cm, as the smallest particles in the disc will be mm-sized or smaller. Thus, the \(k\)-th bin can only gain mass from catastrophic collisions between objects with a mass \(2M_{s,k}\) or greater, labelled by \(i\leq i_{mk}=k-\frac{\ln(2)}{\delta}\). Thus, the rate of mass gain for solids in the \(k\)-th bin is calculated by summing over the contributions from the largest mass bin, \(i=1\), down to \(i_{mk}\), which labels the bin of mass \(2M_{s,k}\). We assume that the size distribution of fragments is given by Eq. 1, where \(\alpha>1\) and the separation between bins \(\delta\ll 1\). We consider the fragmentation of a body that is a uniform mixture of volatiles (ices) and solids, and that, therefore, all the fragments retain the same uniform mixture.

Figure 3: The fraction of mass from collisions in the \(i\)-th bin that ends up in the \(k\)-th bin, or the redistribution function, for \(\delta=0.01\) and \(\alpha=11/6\).
This leads to a redistribution function, where we assume the same exponent for the power-law, \(\alpha\), as the size distribution, given by: \[F_{s}(k-i)=\eta_{\rm max}^{(1-\alpha)}(1-\delta)^{(k-i)(\alpha-1)}\,\delta(\alpha-1). \tag{19}\] This is based on Eq. 20 of Wyatt et al. (2011), where \(\delta\) is now the spacing between mass bins rather than size bins, \(\eta_{\rm max}=1/2\), such that \(\alpha^{\prime}=3\alpha-2\), where \(\alpha^{\prime}\) is the parameter used in Wyatt et al. (2011) and \(\delta\sim 3\delta^{\prime}\). This function is plotted in Fig. 3, truncated at \((k-i)=\frac{\ln(2)}{\delta}\approx 69\), which for \(\delta=0.01\) labels the bin of solid mass, \(M_{s,i}/2\), or the largest fragment of a catastrophic collision. By definition all of the mass leaving the \(i\)-th bin ends up in bins between \(k=i_{lr}\) and \(k=\infty\), such that \[\Sigma_{k=i_{lr}}^{\infty}F(k-i)=1. \tag{20}\] In a similar manner, the rate of mass gain for volatiles is given by: \[\dot{m}_{\nu,k}^{+c}=\Sigma_{i=1}^{i=k}F_{\nu}(k-i,f_{\nu,i})\dot{m}_{\nu,i}^{-c}, \tag{21}\] where \(F_{\nu}(k-i,f_{\nu,i})\) is the redistribution function for volatiles. This is explicitly a function of the volatile fraction of the disrupting bodies, \(f_{\nu,i}\), as this accounts for the possibility that some of these volatiles may be lost soon after the collision. It is related to the redistribution function for solids by: \[F_{\nu}(k-i,f_{\nu,i})=F_{s}(k-i)\,(1-\chi_{k}^{c}(f_{\nu,i})), \tag{22}\] where \(\chi_{k}^{c}(f_{\nu,i})\) is the fraction of the volatile mass lost to gas as soon as the fragment is created, calculated in the following section, which is a function of the total mass of the newly formed body, \(M_{k}\), and the volatile fraction of the original planetesimal in the \(i\)-th bin, \(f_{\nu,i}\).
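The redistribution function and its normalisation (Eqs. 19-20) can be checked numerically. This sketch writes Eq. 19 with the exponents chosen so that the function is positive, decays with \(k-i\), and satisfies the normalisation condition of Eq. 20:

```python
import numpy as np

def F_s(j, alpha=11.0 / 6.0, delta=0.01, eta_max=0.5):
    """Fraction of the mass leaving bin i that lands in bin k = i + j
    (Eq. 19), valid for j >= ln(2)/delta, the bin of the largest fragment."""
    return (eta_max**(1.0 - alpha)
            * (1.0 - delta)**(j * (alpha - 1.0))
            * delta * (alpha - 1.0))

delta = 0.01
j_lr = int(round(np.log(2.0) / delta))  # = 69: bin of the largest fragment, M_s/2
j = np.arange(j_lr, 20000)              # truncate the infinite sum of Eq. 20
total = F_s(j).sum()                    # should be close to 1
```

For \(\delta=0.01\) and \(\alpha=11/6\) the truncated sum comes out within a per cent of unity, mirroring the statement that all mass leaving the \(i\)-th bin is accounted for.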
In a similar manner, a fraction \(\chi_{k}^{c}(f_{\nu,i})\) of the volatiles in the fragments gained by the \(k\)-th bin from catastrophic collisions between larger bodies is released directly to gas. The rate at which this occurs in the \(k\)-th bin is given by: \[\dot{m}_{\rm g,k}^{+c}=\Sigma_{i=1}^{i=k}F_{s}(k-i)\,\chi_{k}^{c}(f_{\nu,i})\,\dot{m}_{\nu,i}^{-c}. \tag{23}\] The rate of mass loss for volatiles, due to re-surfacing collisions, is given by: \[\dot{m}_{\nu,k}^{-r}=\chi_{k}^{r}(f_{\nu,k})m_{\nu,k}R_{k}^{r}. \tag{24}\]

### Release of CO following collisions

The exact role of collisions in releasing volatiles from cometary bodies is not clear. Experimental work is best at probing collisions between small particles (Blum & Wurm, 2008; Simon et al., 2022) and tracking of volatile species is limited. In the Solar System, whilst activity and degassing in individual active comets can be monitored, linking this to a comet's collision history is challenging. Nonetheless, collisions will clearly play a role in releasing volatiles from comets. Collisions can transfer heat to planetesimals, which in turn leads to volatile loss (_e.g._ Jutzi & Benz, 2017; Davidsson, 2023). Collisions also expose new surface, such that volatiles can be lost via sublimation or UV desorption. Rather than attempting to model these processes in detail, here we produce a simple model in which we consider the efficiency at which volatiles are lost in an individual collision to be a function of the surface area of the colliders. The model is set up in such a way that this prescription could be updated in the future. The model is equivalent to the devolatilisation of a thin layer of depth, \(h\).
The depth of this layer may be relatively large, for example for comets sufficiently close to the star that sublimation acts on long timescales (tens of metres), but may be very small, for example for comets far from the star, if thermal or UV desorption of the fresh surface layer is the only mechanism to release volatiles (less than millimetres), as the rest of the CO would remain trapped.

#### 2.5.1 Model for release of volatiles in Catastrophic Collisions

We consider a simple model in which, following a catastrophic collision, the fractional release of volatiles is proportional to the surface area of the fragments produced; in other words, volatiles are released from the equivalent of a surface layer of depth, \(h\), although we acknowledge that this layer may not truly be a thin surface surrounding the whole comet, but may instead be focused around the impact site. We consider the mass in volatiles released to gas by fragments of solid mass, \(M_{s,k}\), produced in catastrophic collisions to be given by the mass in volatiles found in the layer, \(h\), which can be calculated by subtracting the volatile mass of the smaller planetesimal (\(\propto(R-h)^{3}\)) from the total volatile mass of the planetesimal (\(\propto R^{3}\)). Thus, the mass released to gas by fragments entering the \(k\)-th bin is given by: \[\delta M_{g,k}^{c}=4\pi\rho f_{v,k}\left(\left(\frac{3M_{k}}{4\pi\rho}\right)^{2/3}h-\left(\frac{3M_{k}}{4\pi\rho}\right)^{1/3}h^{2}+\frac{h^{3}}{3}\right), \tag{25}\] such that the fractional release of volatiles, which can never be greater than one, is given by: \[\chi_{k}^{c}=\frac{\delta M_{g,k}^{c}}{f_{v,k}M_{k}}. \tag{26}\] The solid lines in Fig. 4 show the fraction of volatiles arriving in the \(k\)-th bin from the disruption of larger planetesimals that is released directly to gas, for different assumptions about the depth of the layer, \(h\).
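Because the volatile fraction cancels between Eqs. 25 and 26, \(\chi_{k}^{c}\) reduces to a purely geometric shell fraction. A sketch, assuming a uniform bulk density `RHO` (an illustrative value, not from the paper):

```python
import numpy as np

RHO = 1000.0  # assumed bulk density in kg m^-3 (illustrative only)

def chi_c(M, h, rho=RHO):
    """Fraction of a fragment's volatiles released from a surface layer
    of depth h (Eqs. 25-26); capped at 1 when the body is smaller than
    the layer itself."""
    R = (3.0 * M / (4.0 * np.pi * rho))**(1.0 / 3.0)
    if R <= h:
        return 1.0
    # Volatile mass in the shell divided by total volatile mass:
    # 4*pi*rho*(R^2 h - R h^2 + h^3/3) / M  (the factor f_v cancels).
    shell = 4.0 * np.pi * rho * (R**2 * h - R * h**2 + h**3 / 3.0)
    return shell / M
```

Equivalently, \(\chi_{k}^{c}=1-(1-h/R)^{3}\) for \(R>h\), which makes explicit why the fractional release decreases with increasing fragment size.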
The fractional release decreases with increasing particle size for small particles, as the assumed constant-depth layer, \(h\), accounts for a larger fraction of smaller bodies. Alternatively, the fractional release of volatiles due to the catastrophic destruction of planetesimals of mass, \(M_{k}\), can be considered as the sum of the mass lost from all fragments produced, and is shown by the dotted lines in Fig. 4.

#### 2.5.2 Model for the release of volatiles in Resurfacing Collisions

We consider that a large planetesimal of mass \(M_{k}\), following a resurfacing collision, is split into fragments of mass \(M_{f}\) (diameter, \(D_{f}\)), the largest of which has mass \(M_{k}/2\) and occurs in the bin labelled \(i_{\rm frag}\). Each fragment individually loses volatiles to gas from an outer layer of depth, \(h\), such that the volatile loss, \(\delta M_{\rm g,f}\), is given by Eq. 25. The mass released to gas from volatiles in a collision of a body of mass, \(M_{k}\), is, thus, the sum of the mass of fragments in each mass bin multiplied by the fraction of that mass released to gas from volatiles, \(\chi_{f}\): \[\delta m_{\rm g,k}^{r}=\frac{f_{\rm v,k}}{(1-f_{\rm v,k})}\sum_{f=i_{\rm frag}}^{i_{\rm max}}m_{s,f}\,\chi_{f}, \tag{27}\] where \(\chi_{f}\) comes from Eq. 26, \(m_{s,f}=K^{\prime}\,M_{s,f}^{2-\alpha}\) and the constant of normalisation \(K^{\prime}\) is given by the mass in solids of a planetesimal in the \(k\)th bin, \((1-f_{\rm v,k})M_{k}=K^{\prime}\,\Sigma_{i=i_{\rm frag}}^{i_{\rm max}}M_{s,i}^{2-\alpha}\). Thus, the fraction of the volatile mass released to gas by a resurfacing collision is given by: \[\chi_{k}^{r}=\frac{\Sigma_{f=i_{\rm frag}}^{i_{\rm max}}M_{s,f}^{2-\alpha}\chi_{f}}{\Sigma_{i=i_{\rm frag}}^{i_{\rm max}}M_{s,i}^{2-\alpha}}. \tag{28}\] The dot-dashed lines in Fig. 4 show the fractional release of volatiles to gas from resurfacing collisions of planetesimals in the \(k\)th bin.
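Eq. 28 is just an \(M_{s}^{2-\alpha}\)-weighted average of the per-fragment release fractions. A sketch, with placeholder fragment masses and fractions:

```python
import numpy as np

def chi_r(M_frags, chi_frags, alpha=11.0 / 6.0):
    """Fraction of a planetesimal's volatiles released in a resurfacing
    collision (Eq. 28): a weighted average of the per-fragment release
    fractions chi_f, with weights M_s,f^(2-alpha)."""
    w = M_frags**(2.0 - alpha)
    return np.sum(w * chi_frags) / np.sum(w)
```

By construction the result always lies between the smallest and largest per-fragment fraction, so \(\chi_{k}^{r}\leq 1\) whenever each \(\chi_{f}\leq 1\).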
### Numerical Simulations

The CO production from a planetesimal belt is calculated numerically at every timestep by splitting the planetesimal belt into logarithmically spaced solid mass bins of width \(\delta\), using the size distribution (Eq. 1). The dust and gas production from collisions are calculated using Eqs. 13-15, with a numerical method invoked to trace the mass in solids and the mass in volatiles in every bin, \(m_{\rm s,k}\) and \(m_{\rm v,k}\), as a function of time. This solves Eqs. 13-15, tracking the volatiles lost to gas at each timestep. The timestep is selected such that the mass lost by the smallest bin is less than half of the mass initially in the smallest bin. \(F_{s}\) is normalized such that Eq. 20 applies exactly, counting only bins up to a large number \(2n_{\rm bin}\), where \(n_{\rm bin}\) is the total number of bins used. Mass conservation is assured by tracking the total mass in solids or volatiles at every output and adding the very small additional mass lost to gas or dust. The evolution of the solids in the code is benchmarked against Wyatt et al. (2011) using an initial mass distribution with \(\alpha=1.86\) (\(\alpha^{\prime}=3.6\)), a bin width of \(\delta=0.02\), and a belt between 7.5 and 11au to match Wyatt et al. (2011).

## 3 Results of the simulations for the collisional evolution of solids and volatiles in planetesimal belts

The evolution of the mass in solids and volatiles in the fiducial simulation around a solar-mass star with a belt between 75 and 125au, containing \(100M_{\oplus}\) of planetesimals with sizes between 1mm and 30km, a volatile (assumed to be CO) fraction of 4% and volatiles lost from a depth \(h=10\)cm following each collision, is shown in the top panel of Fig. 5. The belt location is chosen to be beyond both the water-ice and CO snow-lines for a solar-luminosity star, such that thermal heating from the star can be neglected.
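The logarithmic bin set-up described in the numerical scheme, with the per-bin mass normalisation \(m_{s,k}=K M_{s,k}^{2-\alpha}\) fixed by the total belt mass, can be sketched as follows (the function name and arguments are hypothetical):

```python
import numpy as np

def make_bins(M_max, M_min, delta, alpha, m_tot):
    """Logarithmically spaced solid-mass bins M_k = M_max*(1-delta)^(k-1),
    with mass per bin m_s,k = K*M_s,k^(2-alpha) and K set so that the
    bins sum to the total initial belt mass m_tot."""
    n_bin = int(np.ceil(np.log(M_max / M_min) / -np.log(1.0 - delta))) + 1
    M = M_max * (1.0 - delta)**np.arange(n_bin)
    K = m_tot / np.sum(M**(2.0 - alpha))
    m_s = K * M**(2.0 - alpha)
    return M, m_s
```

For \(\alpha=11/6\) most of the mass then sits in the largest bins, as expected for the initial profile used in the fiducial simulation.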
Both the solids and volatiles start with an initial \(\alpha=11/6\) profile (see §2.1), but this is quickly lost as collisions erode the smallest bodies in the belt. In the size domain where collisional steady-state has been reached, the size distribution slightly departs from the initial 11/6 value. This is an expected result that follows from the size-dependence of \(Q_{D}^{*}\). We find a steady-state slope of \(\sim 1.88\) in the strength-dominated domain and \(\sim 1.78\) in the gravity regime, which agrees with the theoretical prediction of Wyatt et al. (2011). The kink or wave in the size distribution at a few mm results from the finite cut-off at the smallest sizes. There is also a sharp transition in the size distribution (Fig. 5 and Fig. 6) at D = 200m, which occurs due to the onset of resurfacing collisions.

Figure 4: The fractional release of volatiles directly to gas following catastrophic or resurfacing collisions, as a function of the fragment mass or diameter, \(D_{k}\), for different assumptions regarding the thin surface layer of depth \(h=100\mu\)m, 1mm, 1cm or 10cm from which volatiles are released. The solid lines show the fraction of the volatile mass arriving in the \(k\)th bin due to catastrophic collisions of larger bodies that is released directly to gas (Eq. 26). The dot-dashed lines show the fractional release of volatiles to gas from resurfacing collisions of bodies of mass, \(M_{k}\) (Eq. 28). The dotted lines, which overlap with the dot-dashed lines, show the fractional release to gas following catastrophic collisions of bodies of mass, \(M_{k}\). The bump seen at small diameters is due to the finite cut-off at \(D_{\rm min}=1\)mm, which may be artificial, if a more realistic disc extended down to smaller sizes.
The sharp nature of this transition may not be realistic, but is an unavoidable consequence of the finite bin size used and the assumed sharp transition between collisions that do not release any volatiles and fully shattering collisions. In reality, not only would there be a range of collision velocities, rather than the single value used here, but the transition from non-shattering to shattering collisions would occur gradually, encompassing a significant domain where collisions result in cratering, rather than the assumed sharp transition at \(i_{rk}\) used in these models (see also discussion in Section 5.1).

### The release of volatiles from collisions

Key for comparison with observations of gas in debris disc systems is the gas production rate due to collisions. This is traced in the numerical simulations using Eq. 15. The top panel of Fig. 7 plots the rate at which volatiles are released (\(\dot{m}_{\rm gas}\)) as a function of time for a belt with the fiducial properties and \(h=10\)cm, with only catastrophic collisions (blue dotted) compared to catastrophic and resurfacing collisions (orange dashed). The release of gas from catastrophic collisions follows that of dust from solids (black solid line). Catastrophic collisions release volatiles at a lower rate than resurfacing collisions, whilst the rate at which gas is released by resurfacing collisions depends on their efficiency (compare \(h=1\)cm to \(h=10\)cm). The bottom panel of Fig. 7 shows the ratio of gas to dust production. At early times (\(<10\)Myr), volatiles are lost from the smallest bodies (\(D<30\)m), as shown in the middle panel of Fig. 6 and, therefore, extra gas is released from volatiles relative to solids, as shown by the dotted black line in the bottom panel of Fig. 7. This evolution could be seen as part of initialising the simulation.
At later times (tens of Myrs), catastrophic collisions release gas at a fixed fraction of the rate at which dust is released and, therefore, \(\frac{\dot{m}_{\rm gas}}{\dot{m}_{\rm dust}}\) tends to a constant value, in this case just below the initial volatile fraction of the planetesimals. When resurfacing collisions are included, however, the initial release of gas can be larger than the volatile fraction multiplied by the rate at which dust is produced, such that the ratio of gas to dust release remains above 4%, or \(f_{v}(0)\). If the collision rate is so high that resurfacing collisions deplete the largest planetesimals of volatiles at late times, as in the simulations with \(m_{s,tot}(0)=1,000M_{\oplus}\), the gas to dust ratio can fall below the initial volatile fraction (see Fig. 8). The rate at which dust (and also volatiles) is released depends on the properties of the planetesimal belt, most notably the total initial mass, \(m_{s,\rm tot}(0)\), and the size of the largest planetesimals, \(D_{\rm max}\), as these influence the catastrophic collision timescale of the largest bodies present (Eq. 8). Both gas and dust are released at a higher rate, for example, in more massive belts, as shown in Fig. 8, which shows the release of gas and dust in the fiducial simulation, compared to an order of magnitude higher and lower total initial planetesimal belt mass (noting that on sufficiently long timescales the dust evolution tends to the same level, independent of initial belt mass - see Wyatt et al. (2007a) for details). For comparison, the top plot of Fig. 8 indicates the mass in CO detected in a selection of bright debris discs as a function of the system age, noting that the scaling of the axes is arbitrary. Details of the sample can be found in Table 2. The intention of this plot is to indicate that most systems with CO detection are young\({}^{1}\), which is also when collisions release gas at the highest rate.
Footnote 1: Most surveys of CO gas have targeted primarily young systems with the most massive debris discs (_e.g._ Moor et al., 2017; Lieman-Sifry et al., 2016; Kral et al., 2020).

Figure 7: The gas and dust production rate as a function of time (upper panel) and the ratio of the gas production rate to dust production rate as a function of time (lower panel), for the fiducial simulation with \(D_{\rm max}=30\)km and \(h=10\)cm, with both resurfacing (RC) and catastrophic (CC) collisions (orange dashed line) and with only catastrophic collisions (blue dotted line). The green dot-dashed line shows both resurfacing (RC) and catastrophic collisions (CC) with \(h=1\)cm. The dust production rate (solid line, top panel), where dust is here considered to be all objects smaller than 1mm in diameter, is the same for all simulations. The dotted horizontal line in the bottom panel indicates the initial volatile fraction of 4%. Gas is released faster when resurfacing collisions are included, such that catastrophic collisions dominate the release of gas at late times when large planetesimals are volatile depleted (further discussion in §3.1).

## 4 Volatile release due to heating from long-lived radioactive nuclides

Whilst comets are in general formed in the cool outer regions of the disc, their interior temperatures can increase due to stellar irradiation and the decay of radioactive materials in their interiors. This leads to outgassing of volatiles, notably hypervolatiles such as CO and N\({}_{2}\). CO can be directly released from the sublimation of CO ice, or from where it is potentially trapped in amorphous water ice or CO\({}_{2}\) ice (Davidsson, 2021; Lisse et al., 2022).
Whilst heating from the star can lead to release of CO (Kral et al., 2020; Davidsson, 2021) and the decay of short-lived radioactive nuclides is important in the Solar System (Prialnik et al., 1987), this work focuses on radiogenic heating from long-lived nuclides, such as \({}^{40}\)K, \({}^{232}\)Th, \({}^{235}\)U and \({}^{238}\)U. In particular, the location of the fiducial belt at 100au around a solar-type star was chosen such that stellar irradiation is unlikely to be the most significant contribution to heating for large comets. Here, we consider a simplistic model with a population of comets (planetesimals) in the belt, as described in §2.1. This model provides an estimate for the total CO released by considering that every comet releases CO at a constant rate up until a time, \(t_{\rm release}(D)\), after which there is no further CO release. In total a fraction, \(f_{\rm release}(D)\), of the CO in an individual comet is released. Physically, this corresponds to the time at which the comet reaches its maximum heating due to the decay of long-lived radioactive nuclides, before their decay limits further heating. In reality the CO release from an individual comet may be peaked at earlier times, decreasing towards a time, \(t_{\rm release}\) (see discussion in §5.2). The value of \(t_{\rm release}(D)\) is essentially a free parameter of the model presented, but we use representative values based on more detailed simulations for Solar System comets by Davidsson (2021), where a 203km comet releases CO for approximately 25Myr, whilst a 74km comet releases CO for 30Myrs.
Davidsson (2021) focuses on the release of CO from amorphous water ice due to the decay of long-lived radioactive nuclides, rather than stellar irradiation, although the models include time-variant protosolar heating, long-lived radionuclide heating, radial and latitudinal solid-state and radiative heat conduction, sublimation of CO ice, release of CO during segregation of CO\({}_{2}\):CO mixtures, sublimation of CO\({}_{2}\) ice, crystallization of amorphous water ice and release of entrapped CO and CO\({}_{2}\), radial and latitudinal diffusion of CO and CO\({}_{2}\) vapours (including mass and heat transport), and re-condensation of CO and CO\({}_{2}\) vapour when applicable. The models of Davidsson (2021) trace the thermophysical evolution of comets with a CO mass fraction of 4%, a dust:ice mass ratio of 4 and an initial CO:H\({}_{2}\)O molar ratio of 0.155, assuming Solar System abundances of long-lived radioactive nuclides and the stellar irradiation received at 23au from the Sun. Although these models are performed closer to the Sun, at 23au, than our fiducial simulations at 100au, the dominant heat source is the decay of long-lived radioactive nuclides. Full details can be found in Davidsson (2021), including the thermal properties used for the comets. This model finds that CO trapped in amorphous water ice is not released from comets smaller than 68km due to radiogenic heating, whilst bodies larger than this limit cannot escape crystallization due to heating from long-lived radioactive nuclides, which leads to the release of CO to gas. This is a sharp transition in the model, as the budget of long-lived radioactive nuclides increases above the limit required to provide sufficient heating for crystallization of water ice. Thus, assuming the time at which all CO is released varies linearly with size, \(D\): \[t_{\rm release}(D)=K_{0}+K_{1}\,D, \tag{29}\] where \(K_{0}=3.8\times 10^{7}\) yr and \(K_{1}=-39\)yr m\({}^{-1}\).
In the models of Davidsson (2021), the 203km comet releases 93% of its total CO, whilst the 74km comet stops thermally evolving with 30% of its total CO still present. The fraction of the total CO released is estimated assuming a linear function: \[f_{\rm release}(D)=C_{0}+C_{1}\,D, \tag{30}\] where \(f_{\rm release}\) is in per cent, \(C_{0}=53\) and \(C_{1}=2.3\times 10^{-4}\)m\({}^{-1}\). Both the time and the fraction of CO released are shown as a function of diameter in Fig. 9. No dependence on the distance to the star is assumed in this simple model. We acknowledge that the exact timescales on which CO release continues, the proportion of the total CO released and the minimum size for which long-lived radioactive nuclides can heat sufficiently that any CO is released would all vary with many of the free parameters used in the Davidsson (2021) models, see §5.1. In order to implement the release of volatiles due to thermophysical evolution, each mass bin is considered to release volatiles at a constant rate up until a time \(t_{\rm release}^{k}\).

Figure 8: The gas (dashed) and dust (solid) production rate as a function of time (upper panel) and the ratio of the gas production rate to dust production rate as a function of time (lower panel) for the fiducial simulation, varying the total initial disc mass (\(m_{\rm s,tot}(0)=10,100,1000M_{\oplus}\)). The dotted horizontal line on the bottom plot indicates the initial volatile fraction of 4%. The left-hand axis of the top plot additionally shows the mass in CO detected as a function of age for a selection of debris systems with CO detections (see Table 2), noting that the ages have not been scaled for the finite proto-planetary disc lifetime. The CO mass is assumed to be depleted on a timescale of \(\sim 120\)yrs, indicating the many systems where shielding must occur (Marino et al., 2020).
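The linear prescriptions of Eqs. 29-30 are easily encoded; here \(D\) is taken in metres and \(f_{\rm release}\) in per cent, consistent with the quoted coefficients:

```python
K0, K1 = 3.8e7, -39.0    # yr and yr per metre (Eq. 29)
C0, C1 = 53.0, 2.3e-4    # per cent and per cent per metre (Eq. 30)

def t_release(D):
    """Time (yr) over which a comet of diameter D (m) releases CO (Eq. 29)."""
    return K0 + K1 * D

def f_release(D):
    """Percentage of the total CO released by a comet of diameter D (m),
    Eq. 30; only meaningful above D = 68 km, below which no CO is released."""
    return C0 + C1 * D
```

With these coefficients a 74km comet releases about 70% of its CO, matching the quoted 30% remaining at the end of its thermal evolution.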
The rate at which mass in the \(k\)-th bin releases volatiles is given by the total mass released divided by the time period over which it is released, \[\dot{m}_{k,\rm a}=\frac{f_{v,k}KM_{\rm s,k}^{2-\alpha}\,\delta\,f_{\rm release}^{k}}{(1-f_{v,k})t_{\rm release}^{k}}, \tag{31}\] where \(K\) is the constant defined in Eq. 2. We note that this equation is based on the premise that there is no evolution of the mass in the belt, such that the power-law for the size distribution with constant \(\alpha\) continues to apply. The total volatile release is the sum over all bins, noting that volatiles are only released up to a time, \(t_{\rm release}^{k}\), and that no volatiles are released by bodies smaller than \(D=68\)km, as these bodies do not have a sufficient budget of long-lived radioactive nuclides to lead to mobilisation of CO: \[\dot{m}_{\rm a}=\Sigma_{k=1}^{i_{68}}Z_{k}(t)\,\dot{m}_{k,\rm a}, \tag{32}\] where \(i_{68}\) labels the smallest mass bin to release CO, in this case \(D=68\)km, and \(Z_{k}\) is a function which equals 1 for times, \(t\), shorter than \(t_{\rm release}^{k}\) and zero otherwise. We acknowledge that this basic model falls short of more detailed thermal evolution models, but it is intended to provide an indication of the probable levels of gas release, which can be compared with the release from collisions.

### A comparison between the volatile release from thermophysical evolution (long-lived radioactive nuclides) and collisions

The model described in §4 is used to quantify the gas released from a planetesimal belt in which all (many) of the comets are active, with the focus being the late-time thermal evolution due to the decay of long-lived radioactive nuclides, leading to the release of CO trapped in CO\({}_{2}\) or amorphous water ice. Fig. 10 shows the total CO production rate from the fiducial belt (see §2.6), assuming an average rate. This release of gas depends crucially on the total mass in large planetesimals.
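Eqs. 31-32 can be sketched as follows, with bins ordered from largest to smallest and all inputs as illustrative arrays (`f_rel` is taken as a fraction rather than per cent here):

```python
import numpy as np

ALPHA = 11.0 / 6.0  # size-distribution exponent assumed throughout

def radiogenic_release_rate(t, M_s, f_v, K, delta, f_rel, t_rel, i68):
    """Total CO release rate from long-lived radionuclide heating
    (Eqs. 31-32). Bins run from the largest (index 0) downwards; only
    bins up to index i68 (bodies with D >= 68 km) release any CO."""
    k = np.arange(i68 + 1)
    # Eq. 31: constant per-bin release rate while the bin is active
    mdot_k = (f_v[k] * K * M_s[k]**(2.0 - ALPHA) * delta * f_rel[k]
              / ((1.0 - f_v[k]) * t_rel[k]))
    # Eq. 32: switch function Z_k(t) = 1 while t < t_release^k, else 0
    Z = (t < t_rel[k]).astype(float)
    return float(np.sum(Z * mdot_k))
```

The total rate thus steps down each time a bin passes its \(t_{\rm release}^{k}\), and drops to zero once the smallest CO-bearing bin has finished releasing.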
It is only planetesimals larger than \(D=68\)km that contain a sufficient budget of long-lived radioactive nuclides in this model to contribute to the release of CO. This limit depends on the efficiency of heating or cooling and the total budget of long-lived radioactive nuclides, but is unlikely to change by orders of magnitude for Solar System budgets of long-lived radioactive nuclides. The CO in the 68km planetesimals is released at the latest times, which in this model means that no CO is released due to thermal evolution after around 30 Myrs, noting that in reality this is unlikely to be a hard cutoff and the exact time at which it occurs may not be well predicted by this model (see further discussion in §5). Crucially, in this simple model the gas released depends on the population of large planetesimals. Fig. 10 shows that the gas production rate depends on the mass in large planetesimals present. When the largest planetesimal is increased from 100km (purple dotted line) to 1,000km (brown dotted line), whilst maintaining the same mass in dust, the gas production rate increases by over an order of magnitude. This is plausibly an important test of whether large planetesimals are present in the planetesimal belt. Both radiogenic heating and collisions release volatiles at rates which are within the same order of magnitude (see Fig. 10). The gas production rates are broadly dominated by the availability of CO within the planetesimals in the belt within this simple model.

Figure 9: This figure shows the model assumptions for the time period during which CO is released, \(t_{\rm release}\), as a function of planetesimal diameter, due to thermal evolution (decay of long-lived radioactive nuclides), as well as the fraction, \(f_{\rm release}\), plotted on the y-axis, of the total CO present released. Further details in §4.

Figure 10: The release of gas from resurfacing and catastrophic collisions (dashed lines) and radiogenic heating (dotted lines) as a function of time, for the fiducial planetesimal belt (see §2.6 for full parameters) centred at 100au, varying the size of the largest planetesimal, \(D_{\rm max}\), from 30km (no gas production due to radiogenic heating: orange) to 100km (purple) and 1,000km (brown). The total initial belt mass, \(m_{\rm s,tot}(0)\), is scaled with \(D_{\rm max}\) to retain the same initial dust production rate, indicating a potential test for the size of the largest planetesimal, based on different predicted gas production rates. The right-hand axis additionally shows the mass in CO detected as a function of age for a selection of debris systems with CO detections (see Table 2, as in Fig. 8).

## 5 Discussion

This work presents a simple model that aims to quantify the production of gas (most notably CO) from planetesimal belts, based on the parameters of the planetesimal belt. CO is released at early times due to thermal evolution powered by the decay of long-lived radioactive nuclides. This is compared to the release of CO due to both resurfacing and catastrophic collisions, following the collisional evolution of the planetesimal belt. The models point to the importance of thermal evolution in young planetary systems (\(<30\)Myr), whilst collisional gas production can be maintained on Gyr timescales. In the following sections, we first discuss the validity of the model presented, highlighting the many simplifying assumptions. We then discuss whether the model presented here can be used to distinguish the dominant method of gas production in debris systems and whether that is related to radiogenic heating, resurfacing or catastrophic collisions.

### Validity of Model

The biggest simplification of the model presented here is the lack of any attempt to model the interior structure of the cometary bodies.
This is crucial for the comet's thermal evolution, the location of gases within the comet, the structure of the comet and, thus, its ability to release gas during collisions. Whilst there have been significant advancements in our understanding of cometary interiors in recent years (_e.g._ Steckloff et al., 2021; Malamud et al., 2022; Davidsson, 2021), several key open questions remain and simulations are computationally expensive even for a single comet. The aim here is to consider the population of planetesimals as a whole. Key changes that would influence these models include the thermal conductivity of the comets. The model presented here is based upon the timescales found in the simulations of Davidsson (2021), who adjusts the thermal conductivity as tabulated for H5 ordinary chondrite, amorphous, cubic and hexagonal water ice (Yomogida and Matsui, 1983; Klinger, 1980; Kuehrt, 1984), CO and CO\({}_{2}\) (Giauque and Egan, 1937) for the anticipated porosity of comets. This matches timescales predicted for the removal of hypervolatiles from Arrokoth (Steckloff et al., 2021). A reduced thermal conductivity would minimise the energy re-radiated in the infrared, such that heating is faster and CO loss occurs earlier. This would make it harder for CO loss to be sustained on long timescales, as required to explain gas detections in older exo-planetary systems. We acknowledge the importance of these poorly known parameters and the exact details of the thermal evolution model in determining the release of CO. A second important limitation of the model regards the interior structure of the comet, in particular, the location of hypervolatiles following initial thermal evolution. A typical outcome of models with radiogenic heating is that activity removes volatiles from the core, leaving a cold volatile-bearing mantle intact (Davidsson, 2021; Malamud et al., 2022). Collisions can then release CO from this cold outer crust whenever they occur.
In particular, this would influence the simple model for the release of gas from collisions presented in §2.5. This model is clearly an oversimplification of reality, retaining only the baseline assumption that the release of gas is proportional to the surface area of the fragments. If different collision strengths or velocities are more or less efficient at releasing volatiles, this would significantly influence the overall gas production rate predicted by the models. The interior structure of the comets has a crucial influence on the ability of collisions to break them apart. This is modelled here by the simplistic prescription for \(Q_{D}^{*}\) based on SPH modelling of collisions of icy and rocky bodies from Benz and Asphaug (1999); however, this may take a different form if comets are formed predominantly via pebble accretion or are highly porous (Blum et al., 2017; Jutzi and Benz, 2017; Jutzi et al., 2017; Davison et al., 2010; Krivov and Booth, 2018). The timescales involved in the collision models presented here depend crucially on the strength of the planetesimals; thus, these timescales could potentially increase (decrease) significantly in an improved collision model. However, in the collision models, these timescales would scale in the same manner for both gas and dust production, such that even in this case, the model presented here has the power to distinguish whether the gas production is purely related to catastrophic collisions. The efficiency at which collisions release volatiles is parameterised in a very simple manner in the model presented here (see §2.5.1 and §2.5.2). It is based upon the premise that the fractional release of volatiles (\(\chi_{k}\)) depends on the surface area of the planetesimal fragments.
Whilst this appears a reasonable broad assumption, as most thermal processes depend on the surface area available for heating or cooling, it is clear that the story is more complex for collisions which occur, for example, only in a particular region of the planetesimal or in a non-axisymmetric manner. The mode of heat transport through the body during such collisions is unclear. In this work this lack of knowledge is parameterised by the free parameter \(h\). However, we acknowledge that this free parameter may have values which are orders of magnitude different depending on whether the key process is thermal heating from stellar irradiation, UV desorption or the heat released during collisions. The situation for resurfacing collisions is even less clear. The model presented here assumes a size distribution of fragments which each lose volatiles before the body re-accumulates, with the absence of knowledge of the timescale for this re-accumulation parameterised in the free parameter \(h\). Again, the timescale, the size distribution and the symmetry of this process are poorly understood. We also note that in a realistic system it is unlikely that volatile release is equally efficient from the fragments that re-accumulate to form large bodies in the gravity regime (in both resurfacing and catastrophic collisions) and from the fragments produced in catastrophic collisions; in other words, \(h\) is unlikely to take the same value for both collision types, as assumed here. Accounting for this potential additional volatile loss would only have a relatively minor effect, leading to earlier gas release from the belt. The collision models presented here ignore the possibility of cratering collisions, which is clearly an oversimplification.
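The premise that fractional volatile release scales with fragment surface area can be made concrete with a toy calculation: for a power-law fragment size distribution, the surface area per unit mass grows as the smallest fragment size shrinks. The slope \(q=3.5\) below is an assumed, Dohnanyi-like value used only for illustration.

```python
# Toy calculation: surface area per unit mass of a fragment size
# distribution n(s) ~ s^-q between s_min and s_max.
# Total area ~ integral of n(s)*s^2, total mass ~ integral of n(s)*s^3.

def area_per_mass(s_min, s_max, q=3.5):
    def integral(p):
        # analytic integral of s^p ds from s_min to s_max (p != -1)
        return (s_max ** (p + 1) - s_min ** (p + 1)) / (p + 1)
    return integral(2 - q) / integral(3 - q)

# Same upper cutoff, but grinding to smaller fragments exposes far more
# surface per unit mass (for q = 3.5, roughly as 1/sqrt(s_min)):
coarse = area_per_mass(1e-1, 1e2)   # smallest fragment 0.1 (arbitrary units)
fine = area_per_mass(1e-3, 1e2)     # smallest fragment 100x smaller
```

For \(q=3.5\) the area integral is dominated by the smallest fragments while the mass sits in the largest, which is why finer fragmentation releases more volatiles per unit mass in this picture.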
Detailed numerical investigations have indeed shown that cratering impacts can play a significant role in the collisional evolution of debris discs, leaving an imprint in the system's global particle size distribution (Thebault and Augereau, 2007). In the present case, we expect such collisions to increase the release of volatiles to gas by chipping away at the outer layers of planetesimals. This means that the presented results probably underestimate the level of gas production by collisions at early times, while probably overestimating the duration of the gas production phase (because the reservoir of volatiles would be drained more quickly). The inclusion of cratering impacts would probably also smooth out any unphysical transitions in the size distribution (as seen for example in Fig. 5). In the present model there is indeed, for a given collisional target, an abrupt jump around the projectile mass \(M_{rk}\), between \(M>M_{rk}\) projectiles that can fully shatter the target into a distribution of fragments that will all release gas and \(M<M_{rk}\) projectiles that have zero effect on the target. An additional crucial simplification of the models is the assumption that the largest post-impact fragment is always half the mass of the target, whereas in reality this size decreases strongly depending on the energy imparted (_e.g._ Leinhardt and Stewart, 2009). The models for the thermal evolution of comets are based on a Solar System-like budget of long-lived radioactive nuclides. It is thus crucial to question whether Solar System-like budgets are likely to be typical across exoplanetary systems. Whilst there is a base-line contribution to long-lived radioactive nuclides from the nuclear supply of the galaxy, their budget can be enriched by supernovae, either directly, or by enrichment of the star-forming molecular cloud. These processes are ubiquitous and all exoplanetary systems will be enriched to a certain degree in long-lived radioactive nuclides.
Abundances of thorium in sun-like stars suggest that most exoplanetary systems around sun-like stars have similar, if not higher, abundances of long-lived radioactive nuclides (Unterborn et al., 2015; Botelho et al., 2019). The story for the more volatile \({}^{40}\)K may, however, differ, with galactic chemical evolution models suggesting that Solar System-levels of \({}^{40}\)K occur in about 1 in 80 exoplanetary systems (Fatuzzo and Adams, 2015). This is interesting to note: whilst a reduced budget of long-lived radioactive nuclides would increase the minimum size heated sufficiently to lead to CO out-gassing, it is plausible that larger comets could continue to release CO on longer timescales than in the models presented here. The models presented here ignore the presence of short-lived radioactive nuclides, such as \({}^{26}\)Al, as their budget in exoplanetary systems is unknown, with some studies suggesting that Solar System-like budgets are typical, whilst others suggest that very few systems are enriched at levels similar to the Solar System (_e.g._ Gounelle, 2015; Lichtenberg et al., 2016; Kuffmeier et al., 2016; Young, 2014). The decay of short-lived radioactive nuclides was potentially important for Solar System comets (_e.g._ Parhi and Prialnik, 2023; Mousis et al., 2017), although the presence of amorphous water ice could suggest a limited budget of \({}^{26}\)Al at formation (Prialnik et al., 1987). The model presented here essentially ignores any thermophysical evolution prior to the end of the gas disc lifetime and assumes that the planetesimals are fully formed at this point. We acknowledge here that planetesimal belts may not be collisionally active (_i.e._ stirred) at the end of the gas disc lifetime, with rather a continued period of growth prior to self-stirring, as discussed in detail in _e.g._ Kennedy and Wyatt (2010). The model presented here treats the thermophysical evolution and collisions as separate processes.
In reality both processes may act on the same bodies, in which case the late-time collisional gas production may be significantly depleted, because volatiles have already been lost from the largest planetesimals due to thermal evolution. The retention of some volatiles in 68-100 km planetesimals (see Fig. 9) and all volatiles in smaller planetesimals allows for the continued gas production from collisions on long timescales, as relevant for example for Fomalhaut. ### Radiogenic heating or collisions? This paper highlights three main channels for the secondary production of gas in debris discs: radiogenic heating, catastrophic collisions, and gentler resurfacing collisions. The model presented here for the thermophysical evolution focuses on the heating due to the decay of long-lived radioactive nuclides, whilst we acknowledge that for comets sufficiently irradiated by their host stars, external heating may also play a role. All three processes are able to sustain the release of CO at levels similar to those required to explain the observations, around \(10^{-6}-10^{-4}M_{\oplus}\) dissociating in \(\sim 100\) yrs2, i.e. \(10^{-8}-10^{-6}M_{\oplus}\)yr\({}^{-1}\) (see Fig. 8), depending on the properties of the planetesimal belt. The key difference between the decay of long-lived radioactive nuclides and collisions here is the timescales, with radiogenic heating only leading to gas production at early times. Thus, whilst the decay of long-lived radioactive nuclides can explain the detection of gas in the majority of debris systems, which are around young (\(<30\) Myr) stars, the detection of CO gas in older planetary systems provides a key test. These include systems such as 49 Ceti at 40 Myr or HD 21997 at 45 Myr, where there is sufficient uncertainty in whether shielded CO could have survived or radiogenic heating could continue on slightly longer timescales than those used in the model presented here.
However, we have to acknowledge uncertainties in the model presented here. Significantly older systems such as Fomalhaut at 440 Myr (Mamajek, 2012), on the other hand, render the current release of gas from the decay of long-lived radioactive nuclides, or the survival of CO from an earlier epoch, unlikely. Footnote 2: In belts with higher CO masses, CO likely has a longer photodissociation timescale as it is self-shielded or shielded by other species (Marino et al., 2020); therefore it is difficult to determine the CO release rate in those systems or assess whether it is of secondary origin. In the model used here the maximum timescale for CO release from radiogenic heating depends on when the maximum temperature of the planetesimals is reached, which in turn depends on the exact structure and cooling of the cometary bodies. In the models of Davidsson (2021) a decrease in the dust:water-ice ratio, accompanied by an increase in the CO:H\({}_{2}\)O ratio, would reduce the rate of heating from long-lived radioactive nuclides and increase the timescale for which CO could be released (by increasing the total amount of CO to be released). Whether this could be increased sufficiently to produce gas at the rate required for Fomalhaut after 440 Myr is not clear. An alternative explanation for the gas production in the Fomalhaut system is the release of volatiles following heating by stellar irradiation. Davidsson (2021) shows that a 200 km body continues to lose CO from CO ice for 200 Myr when irradiated by the Sun at 23 au, in a similar manner to that mentioned in Kral et al. (2021). If this CO is released, the same irradiation would occur at 93 au in the Fomalhaut system (\(L_{*}=16.6L_{\odot}\)), whilst at the location of the Fomalhaut belt (130-150 au) the irradiation is reduced to just under half, and in principle CO loss could continue for just over twice as long (\(\sim\)400-500 Myr). However, the rate of release of CO would potentially be lower.
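The irradiation scaling used in this comparison follows directly from the inverse-square dilution of starlight; a quick numerical check of the numbers quoted above:

```python
# Inverse-square scaling of stellar irradiation: flux ~ L / d^2, so the
# distance around a star of luminosity L (in L_sun) receiving the same
# flux as a body at d_sun around the Sun is d = d_sun * sqrt(L).

L_fomalhaut = 16.6     # stellar luminosity in L_sun (from the text)
d_sun = 23.0           # au: distance in the solar case of Davidsson (2021)

# Distance around Fomalhaut with solar-equivalent irradiation (~93 au):
d_equal = d_sun * L_fomalhaut ** 0.5

# Relative irradiation at the Fomalhaut belt (130-150 au; take ~140 au):
belt = 140.0
flux_ratio = (d_equal / belt) ** 2     # just under half

# If the CO mobilisation timescale scales inversely with the heating
# rate, the ~200 Myr solar-case duration stretches to ~400-500 Myr:
duration_myr = 200.0 / flux_ratio
```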
A simple estimate finds a constant average rate of \(10^{-9}M_{\oplus}\) yr\({}^{-1}\), assuming that the total mass in the Fomalhaut system in bodies up to 300 km is \(63M_{\oplus}\)(Krivov and Wyatt, 2021), with a CO mass fraction of 4%. This low release rate may just be able to explain the low mass of CO (\(10^{-7}M_{\oplus}\), Matra et al. 2017b, dissociating in \(\sim 100\) yr). On the other hand, if CO remains present within the planetesimals in the Fomalhaut belt, collisions will continue to release CO for over 440 Myr. Assuming an initial CO fraction of 4%, the belt's location (143 au), width (13.6 au) and predicted total planetesimal mass (\(1.8M_{\oplus}\)) in bodies up to 0.3 km (Krivov and Wyatt, 2021), the collisional gas production (from both catastrophic and resurfacing collisions) would be \(10^{-9}\)\(M_{\oplus}\)yr\({}^{-1}\). Thus, whilst the decay of long-lived radioactive nuclides is unlikely to continue on sufficiently long timescales to explain the gas production observed at Fomalhaut, if the Fomalhaut planetary system had a substantially lower initial budget of long-lived radioactive nuclides (such that volatiles can survive in large planetesimals on long timescales), it remains plausible that the low (compared to other debris discs with CO detections) CO mass observed at Fomalhaut could be released by stellar irradiation slowly heating and mobilising the CO ice. This explanation, however, depends crucially on the presence of large (hundreds of km) planetesimals. Collisions are able to sustain a low rate of gas production for the age of Fomalhaut without the presence of large planetesimals. Thus, this work suggests that both thermal evolution and collisions lead to the release of gas in debris disc systems, with thermal evolution dominating at early times, but less likely to explain CO in old planetary systems such as Fomalhaut.
### Resurfacing or Catastrophic Collisions It is currently difficult to find observational evidence that the gas observed in debris systems is produced in resurfacing collisions rather than catastrophic collisions. One prediction of the models presented here is that the rate of gas production from catastrophic collisions is proportional to the dust production rate (infrared emission), whilst for resurfacing collisions it depends additionally on the population of large planetesimals. At face value the observed population of debris discs with and without CO detections could be seen as evidence in support of resurfacing collisions, as there does not appear to be a direct correlation between infrared emission and the mass of CO detected; see Marino et al. (2020) for a detailed discussion. However, this lack of correlation can plausibly be explained by two things. Firstly, the observed CO may be shielded and thus not proportional to the CO production rate. Secondly, the CO production rate from catastrophic collisions may be proportional to the dust production rate, but will depend additionally on other parameters which can vary between systems, such as the CO fraction of the planetesimals. Thus, it is not currently possible to use the observed population to rule out the potential importance of resurfacing collisions in CO production. From a theoretical perspective, however, it seems likely that non-catastrophic collisions, not just resurfacing collisions but also cratering and other non-catastrophic collisions, occur in planetesimal belts. Whilst this model does not explicitly include cratering collisions, these collisions could potentially release CO at earlier times, as the more frequent cratering collisions chip away at the outer layers of the planetesimals. However, the bulk of the CO, trapped in the deep interior, would still need to wait for a shattering or resurfacing collision to be released. ### How big are the largest planetesimals in debris discs?
This is a crucial question, as highlighted for example by Krivov and Wyatt (2021), which determines the long-term evolution of debris systems. Whilst the Solar System's debris belt contains large (D\(\sim 1,000\) km) planetesimals such as Pluto, the presence of large (\(D>100\) km) planetesimals contradicts observations that indicate fewer old systems have high infrared luminosities from dusty planetesimal belts, as predicted by the collisional evolution of belts containing only small planetesimals (Su et al., 2006; Wyatt et al., 2007; Krivov and Wyatt, 2021). Additionally, the mass budget in planetesimals required for the brightest observed debris systems would exceed that of the solid component of proto-planetary discs or the exoplanet population. Gas production from radiogenic heating depends crucially on the population of large planetesimals and thus provides a test for the size of these bodies in debris discs. This is clearly seen in Fig. 10, where the gas production rate is significantly higher, for the same dust production rate, when the population of larger planetesimals is increased. As resurfacing collisions also only occur for large planetesimals, if planetesimal belts do not contain a population of large (\(>\) tens of km) planetesimals, catastrophic collisions are more likely to dominate the observed release of CO. In the collisional production model, there is another important parameter controlling the system's evolution, namely the average eccentricity \(e\) of planetesimal orbits. This parameter determines mutual impact velocities and thus the outcome of collisions. We took \(e=0.1\), the typical value usually assumed for debris-producing discs (Thebault, 2009), but lower values have been inferred for some discs, which would lead to a less intense but longer-lasting collisional activity (Löhne et al., 2012).
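The link between the mean eccentricity and impact velocities can be sketched with the approximation \(v_{rel}\approx v_{K}\sqrt{1.25e^{2}+i^{2}}\) with \(i\approx e/2\), commonly used in the debris-disc literature; the stellar mass and belt radius below are assumed, Fomalhaut-like values chosen purely for illustration.

```python
import math

# Typical impact speed in a planetesimal belt from the mean eccentricity e,
# using the common approximation v_rel ~ v_K * sqrt(1.25 e^2 + i^2) with
# i ~ e/2. Stellar mass and belt radius below are illustrative assumptions.

def impact_speed(e, a_au, m_star=1.0):
    """Approximate mutual impact speed in km/s."""
    v_k = 29.8 * math.sqrt(m_star / a_au)   # Keplerian speed in km/s
    i = e / 2.0                             # assumed inclination distribution
    return v_k * math.sqrt(1.25 * e ** 2 + i ** 2)

# e = 0.1 in a Fomalhaut-like belt (a ~ 143 au, M_* ~ 1.9 M_sun, assumed):
v = impact_speed(0.1, 143.0, m_star=1.9)    # a few hundred m/s
```

Since \(v_{rel}\) scales linearly with \(e\), halving the mean eccentricity halves the impact energy per unit mass by a factor of four, which is why lower inferred eccentricities imply a less intense but longer-lasting collisional cascade.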
### The composition of comets, as derived from gas observations The detection of individual gases released from comets in debris discs provides a unique opportunity to probe the composition of comets in exoplanetary systems, in comparison to our Solar System, as in _e.g._ Matra et al. (2017b). This work highlights the difficulties in using observed CO as a probe of the total CO content of planetesimals. Thermal evolution is likely to have played a significant role in reducing the initial volatile content of comets, even during the primordial disc phase, for comets both in the Solar System and in exoplanetary systems (Davidsson, 2021; Lichtenberg and Krijt, 2021). Additionally, the models presented here, notably Figs. 7 and 8, show that when resurfacing collisions are considered the ratio of the gas to dust production rate can be significantly above (at early times) or below (at late times) the CO content of the planetesimals. ## 6 Conclusions The observation of gas in traditionally gas-poor debris disc systems provides crucial clues regarding the evolution of volatiles within planetary systems. Here, we compare a model that predicts the secondary release of gas from planetesimal belts due to heating from the decay of long-lived radioactive nuclides to a model for the collisional production of CO in both catastrophic and resurfacing collisions. The release of gas from catastrophic collisions follows the dust evolution of the belt, whilst non-catastrophic collisions, such as resurfacing (or shattering) collisions in large (hundreds of km) planetesimals, contribute to the early release of gas at higher rates than with only catastrophic collisions. We predict the gas release from collisions as a function of the properties of the planetesimal belt.
The release of gas from resurfacing collisions depends crucially on the presence of large (hundreds of km) planetesimals and means that the observed rate of CO release compared to the dust production is not always a good probe for the CO content of the comets. Radiogenic heating from the decay of isotopes such as \({}^{40}\)K, \({}^{232}\)Th, \({}^{235}\)U and \({}^{238}\)U can lead to the heating of comets and CO gas production rates comparable to those required to explain the observations, if planetesimal belts contain planetesimals of tens to hundreds of kilometres in size. Radiogenic heating has the potential to explain the CO observed in all young (\(<\) 50 Myr) planetary systems, whilst the presence of CO gas in older planetary systems, most notably Fomalhaut at 440 Myr (Matra et al., 2017), is readily sustained by collisions. We highlight the potential importance of the slow penetration of stellar irradiation to the deep interiors of comets, as suggested by Kral et al. (2021), particularly for old planetary systems such as Fomalhaut. ## 7 Data availability The data and codes used in this manuscript can be found at [https://github.com/abonsor/coll_gas](https://github.com/abonsor/coll_gas) ## 8 Acknowledgements AB acknowledges the support of a Royal Society University Research Fellowship, URF\(\backslash\)R1\(\backslash\)211421. SM is supported by a Royal Society University Research Fellowship (URF-R1-221669). We acknowledge fruitful discussions with Uri Malamud and Jurgen Blum. Parts of the research were carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
2307.16008
A many-channel FPGA control system
We describe a many-channel experiment control system based on a field-programmable gate array (FPGA). The system has 16 bit resolution on 10 analog 100 MS/s input channels, 14 analog 100 MS/s output channels, 16 slow analog input and output channels, dozens of digital inputs and outputs, and a touchscreen display for experiment control and monitoring. The system can support 10 servo loops with 155 ns latency and MHz bandwidths, in addition to as many as 30 lower bandwidth servos. We demonstrate infinite-impulse-response (IIR) proportional-integral-differential (PID) filters with 30 ns latency by using only bit-shifts and additions. These IIR filters allow timing margin at 100 MS/s and use fewer FPGA resources than straightforward multiplier-based filters, facilitating many servos on a single FPGA. We present several specific applications: Hänsch-Couillaud laser locks with automatic lock acquisition and a slow dither correction of lock offsets, variable duty cycle temperature servos, and the generation of multiple synchronized arbitrary waveforms.
Daniel T. Schussheim, Kurt Gibble
2023-07-29T15:32:17Z
http://arxiv.org/abs/2307.16008v1
# A many-channel FPGA control system ###### Abstract We describe a many-channel experiment control system based on a field-programmable gate array (FPGA). The system has 16 bit resolution on 10 analog 100 megasamples-per-second (MS/s) input channels, 14 analog 100 MS/s output channels, 16 slow analog input and output channels, dozens of digital inputs and outputs, and a touchscreen display for experiment control and monitoring. The system can support 10 servo loops with 155 ns latency and MHz bandwidths, in addition to as many as 30 lower bandwidth servos. We demonstrate infinite-impulse-response (IIR) proportional-integral-differential (PID) filters with 30 ns latency by using only bit-shifts and additions. These IIR filters allow timing margin at 100 MS/s and use fewer FPGA resources than straightforward multiplier-based filters, facilitating many servos on a single FPGA. We present several specific applications: Hänsch-Couillaud laser locks with automatic lock acquisition and a slow dither correction of lock offsets, variable duty cycle temperature servos, and the generation of multiple synchronized arbitrary waveforms. Department of Physics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA. Author to whom correspondence should be addressed: [email protected] ## I Introduction Field programmable gate arrays (FPGA's) are customizable and reconfigurable alternatives to analog electronics to control modern physics experiments. FPGA's often include fast digital logic, digital signal processing (DSP), data transceivers, other hardware elements and reconfigurable interconnections. Combined with high-speed analog-to-digital converters (ADC's) and digital-to-analog converters (DAC's), FPGA's are attractive options for implementing flexible high-speed servos, especially ones that benefit from conditional and dynamic features that are cumbersome to implement with discrete analog components.
FPGA's have been widely used for laser and cavity frequency stabilization [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], for phase and frequency metrology [11, 12] and laser frequency comb stabilization [13, 14], and for timing pattern generators [15, 16]. FPGA servos can provide MHz bandwidths, which are often limited by the latencies of the high-speed ADC's and DAC's that sample at 100 MS/s or higher. A number of high-speed FPGA control systems have been demonstrated that implement one or two servos [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 13, 14], four servos [17], in addition to a scalable system where an FPGA synchronizes multiple daughter boards, each with its own FPGA that supports two high-speed servos [4]. For slower servos, with sample rates of several MS/s, control systems with as many as 8 servos on a single FPGA have been implemented [18, 19, 20]. Systems with many RF inputs, with one or more FPGA's, have been constructed for precise control of RF waveforms for particle accelerators [21, 22, 23] and the control of superconducting qubits [24]. A number of these systems use FPGA's integrated into a system-on-chip (SoC) [3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 16, 17, 21], which include a processor, facilitating floating point operations, flexible programming, and the implementation of Ethernet and USB communication protocols. Here, we demonstrate a many-channel FPGA system (MCFS) that uses a single FPGA to implement as many as 10 independent fast servos at 100 megasamples per second (MS/s) (see Fig. 1 and Table 1). This MCFS also supports up to 30 slow servo loops, either with analog inputs and outputs or analog inputs and digital outputs. Using a single FPGA facilitates interconnections between multiple servos and with the experiment control, and consumes less power per servo than SoC implementations and systems that use multiple FPGA's. 
Our system can perform a significant fraction of the tasks in a variety of contemporary experiments, including current atomic physics experiments; we use it to stabilize several lasers and cavities for second-harmonic and doubly-resonant sum-frequency generation [25, 26, 2], and to laser-cool and trap cadmium [27, 28, 29]. We implement multiple feedback controllers in an FPGA with low-latency digital proportional-integral-differential (PID) gain servos [1, 3, 4, 7, 8, 10] using fast and efficient infinite-impulse-response (IIR) filters [30]. Although some applications, such as high-Q notch filters, require precise filter coefficients, the gain margins of PID servos are often of order 2. Therefore, gain steps and filter coefficients that are \(2^{n}\) often have sufficient precision. Multiplications by coefficients that are \(2^{n}\) are simple and fast bit-shift operations that do not use large multipliers. With one more optional bit-shift and addition for each filter term, our PID gains have a resolution of 25% or better, with coefficients of \(2^{-n}(1+\{-\nicefrac{{1}}{{8}},0,\nicefrac{{1}}{{4}},\nicefrac{{1}}{{2}}\})\): ... 0.875, 1, 1.25, 1.5, 1.75, 2, 2.5 ... The contributions to the IIR coefficients for PID gains and any pole or zero frequencies are separable. This approach uses a smaller fraction of FPGA resources than multiplier-based filters and can have timing margin at 100 MS/s. Below we describe our hardware, these bit-shift-addition IIR filters, and several applications that are well-suited for an FPGA control system. One is a servo with automatic locking [18, 1] for a build-up cavity for second-harmonic and doubly resonant sum-frequency generation. Here, Hänsch-Couillaud stabilization [31] is enhanced with a slow dither lock to correct lock offsets and their drifts. This lock includes a synthesized dither and a low-resource lock-in amplifier. Another application is a temperature servo for optical cavities and nonlinear crystals that uses a variable-duty-cycle digital output.
Finally, we describe synchronized 100 MS/s arbitrary waveform generators that control the laser frequency and intensity for a cadmium magneto-optical trap (MOT) using the narrow 67 kHz wide 326 nm intercombination line [29]. Our MCFS uses a remote touchscreen interface to display current and historical system status and to accept control inputs. Our open-source baseboard design and its associated Verilog software are available online [32]. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Input/Output** & **\# of channels** & **Sample rate (MS/s)** \\ \hline \hline Fast ADC & 10 & 100 \\ \hline Fast DAC & 14 & 100 \\ \hline Slow ADC & 16 & 0.125 \\ \hline Slow DAC & 16 & 0.05 \\ \hline Digital I/O & 6+8 & 100 \\ \hline Digital Input & 22 & 2 and 3 \\ \hline Digital Output & 26 & 2 and 3 \\ \hline \end{tabular} \end{table} Table 1: Inputs and outputs of the many-channel FPGA system depicted in Fig 1. The fast and slow analog-to-digital converters (ADC’s) and digital-to-analog converters (DAC’s) have 16 bit resolution. The channels sampled by the slow ADC’s can be selected, for example, all channels at 0.125 MS/s or two channels at 1 MS/s. ## II Hardware Our many-channel FPGA system uses a commercial FPGA module[33] that plugs into a baseboard that we developed. The FPGA module has 216 accessible FPGA input/output (I/O), which is sufficient to control the numerous ADC's and DAC's on the baseboard. The FPGA has 25,350 logic slices, each containing 4 look-up tables and 8 flip-flops; 600 DSP slices containing a pre-adder, a 25x18 multiplier, a ternary adder and an accumulator; and 325 36 kb RAM blocks. Other pin-compatible modules with more FPGA resources are available that could accommodate additional software features. Our 6-layer, 8" x 12" baseboard has 5 two-channel 16-bit 100 MS/s fast ADC's, and 7 two-channel 16-bit 100 MS/s fast DAC's[34]. 
These converters have 70 ns and 55 ns latency, and use only 10 or 17 FPGA I/O for each 2-channel converter. As in previous FPGA control systems[12, 14, 6, 7, 1], the latency of the fast ADC's and fast DAC's is the dominant limitation to the servo bandwidths. In addition to the fast converters, this MCFS has two eight-channel 16-bit slow ADC's and two eight-channel 16-bit slow DAC's (see Table 1). The slow analog channels are useful for lower bandwidth signals and require only 7 and 5 FPGA I/O for the 16 slow ADC and 16 slow DAC channels. The analog inputs and outputs are buffered with operational amplifiers. The fast inputs have 10 MHz bandwidths with a \(\pm\)4 V range, the fast outputs have 5 MHz bandwidths and a \(\pm\)18 V range, the slow inputs have 160 kHz bandwidths and a \(\pm\)10 V range, and the slow outputs have 10 kHz bandwidths and a \(\pm\)18 V range. Figure 1: Schematic of the many-channel FPGA system. An FPGA module and a custom baseboard provide 10 channels of 100 MS/s 16-bit analog-to-digital converter (ADC) inputs and 14 channels of 100 MS/s 16-bit digital-to-analog converter (DAC) outputs. The baseboard also has 16 channels each of multiplexed slow ADC's (125 kS/s) and slow DAC's (50 kS/s), fast digital input/output (I/O) that could interface with additional slow ADC's, and more than 20 digital shift-register I/O at 2-3 MS/s, driven by a 50 MHz bus using only 7 FPGA I/O. The FPGA and its software can implement 10 laser/cavity PID servos with automatic lock acquisition and 9 or more variable duty cycle temperature servos, and can be monitored and controlled via the touchscreen display. Our baseline FPGA program has nine laser and cavity servos, eight variable duty cycle temperature servos, an arbitrary waveform synthesizer (Arb. Wave. Syn.) and digital signal processing (Sig. Proc.), a touchscreen display and control interface, and logic to reassign servo and system parameters via a serial data input.
The amplifiers and their feedback components are on the opposite side of the board from the ADC's and DAC's, shielding them from digital signals and providing access, e.g., for bandwidth and range modifications, when the baseboard is mounted in its enclosure. The MCFS also has 6 channels of buffered 100+ MS/s digital I/O, 22 channels of 2 MS/s digital inputs, 26 channels of 2-3 MS/s digital outputs, and 8 channels of unbuffered digital I/O on a FPC connector that could be used for additional slow ADC's. A remote backlit, 3.5" color LCD touchscreen[35] connects to the baseboard via an SPI data bus. The baseboard also has a USB and an Ethernet connector. The baseboard design reduces digital-analog and analog-analog crosstalk. Ground planes fill much of the unused space on the six layers of the baseboard. Adjacent chips are separated from one another with gaps in the ground planes, especially to guide the return currents of high-speed digital lines. Vias connect the ground planes of each layer to reduce potential differences across ground plane gaps. The ground planes also shield analog signals and power planes from high-speed digital signals. Power is supplied to the baseboard, and in turn to the FPGA module, from a separate circuit board that is fed by a single +15 V input, which drives switching regulators[32] to power the digital electronics and linear regulators for the analog circuits. The switching regulators use frequencies between 0.38 and 1.1 MHz, e.g., to be safely above typical oscillation frequencies of atoms trapped in optical lattices. We mount the MCFS in an aluminum chassis box, providing heat sinking, radio-frequency shielding, and signal connections for the experiment. Because the FPGA module consumes the highest power of all of the baseboard components, we mount it with a small air gap to an aluminum heat spreader on the side of the box.
The FPGA temperature is typically 70 \({}^{\circ}\)C with this passive heatsinking, safely below its 100 \({}^{\circ}\)C maximum. The ADC's and DAC's temperatures are lower, of order 50 \({}^{\circ}\)C, via their heatsinking to the baseboard and convective air currents to the chassis box. ## III Infinite-Impulse-Response Filters We construct low-latency IIR PID's by summing the outputs of three parallel filters: a 1\({}^{\text{st}}\)-order proportional (P) filter with a high-frequency roll-off, a 1\({}^{\text{st}}\)-order integral (I) filter that includes an optional low-frequency gain limit, and a 2\({}^{\text{nd}}\)-order differential (D) filter (Fig. 2). To implement many PID controllers with the MCFS, we use bit-shift-addition IIR filters, which use a smaller fraction of the available FPGA resources than comparable multiplier-based filters. In our design for the configuration of Table II, including real-time adjustability of all parameters, the proportional and integral filters each use a minimum of 1066 (1.1%) FPGA logic-slice look-up tables and the differential uses 1261 (1.2%), for a total of 3.4%. For comparison, multiplier-based filters would use 14 (2.3%) DSP slices each for P and I, and 20 (3.3%) for D, for a total of 8.0%. Filters using bit-shifts have multiplier coefficients of 2\({}^{-n}\), and with an additional single bit-shift-addition, each filter term gives at least 25% resolution, i.e., \(2^{-n}(1+\{-\nicefrac{{1}}{{8}},0,\nicefrac{{1}}{{4}},\nicefrac{{1}}{{2}}\})\). These shift-add filters allow timing margin at 100 MS/s with one clock cycle of latency,[36] whereas our straightforward implementation of multiplier-based IIR filters did not have timing margin.[1, 8] In our PID's, the D contribution to the filter output has no additional latency, and we pipeline the addition of the P and I, which delays their contributions by one clock cycle to retain timing margin.
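The shift-add coefficient set and its 25% worst-case resolution can be checked directly. The sketch below (illustrative Python, not the paper's Verilog implementation) enumerates the realisable gains \(2^{-n}(1+\{-1/8,0,1/4,1/2\})\) and shows the corresponding shift-add hardware operation; the helper function and its arguments are hypothetical names for illustration.

```python
from fractions import Fraction

# Gains realisable with one bit-shift (2^-n) plus at most one extra
# shifted addition: 2^-n * (1 + f) for f in {-1/8, 0, 1/4, 1/2}.
# Negative n correspond to left-shifts (gains above 1).
fracs = [Fraction(-1, 8), Fraction(0), Fraction(1, 4), Fraction(1, 2)]
gains = sorted({Fraction(2) ** (-n) * (1 + f)
                for n in range(-2, 8) for f in fracs})
# ... 0.875, 1, 1.25, 1.5, 1.75, 2, 2.5 ... as in the text.

# Worst-case step between adjacent realisable gains is exactly 25%:
ratios = [b / a for a, b in zip(gains, gains[1:])]

# In hardware, e.g. x * 2^-n * (1 + 2^-shift) is just shifts and an add:
def shift_add_gain(x, n, shift):
    """Multiply integer x by 2^-n * (1 + 2^-shift) using shifts and adds."""
    return (x + (x >> shift)) >> n
```

The exact-rational check confirms that no two adjacent gains differ by more than a factor of 5/4, matching the quoted 25% resolution.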
Since 1\({}^{\text{st}}\)-order filters are a subset of 2\({}^{\text{nd}}\)-order filters, below we first describe a 2\({}^{\text{nd}}\)-order D filter and then 1\({}^{\text{st}}\)-order P and I filters, and finally discuss eliminating truncation instabilities of 2\({}^{\text{nd}}\)-order filters.

### A. PID IIR filters

IIR filters are a recursive, discrete-time algorithm that approximates a continuous transfer function with linear combinations of the most recent and prior input(s), and the prior output(s). The output of a general \(2^{nd}\)-order IIR filter is:

\[y_{0}=a_{1}y_{1}+a_{2}y_{2}+b_{0}x_{0}+b_{1}x_{1}+b_{2}x_{2}\,\]

where \(y_{n}\)'s are outputs, \(x_{n}\)'s are inputs, and \(a_{n}\)'s and \(b_{n}\)'s are filter coefficients. The subscripts on the \(x_{n}\)'s and \(y_{n}\)'s indicate previous or current values; \(y_{0}\) is the current output, \(y_{1}\) is the previous output and \(y_{2}\) preceded \(y_{1}\). The filter coefficients, \(a_{n}\) and \(b_{n}\), determine the transfer function,[30] and \(a_{2}=0=b_{2}\) in first-order filters. A transfer function for a differential gain \(D\) with a high-frequency roll-off (see Fig. 2) is:

\[H_{D}(s)=\frac{D(2\pi f_{0})^{2}s}{(2\pi f_{0})^{2}+s(\gamma+s)},\]

where \(s=2\pi if\), \(f_{0}\) is the roll-off frequency and \(\gamma\) is the damping for a filter quality factor \(Q=2\pi f_{0}/\gamma\).

Figure 2: Gain and phase of a PID transfer function. The PID output (black, solid curve) is the sum of a \(1^{\text{st}}\)-order integral filter (gray dotted curve), including an optional low-frequency gain cap \(I/2\pi f_{L}\) (red dashed curve), a \(1^{\text{st}}\)-order proportional filter (green dot-dashed curve) with a high-frequency roll-off \(f_{H}\), and a \(2^{\text{nd}}\)-order differential filter (blue, dot-dot-dashed curve) with a high-frequency roll-off \(f_{0}\) and damping \(\gamma\).

The filter coefficients are:

\[a_{1}=2-\widetilde{\omega}^{2}-\widetilde{\gamma},\qquad a_{2}=-1+\widetilde{\gamma},\qquad b_{0}=\frac{\overline{D}}{2},\qquad b_{1}=0,\qquad b_{2}=-\frac{\overline{D}}{2}.\]

Here, \(\widetilde{\omega}\equiv 2\pi f_{0}T/[1+\gamma T/2+(2\pi f_{0}T)^{2}/4]^{1/2}\), \(\widetilde{\gamma}\equiv\gamma T/[1+\gamma T/2+(2\pi f_{0}T)^{2}/4]\), and \(\overline{D}\equiv\widetilde{\omega}^{2}D/T\), where \(1/T\) is the filter update rate. The coefficients \(a_{n}\) and \(b_{n}\) separate into gain and frequency terms, \(\overline{D},\widetilde{\omega}^{2}\) and \(\widetilde{\gamma}\), and the IIR output becomes:

\[y_{0}=y_{1}-\widetilde{\omega}^{2}y_{1}+dy-\widetilde{\gamma}dy+\frac{\overline{D}}{2}dx\ . \tag{1}\]

Here \(dy=y_{1}-y_{2}\) is the difference of the previous two outputs and \(dx=x_{0}-x_{2}\) is the difference of the current input and that from two clock periods earlier [37]. We highlight that the differential gain \(\overline{D}\) multiplies only \(dx\), and not \(y_{1}\) or \(dy\), whereas the filter high-frequency roll-off coefficients \(\widetilde{\omega}\) and \(\widetilde{\gamma}\) multiply \(y_{1}\) and \(dy\) and not \(dx\), beyond \(\widetilde{\omega}\) scaling the gain. As discussed in more detail in the next section, the desired filter frequencies require a higher precision of \(y_{0}\) than do the gain coefficients, and this naturally allows sub-LSB input servo resolution. The filter output (1) is the sum of the differential gain contribution, and contributions from the frequency roll-off and filter damping coefficient.
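As an illustration of the recursion in Eq. (1), the following sketch runs the differential filter in floating point (the FPGA uses fixed-point shift-add arithmetic); the coefficient values are illustrative, not taken from Table II.

```python
# One clock cycle of the 2nd-order differential filter of Eq. (1):
# y0 = y1 - w2*y1 + dy - g*dy + (Dbar/2)*dx, with w2 = omega~^2, g = gamma~.
def d_filter_step(y1, y2, x0, x2, w2, g, Dbar):
    dy = y1 - y2              # difference of the two previous outputs
    dx = x0 - x2              # input minus the input two clock periods earlier
    return y1 - w2 * y1 + dy - g * dy + (Dbar / 2) * dx

# Impulse response: with zero history, the first output is (Dbar/2)*x0.
w2, g, Dbar = 2**-10, 2**-6, 2**-4    # illustrative shift-add coefficients
y2 = y1 = 0.0
xs = [1.0, 0.0, 0.0, 0.0, 0.0]
outs = []
for n, x0 in enumerate(xs):
    x2 = xs[n - 2] if n >= 2 else 0.0
    y0 = d_filter_step(y1, y2, x0, x2, w2, g, Dbar)
    outs.append(y0)
    y2, y1 = y1, y0
```

The first output equals \(\overline{D}/2\); the \(dy\) terms then propagate the response with the roll-off and damping set by \(\widetilde{\omega}^{2}\) and \(\widetilde{\gamma}\).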
Instead of multiplying by the coefficients \(a_{n}\) and \(b_{n}\), the terms for \(\overline{D},\widetilde{\omega}^{2}\) and \(\widetilde{\gamma}\) in (1) can be simply implemented with bit-shifts when precisions of factors of 2 are sufficient. For example, a gain \(\overline{D}\) of \(2^{-14}\) is a right bit-shift of \(dx\) by 14: \(dx\gg 14\). For more precise PID contributions, we first optionally add a term with an additional bit-shift before applying the overall shift; \((dx+dx\gg 2)\gg 14\) yields \(\overline{D}=1.25\cdot 2^{-14}\). This gives two fractional bits of precision, \(2^{-n}(1+\{-\tfrac{1}{8},0,\tfrac{1}{4},\tfrac{1}{2}\})\), which increases as ...0.5, 0.625, 0.75, 0.875, 1, 1.25, 1.5, 1.75, 2..., and similarly for \(\widetilde{\omega}^{2}\) and \(\widetilde{\gamma}\). Along these lines, bit-shifts can be used for coarse scaling, combined with multipliers to retain precision [6, 6], to reduce the required size of the multipliers. Inverting the above expressions gives \(D\), \(f_{0}\) and \(\gamma\) in terms of the bit-shifts in (1), \(\overline{D}/2\), \(\widetilde{\omega}^{2}\), and \(\widetilde{\gamma}\). The differential gain is \(D=\overline{D}T/\widetilde{\omega}^{2}\) with a high-frequency roll-off \(f_{0}=\widetilde{\omega}/\{2\pi T[1-\widetilde{\gamma}/2-\widetilde{\omega}^{2}/4]^{1/2}\}\), and damping \(\gamma=\widetilde{\gamma}/[T(1-\widetilde{\gamma}/2-\widetilde{\omega}^{2}/4)]\), where \(\overline{D},\ \widetilde{\omega}^{2},\ \widetilde{\gamma}=2^{-n}(1+\{-\tfrac{1}{8},0,\tfrac{1}{4},\tfrac{1}{2}\})\). We note that \(f_{0}\) and \(\gamma\) become nonlinear in \(\widetilde{\gamma}\) and \(\widetilde{\omega}\) for large \(\widetilde{\gamma}\) and \(\widetilde{\omega}\). To have timing margin at 100 MS/s, we use two fractional bits of precision for \(\overline{D}\) and \(2^{-n}\) precision for \(\widetilde{\omega}^{2}\) and \(\widetilde{\gamma}\), which gives \(2^{-1/2}\), 1, \(2^{1/2}\), 2,... resolution for \(\widetilde{\omega}\).
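A minimal sketch of this shift-add arithmetic (illustrative values; Python integers stand in for the FPGA's fixed-point words):

```python
# Multiply by 2^-n * (1 + sign * 2^-frac_shift) using only shifts and one add.
# Python's >> truncates toward -infinity; the rounding described in Sec. III B
# (adding 2^(s-1) before the shift) is omitted here for clarity.
def shift_gain(x, n, frac_shift=None, sign=+1):
    t = x if frac_shift is None else x + sign * (x >> frac_shift)
    return t >> n

x = 1 << 16                           # example input word
assert shift_gain(x, 14) == 4         # 2^-14           * 2^16
assert shift_gain(x, 14, 2) == 5      # 1.25  * 2^-14   * 2^16
assert shift_gain(x, 14, 1) == 6      # 1.5   * 2^-14   * 2^16
assert shift_gain(x, 14, 3, -1) == 3  # 0.875 * 2^-14 -> 3.5, truncated
```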
Although the implementation timing report may not show timing margin for differential filters that have two fractional bits of precision for \(\widetilde{\omega}^{2}\) and \(\widetilde{\gamma}\), we nonetheless observed reliable operation at 100 MS/s. Further, if the differential gain \(\overline{D}\) remains adjustable and the high-frequency roll-off and damping are fixed, \(\overline{D},\widetilde{\omega}^{2}\) and \(\widetilde{\gamma}\) can all have two fractional bits of precision with timing margin at 100 MS/s. For the update rates of our temperature servos, this filter has timing margin with adjustable 25% precision on all terms. We follow steps similar to those for the D filter for the first-order P and I filters, with transfer functions:

\[H_{P}(s)=\frac{P}{1+s/2\pi f_{H}}\ \text{and}\]
\[H_{I}(s)=\frac{I}{2\pi f_{L}+s}.\]

Here, \(P\) is the proportional gain, \(f_{H}\) is a high-frequency roll-off, and \(I\) is the integral gain, which can include a low-frequency integral gain limit of \(I/2\pi f_{L}\). These P and I filters have functionally identical coefficients:

\[a_{1}=1-\bar{\omega}_{H/L}\]
\[b_{0}=b_{1}=\frac{\bar{G}}{2}\]

where \(\bar{\omega}_{H/L}\equiv 2\pi f_{H/L}T/(1+\pi f_{H/L}T)\) and \(\bar{G}\equiv\bar{\omega}_{H}P\) or \(IT\) for the P and I filters. The filter output can then be written as:

\[y_{0}=y_{1}-\bar{\omega}_{H/L}y_{1}+\frac{\bar{G}}{2}\,sx \tag{2}\]

where \(sx=x_{0}+x_{1}\). We implement (2) with bit-shifts and additions, as for the D filter above. Inverting the expressions gives \(P=\bar{G}/\bar{\omega}_{H}\), \(I=\bar{G}/T\) and roll-off frequencies \(f_{H/L}=\bar{\omega}_{H/L}/[2\pi T(1-\bar{\omega}_{H/L}/2)]\), where \(\bar{\omega}_{H/L}=2^{-n}(1+\{-\tfrac{1}{8},0,\tfrac{1}{4},\tfrac{1}{2}\})\) and \(f_{H/L}\) are again nonlinear in \(\bar{\omega}_{H/L}\).
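The first-order recursion of Eq. (2) can be sketched the same way; with \(\bar{\omega}=0\) the filter is a pure integrator, accumulating \(\bar{G}\) per sample for a unit input (floating point here, with an illustrative gain):

```python
# y0 = y1 - w*y1 + (G/2)*(x0 + x1): one update of the 1st-order P/I filter.
def first_order_step(y1, x0, x1, w, G):
    return y1 - w * y1 + (G / 2) * (x0 + x1)

G = 2**-8
y = 0.0
for _ in range(100):                  # constant unit input, no gain limit (w = 0)
    y = first_order_step(y, 1.0, 1.0, 0.0, G)
# After 100 samples the pure integrator has accumulated 100*G.
```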
These filters can have timing margin at 100 MS/s with adjustable parameters that have two fractional bits of precision. Our minimum PID latency is \(\tau\) = 155 ns: 125 ns from the fast ADC and DAC conversions, 10 ns for the fast ADC firmware, 10 ns from the fast DAC firmware, and 1 clock cycle, 10 ns, from the PID filters. If the servo is stable with \(\pi/2\) phase margin, the maximum servo bandwidth is then \(1/4\tau\) = 1.6 MHz.

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
 & **Minimum Gain** & & **Frequency Response** \\ \hline
\(I\) & 0.18 rad s\({}^{-1}\) & \(f_{L}\) & 0, 7.2 \(\mu\)Hz – 32 MHz \\ \hline
\(P\) & 4032; 1.8\(\times\)10\({}^{-9}\) & \(f_{H}\) & 0, 7.2 \(\mu\)Hz – 32 MHz \\ \hline
\(D\) @ \(Q\approx 1\) & 1.2\(\times\)10\({}^{12}\); 4.0\(\times\)10\({}^{5}\) rad\({}^{-1}\) s & \(f_{0}\) & 2.7 kHz – 32 MHz \\ \hline
 & & \(\gamma\) & 3 s\({}^{-1}\) – 2\(\times\)10\({}^{8}\) s\({}^{-1}\) \\ \hline
\end{tabular}
\end{table}
Table 2: PID gains and frequencies for 100 MS/s filters. These values are for I and P filters with 16+9+32 bits and a D filter with 16+9+16 bits, as discussed in the text. The PID gains and \(f_{L/H}\) can be zero, and the minimum nonzero values are given. The minimum \(P\) gain depends on \(f_{H}\), and the table shows the minimum nonzero values of \(P\) at the minimum and maximum \(f_{H}\). Similarly, \(D\) depends on \(f_{0}\) and \(\gamma\), and the minimum values of \(D\) are shown for the minimum and maximum \(f_{0}\) for \(Q\approx\) 1. Normally, the maximum gains are not a limitation when servos have LSB resolution and use a high-frequency filter clock.
### B. Fractional bits, filter stability, and rounding

IIR filters that sample much faster than the servo bandwidth produce less aliasing and a more linear servo response. A straightforward implementation of the above PID filters then requires using words in the filter that are longer than our 16-bit input and output words to allow low-frequency integral gain limits and high-frequency roll-offs that are far below the sampling rate. The gain and frequency ranges for internal words with 16 + 9 + 32 = 57 bits for our P and I filters, and 16 + 9 + 16 = 41 bits for the D filter, are given in Table II for 100 MS/s. Here, the 16 most significant bits correspond to the inputs and outputs from the ADC's and DAC's. The inputs to the PID filters have 9 fractional bits of precision, allowing sub-LSB corrections to the PID inputs. Finally, to enable low filter frequencies, the PID filters have an additional 32 or 16 internal fractional bits. Here, the 9 servo fractional bits and the 32 or 16 internal fractional bits both extend the lower range of filter frequencies, whereas only the 32 or 16 internal fractional bits yield lower gains. Therefore, increasing an unnecessarily small minimum filter gain can allow higher input resolution for a given filter internal word size. With the ranges in Table II, our PID's have timing margin at 100 MS/s. For comparison, a straightforwardly implemented multiplier-based filter with the same parameter ranges and \(2^{-n}(1+\{-\tfrac{1}{8},0,\tfrac{1}{4},\tfrac{1}{2}\})\) precision requires 56-bit filter coefficients, which are long enough that straightforwardly implemented filters do not have timing margin at 100 MS/s.

Second- and higher-order filters can be unstable as errors accumulate due to the truncation of least-significant bits. For example, the term \(-\widetilde{\gamma}dy\) in (1) of the D filter yields a slow decay of \(dy\). This decay ceases when \(-\widetilde{\gamma}dy\) is smaller than the least-significant bit (LSB) of the 41-bit internal filter word.
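The remedies described in the next paragraph, clamping the damping term to \(\pm 1\) LSB and rounding before right bit-shifts, can be sketched in Python integers (illustrative word sizes, not the firmware):

```python
# Round-before-truncate: add 2^(s-1) before a right bit-shift by s.
def rounded_shift(x, s):
    return (x + (1 << (s - 1))) >> s

# Clamp the damping term gamma~*dy (held at 2^s sub-LSB precision) to at
# least +-1 LSB, so the decay of dy never stalls below the LSB.
def clamped_damping(gdy_scaled, s):
    v = rounded_shift(gdy_scaled, s)
    if v == 0 and gdy_scaled != 0:
        return 1 if gdy_scaled > 0 else -1
    return v

assert rounded_shift(5, 2) == 1       # 1.25 rounds down
assert rounded_shift(6, 2) == 2       # 1.5 rounds up
assert clamped_damping(1, 6) == 1     # sub-LSB value clamps to 1 LSB
assert clamped_damping(-1, 6) == -1
```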
The filter thus would continue to add \(dy\) in (1) to make the new output \(y_{0}\), which will normally cause \(y_{0}\) to grow until it overflows. To avoid this accumulation error, we assign \(\widetilde{\gamma}dy\) to be \(\pm 1\) LSB of the 41-bit word when \(0<\pm\widetilde{\gamma}dy<1\). Finally, we round numbers before truncating the LSB's when applying right bit-shifts; we first add \(2^{s-1}\) before dividing by \(2^{s}\), a right bit-shift of \(s\) [38].

## IV. Selected MCFS Applications

### A. Hansch-Couillaud stabilization with a slow dither lock correction

We use Hansch-Couillaud (HC) cavity locks to stabilize several laser frequencies and optical cavity lengths in our laser system. HC locks have low loss and high bandwidth, but can suffer from slow lock offset drifts, for example due to temperature-dependent birefringences. To correct lock offsets and their drifts, we augment HC locks with slow dither locks to the peak transmission, minimum reflection, or peak sum-frequency generation (SFG) output of a cavity[39]. Dither locks of lasers and optical cavities add frequency modulation at the dither frequency, as well as intensity modulation at twice the modulation frequency that is proportional to the square of a small dither amplitude. Here, because the cavity is primarily locked by the higher-bandwidth HC lock, only a small dither amplitude is required to correct lock offsets, and thus it produces a very small intensity modulation. In our locks, the amplitude of the dither is well below the root-mean-square (RMS) noise level of the closed-loop error signal within a typical servo bandwidth of 40 kHz, and even well below the noise in a 1 kHz bandwidth for a dither frequency of order 1 kHz. We normally use dither lock servo bandwidths of order 20 mHz, and the MCFS further includes logic to inhibit dithers, for example, when lasers are pulsed for laser-induced fluorescence detection.
We implement laser and cavity servos with automatic lock acquisition[10, 11, 12, 13, 14, 15, 7, 8] and a slow dither lock correction. To acquire lock, a servo output is scanned until a cavity transmission, reflection, or SFG output passes a threshold, at which point a PID filter is enabled. A feature we find very helpful is displaying each servo's lock status with one of three colors, indicating that the servo is unlocked, locked for longer than 5 seconds, or recently locked, having been unlocked within the last 5 seconds. To correct lock offsets, a synthesized dither is added to the Fast Error Signal in Fig. 3(a), modulating the transmission, reflection, or SFG output, which is then demodulated by a lock-in amplifier to form the slow error signal with high long-term stability. This slow error signal is integrated to correct any offset of the Fast Error Signal. The dither is synthesized from a simple stepped waveform [dotted green curve in Fig. 3(b)] that has no 3\({}^{\text{rd}}\) harmonic and reduced 5\({}^{\text{th}}\) and 7\({}^{\text{th}}\) harmonics. Integrating it twice (dashed blue and solid black) reduces the higher odd harmonics to form a nearly sinusoidal dither, ranging from 93 \(\mu\)Hz to 1.67 MHz. We use a simple demodulation waveform (red dashed) that also contains no 3\({}^{\text{rd}}\) harmonic. Similar integrations demodulate the quadrature 1\({}^{\text{st}}\) harmonic and the in-phase and quadrature 2\({}^{\text{nd}}\) and 3\({}^{\text{rd}}\) harmonics. We note that incorporating bit-shift-addition operations, or a multiplier, instead of this simple 3-level demodulation would slightly increase the signal-to-noise ratio of the demodulated signal and further reduce the sensitivity to 5\({}^{\text{th}}\) and higher odd harmonics. The cavity lock for our SFG of 361 nm light, from 1083 nm and its second harmonic, 542 nm, is another example of the flexibility that an FPGA affords.
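The harmonic elimination behind this stepped dither can be checked numerically. The sketch below is illustrative, not the authors' exact waveform: a 3-level wave whose \(\pm 1\) segments each span 120° contains no 3\({}^{\text{rd}}\) harmonic (each segment covers one full period of it), and two cumulative integrations suppress the remaining odd harmonics further.

```python
import math

N = 120                                   # samples per dither period
w = [0.0] * N
for k in range(N):
    deg = 360.0 * k / N
    if 30 <= deg < 150:
        w[k] = 1.0                        # +1 for 120 degrees
    elif 210 <= deg < 330:
        w[k] = -1.0                       # -1 for 120 degrees

def harmonic(sig, n):
    """Magnitude of the n-th DFT harmonic of one period of sig."""
    M = len(sig)
    re = sum(s * math.cos(2 * math.pi * n * k / M) for k, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * n * k / M) for k, s in enumerate(sig))
    return math.hypot(re, im)

def integrate(sig):
    """Mean-removed cumulative sum: a discrete, periodic integration."""
    m = sum(sig) / len(sig)
    out, acc = [], 0.0
    for s in sig:
        acc += s - m
        out.append(acc)
    return out

dither = integrate(integrate(w))          # nearly sinusoidal
```

The 3\({}^{\text{rd}}\) harmonic vanishes at the waveform level, and each integration attenuates the \(n\)-th harmonic by roughly \(1/n\) relative to the fundamental.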
We use the above HC lock with its slow dither correction to lock a doubly resonant enhancement cavity to the 542 nm light. Because the 542 nm is the 2\({}^{\text{nd}}\) harmonic of the 1083 nm light, the locked enhancement cavity largely tracks the frequency of the 1083 nm input and only a slow correction of its frequency is required, provided by an acousto-optic modulator driven by a voltage-controlled oscillator (VCO). We therefore use a dither lock to lock the 1083 nm light to the enhancement cavity. However, the slow dither lock of the 542 nm lock can interfere with the 1083 nm dither lock. To avoid this, we configure the FPGA to alternately dither the 542 nm error signal or the 1083 nm frequency, while inhibiting the other. Here, we use the intensity of the 361 nm SFG light to enable the PID's and for both dither locks, thereby maximizing the SFG output [39]. As for other locks, we inhibit both dithers for laser-induced fluorescence detection.

Figure 3: (a) Schematic of a cavity lock with a correction from a slow dither lock. The cavity frequency is scanned and, when the cavity transmission (Trans.) or reflection (Ref.) passes a threshold, the PID filter is enabled. A dither is added to the Fast Error Signal and the resulting modulation of the transmission or reflection is demodulated (Demod.). This is then integrated to give the Correction of the offset of the Fast Error Signal. (b) Modulation waveforms. The dither is synthesized from the dotted green curve, by integrating it twice (dashed blue and solid black), producing a dither with no 3\({}^{\text{rd}}\) and reduced higher odd harmonics. Adjusting the coarse time steps provides dither frequencies from 93 \(\mu\)Hz to 1.67 MHz. The demodulation waveform (red dashed) also contains no 3\({}^{\text{rd}}\) harmonic.

### B. Variable duty cycle temperature servo

We implement several servos using the slow ADC's and digital outputs to control the temperatures of non-linear crystals, a reference cavity and a heated Cd oven. Such systems often have thermal response times of order 0.1 s to 100 s, and variable duty cycle (VDC) servos can easily be implemented with the FPGA. As compared to linear current regulation, pulse width modulation uses less power, with negligible added temperature noise for pulse periods much shorter than the system's response time. With a single FPGA controlling multiple servos, it is straightforward to synchronize the delays of the pulses of multiple servos to provide load diversity for a single power source.

Figure 4 depicts a VDC temperature PID servo that produces a constant-frequency output with an adjustable duty cycle. As discussed above, fixing filter coefficients, such as the filter roll-off frequencies \(f_{L},f_{H},f_{0}\) and damping \(\gamma\), yields more timing margin and significantly reduces the required resources. Often, the frequency response only changes significantly when the plant being controlled is substantially modified, so adjustable \(f_{H},f_{0}\) and \(\gamma\) are not needed. Further, the frequency response of the plant determines the ratio of the proportional and integral gains, and the ratio of the differential and proportional gains. We therefore include a multiplier after the sum of the PID gains in Fig. 4 that allows the overall gain to be adjusted even when \(f_{L},f_{H},f_{0}\) and \(\gamma\), as well as the \(P\), \(I\) and \(D\) gains, are not adjustable.[40] This saves significant resources and has timing margin for low filter clock frequencies. A seven-bit (signed) multiplier allows the gains to be adjusted in steps of 1/16, from 1/4 to greater than 2 with greater than 25% precision. We use a 125 kHz clock for our temperature servos, which naturally gives lower ranges for the filter frequencies and smaller gains, and matches the sample rate of the slow ADC's when all channels are sampled sequentially.
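Two fixed-point details of this servo can be checked with a short sketch (illustrative; the 1/4-2 gain range and the 16-cycle sequence described in the next paragraph are taken as given):

```python
import math

# (i) Overall-gain steps of the small signed multiplier: 1/16 resolution from
# 1/4 to 2 keeps the relative step size at or below 25%.
gains = [k / 16 for k in range(4, 33)]
rel_steps = [(b - a) / a for a, b in zip(gains, gains[1:])]
assert max(rel_steps) <= 0.25

# (ii) Duty-cycle dithering: successively add the permuted sequence k/16 to
# the PID output before truncating to an integer number of 2 MS/s samples.
SEQ = [0, 15, 1, 13, 3, 11, 5, 9, 7, 8, 6, 10, 4, 12, 2, 14]
target = 1653.28                      # PID output, in samples per 1 kHz cycle
samples = [math.floor(target + k / 16) for k in SEQ]
```

Over 16 cycles this truncates 12 times to 1,653 and rounds 4 times to 1,654, for an average of 1,653.25 samples, i.e., 16 times finer average resolution.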
Using shift-register outputs to switch heater currents uses only a few high-speed FPGA outputs to control multiple temperature servos. However, with a typical 1 kHz VDC frequency, our 2 MS/s shift-register update rate corresponds to a duty cycle resolution of 0.05%. We increase this resolution by a factor of 16, when averaged over 16 cycles of 1 kHz, by successively adding {0, 15, 1, 13, 3, 11, 5, 9, 7, 8, 6, 10, 4, 12, 2, 14}/16 to the PID output, before the output is rounded to an integer number of 2 MS/s samples. This sequence minimizes the noise by modulating the LSB slowly, and the most-significant fractional bit on every 1 kHz cycle. As an example, consider a PID output of 82.664%, corresponding to 1,653.28 samples at 2 MS/s during each 1 kHz VDC cycle. Successively adding the above sequence over 16 cycles of 1 kHz truncates the PID output 12 times to 1,653 samples and rounds four times to 1,654, for an average of 1,653.25 samples.

### C. Arbitrary waveform generation

The MCFS's 14 channels of 100 MS/s DAC's can generate multiple synchronized arbitrary waveforms with 10 ns resolution. Figure 5 shows three synchronized waveforms generated by a counter-driven state machine. This approach allows longer high-sampling-rate waveforms than are possible with memory-based AWG's. We use the AWG to control the laser frequency (blue-solid) and intensity (green-dashed) and trigger a magnetic field gradient driver to trap neutral cadmium using its 326 nm, 67 kHz wide intercombination transition [29]. Note that the MCFS allows the frequency modulation during the loading stage of the magneto-optical trap (MOT) to always end (and begin) without an abrupt frequency step. We use the two-level trigger (magenta-dotted) to synchronize the reversal of the MOT magnetic field gradient for background subtraction. A touchscreen display button conveniently allows changes between waveforms for several configurations of the experiment.

Figure 4: Variable duty cycle temperature servo.
A slow ADC reading a temperature sensor (TS), relative to an optional setpoint and offset, produces an error signal for a PID servo. The servo output is added to a preset to drive a variable duty cycle digital shift-register output, which pulses current through a heater at a typical rate of 1 kHz. To avoid thermal shocks, before the PID is enabled, the preset increases slowly, on a timescale of order minutes.

To sensitively detect the fluorescence of trapped atoms, we implement a gated integrator with background subtraction. In Fig. 5, during the "+" detection phase, with no laser FM, the fluorescence signal is integrated for a time \(\Delta t_{\text{int}}\) = 16.6716 ms, approximately one 60 Hz cycle. In the subsequent \(\Delta t_{\text{int}}\) interval, the laser frequency is tuned to the blue of the transition to expel the cold atoms from the trap, and then the background is integrated in the next "\(-\)" interval of \(\Delta t_{\text{int}}\) and subtracted from the gated integration of the fluorescence. This difference of gated integrations is stored in Block RAM and can be read from the FPGA. Additionally, the MOT magnetic field gradient is reversed after each trapping and detection sequence, and the differences of gated integrations from one cycle to the next are subtracted and stored, representing the difference in fluorescence for a trapping or anti-trapping MOT magnetic field gradient. These gated integrations with background subtraction and the difference of successive integrations are also connected to fast DAC's and can be displayed on an oscilloscope in real-time.

## V. Conclusion

We demonstrate a many-channel system using a single FPGA to control a large number of experimental sub-systems, including high-speed PID laser and cavity locks, temperature controllers, synchronized arbitrary waveform generation, and the experiment configuration with a remote touchscreen display.
Figure 5: Three synchronized 100 MS/s arbitrary waveforms, adjustable in real-time, to control a laser frequency (blue-solid) and intensity (green-dashed), and trigger MOT field gradients (magenta-dotted) to laser-cool neutral cadmium. The laser light is frequency modulated with an acousto-optic modulator at 50.5 kHz for approximately 400 ms during the MOT loading phase, and then shifted to higher frequency (lower voltage) during a clearing pulse. We use a state machine architecture to produce synchronized long arbitrary waveforms.

We also demonstrate an enhanced Hansch-Couillaud cavity lock, where offsets are corrected with a very small amplitude dither lock, as well as variable-duty-cycle temperature servos. Implementing PID IIR filters with bit-shifts and additions allows real-time adjustment of servo gains with 25% precision, with timing margin at 100 MS/s, and uses fewer FPGA resources than multiplier-based filters. A number of options can provide more available logic, including transferring more operations to the many available DSP slices in our design and using pin-compatible FPGA modules with significantly more resources. Hard-coding PID roll-off frequencies,[40] \(f_{L}\), \(f_{H}\), \(f_{0}\) and \(\gamma\), with 25% precision uses half as many look-up tables while retaining real-time adjustment of the PID gains and thereby the zeroes of the PID transfer function. Restricting the ranges of gains, fixing the relative PID gains and allowing only an overall gain adjustment, or less precision of the gain or high-frequency roll-offs, all save additional FPGA resources. Our default configuration, with arbitrary waveform generation and DSP, has nine cavity servos and two temperature servos that are fully adjustable, and six temperature servos with fixed PID parameters and adjustable overall gains.
Additionally, the operations of PID's that update at less than 100 MS/s, such as the temperature servos, could be pipelined so that a single PID filter sequentially implements multiple temperature servos. Finally, the proportional, integral and differential filters can be pipelined to use the same logic slices[1] and the internal word lengths of the filters can be shortened if the ranges in Table II are not required. Thus, as many as 10 fast servos and 30 slow servos, after adding a daughter board with 24 additional slow ADC channels, could be implemented on a single FPGA with this control system. The open-source software and hardware files for this system are available[32] to facilitate extending and customizing this many-channel FPGA system for a variety of applications.

## Acknowledgements

We gratefully acknowledge many suggestions from Avrum Warshawsky, contributions of Lam Tran, helpful conversations with Marco Pomponio, and financial support from the National Science Foundation.

## Data Availability

The supporting files for this open-source many-channel FPGA system are available at [https://github.com/GibbleLab/FPGA](https://github.com/GibbleLab/FPGA).

## Appendix: Input and Output Noise

The analog input and output noise of the MCFS are shown in Fig. 6 and are primarily set by the ADC and DAC noise levels. To measure the noise of the ADC's in Figs. 6(a-c), the inputs were terminated and their outputs were read by FPGA debugging probes. In Figs. 6(d-f) the DAC's were programmed to output 0 and their noise was measured with a fast ADC. The measurement noise level of the fast ADC's in Figs. 6(d-f) is 4/18 of that in Figs. 6(a,b), after accounting for the 4 V input and 18 V output ranges. The average measured RMS noise levels are 3.7 LSB for the fast ADC's, 1.13 LSB for the fast DAC's in a 10 MHz bandwidth, 0.48 LSB for the slow ADC's, and 0.16 LSB for the slow DAC's in a 200 kHz bandwidth. The coherent peak in Fig.
6 at 380 kHz is from a -20 V switching supply on our power supply board. Its RMS amplitude in Fig. 6(b) is 0.028 LSB, and an average of 0.015 LSB for the 10 fast ADC's, 0.050 LSB for the 14 fast DAC's, and 0.017 LSB for the 16 slow DAC's. The frequencies of the other switching supplies on our power supply board are greater than 600 kHz and below the noise levels in Fig. 6. The largest coherent peaks in Fig. 6(f) are from glitches at multiples of the update rate of the slow DAC's, here at 50 kS/s. To reduce the glitch amplitude, the MCFS baseboard has 5\({}^{\text{th}}\)-order low-pass filters on the slow DAC outputs that strongly attenuate frequencies above 300 kHz, with less than \(\pi/4\) phase lag at frequencies below 10 kHz. This yields an average glitch amplitude of 0.36 LSB from an average glitch impulse of \(-3.0\) LSB\(\cdot\mu\)s. To decrease crosstalk between the fast ADC and DAC channels, the MCFS baseboard has slots in the multiple ground and power planes and between adjacent channels and converters. We measure \(-70\) dBc crosstalk for a 1 MHz full-scale (\(\pm 4\) V) input of a fast ADC on the other channel of the same ADC, less than \(-80\) dBc on channels of the other fast ADC's, and the attenuation is higher at lower frequencies. Finally, the distribution of the bipolar offset errors of the 14 fast DAC outputs has a standard deviation of 1.9 mV and a mean of 1.2 mV. An appropriate DAC channel can thus be selected to reduce the bipolar error.

Figure 6: Input and output noise spectral densities. The fast ADC [blue in a) & b)] is used to measure the noise of the fast and slow DAC's (d-f), and its noise floor is shown in (d-f), shifted by the 4 V/18 V ratio of the ranges of the inputs and outputs. The 380 kHz peak from a switching regulator has an RMS amplitude less than 0.034 LSB on all ADC's and DAC's.
The peaks in f) are at multiples of the 50 kS/s sampling frequency of the slow DAC's, due to intrinsic glitches of the slow DAC's, and correspond to an average RMS amplitude of 0.12 LSB. All data were sampled at 100 MS/s with a fast ADC, except for c), which was sampled at the maximum 125 kS/s of the slow ADC's. The data for e) and f) were additionally averaged with a 100-sample window and down-sampled at 1 MS/s.
2307.09128
Structural sensitivity of chaotic dynamics in Hastings-Powell's model
The classical Hastings-Powell model is well known to exhibit chaotic dynamics in a three-species food chain. Chaotic dynamics appear through period-doubling bifurcation of stable coexistence limit cycle around an unstable interior equilibrium point. A specific choice of parameter value leads to a situation where the chaotic attractor disappears through a collision with an unstable limit cycle. As a result, the top predator goes to extinction. Here we explore the structural sensitivity of this phenomenon by replacing the Holling type II functional responses with Ivlev functional responses. Here we prove the existence of two Hopf-bifurcation thresholds and numerically detect the existence of an unstable limit cycle. The model with Ivlev functional responses does not indicate any possibility of extinction of the top predator. Further, the choice of functional responses depicts a significantly different picture of the coexistence of the three species involved with the model.
Indrajyoti Gaine, Swadesh Pal, Poulami Chatterjee, Malay Banerjee
2023-07-18T10:23:27Z
http://arxiv.org/abs/2307.09128v1
**Structural sensitivity of chaotic dynamics in Hastings-Powell's model**

**Indrajyoti Gaine\({}^{a}\), Swadesh Pal\({}^{b}\), Poulami Chatterjee\({}^{c}\), Malay Banerjee\({}^{a,}\)1**

Footnote 1: Corresponding author: [email protected]

\({}^{a}\) Indian Institute of Technology Kanpur, Kanpur - 208016, India
\({}^{b}\) MS2Discovery Interdisciplinary Research Institute, Wilfrid Laurier University, 75 University Ave W, Waterloo, N2L3C5, Ontario, Canada
\({}^{c}\) Department of Mathematics, Jadavpur University, Kolkata, India

###### Abstract

The classical Hastings-Powell model is well known to exhibit chaotic dynamics in a three-species food chain. Chaotic dynamics appear through period-doubling bifurcation of stable coexistence limit cycle around an unstable interior equilibrium point. A specific choice of parameter value leads to a situation where the chaotic attractor disappears through a collision with an unstable limit cycle. As a result, the top predator goes to extinction. Here we explore the structural sensitivity of this phenomenon by replacing the Holling type II functional responses with Ivlev functional responses. Here we prove the existence of two Hopf-bifurcation thresholds and numerically detect the existence of an unstable limit cycle. The model with Ivlev functional responses does not indicate any possibility of extinction of the top predator. Further, the choice of functional responses depicts a significantly different picture of the coexistence of the three species involved with the model.

**Keywords:** Stability; chaos; functional response; structural sensitivity.

## 1 Introduction

In an ecosystem, various species interact with one another in a variety of ways, such as through mutualism, competition, and predation. Mathematical modelling helps to capture such phenomena and predicts their dynamics for the long term based on the current state of the population and knowledge of relevant ecological processes.
Researchers have used different mathematical modelling approaches to capture these phenomena, e.g., ordinary differential equations (ODEs) and partial differential equations. Different mathematical forms are also used in the models to incorporate the underlying interactions. ODE models provide a framework to describe the dynamics of population growth over time, and they are commonly studied for single-species, two-species, and multi-species interactions within ecological systems. Equilibrium points and their stability are two important features investigated in ODE models; they help in understanding the local behaviour of an ecological system, in particular the co-existing equilibrium point(s), since these provide information about the coexistence of species. Different types of coexistence behaviour occur in ecological models, and they change through local and global bifurcations. For instance, stable coexistence of a system shows constant dynamics over a long time, whereas a regular oscillation gives synchronised periodic dynamics. The system can switch between these two types of coexistence through a supercritical Hopf bifurcation [1]. Ecological systems are inherently nonlinear and can exhibit unexpected fluctuations and irregular oscillations, called chaos. Chaotic dynamics depend on the background parameters and initial conditions; small changes in the initial conditions lead to exponentially diverging trajectories. It is known that single- and two-species autonomous ODE models do not generate irregular oscillatory coexistence; however, models with three or more species can exhibit a wide range of oscillatory solutions, including quasi-periodic and chaotic ones. These two types of oscillations often describe the co-existence of all the species with varying amplitude.
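As a concrete illustration of chaos in a three-species food chain, the classical (nondimensionalized) Hastings-Powell model with Holling type II responses can be integrated directly. The sketch below uses the parameter values commonly quoted for the chaotic "teacup" attractor (\(a_1=5\), \(b_1=3\), \(a_2=0.1\), \(b_2=2\), \(d_1=0.4\), \(d_2=0.01\)), which are assumed from the standard literature rather than taken from this paper.

```python
# Hastings-Powell food chain: prey x, predator y, top predator z,
# with Holling type II responses f_i(u) = a_i*u/(1 + b_i*u).
def hp_rhs(u, a1=5.0, b1=3.0, a2=0.1, b2=2.0, d1=0.4, d2=0.01):
    x, y, z = u
    f1 = a1 * x / (1 + b1 * x)
    f2 = a2 * y / (1 + b2 * y)
    return (x * (1 - x) - f1 * y,
            f1 * y - f2 * z - d1 * y,
            f2 * z - d2 * z)

def rk4_step(f, u, dt):
    # One classical 4th-order Runge-Kutta step.
    k1 = f(u)
    k2 = f(tuple(a + 0.5 * dt * b for a, b in zip(u, k1)))
    k3 = f(tuple(a + 0.5 * dt * b for a, b in zip(u, k2)))
    k4 = f(tuple(a + dt * b for a, b in zip(u, k3)))
    return tuple(a + dt / 6 * (p + 2 * q + 2 * r + s)
                 for a, p, q, r, s in zip(u, k1, k2, k3, k4))

u = (0.8, 0.2, 8.0)                   # illustrative initial condition
dt = 0.05
for _ in range(20_000):               # integrate to t = 1000
    u = rk4_step(hp_rhs, u, dt)
# The orbit remains positive and bounded while oscillating irregularly.
```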
There are many kinds of complex dynamics in a population model, such as multiple attractors [2], catastrophic transitions [3], sub-harmonics of various periods [4], cascades of period doubling [5], and strange attractors [6]. According to Bazykin [7], a two-species model can have a stable equilibrium and stable cycles separated by unstable cycles. Though such a model produces complicated dynamics, it does not produce complex oscillations like chaos, as shown in experimental research. In carefully monitored laboratory trials, cultures of flour beetles (Tribolium castaneum) experience bifurcations in their dynamics when the demographic factors change, including a specific route to chaos [8]. Furthermore, a long-term experiment on a complex food web shows chaos in ecology [9]. The chaotic dynamics in food-chain models emphasize the innate complexity of ecological systems and how challenging it is to forecast their behaviour. Researchers have observed chaotic dynamics in continuous systems, such as the two-prey-one-predator model [10], the one-prey-two-predator model [11], the two-sex model [12], and three-species food chain models [13; 14; 15]. Chaos appears in mathematical models through period doubling [16; 17]. On the other hand, the disappearance of chaos does not always follow the same pattern [18]. In general, when chaos is suppressed, it is either through global bifurcations or crises. In the case of global bifurcations, a basin boundary collision happens between several basins of attraction (regions of the phase space where initial conditions converge to a certain attractor or behaviour). A crisis happens when a chaotic attractor collides with a co-existing unstable fixed point or periodic orbit [18]. As a result, the system undergoes a sudden qualitative change in dynamics and, in both cases, exhibits constant or periodic behaviour following the disappearance of chaos. A system's dynamics depend on the functional forms and the parameter values involved in it. 
For instance, two types of population growth are considered in the literature: exponential and logistic. The growth rate is density-independent for exponential growth and density-dependent for logistic growth, the latter being the most common growth law considered in ecological models. In addition, the functional response is also responsible for the resultant dynamics, as it links two connected trophic levels and plays a crucial role in shaping the dynamics of the system. Primarily, functional responses can be classified into two groups: prey-dependent [19] and prey-predator-dependent [20; 21]. Sometimes, two different types of functional response can show different dynamics, depending on the predatory interactions [22; 23]. Researchers have been exploring how different parametrizations can significantly alter the model dynamics. For the past decades, they have focused on how different functional forms with similar geometrical shapes and like properties influence the model dynamics [24; 25; 26; 27]. A small change in functional form, here the functional response, can cause significant consequences in the dynamics of the model, referred to as 'structural sensitivity' [28]. It has been studied on different ecological models, e.g., zooplankton feeding on multiple resources [29], a nutrient-phytoplankton-zooplankton model [30], and a complex marine ecosystem [31]. Different approaches have also been applied to study structural sensitivity, such as a probabilistic viewpoint [32], discrete-time systems [33], a statistical viewpoint [34], and many more. It has been shown that if the functional responses of a sub-model are fitted in the region around the co-existing stable equilibrium point, then the structural sensitivity is reduced [35]. Furthermore, researchers have shown that a small continuous change in the functional form leads to significant alterations in bifurcations from a three-dimensional point of view [36; 37]. 
In ecology, different functional forms are used in mathematical models to incorporate the same type of physical phenomenon. The primary focus of this work is to examine, in terms of structural sensitivity, how the dynamics change when these forms, here the functional responses, are varied along with the background parameters. We have chosen an appropriate parameterization of these functional responses that follows the assumptions: zero at zero, strictly monotonically increasing, finite horizontal asymptote, and concave down [19]. These assumptions help to make explicit the threshold behind a regime shift in the system's dynamics. They are needed because a model formulated without specifying the functional forms can predict many results, but comparing or understanding their qualitative characteristics may become challenging. In [38], the authors considered a two-species food chain model and studied local and global dynamics in the presence of different functional responses. Here, we extend this idea to a three-species model and study the sensitivity of chaotic dynamics. For this, we consider Hastings' model [13], which features a number of intriguing dynamics, including the "tea-cup" attractor and the appearance of chaos through period doubling of a periodic orbit. In order to examine the model's structural sensitivity, we take the parameter configurations in such a way that the two functional responses look similar, and we consider a bifurcation parameter that is independent of these functional responses. The organization of the paper is as follows. The main structure of the model and the conditions on the functional responses are given in Sect. 2. The model can have multiple equilibrium points depending on the parametric conditions. All the possibilities for the equilibrium points and their stability criteria are discussed in Sect. 3. 
The model changes its dynamics as the bifurcation parameter (the mortality rate of the top predator) varies with the others kept fixed, and this happens through different types of bifurcations such as saddle-node, transcritical, and Hopf. The analytical conditions for these bifurcations in terms of the general functional forms are presented in Sect. 4. We validate our theoretical findings by choosing appropriate parametric values in Sect. 5.

## 2 Mathematical Model

We first consider a general form of a three-species model [13] with logistic growth in the prey population and linear death rates (\(d_{1}\) and \(d_{2}\)) for the intermediate and top predators as: \[\frac{dx}{dt}=x-x^{2}-f_{1}(x)y, \tag{1a}\] \[\frac{dy}{dt}=f_{1}(x)y-d_{1}y-f_{2}(y)z, \tag{1b}\] \[\frac{dz}{dt}=f_{2}(y)z-d_{2}z, \tag{1c}\] where \(f_{1}(\cdot)\) and \(f_{2}(\cdot)\) are called the functional responses, and each \(f_{i}\) (\(i=1,2\)) satisfies the conditions: (I) zero at zero, i.e., \(f_{i}(0)=0\); (II) monotone increasing, i.e., \(f_{i}^{\prime}(u)>0\) \(\forall u\geq 0\); (III) finite horizontal asymptote, i.e., \(\lim_{u\to\infty}f_{i}(u)=f_{i}^{\infty}<\infty\); (IV) concave down, i.e., \(f_{i}^{\prime\prime}(u)<0\) \(\forall u\geq 0\). Researchers have been using different types of functional responses in ecological models to incorporate resource-consumer relations. Most of these functional responses satisfy only the first three conditions, but we have added condition (IV) to select those functional responses which have negative curvature. This extra condition gives a sharper picture in the sensitivity analysis of the dynamics of the ecological model (1). In [6], the authors studied the model (1) by considering both functional responses as Holling type II with different parametric setups. 
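As a quick numerical check, conditions (I)-(IV) can be verified on a grid for the two response families used later in this paper. The Holling type II parameters below are those of Table 1 in Sect. 5; the Ivlev parameters are a hypothetical choice for illustration only, matched to the Holling asymptote (\(\bar{a}=a/b\)) and initial slope (\(\bar{a}\bar{b}=a\)), since the actual Ivlev values are not fixed here:

```python
import numpy as np

# Holling type II parameters from Table 1 of Sect. 5
a1, b1, a2, b2 = 4.98, 6.2, 0.46, 2.0

def holling(a, b):
    """Return f, f', f'' and the horizontal asymptote of f(u) = a u/(1 + b u)."""
    return (lambda u: a*u/(1 + b*u),
            lambda u: a/(1 + b*u)**2,
            lambda u: -2*a*b/(1 + b*u)**3,
            a/b)

def ivlev(abar, bbar):
    """Return f, f', f'' and the horizontal asymptote of f(u) = abar(1 - exp(-bbar u))."""
    return (lambda u: abar*(1 - np.exp(-bbar*u)),
            lambda u: abar*bbar*np.exp(-bbar*u),
            lambda u: -abar*bbar**2*np.exp(-bbar*u),
            abar)

def satisfies_I_to_IV(f, fp, fpp, f_inf, grid):
    return (np.isclose(f(0.0), 0.0) and   # (I)   zero at zero
            np.all(fp(grid) > 0) and      # (II)  monotone increasing
            np.all(f(grid) < f_inf) and   # (III) bounded by a finite asymptote
            np.all(fpp(grid) < 0))        # (IV)  concave down

grid = np.linspace(0.0, 5.0, 501)
checks = [satisfies_I_to_IV(*holling(a1, b1), grid),
          satisfies_I_to_IV(*holling(a2, b2), grid),
          # hypothetical Ivlev parameters: abar = a/b, bbar = b
          satisfies_I_to_IV(*ivlev(a1/b1, b1), grid),
          satisfies_I_to_IV(*ivlev(a2/b2, b2), grid)]
```

The grid stops at \(u=5\) so that the strict-inequality checks remain meaningful in double precision: for much larger \(u\), \(1-e^{-\bar{b}u}\) rounds to exactly 1 and the strict bound in (III) can no longer be distinguished numerically.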
In this work, we study the dynamics of the model theoretically and numerically for different mathematical forms of the functional responses, particularly Holling type II and Ivlev, and compare their results qualitatively and quantitatively. We choose the expressions of the functions \(f_{1}\) and \(f_{2}\) for Holling type II functional responses as \(f_{1}(x)=a_{1}x/(1+b_{1}x)\) and \(f_{2}(y)=a_{2}y/(1+b_{2}y)\), and for Ivlev functional responses as \(f_{1}(x)=\bar{a}_{1}(1-e^{-\bar{b}_{1}x})\) and \(f_{2}(y)=\bar{a}_{2}(1-e^{-\bar{b}_{2}y})\).

## 3 Equilibrium points and their stabilities

Different approaches have been applied to temporal models to find the nature of their solutions. One of them is finding the equilibrium points and their stabilities. If an equilibrium point is locally stable, then the solution converges to it, provided the initial conditions lie in a certain neighbourhood around it. Linear stability analysis around an equilibrium point helps to find this type of local behaviour of the solution of the system. Before going to the stability, we first find the possible equilibrium points of the system (1); these can be found by setting all the derivatives in (1) to zero, i.e., they are the solutions of the following algebraic equations: \[x-x^{2}-f_{1}(x)y=0, \tag{2a}\] \[f_{1}(x)y-d_{1}y-f_{2}(y)z=0, \tag{2b}\] \[f_{2}(y)z-d_{2}z=0. \tag{2c}\] The equilibrium points depend on the functional responses, as these are involved on the left-hand side of the system (2). We use analytical and geometric approaches to find the equilibrium points. The system (2) may have negative solutions, but we do not consider those, as they correspond to negative densities, which are not feasible. From the expression of the algebraic system (2), we see that \(E_{0}=(0,0,0)\) is the trivial solution, and it is an equilibrium point of the model (1). 
Furthermore, \(E_{1}=(1,0,0)\) is also a solution of the system (2), which is an axial equilibrium point of the system (1). These two equilibrium points are independent of all possible forms of the functional responses. There is a possibility of having a boundary equilibrium point of the system (1) of the form \(E_{b}=(x_{b},y_{b},0)\), where \(x_{b}=f_{1}^{-1}(d_{1})\) and \(y_{b}=x_{b}(1-x_{b})/d_{1}\). According to our assumptions, the function \(f_{1}\) is continuous, strictly increasing and bounded above, so a unique solution of the equation \(f_{1}(x)=d_{1}\) exists if \(d_{1}<f_{1}^{\infty}\). In addition, the positivity condition for \(y_{b}\) gives the feasibility of the equilibrium point \(E_{b}\), which is \(x_{b}<1\). This implies that \(f_{1}^{-1}(d_{1})<1\), which further implies \(d_{1}<f_{1}(1)\). Combining all the conditions, the existence of a boundary equilibrium of the form \(E_{b}=(x_{b},y_{b},0)\) with \(x_{b}>0\) and \(y_{b}>0\) requires the condition \(d_{1}<\min\{f_{1}(1),f_{1}^{\infty}\}\). It is the only boundary equilibrium point that exists for the system (1). The considered system does not have any boundary equilibrium point on the \(xz\)-plane where \(x\) and \(z\) are both non-zero. This is because \(y=0\) holds on the \(xz\)-plane, which implies \(z=0\) from (2c), and then we arrive either at \(E_{0}\) or at \(E_{1}\). Furthermore, the system does not have any boundary equilibrium on the \(yz\)-plane where \(y\) and \(z\) are both non-zero, as it corresponds to \(x=0\), which gives \(d_{1}y+d_{2}z=0\), and this is not possible for positive \(y\) and \(z\). Now we discuss all possible numbers of interior equilibrium points of the system (1). Analytical and graphical approaches help us find the existence and uniqueness of such equilibrium points. 
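As a concrete check of this existence condition, a short sketch for the Holling type II case with the parameter values used later in Sect. 5 (Table 1); the generic bracketing route uses only the monotonicity of \(f_{1}\):

```python
from scipy.optimize import brentq

# Holling type II parameters from Table 1 of Sect. 5
a1, b1, d1 = 4.98, 6.2, 0.4

f1 = lambda x: a1*x/(1 + b1*x)
f1_inf = a1/b1

# Existence condition d1 < min{f1(1), f1^inf} from the text
exists = d1 < min(f1(1.0), f1_inf)

# For Holling type II the inverse f1^{-1}(d1) is available in closed form ...
x_b_closed = d1/(a1 - b1*d1)
# ... and the same root can also be bracketed generically on (0, 1),
# since f1 is increasing with f1(0) = 0 < d1 < f1(1).
x_b = brentq(lambda x: f1(x) - d1, 1e-12, 1.0)
y_b = x_b*(1 - x_b)/d1
```

For these parameter values the boundary equilibrium is \(x_{b}=0.16\) and \(y_{b}=0.336\), and \(f_{1}(x_{b})=d_{1}\) holds to solver tolerance.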
Before going into these, we first rearrange the algebraic equations (2) so that an interior equilibrium point \(E_{*}=(x_{*},y_{*},z_{*})\) satisfies: \[1-x=\tilde{f}_{1}(x)y, \tag{3a}\] \[f_{1}(x)-d_{1}=\tilde{f}_{2}(y)z, \tag{3b}\] \[f_{2}(y)-d_{2}=0, \tag{3c}\] where \(\tilde{f}_{1}(x)=f_{1}(x)/x\) and \(\tilde{f}_{2}(y)=f_{2}(y)/y\). The functions \(\tilde{f}_{1}(x)\) and \(\tilde{f}_{2}(y)\) are well-defined, as \(x\) and \(y\) are both positive at an interior equilibrium point, and they are both positive, which follows from the definitions of \(f_{1}\) and \(f_{2}\). Since \(f_{2}(y)\) is a strictly increasing function with upper limit \(f_{2}^{\infty}\) (by conditions (II) and (III)), the third equation of (3) has a unique positive root \(y=y_{*}\) if \(d_{2}<f_{2}^{\infty}\). Substituting this \(y_{*}\) into the first equation of (3) gives an algebraic equation in terms of \(x\), and its number of feasible solutions gives the maximum number of interior equilibrium points possible for the model. In this case, the solutions are the points of intersection between the line \(y=1-x\) and the curve \(y=y_{*}\tilde{f}_{1}(x)\) in the first quadrant. Indeed, the points on the line \(y=1-x\) in the first quadrant satisfy \(x<1\). The number of points of intersection depends on the characteristics of the function \(\tilde{f}_{1}(x)\). Here, we assume some properties of the \(\tilde{f}_{i}\)'s (\(i=1,2\)) for finding the possible number of intersections: (a) \(\lim_{u\to 0}\tilde{f}_{i}(u)>0\), (b) \(\tilde{f}_{i}^{\prime}(u)<0\) \(\forall u\geq 0\), (c) \(\tilde{f}_{i}^{\prime\prime}(u)>0\) \(\forall u\geq 0\), and (d) \(\lim_{u\rightarrow\infty}\tilde{f}_{i}(u)=0\). We set \(\beta_{1}=\lim_{x\to 0}\tilde{f}_{1}(x)\), which is positive by assumption (a). Also, the function \(\tilde{f}_{1}\) is decreasing and convex by assumptions (b) and (c). 
Therefore, if \(y_{*}\beta_{1}<1\), then there exists a unique point of intersection between the curve \(y=y_{*}\tilde{f}_{1}(x)\) and the line \(y=1-x\) [see Fig. 1(a)]. On the other hand, for \(y_{*}\beta_{1}\geq 1\), there can be at most two intersections between the line \(y=1-x\) and the curve \(y=y_{*}\tilde{f}_{1}(x)\) [see Figs. 1(b) and 2]. In particular, the equation \(1-x=y_{*}\tilde{f}_{1}(x)\) has two solutions for \(x\) when \(y_{*}\beta_{1}=1\) [see Fig. 1(b)], and there can also be two solutions for \(y_{*}\beta_{1}>1\). Furthermore, for \(y_{*}\beta_{1}>1\) this equation can have a unique solution \(x=x_{*}\), and in this case the above-mentioned curve and line share a common tangent at \(x=x_{*}\). When this occurs, the condition \(y_{*}\tilde{f}_{1}^{\prime}(x_{*})=-1\) is satisfied; this corresponds to a saddle-node bifurcation point at \(d_{2}=d_{2}^{S}\) and is covered in greater detail in Sect. 4. For \(d_{2}>d_{2}^{S}\), any solution \(y_{*}\) of the equation \(f_{2}(y)=d_{2}\) satisfies the inequality \(y_{*}>y_{*}^{S}\), as \(f_{2}(y)\) is an increasing function. Now, for this \(y_{*}\), the equation \(1-x=y_{*}\tilde{f}_{1}(x)\) does not have any solution because \(1-x\leq y_{*}^{S}\tilde{f}_{1}(x)<y_{*}\tilde{f}_{1}(x)\) and \(\tilde{f}_{1}(x)>0\) \(\forall x>0\). On the other hand, for \(d_{2}<d_{2}^{S}\), the solution \(y_{*}\) of the equation \(f_{2}(y)=d_{2}\) satisfies the inequality \(y_{*}<y_{*}^{S}\), and the equation \(1-x=y_{*}\tilde{f}_{1}(x)\) has two positive solutions for \(x\), as \(\tilde{f}_{1}(x)\) is convex and positive for \(x>0\). In this case, we denote the two solutions by \(x_{\star}\) and \(x_{\bullet}\), and in general, at these points \(\tilde{f}_{1}(x)\) satisfies the conditions \(y_{*}\tilde{f}_{1}^{\prime}(x_{\star})<-1\) and \(y_{*}\tilde{f}_{1}^{\prime}(x_{\bullet})>-1\). 
Substituting the solution \(x=x_{*}\) (whenever it exists) of the equation \(1-x=y_{*}\tilde{f}_{1}(x)\) into the second equation of (3), one obtains \(z_{*}=(f_{1}(x_{*})-d_{1})/\tilde{f}_{2}(y_{*})\). Depending on the number of solutions for \(x\) of the equation \(1-x=y_{*}\tilde{f}_{1}(x)\), one gets the same number of solutions for \(z_{*}\). But, in each case, the condition \(f_{1}(x_{*})>d_{1}\) has to be satisfied for the feasibility of \(z_{*}\). Suppose two feasible solutions \(z_{\star}\) and \(z_{\bullet}\) exist for \(x_{\star}\) and \(x_{\bullet}\), respectively; we denote the corresponding equilibrium points by \(E_{\star}=(x_{\star},y_{\star},z_{\star})\) and \(E_{\bullet}=(x_{\bullet},y_{\bullet},z_{\bullet})\) with \(y_{*}=y_{\star}=y_{\bullet}\). Therefore, the inequality \(y_{*}\tilde{f}_{1}^{\prime}(x_{\star})<-1\) implies \(1-2x_{\star}-y_{*}f_{1}^{\prime}(x_{\star})>0\), and the other inequality \(y_{*}\tilde{f}_{1}^{\prime}(x_{\bullet})>-1\) implies \(1-2x_{\bullet}-y_{\bullet}f_{1}^{\prime}(x_{\bullet})<0\). This further implies that both interior equilibrium points satisfy \(1-2x_{*}-y_{*}f_{1}^{\prime}(x_{*})\neq 0\) whenever \(d_{2}<d_{2}^{S}\). It may happen that for some \(x_{*}\), the condition \(f_{1}(x_{*})=d_{1}\) is satisfied; in this case, the non-trivial equilibrium point coincides with the boundary equilibrium point \(E_{b}\) on the \(xy\)-plane. We prove that this bifurcation is a transcritical bifurcation, and it is discussed in Sect. 4.

Figure 1: Illustrations for the possible number of positive roots of the equation \(1-x-y_{*}\tilde{f}_{1}(x)=0\) for \(y_{*}\beta_{1}\leq 1\) in \([0,1]\): (a) one root for \(y_{*}\beta_{1}<1\) and (b) two roots for \(y_{*}\beta_{1}=1\).

Figure 2: Illustrations for the possible number of positive roots of the equation \(1-x-y_{*}\tilde{f}_{1}(x)=0\) for \(y_{*}\beta_{1}>1\) in \([0,1]\): (a) no root, (b) exactly one root, and (c) two roots.
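The construction above (solve (3c) for \(y_{*}\), intersect the line and the curve, then recover \(z_{*}\)) can be carried out numerically. A minimal sketch for the Holling type II case, with the Table 1 parameter values of Sect. 5 and the illustrative choice \(d_{2}=0.1\):

```python
import numpy as np
from scipy.optimize import brentq

# Table 1 parameters; d2 = 0.1 is an illustrative value below the
# saddle-node threshold, so two interior equilibria are expected.
a1, b1, a2, b2, d1, d2 = 4.98, 6.2, 0.46, 2.0, 0.4, 0.1

f1   = lambda x: a1*x/(1 + b1*x)
f2   = lambda y: a2*y/(1 + b2*y)
f1t  = lambda x: a1/(1 + b1*x)            # \tilde f_1(x) = f1(x)/x
f1tp = lambda x: -a1*b1/(1 + b1*x)**2     # \tilde f_1'(x)
f2t  = lambda y: a2/(1 + b2*y)            # \tilde f_2(y) = f2(y)/y

# Step 1: y_* is the unique root of f2(y) = d2 (valid since d2 < a2/b2)
y_star = brentq(lambda y: f2(y) - d2, 1e-12, 100.0)

# Step 2: locate the roots of 1 - x = y_* \tilde f_1(x) in (0, 1)
g = lambda x: 1 - x - y_star*f1t(x)
grid = np.linspace(1e-6, 1 - 1e-6, 2000)
vals = g(grid)
roots = [brentq(g, grid[i], grid[i + 1])
         for i in range(len(grid) - 1) if vals[i]*vals[i + 1] < 0]
x_st, x_bu = sorted(roots)

# Step 3: slope conditions that separate the two branches
s_st = y_star*f1tp(x_st)   # expected < -1 (branch E_star)
s_bu = y_star*f1tp(x_bu)   # expected > -1 (branch E_bullet)

# Step 4: z-components, feasible because f1(x) > d1 at both roots
z_st = (f1(x_st) - d1)/f2t(y_star)
z_bu = (f1(x_bu) - d1)/f2t(y_star)

# Residuals of the full system (2) at E_bullet as a consistency check
res = (x_bu - x_bu**2 - f1(x_bu)*y_star,
       f1(x_bu)*y_star - d1*y_star - f2(y_star)*z_bu,
       f2(y_star)*z_bu - d2*z_bu)
```

For this \(d_{2}\) both roots are feasible, and the slope conditions identify which root belongs to the \(E_{\star}\) branch and which to the \(E_{\bullet}\) branch.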
Now we investigate the model's local stability at the various equilibrium points. This can be done by studying the eigenvalues of the Jacobian matrix at each equilibrium point. We start with the trivial equilibrium point \(E_{0}=(0,0,0)\). The Jacobian matrix at \(E_{0}=(0,0,0)\) is \[\mathbf{J}_{E_{0}}=\left[\begin{array}{ccc}1&0&0\\ 0&-d_{1}&0\\ 0&0&-d_{2}\end{array}\right].\] The eigenvalues of this matrix are \(1\), \(-d_{1}\), and \(-d_{2}\). This shows that the Jacobian matrix has one positive and two negative eigenvalues. Therefore, the trivial equilibrium point \(E_{0}\) is a saddle point, with a \(2\)-dimensional stable manifold and a \(1\)-dimensional unstable manifold, represented as \(W^{s}(E_{0})\) and \(W^{u}(E_{0})\), respectively. The stability of \(E_{0}\) is independent of parametric restrictions. The Jacobian matrix evaluated at the axial equilibrium point \(E_{1}=(1,0,0)\) is: \[\mathbf{J}_{E_{1}}=\left[\begin{array}{ccc}-1&-f_{1}(1)&0\\ 0&f_{1}(1)-d_{1}&0\\ 0&0&-d_{2}\end{array}\right],\] which has the eigenvalues \(-1\), \(f_{1}(1)-d_{1}\), and \(-d_{2}\). In this case, the Jacobian matrix has two negative eigenvalues for all parameter values, and the sign of the remaining one depends on the value of \(f_{1}(1)\). If \(f_{1}(1)>d_{1}\), then \(E_{1}\) is a saddle point. On the other hand, for \(f_{1}(1)<d_{1}\), \(E_{1}\) is locally asymptotically stable with a \(3\)-dimensional stable manifold. Similarly, we obtain the Jacobian matrix at the boundary equilibrium point \(E_{b}=(x_{b},y_{b},0)\) as: \[\mathbf{J}_{E_{b}}=\left[\begin{array}{ccc}1-2x_{b}-y_{b}f_{1}^{\prime}(x_{b})&-f_{1}(x_{b})&0\\ y_{b}f_{1}^{\prime}(x_{b})&0&-f_{2}(y_{b})\\ 0&0&f_{2}(y_{b})-d_{2}\end{array}\right].\] This Jacobian has an eigenvalue \(f_{2}(y_{b})-d_{2}\), and the other two eigenvalues are the eigenvalues of a \(2\times 2\) matrix with trace \(1-2x_{b}-y_{b}f_{1}^{\prime}(x_{b})\) and determinant \(y_{b}f_{1}(x_{b})f_{1}^{\prime}(x_{b})\). 
Now, based on the properties of the functional responses \(f_{i}\) (\(i=1,2\)), we can conclude that the boundary equilibrium point \(E_{b}=(x_{b},y_{b},0)\) is locally asymptotically stable if \(f_{2}(y_{b})<d_{2}\) and \(1-2x_{b}-y_{b}f_{1}^{\prime}(x_{b})<0\), and unstable otherwise. In the unstable case, the dimension of its stable (unstable) manifold is determined by the number of negative (positive) eigenvalues of the Jacobian matrix \(\mathbf{J}_{E_{b}}\). Finally, the Jacobian matrix evaluated at a typical interior equilibrium point \(E_{*}=(x_{*},y_{*},z_{*})\) is given by: \[{\bf J}_{E_{*}}=\left[\begin{array}{ccc}1-2x_{*}-y_{*}f^{\prime}_{1}(x_{*})&-f_{1}(x_{*})&0\\ y_{*}f^{\prime}_{1}(x_{*})&f_{1}(x_{*})-d_{1}-z_{*}f^{\prime}_{2}(y_{*})&-d_{2}\\ 0&z_{*}f^{\prime}_{2}(y_{*})&0\end{array}\right].\] The eigenvalues of this Jacobian matrix depend implicitly on the functional responses \(f_{i}\) (\(i=1,2\)) and the equilibrium point \(E_{*}=(x_{*},y_{*},z_{*})\), and they are the solutions of the characteristic equation \[\lambda^{3}+P_{2}\lambda^{2}+P_{1}\lambda+P_{0}=0,\] where \(P_{2}=-(1-2x_{*}-y_{*}f^{\prime}_{1}(x_{*})+f_{1}(x_{*})-d_{1}-z_{*}f^{\prime}_{2}(y_{*}))\), \(P_{1}=(1-2x_{*}-y_{*}f^{\prime}_{1}(x_{*}))(f_{1}(x_{*})-d_{1}-z_{*}f^{\prime}_{2}(y_{*}))+y_{*}f_{1}(x_{*})f^{\prime}_{1}(x_{*})+d_{2}z_{*}f^{\prime}_{2}(y_{*})\), and \(P_{0}=-d_{2}z_{*}f^{\prime}_{2}(y_{*})(1-2x_{*}-y_{*}f^{\prime}_{1}(x_{*}))\). Applying the Routh-Hurwitz criterion, the interior equilibrium point \(E_{*}=(x_{*},y_{*},z_{*})\) is stable if \(P_{2}>0\), \(P_{0}>0\) and \(P_{1}P_{2}>P_{0}\) hold.

## 4 Temporal bifurcations

In this section, we discuss different temporal bifurcations of the system (1), through which the dynamics change qualitatively and quantitatively; in particular, we consider saddle-node, transcritical and Hopf bifurcations. 
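Before specializing to the individual bifurcations, the stability classification of Sect. 3 can be checked numerically. A minimal sketch for the Holling type II case (Table 1 parameters of Sect. 5, \(d_{2}=0.1\); the interior equilibrium values below are precomputed from system (3) and rounded, so they are illustrative):

```python
import numpy as np

a1, b1, a2, b2, d1, d2 = 4.98, 6.2, 0.46, 2.0, 0.4, 0.1

f1  = lambda x: a1*x/(1 + b1*x)
f1p = lambda x: a1/(1 + b1*x)**2
f2  = lambda y: a2*y/(1 + b2*y)
f2p = lambda y: a2/(1 + b2*y)**2

def jacobian(x, y, z):
    # Jacobian of system (1); it reduces to the matrices given in the
    # text at E_0, E_1, E_b and E_* (where f2(y) = d2 at an interior point).
    return np.array([
        [1 - 2*x - y*f1p(x), -f1(x),                  0.0],
        [y*f1p(x),            f1(x) - d1 - z*f2p(y), -f2(y)],
        [0.0,                 z*f2p(y),               f2(y) - d2]])

eig_E0 = np.linalg.eigvals(jacobian(0.0, 0.0, 0.0))  # expected: 1, -d1, -d2
eig_E1 = np.linalg.eigvals(jacobian(1.0, 0.0, 0.0))  # saddle when f1(1) > d1

# Interior equilibrium E_bullet at d2 = 0.1, precomputed from system (3)
J = jacobian(0.5873304, 0.3846154, 0.8852720)
P2 = -np.trace(J)
P1 = 0.5*(np.trace(J)**2 - np.trace(J @ J))  # sum of pairwise eigenvalue products
P0 = -np.linalg.det(J)
routh_hurwitz = (P2 > 0) and (P0 > 0) and (P1*P2 > P0)
```

Since the characteristic polynomial is \(\lambda^{3}+P_{2}\lambda^{2}+P_{1}\lambda+P_{0}\), the coefficients can be read off from the matrix invariants, and the Routh-Hurwitz check agrees with the eigenvalues having negative real parts at this \(d_{2}\).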
As previously stated, we find all the bifurcation thresholds in terms of the bifurcation parameter \(d_{2}\), along with the transversality conditions. In addition, we follow Sotomayor's theorem to verify the transversality conditions for these temporal bifurcations [1], with the following notation: \[{\bf F}((x,y,z);d_{2})\equiv\left[\begin{array}{c}x-x^{2}-f_{1}(x)y\\ f_{1}(x)y-d_{1}y-f_{2}(y)z\\ f_{2}(y)z-d_{2}z\end{array}\right].\]

### Saddle-node bifurcation

As mentioned in the previous section, the curve \(y=y_{*}\tilde{f}_{1}(x)\) and the line \(y=1-x\) can intersect at exactly one point in the first quadrant, and we have denoted the value of \(d_{2}\) for which this occurs by \(d_{2}^{S}\). Let us denote that unique non-trivial equilibrium by \(E_{*}^{S}=(x_{*}^{S},y_{*}^{S},z_{*}^{S})\). It is obvious that \(d_{2}^{S}<f_{2}^{\infty}\); otherwise the solution \(y=y_{*}^{S}\) does not exist for the third equation of (3). Furthermore, at \(d_{2}=d_{2}^{S}\), the curve \(y=y_{*}^{S}\tilde{f}_{1}(x)\) and the line \(y=1-x\) share a common tangent, which gives the condition \(y_{*}^{S}\tilde{f}_{1}^{\prime}(x_{*}^{S})=-1\). In this case, the Jacobian matrix at the non-trivial equilibrium point \(E_{*}^{S}\) can be written as \[{\bf J}_{E_{*}^{S}}=\left[\begin{array}{ccc}1-2x_{*}^{S}-y_{*}^{S}f^{\prime}_{1}(x_{*}^{S})&-f_{1}(x_{*}^{S})&0\\ y_{*}^{S}f^{\prime}_{1}(x_{*}^{S})&f_{1}(x_{*}^{S})-d_{1}-z_{*}^{S}f^{\prime}_{2}(y_{*}^{S})&-f_{2}(y_{*}^{S})\\ 0&z_{*}^{S}f^{\prime}_{2}(y_{*}^{S})&0\end{array}\right].\] Now, after simplifying the condition \(y_{*}^{S}\tilde{f}_{1}^{\prime}(x_{*}^{S})=-1\), one obtains \(1-2x_{*}^{S}-y_{*}^{S}f^{\prime}_{1}(x_{*}^{S})=0\), which causes the determinant of the Jacobian matrix \({\bf J}_{E_{*}^{S}}\) to be zero. 
This implies that the Jacobian matrix \({\bf J}_{E_{*}^{S}}\) has a zero eigenvalue at \(d_{2}^{S}\), which is simple, as the sum of the pairwise products of its eigenvalues is \(y_{*}^{S}f_{1}^{\prime}(x_{*}^{S})f_{1}(x_{*}^{S})+z_{*}^{S}f_{2}(y_{*}^{S})f_{2}^{\prime}(y_{*}^{S})>0\). Therefore, \(E_{*}^{S}\) is a non-hyperbolic equilibrium point. The eigenvectors corresponding to the zero eigenvalue of \(\mathbf{J}_{E_{*}^{S}}\) and \([\mathbf{J}_{E_{*}^{S}}]^{T}\) are given by \[\mathbf{V}=\left[\begin{array}{c}1\\ 0\\ \frac{y_{*}^{S}f_{1}^{\prime}(x_{*}^{S})}{f_{2}(y_{*}^{S})}\end{array}\right]\text{ and }\mathbf{W}=\left[\begin{array}{c}1\\ 0\\ \frac{f_{1}(x_{*}^{S})}{z_{*}^{S}f_{2}^{\prime}(y_{*}^{S})}\end{array}\right],\] respectively. Furthermore, one finds the transversality conditions [1] as follows: \[\mathbf{W}^{T}\mathbf{F}_{d_{2}}((x_{*}^{S},y_{*}^{S},z_{*}^{S});d_{2}^{S})=\frac{-f_{1}(x_{*}^{S})}{f_{2}^{\prime}(y_{*}^{S})}\neq 0\] \[\text{ and }\mathbf{W}^{T}[D^{2}\mathbf{F}((x_{*}^{S},y_{*}^{S},z_{*}^{S});d_{2}^{S})(\mathbf{V},\mathbf{V})]=-2-f_{1}^{\prime\prime}(x_{*}^{S})y_{*}^{S}\neq 0.\] This implies that the system undergoes a non-degenerate saddle-node bifurcation at \(d_{2}=d_{2}^{S}\).

### Transcritical bifurcation

Here we prove that the interior equilibrium point \(E_{*}\) can be generated from the boundary equilibrium point \(E_{b}\) through a transcritical bifurcation. As discussed earlier, the interior equilibrium point coincides with the boundary equilibrium point when the solution \(x_{*}\) of the equation \(1-x-y_{*}\tilde{f}_{1}(x)=0\) satisfies the condition \(f_{1}(x_{*})=d_{1}\). 
Assume this occurs at \(d_{2}=d_{2}^{T}\); in this case, the Jacobian matrix of the system evaluated at \(E_{b}^{T}=(x_{b}^{T},y_{b}^{T},0)\) is given by: \[\mathbf{J}_{E_{b}^{T}}=\left[\begin{array}{ccc}1-2x_{b}^{T}-y_{b}^{T}f_{1}^{\prime}(x_{b}^{T})&-f_{1}(x_{b}^{T})&0\\ y_{b}^{T}f_{1}^{\prime}(x_{b}^{T})&0&-f_{2}(y_{b}^{T})\\ 0&0&0\end{array}\right].\] This Jacobian matrix has rank 2 because \(f_{1}(x_{b}^{T})\) and \(f_{2}(y_{b}^{T})\) are both non-zero. This ensures that zero is a simple eigenvalue of \(\mathbf{J}_{E_{b}^{T}}\), which implies that \(E_{b}^{T}\) is a non-hyperbolic equilibrium point. The eigenvectors corresponding to the simple zero eigenvalue of \(\mathbf{J}_{E_{b}^{T}}\) and \([\mathbf{J}_{E_{b}^{T}}]^{T}\) are given by: \[\mathbf{V}=\left[\begin{array}{c}1\\ \frac{1-2x_{b}^{T}-y_{b}^{T}f_{1}^{\prime}(x_{b}^{T})}{f_{1}(x_{b}^{T})}\\ \frac{y_{b}^{T}f_{1}^{\prime}(x_{b}^{T})}{f_{2}(y_{b}^{T})}\end{array}\right]\text{ and }\mathbf{W}=\left[\begin{array}{c}0\\ 0\\ 1\end{array}\right],\] respectively. As we have seen, the interior equilibrium point coincides with the boundary equilibrium \(E_{b}\) at \(d_{2}=d_{2}^{T}\), and at the same time, \(1-2x_{b}^{T}-y_{b}^{T}f_{1}^{\prime}(x_{b}^{T})=1-2x_{*}-y_{*}f_{1}^{\prime}(x_{*})\neq 0\). 
Using these, we obtain the following transversality conditions: \[\mathbf{W}^{T}\mathbf{F}_{d_{2}}((x_{b}^{T},y_{b}^{T},0);d_{2}^{T})=0,\] \[\mathbf{W}^{T}[D\mathbf{F}((x_{b}^{T},y_{b}^{T},0);d_{2}^{T})(\mathbf{V})]\,=\frac{-y_{b}^{T}f_{1}^{\prime}(x_{b}^{T})}{f_{2}(y_{b}^{T})}=\frac{-y_{b}^{T}f_{1}^{\prime}(x_{b}^{T})}{d_{2}}\,\neq\,0,\] \[\text{ and }\mathbf{W}^{T}[D^{2}\mathbf{F}((x_{b}^{T},y_{b}^{T},0);d_{2}^{T})(\mathbf{V},\mathbf{V})]\,=2f_{2}^{\prime}(y_{b}^{T})\left(\frac{1-2x_{b}^{T}-y_{b}^{T}f_{1}^{\prime}(x_{b}^{T})}{f_{1}(x_{b}^{T})}\right)\left(\frac{y_{b}^{T}f_{1}^{\prime}(x_{b}^{T})}{f_{2}(y_{b}^{T})}\right)\neq 0.\] This implies that the system undergoes a non-degenerate transcritical bifurcation at \(d_{2}=d_{2}^{T}\).

### Hopf bifurcation

Sometimes a system shows periodic behaviour, which can be predicted through a Hopf bifurcation. We can study such a Hopf bifurcation theoretically from the characteristic equation of the Jacobian matrix around the co-existing equilibrium point with the help of Liu's criterion. In this work, we have considered \(d_{2}\) as the bifurcation parameter, and for this, we can write the characteristic equation of the Jacobian matrix \(\mathbf{J}_{E_{\star}}\) as: \[\lambda^{3}+P_{2}(d_{2})\lambda^{2}+P_{1}(d_{2})\lambda+P_{0}(d_{2})=0,\] where \(P_{0}(d_{2})\), \(P_{1}(d_{2})\), and \(P_{2}(d_{2})\) are given at the end of Sect. 3 and can be written as polynomials in \(d_{2}\) after some manipulation. According to Liu's criterion, at the Hopf bifurcation threshold \(d_{2}=d_{2}^{H}\), the functions \(P_{0}(d_{2})\), \(P_{1}(d_{2})\), and \(P_{2}(d_{2})\) satisfy the conditions: \[P_{0}(d_{2})>0,P_{2}(d_{2})>0,\Delta(d_{2})=0,\text{ and }\frac{d\Delta(d_{2})}{dd_{2}}\neq 0,\] where \(\Delta(d_{2})=P_{1}(d_{2})P_{2}(d_{2})-P_{0}(d_{2})\). There is no possibility of having a co-existing equilibrium point of the considered system for \(d_{2}>d_{2}^{S}\). 
On the other hand, two co-existing equilibrium points \(E_{\star}\) and \(E_{\bullet}\) can exist for \(d_{2}<d_{2}^{S}\). Hence, if the system undergoes a Hopf bifurcation at \(d_{2}^{H}\), then \(d_{2}^{H}\) has to be less than \(d_{2}^{S}\). In that case, at \(d_{2}=d_{2}^{H}\), the inequality \(1-2x_{\star}-y_{\star}f_{1}^{\prime}(x_{\star})>0\) holds for the equilibrium point \(E_{\star}\), which violates the condition \(P_{0}(d_{2}^{H})>0\). Therefore, the system does not possess a Hopf bifurcation at any value \(d_{2}<d_{2}^{S}\) for the equilibrium \(E_{\star}\), where the condition \(y_{\star}\tilde{f}_{1}^{\prime}(x_{\star})<-1\) holds. However, there could be a possibility of a Hopf bifurcation for the other equilibrium point \(E_{\bullet}\). In this case, the inequality \(P_{0}(d_{2}^{H})>0\) is satisfied at \(E_{\bullet}\), as \(1-2x_{\bullet}-y_{\bullet}f_{1}^{\prime}(x_{\bullet})<0\). According to our assumption (b) on \(\tilde{f}_{2}\), we have \(\frac{d}{dy}(\tilde{f}_{2}(y))|_{(x_{\star},y_{\star},z_{\star})}=\frac{f_{2}^{\prime}(y_{\star})y_{\star}-f_{2}(y_{\star})}{y_{\star}^{2}}<0\), which further implies \(\frac{f_{2}(y_{\star})}{y_{\star}}-f_{2}^{\prime}(y_{\star})>0\). Now, using the result \(f_{1}(x_{\star})-d_{1}-z_{\star}f_{2}^{\prime}(y_{\star})=z_{\star}(\frac{f_{2}(y_{\star})}{y_{\star}}-f_{2}^{\prime}(y_{\star}))\neq 0\), we can conclude that \(\frac{d\Delta(d_{2})}{dd_{2}}=[(z_{\star}f_{2}^{\prime}(y_{\star}))\{f_{1}(x_{\star})-d_{1}-z_{\star}f_{2}^{\prime}(y_{\star})\}]\neq 0\) for the entire range of values of \(d_{2}\) under consideration. Since \(P_{1}(d_{2})\) and \(P_{2}(d_{2})\) are both differentiable functions of \(d_{2}\), by differentiating \(P_{1}(d_{2})\) we get \(\frac{d}{dd_{2}}P_{1}(d_{2})=z_{\star}f_{2}^{\prime}(y_{\star})>0\), which implies that \(P_{1}(d_{2})\) is an increasing function of \(d_{2}\). 
Similarly, by differentiating \(P_{2}(d_{2})\), we find \(\frac{d}{dd_{2}}(P_{2}(d_{2}))=\frac{-z_{\star}}{y_{\star}}<0\), which implies that \(P_{2}(d_{2})\) is a decreasing function of \(d_{2}\). Now, if we can find some value of \(d_{2}\) such that the conditions \(P_{2}(d_{2})>P_{1}(d_{2})\) and \(P_{1}(d_{2})P_{2}(d_{2})=P_{0}(d_{2})\) hold together, then a Hopf bifurcation appears at that parametric value of \(d_{2}\). Similarly, if we can find some value of \(d_{2}\) such that the conditions \(P_{2}(d_{2})<P_{1}(d_{2})\) and \(P_{1}(d_{2})P_{2}(d_{2})=P_{0}(d_{2})\) hold together, then another Hopf bifurcation appears at that parametric value of \(d_{2}\). Therefore, we can have at most two Hopf bifurcation thresholds.

## 5 Numerical Results

In this section, we first demonstrate the bifurcation scenario for the coexistence equilibrium point of the model (1) with Holling type II functional responses by varying \(d_{2}\) as the bifurcation parameter. The model under consideration is the classical Hastings-Powell model; our choice of parameter values is close to the values used in [6]. The fixed parameter values for the model (1) with Holling type II functional responses are listed in Table 1. Here we demonstrate the appearance of a chaotic attractor through a period-doubling route and its disappearance due to the collision with an unstable limit cycle, which arises from the coexistence equilibrium point through a subcritical Hopf bifurcation, as the bifurcation parameter \(d_{2}\) varies over the range \([0.06,0.105]\). Note that our choice of \(d_{1}\) satisfies the condition \(d_{1}<f_{1}(1)\), and hence the axial equilibrium point \(E_{1}\) is a saddle point throughout this range of \(d_{2}\), with the \(xz\)-plane serving as its two-dimensional stable manifold. Based on our analytical results in the previous sections, we can calculate the various bifurcation thresholds. 
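A minimal sketch of this threshold computation for the Holling type II case (Table 1 parameters): the saddle-node and transcritical thresholds come out in closed form, while the two Hopf thresholds are located as zeros of \(\Delta(d_{2})=P_{1}(d_{2})P_{2}(d_{2})-P_{0}(d_{2})\) along the \(E_{\bullet}\) branch; the root-search brackets used below were chosen by inspecting the sign of \(\Delta\):

```python
import numpy as np
from scipy.optimize import brentq

a1, b1, a2, b2, d1 = 4.98, 6.2, 0.46, 2.0, 0.4

f1  = lambda x: a1*x/(1 + b1*x)
f1p = lambda x: a1/(1 + b1*x)**2
f2  = lambda y: a2*y/(1 + b2*y)
f2p = lambda y: a2/(1 + b2*y)**2

# Saddle-node threshold: combining the tangency condition
# y_* \tilde f_1'(x) = -1 with 1 - x = y_* \tilde f_1(x) gives, for
# Holling type II, x_S = (b1 - 1)/(2 b1) in closed form.
x_S = (b1 - 1)/(2*b1)
y_S = (1 - x_S)*(1 + b1*x_S)/a1
d2_S = f2(y_S)

# Transcritical threshold: the interior branch meets E_b, i.e. d2 = f2(y_b)
x_b = d1/(a1 - b1*d1)
y_b = x_b*(1 - x_b)/d1
d2_T = f2(y_b)

# Hopf thresholds: zeros of Delta(d2) = P1 P2 - P0 along the E_bullet branch
def delta(d2):
    y = brentq(lambda s: f2(s) - d2, 1e-9, 50.0)
    # x_bullet is the larger root of b1 x^2 - (b1 - 1) x + (y a1 - 1) = 0
    x = ((b1 - 1) + np.sqrt((b1 - 1)**2 - 4*b1*(y*a1 - 1)))/(2*b1)
    z = (f1(x) - d1)*y/f2(y)
    A = 1 - 2*x - y*f1p(x)
    B = f1(x) - d1 - z*f2p(y)
    P2 = -(A + B)
    P1 = A*B + y*f1(x)*f1p(x) + d2*z*f2p(y)
    P0 = -d2*z*f2p(y)*A
    return P1*P2 - P0

d2_H1 = brentq(delta, 0.1000, 0.1045)  # subcritical Hopf threshold
d2_H2 = brentq(delta, 0.0930, 0.1000)  # supercritical Hopf threshold
```

The thresholds obtained this way can then be compared against the values quoted below.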
We find a saddle-node bifurcation threshold at \(d_{2}^{S}\approx 0.1049651383\), two Hopf-bifurcation thresholds at \(d_{2}^{H1}\approx 0.10406993\) and \(d_{2}^{H2}\approx 0.09453397\), and a transcritical bifurcation threshold at \(d_{2}^{T}\approx 0.09244019\). Now we discuss the dynamical behaviour of the coexistence equilibrium point(s) due to the variation of \(d_{2}\) within the specified range. If we choose the value of \(d_{2}\) greater than \(d_{2}^{S}\), the system has no co-existing equilibrium point. Two co-existing equilibrium points \(E_{\star}\) and \(E_{\bullet}\) bifurcate through a saddle-node bifurcation as the parameter \(d_{2}\) decreases through \(d_{2}^{S}\). Considering our results of the previous section, we can see that the components of these two bifurcated equilibrium points satisfy the conditions \(y_{\star}\tilde{f}_{1}^{\prime}(x_{\star})<-1\) and \(y_{\star}\tilde{f}_{1}^{\prime}(x_{\bullet})>-1\), where \(y_{\star}=y_{\bullet}\). Now, according to our assumptions on the properties of \(\tilde{f}_{1}\), we can say that \(y_{\star}\tilde{f}_{1}^{\prime}\) is an increasing function. As a consequence, we get the inequality \(x_{\star}<x_{\bullet}\), which leads to the inequality \(z_{\star}<z_{\bullet}\). Therefore, once we plot the \(x\)- and \(z\)-coordinates of the interior equilibrium points in the bifurcation diagram, the lower branches correspond to the equilibrium point \(E_{\star}\), which has been marked in red in Fig. 3(a,c). Since the \(y\)-components of both equilibrium points are equal, the plots of the \(y\)-coordinate for the two equilibrium points coincide (see Fig. 3(b)). Due to the condition \(1-2x_{\star}-y_{\star}f_{1}^{\prime}(x_{\star})>0\), we can say that \(E_{\star}\) remains unstable whenever it exists and disappears through a transcritical bifurcation when \(d_{2}\) decreases through \(d_{2}^{T}\). 
The other equilibrium point \(E_{\bullet}\) is a saddle point for \(d_{2}^{H1}<d_{2}<d_{2}^{S}\) and becomes stable as \(d_{2}\) crosses the Hopf bifurcation threshold \(d_{2}^{H1}\). An unstable limit cycle emerges at \(d_{2}=d_{2}^{H1}\) and continues to exist for \(d_{2}<d_{2}^{H1}\). In summary, a subcritical Hopf bifurcation occurs at \(d_{2}=d_{2}^{H1}\). \(E_{\bullet}\) remains stable within the parameter range \(d_{2}^{H2}<d_{2}<d_{2}^{H1}\) and loses stability through another Hopf bifurcation at \(d_{2}=d_{2}^{H2}\). A stable limit cycle bifurcates from the stable equilibrium point at this threshold. This observation confirms that the Hopf bifurcation at \(d_{2}=d_{2}^{H2}\) is a supercritical Hopf bifurcation. This stable limit cycle exists for \(d_{2}<d_{2}^{H2}\) until it undergoes a period-doubling bifurcation. In Fig. 3, stable branches of equilibrium points and limit cycles are marked in blue, components of unstable equilibria are marked in red, and the unstable limit cycle bifurcating from the subcritical Hopf bifurcation is marked in magenta. \begin{table} \begin{tabular}{|c|c|} \hline Parameter & Values \\ \hline \(a_{1}\) & 4.98 \\ \hline \(b_{1}\) & 6.2 \\ \hline \(a_{2}\) & 0.46 \\ \hline \(b_{2}\) & 2 \\ \hline \(d_{1}\) & 0.4 \\ \hline \end{tabular} \end{table} Table 1: Parameter values for the model with Holling type II functional responses. The stability of the boundary equilibrium point \(E_{b}\) depends on the parameter value \(d_{2}\). For \(d_{2}>d_{2}^{T}\), the boundary equilibrium point is a saddle point with two complex conjugate eigenvalues with positive real part and one negative eigenvalue. The negative eigenvalue corresponds to a stable manifold transversal to the \(xy\) plane. At the threshold value \(d_{2}^{T}\), the co-existing equilibrium point \(E_{\star}\) coincides with the boundary equilibrium point and disappears through a transcritical bifurcation.
The disappearance of the co-existing equilibrium point \(E_{\star}\) can be understood from the plot for the \(z\) component (see Fig. 3(c)); for the sake of brevity, the \(x\) and \(y\) components of the boundary equilibrium are not shown in Fig. 3(a,b). There exists a stable limit cycle on the \(xy\) plane for the entire range of \(d_{2}\) under consideration, which has been represented as two nearly horizontal lines in Fig. 3(a,b). As this stable limit cycle lies on the \(xy\)-plane, the plot of the \(z\) component contains the blue line \(z=0\) in Fig. 3(c). Furthermore, since the system has no co-existing equilibrium point beyond the saddle-node threshold \(d_{2}^{S}\), it is reasonable to conclude that the top predator goes to extinction when its death rate is sufficiently high, precisely for \(d_{2}>d_{2}^{S}\). The absence of a co-existing equilibrium point suggests that the population dynamics of the top predator are not sustainable, leading to its decline and eventual extinction. The unstable limit cycle mentioned earlier acts as a separatrix between the basin of attraction of the stable boundary limit cycle (the limit cycle in the \(xy\) plane) and that of the interior attractor, namely the stable co-existing equilibrium point \(E_{\bullet}\), the stable limit cycle bifurcating through the supercritical Hopf bifurcation, or the chaotic attractor, over the range \(0.06\leq d_{2}\leq 0.104\) (approximately). The stable co-existing limit cycle undergoes period-doubling bifurcations, which involve a doubling of the period of the limit cycle as the parameter \(d_{2}\) varies. This period-doubling cascade begins when \(d_{2}\) falls below the value \(0.089\) (approximately). This doubling occurs repeatedly, leading to increasingly complex dynamics and eventually to chaotic behaviour. The chaotic oscillation disappears when the chaotic attractor hits the unstable limit cycle at a value close to \(d_{2}=0.08\).
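For readers who want to reproduce the oscillatory dynamics qualitatively, a minimal simulation sketch follows. It assumes the standard dimensionless Hastings-Powell equations with Holling type II responses \(f_{j}(u)=a_{j}u/(1+b_{j}u)\) and the Table 1 parameters; the value \(d_{2}=0.09\) and the initial condition are illustrative choices by us, not values reported above.

```python
import numpy as np

# Holling type II responses and the standard dimensionless Hastings-Powell
# equations (assumed form of model (1)); parameters from Table 1, d2 = 0.09
# chosen inside the oscillatory window (illustrative).
a1, b1, a2, b2, d1, d2 = 4.98, 6.2, 0.46, 2.0, 0.4, 0.09
f1 = lambda u: a1 * u / (1.0 + b1 * u)
f2 = lambda u: a2 * u / (1.0 + b2 * u)

def rhs(s):
    x, y, z = s
    return np.array([x * (1.0 - x) - f1(x) * y,
                     f1(x) * y - f2(y) * z - d1 * y,
                     f2(y) * z - d2 * z])

def rk4(s0, dt=0.01, steps=30_000):
    """Fixed-step RK4 integration; returns the full trajectory."""
    traj = np.empty((steps + 1, 3))
    traj[0] = s0
    s = np.array(s0, dtype=float)
    for i in range(steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i + 1] = s
    return traj

traj = rk4([0.45, 0.5, 0.8])
```

Plotting `traj` (or its extrema against a sweep of `d2`) reproduces bifurcation diagrams of the kind shown in Fig. 3.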
This collision alters the dynamics of the system drastically, leading to the cessation of chaotic behaviour. Interestingly, the chaotic behaviour reappears through another collision with the same unstable limit cycle at \(d_{2}\approx 0.062\). Ecologically, this cyclical behaviour could correspond to recurring periods of instability and variability in the ecological community, with potential impacts on species coexistence, trophic interactions, and ecosystem functioning. Figure 3: Bifurcation diagram for Holling type II functional responses. ### Structural Sensitivity Now we want to study the structural sensitivity of the bifurcations of the coexistence equilibrium point and the route to chaos in the model under consideration by substituting the Ivlev functional response in place of the Holling type II functional response. We will specifically focus on the parameter \(d_{2}\) as the bifurcation parameter. To assess the structural sensitivity, we adopt the methodology outlined in the study by Fussmann et al. [27] to determine the parameter values \(\bar{a}_{j}\) and \(\bar{b}_{j}\) associated with the Ivlev functional responses. The fixed parameter values relevant to this transition are presented in Table 2. By employing nonlinear least squares regression, we determine the values of \(\bar{a}_{1}\) and \(\bar{b}_{1}\) while maintaining the Holling type II functional response with \(a_{1}=4.98\) and \(b_{1}=6.2\). Similarly, we obtain the values of \(\bar{a}_{2}\) and \(\bar{b}_{2}\). In order to simulate the model (1) with Ivlev functional responses, we maintain the same value of \(d_{1}\) as mentioned in Table 1. Other parameter values are as in Table 2, with \(d_{2}\) as the bifurcation parameter varying between the values \(0.06\) and \(0.105\). Let us denote the Ivlev functional responses as \(f_{j_{Ivlev}}\) for \(j=1,2\).
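A rough stand-in for the Fussmann-style fit can be sketched as follows: fit the Ivlev form \(\bar{a}(1-e^{-\bar{b}u})\) to the Holling type II curve by least squares. The fitting range \([0,1]\) and the crude alternating grid search are our assumptions, so the resulting values need not match Table 2, which comes from the authors' own regression.

```python
import numpy as np

# Holling type II target (a1, b1 from Table 1) and Ivlev candidate.
a1, b1 = 4.98, 6.2
holling = lambda u: a1 * u / (1.0 + b1 * u)
ivlev = lambda u, A, B: A * (1.0 - np.exp(-B * u))

# Assumed prey-density range [0, 1]; the range used by the authors is not stated.
u = np.linspace(0.0, 1.0, 400)
target = holling(u)

def sse(A, B):
    """Sum of squared errors between the two response curves on the grid."""
    r = ivlev(u, A, B) - target
    return float(r @ r)

# Crude alternating grid refinement in place of a library least-squares routine;
# the current point is always in the grid, so sse never increases.
A, B = 0.8, 6.0
for width in (0.5, 0.1, 0.02, 0.004):
    As = np.linspace(A - width, A + width, 41)
    A = As[np.argmin([sse(a, B) for a in As])]
    Bs = np.linspace(B - width * 10, B + width * 10, 41)
    B = Bs[np.argmin([sse(A, b) for b in Bs])]
```

The fitted pair \((A,B)\) lands near the saturation level and initial slope of the Holling curve on the assumed range, which is the essence of the structural-sensitivity comparison.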
Here also, the choice of \(d_{1}\) satisfies the inequality \(d_{1}<f_{1_{Ivlev}}(1)\), which ensures that the axial equilibrium point \(E_{1_{Ivlev}}\) is a saddle point possessing a two-dimensional stable manifold within the \(xz\) plane. The model incorporating Ivlev functional responses exhibits the following bifurcation thresholds: a saddle-node bifurcation threshold at \(d_{2_{Ivlev}}^{S}\approx 0.10405163\), two Hopf bifurcation thresholds at \(d_{2_{Ivlev}}^{H1}\approx 0.10275556\) and \(d_{2_{Ivlev}}^{H2}\approx 0.09840295\), and a transcritical bifurcation threshold at \(d_{2_{Ivlev}}^{T}\approx 0.09544625\). When \(d_{2}\) exceeds the saddle-node bifurcation threshold \(d_{2_{Ivlev}}^{S}\), the model does not possess any co-existing equilibrium points. However, as \(d_{2}\) decreases below \(d_{2_{Ivlev}}^{S}\), two co-existing equilibrium points, denoted as \(E_{\star_{Ivlev}}\) and \(E_{\bullet_{Ivlev}}\), emerge. From the previous results, at \(E_{\star_{Ivlev}}\) and \(E_{\bullet_{Ivlev}}\) we have the conditions \(y_{\star}\tilde{f}_{1_{Ivlev}}^{\prime}(x_{\star})<-1\) and \(y_{\bullet}\tilde{f}_{1_{Ivlev}}^{\prime}(x_{\bullet})>-1\), where \(y_{\star}=y_{\bullet}\). Considering the condition \(y_{\star}\tilde{f}_{1_{Ivlev}}^{\prime}(x_{\star})<-1\) and our assumptions on \(\tilde{f}_{1_{Ivlev}}\), it can be deduced that \(E_{\star_{Ivlev}}\) is unstable whenever it exists. It disappears through a transcritical bifurcation when \(d_{2}\) reaches \(d_{2_{Ivlev}}^{T}\). In Fig. 4(a,c), the lower branches marked in red correspond to the equilibrium point \(E_{\star_{Ivlev}}\). The other co-existing equilibrium point \(E_{\bullet_{Ivlev}}\) remains a saddle point within the range \(d_{2_{Ivlev}}^{H1}<d_{2}<d_{2_{Ivlev}}^{S}\) and becomes stable as \(d_{2}\) decreases below \(d_{2_{Ivlev}}^{H1}\). At this threshold, an unstable limit cycle bifurcates and persists for \(d_{2}<d_{2_{Ivlev}}^{H1}\), which suggests that we have a subcritical Hopf bifurcation at \(d_{2}=d_{2_{Ivlev}}^{H1}\).
\(E_{\bullet_{Ivlev}}\) remains stable until \(d_{2}\) decreases below the threshold \(d_{2_{Ivlev}}^{H2}\), where it loses stability. At this threshold, a stable limit cycle emerges, which exists until it undergoes a period-doubling bifurcation. This fact says that we have a supercritical Hopf bifurcation at \(d_{2}=d_{2_{Ivlev}}^{H2}\). The overall behaviour can be observed in the upper branch of Fig. 4(a,c). \begin{table} \begin{tabular}{|l|r|} \hline Parameter & Values \\ \hline \(\bar{a}_{1}\) & \(0.67\) \\ \hline \(\bar{b}_{1}\) & \(5.349\) \\ \hline \(\bar{a}_{2}\) & \(0.1647\) \\ \hline \(\bar{b}_{2}\) & \(2.457\) \\ \hline \end{tabular} \end{table} Table 2: Parameter values for the model with Ivlev functional responses. The model exhibits a boundary equilibrium point and a stable boundary limit cycle in the \(xy\) plane for the considered values of \(d_{2}\). The co-existing equilibrium point \(E_{\star_{Ivlev}}\) collides with the boundary equilibrium point at \(d_{2}=d_{2_{Ivlev}}^{T}\), causing a transcritical bifurcation. The stable co-existing limit cycle undergoes a period-doubling bifurcation, leading to chaotic behaviour, as \(d_{2}\) decreases through the value \(d_{2}=0.084\). The chaotic attractor persists within the range \([0.0677,0.0745]\) and does not hit the unstable limit cycle generated from the subcritical Hopf bifurcation. Instead, in this scenario, the unstable limit cycle disappears at \(d_{2}\approx 0.073\) through a global bifurcation when it collides with the stable boundary limit cycle (the limit cycle in the \(xy\) plane). The chaotic attractor continues to exist over an extended range of parameter values and eventually ceases to exist through a crisis, leading to the emergence of a three-periodic coexistence scenario. The disappearance of the unstable limit cycle suggests a significant shift in the population dynamics, potentially leading to a different ecological regime.
This indicates the robustness and resilience of the chaotic dynamics in the ecological system. However, eventually, the chaotic attractor ceases to exist through a crisis, which represents a sudden and drastic change in the population dynamics. This crisis event leads to the emergence of a three-periodic coexistence scenario, where the interacting species exhibit cyclical patterns of population fluctuations with three distinct periods. In summary, the observed dynamics highlight the complexity and sensitivity of population interactions in the ecological system. The coexistence of chaotic attractors, the disappearance of unstable limit cycles, and the emergence of periodic coexistence scenarios all reflect the intricate interplay between species interactions, ecological conditions, and the parameter values governing the system. Here we want to mention that the limit cycle lying on the \(xy\) plane is plotted in Fig. 3 for the entire range of parameters for Holling type II functional responses. In this case, we find a non-empty basin of attraction for the stable limit cycle on the \(xy\)-plane in the interior of the first octant. However, there is no initial condition in the interior of the first octant from which we can reach the stable limit cycle on the \(xy\)-plane for the model with Ivlev functional responses when \(d_{2}<0.073\). As a matter of fact, the branches of the limit cycle lying on the \(xy\)-plane are not shown in Fig. 4 for \(d_{2}<0.073\). Figure 4: Bifurcation diagram for the Ivlev functional responses. ## 6 Conclusions The Hastings-Powell model is well known as the first prototype model for prey-predator-top-predator type interactions which can explain the complex oscillatory coexistence of three interacting populations. In the classical Rosenzweig-MacArthur model, we find stable oscillatory coexistence with high-amplitude oscillations for both the prey and predator species beyond the supercritical Hopf bifurcation threshold.
However, a large stable limit cycle situated close to the coordinate axes indicates the possibility of extinction of one or more species under environmental and/or demographic variability [39]. Nowadays, researchers are interested in transient dynamics and the emergence of extinction scenarios through global bifurcations, apart from the structural sensitivity of the models under consideration. The Hastings-Powell model is capable of producing long transients before the extinction of top predators, as shown in Fig. 5. This kind of dynamics was first reported by Yodzis in [15]. The existence of long transients followed by the extinction of the top predator is presented for the parameter value close to \(d_{2}=0.08\), where the chaotic attractor hits the unstable limit cycle and disappears. This phenomenon is structurally sensitive for the model under consideration. To justify the structural sensitivity of the long transients leading to the extinction of the top predator, we consider the model (1) with Ivlev functional responses, \(d_{2}=0.08\) and other parameter values as mentioned in Table 2 in the previous section, with initial condition \((0.45,0.5,0.8)\). In this situation, we find the coexistence scenario for all three constituent species, as the chaotic attractor remains well away from the unstable limit cycle. The unstable limit cycle emerges through the subcritical Hopf bifurcation and exists for the range \(d_{2}\in[0.073,0.1027]\) approximately. The unstable limit cycle disappears through a global bifurcation when it collides with the stable boundary limit cycle (the limit cycle on the \(xy\)-plane). Comparing the bifurcation diagrams in Figs. 3 and 4, we conclude that the survival and extinction scenarios are solely influenced by the parametrization of the functional responses.
Alternative parametrizations of the functional responses, without changing their basic properties, alter how small changes in the population of one species affect the growth of the other species that are directly or indirectly related through the functional responses. This finding indicates that the expected extinction of one or more species may be avoided due to some adaptation mechanism, which is reflected through the change in the parametrization of the functional responses. In addition, considering an alternative parametrization for the functional responses does not alter the overall bifurcation scenario. However, the size of the chaotic attractor and its position with respect to the unstable limit cycle are sensitive to the parametrization of the functional responses. The model with Holling type II functional responses predicts the extinction of the top predator when \(d_{2}\) lies in the range \([0.062,0.08]\). Within the considered parameter range, there is no possibility for the extinction of the top predator if we replace the Holling type II functional responses with Ivlev functional responses.
2307.11767
Recognition of Mental Adjectives in An Efficient and Automatic Style
In recent years, commonsense reasoning has received more and more attention from academic community. We propose a new lexical inference task, Mental and Physical Classification (MPC), to handle commonsense reasoning in a reasoning graph. Mental words relate to mental activities, which fall into six categories: Emotion, Need, Perceiving, Reasoning, Planning and Personality. Physical words describe physical attributes of an object, like color, hardness, speed and malleability. A BERT model is fine-tuned for this task and active learning algorithm is adopted in the training framework to reduce the required annotation resources. The model using ENTROPY strategy achieves satisfactory accuracy and requires only about 300 labeled words. We also compare our result with SentiWordNet to check the difference between MPC and subjectivity classification task in sentiment analysis.
Fei Yang
2023-07-16T01:27:08Z
http://arxiv.org/abs/2307.11767v1
# Recognition of Mental Adjectives in An Efficient and Automatic Style ###### Abstract In recent years, commonsense reasoning has received more and more attention from academic community. We propose a new lexical inference task, _Mental_ and _Physical_ Classification (MPC), to handle commonsense reasoning in a reasoning graph. _Mental_ words relate to mental activities, which fall into six categories: Emotion, Need, Perceiving, Reasoning, Planning and Personality. _Physical_ words describe physical attributes of an object, like color, hardness, speed and malleability. A BERT model is fine-tuned for this task and active learning algorithm is adopted in the training framework to reduce the required annotation resources. The model using ENTROPY strategy achieves satisfactory accuracy and requires only about 300 labeled words. We also compare our result with SentiWordNet to check the difference between MPC and subjectivity classification task in sentiment analysis. ## 1 Introduction In the field of artificial intelligence, commonsense reasoning refers to the capacity of a machine to understand the nature of scenes commonly encountered by humans every day, and to make reasonable and appropriate reactions, mimicking human cognitive abilities. Through commonsense reasoning, humans are capable of intricate reasoning relating to fundamental domains including time, space, naive physics, and naive psychology [6]. Therefore, a good starting point is understanding how time, space, and naive physics affect the human mind, exploring possible causal relationships. For example, let's consider a review: "This saltwater taffy had great flavors and was very soft and chewy. I loved it and I would highly recommend this candy!". The concepts "great flavors", "soft", "chewy" describe physical attributes of the saltwater taffy and the concepts "love", "recommend" describe mental activities of the reviewer. Here a concept refers to a word or phrase in natural language.
If a seven-year-old child reads this review, the child would understand that the mental activities are caused by the taffy's physical attributes. Figure 1 shows a possible reasoning graph existing in the child's mind. The words "great flavors", "soft", "chewy" indicate that this taffy is edible with a positive effect. This effect greatly satisfies the reviewer's need of food, and then this strong satisfaction invokes the reviewer's emotion of love with a reaction "I love it". That strong satisfaction also invokes the reviewer's need of friendship positively, with a reaction that the reviewer would like to share this taffy with friends. To let a machine figure out a similar reasoning graph, the first step is recognizing which concept is _Physical_ and which one is _Mental_. Then all concepts are mapped to numerous more granular tags, like _Edible::Positive_ or _Need&Friendship::Positive_ shown in Figure 1. Last, all tags are linked together to form a powerful reasoning graph. The first step cannot be skipped because the coarsest reasoning path, _Physical_ -> _Mental_, provides causal concept pairs, facilitating the design of more fine-grained tags. Under this research plan, we propose a task of _Mental_ and _Physical_ Classification (MPC) at lexical level in this work. Figure 1: A reasoning graph between a physical event and mental reactions. _Edible::Positive_ means positive effect over a physical attribute _Edible_. In the mentality part, _Need&Food::Positive_ and _Need&Friendship::Positive_ mean positive effect over _Food_ and _Friendship_ respectively, which belong to the _Need_ category [17]. _Need&Food::Positive_ invokes _Love_, belonging to the _Emotion_ category [20]. Other tags not invoked are omitted. Each adjective extracted from the Amazon Fine Food Reviews dataset [18] is inferred with a binary tag, _Mental_ or _Physical_, by a fine-tuned BERT model.
A _Mental_ adjective describes mental activities, like emotion, need, reasoning, while a _Physical_ one shows physical attributes of an object, like color, hardness, speed and malleability. Although our inference methods have so far been applied to adjectives, they can be directly applied to other word classes. The inferred tags of MPC only reveal that an adjective is more likely to express a mental view or a physical view, as a word might have different senses. Besides MPC, dozens of binary or multi-value tags, like the _Emotion_ or _Need_ categories, will be developed in the follow-up research work. Moreover, in order to improve the reasoning performance, these tags might need to be updated or new tags may join the reasoning graph. This continuous and rapid iterative process makes it impossible to annotate all the words at once. In fact, what this project really needs is the ability to tag all the words automatically, relying on zero or very low annotation resources. Therefore, we consider active learning methods to train a BERT [7] model for MPC in this work. ENTROPY [13], CORESET [25], CAL [16] and Random strategies are implemented and evaluated. The experiment results indicate that ENTROPY outperforms the others and achieves _Mental_ F1 0.72 and _Physical_ F1 0.87 on the testset, with only around 300 words annotated for training. The definition of the MPC task bears some similarity to subjectivity classification, which is one task of sentiment analysis [14] and classifies whether a piece of text is objective or subjective. To investigate the difference between these tasks, our result by ENTROPY is compared with SentiWordNet [24; 2]. We find that 41.5% of the _Mental_ adjectives bear objective meanings, which indicates the notion of MPC is quite different from subjectivity classification. Adjective examples are listed in Table 4 to illustrate this difference.
The main contributions of this paper include the following three points: (1) a new task, MPC, is proposed to handle commonsense reasoning, (2) active learning is introduced to solve MPC efficiently, relying on only a small number of annotated words, and (3) a dataset with the inferred MPC tags is released publicly for future research. ## 2 Related Work **Commonsense Reasoning.** Reasoning between mentality and physics has been studied by the research community in recent years. The mental reason of affective events is explained based on seven common human needs [8]. Event2Mind studies two kinds of mental state, intent and emotion, which are inferred by deep learning models given physical events described by short free-text phrases [22]. ATOMIC considers two more kinds of mental state, planning and personality, under the same task setting as Event2Mind [23]. Reasoning between physical events is studied by [36] and [33]. Previous works provide no clear explanation about "how" and "why" in commonsense reasoning, which is the core question that our research works try to address. **Sentiment Analysis.** Subjectivity classification and sentiment classification are two sub-topics of sentiment analysis [14]. Subjectivity classification is to determine whether a piece of content is objective or subjective. On the other hand, sentiment classification is utilized for subjective content to identify the sentiment polarity, that is, whether the author expresses a positive or negative opinion. One approach to sentiment analysis is using lexicons where each word is assigned scores showing whether it is neutral, positive or negative [32; 10; 11; 31]. These scores are known as prior polarity, that is, whether the word conveys a positive, negative or neutral connotation irrespective of the context [35]. One popular lexical resource is SentiWordNet [24; 2], which associates polarity scores to each synset of WordNet [19].
Early research in this domain focused on adjectives, as adjectives express the majority of subjective meaning in a piece of writing [11; 31]. Under the same consideration, we also focus on adjectives for MPC first in this work. **Active Learning.** When machine learning or deep learning algorithms are considered to solve NLP tasks, one of the most common challenges is lack of labeled data and limited annotation resources due to project budget. To make efficient use of annotation resources, ideally only the most valuable samples are selected for human labeling. Active learning provides a set of algorithms to fulfill this goal [26]. ENTROPY is an uncertainty-based method, choosing the sample with the highest predicted entropy [13]. However, the problem with this approach is that there is a risk of picking outliers or similar samples [26]. To increase the diversity of the selected samples, CORESET [25] chooses the sample furthest in the embedding space from the samples already selected in previous iterations. CAL [16] finds the most contrastive sample to its nearest neighbors by calculating KL divergence, leveraging both uncertainty and diversity. **BERT.** In recent years BERT [7] has become one of the most famous pre-trained language models and has shown effectiveness in many natural language processing tasks. These include sentiment analysis [28], semantic similarity [9], question answering [21] and entailment inference [34]. BERT is pre-trained on the BooksCorpus (800M words) [38] and English Wikipedia (2,500M words). By pre-training on such large text data, BERT grasps rich semantic information. The most common usage of BERT is fine-tuning it over downstream tasks, training with data from downstream tasks to update all its pre-trained parameters. In this way, both the rich semantic information from pre-training and the features from downstream tasks are exploited to achieve excellent performance.
## 3 Data and Annotation **Task Definition.** In this work, we define a binary classification task, inferring whether a word is _Mental_ or _Physical_. The notion of _Mental_ relates to mental activities, which fall into six categories: Emotion, Need, Perceiving, Reasoning, Planning and Personality. Personality is regarded as the external manifestation of persistent mental activities. Detailed definitions of each category and word examples are shown in Table 4. Other words are defined as _Physical_, describing physical attributes of an object, like color, hardness, speed and malleability. _Mental_ words usually have abstract meanings, but _Physical_ words have more concrete meanings that can be observed in the world. This difference can be used as a simple reference to determine which class a word belongs to. The inferred class only reveals that a word is more likely to express a mental view or a physical view, as a word might have different senses. The main reason we choose lexical level rather than sense level for MPC is to facilitate subsequent research and reduce development complexity. **Data Process.** The Amazon Fine Food Reviews dataset [18]\({}^{1}\) is used as the corpus for the MPC task, as this dataset contains reasoning between physics (food description) and mentality (people's opinion) in our daily life. It has more than 0.5 million reviews of Amazon fine foods from Oct 1999 to Oct 2012. We use only the text column and remove all other columns like ProductId, UserId, ProfileName for anonymization considerations. Data processing contains three steps. A processing example is given in Table 1. First, each piece of review is split into words and each word is classified into a part of speech (POS-tagging)\({}^{2}\). Then (adjective, noun) pairs are recognized and extracted, where the noun appears immediately after the adjective in review sentences.
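Step 2 of this pipeline reduces to scanning adjacent tag pairs. A self-contained sketch follows (the paper uses NLTK's POS tagger and WordNet; here the tags are supplied by hand so the snippet runs without corpus downloads):

```python
# Detect (adjective, noun) pairs where the noun immediately follows the
# adjective in an already POS-tagged sentence (Penn Treebank tags: JJ* = adjective,
# NN* = noun). In the real pipeline the tags come from nltk.pos_tag.
def adjective_noun_pairs(tagged):
    pairs = []
    for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
        if t1.startswith("JJ") and t2.startswith("NN"):
            pairs.append((w1.lower(), w2.lower()))
    return pairs

# The worked example from Table 1, tagged by hand:
tagged = [("I", "PRP"), ("have", "VBP"), ("found", "VBN"), ("them", "PRP"),
          ("all", "DT"), ("to", "TO"), ("be", "VB"), ("of", "IN"),
          ("good", "JJ"), ("quality", "NN"), (".", ".")]
```

Running `adjective_noun_pairs(tagged)` returns `[("good", "quality")]`, matching step 2 of the Table 1 example.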
The goal of this step is to make sure the extracted adjectives are used in daily life to describe objects or states, like "roasted beans", "angry complaint", and to counteract possible errors from POS-tagging. Finally, adjectives from these pairs are validated by checking if they have definition text from WordNet. After deduplication, 7292 adjectives are obtained. We use version 3.6.7 of the NLTK package for POS-tagging and WordNet calls. \begin{table} \begin{tabular}{l} \hline \hline **Review**: I have found them all to be of good quality. \\ \hline **Step 1**: POS-tagging. \\ **Result**: ("I", "PRP"), ("have", "VBP"), ("found", "VBN"), ("them", "PRP"), ("all", "DT"), ("to", "TO"), ("be", "VB"), ("of", "IN"), ("good", "JJ"), ("quality", "NN"), (".", ".") \\ \hline **Step 2**: Detect (adjective, noun) pairs. \\ **Result**: ("good", "quality") \\ \hline **Step 3**: Validate adjectives. \\ **Result**: "good" \\ \hline \hline \end{tabular} \end{table} Table 1: Review process pipeline. Input is "I have found them all to be of good quality." and after processing the word "good" is output. Footnote 1: This dataset is distributed under CC0: Public Domain License. Download url: [https://www.kaggle.com/datasets/snap/amazon-fine-food-reviews](https://www.kaggle.com/datasets/snap/amazon-fine-food-reviews) **Annotation.** Each adjective is annotated by two annotators checking word definitions from WordNet, and disagreements are adjudicated by another expert. All participants are experienced volunteers and they are notified how their annotations are used in this work. Examples of words and their definitions are presented in Appendix A. For words with different senses, annotation results are mainly
For instance, although the word "cold" has a _Mental_ sense, "feeling or showing no enthusiasm", it's labeled as _Physical_ since it is used more frequently with the gloss "having a low or inadequate temperature or feeling a sensation of coldness or having been made cold by e.g. ice or refrigeration". A testset consisting of 100 words is annotated for measuring model performance. It contains 26% _Mental_ words and 74% _Physical_ words. Among the _Mental_ words, 12% of them have annotation disagreements while this number drops to 5% for _Physical_ words. This difference indicates that _Mental_ words are more likely to be misclassified. Total disagreement over this dataset between two annotators is 7%. Statistics of the testset is summarized in Table 2. For each active learning strategy, a dataset for training and validation is annotated, which has no overlap with the testset. ## 4 Methods We use active learning framework to train a binary classifier for MPC task, which is shown in Algorithm 1. An unlabeled word pool \(\mathcal{U}\) is set up consisting of the extracted adjectives. The random strategy is used to select a word for annotation in the first iteration, while in other iterations different active learning strategies are used. We aim to annotate \(K_{1}\) positives and \(K_{2}\) negatives in each iteration, which are put into a labeled word pool \(\mathcal{D}_{labeled}\). A threshold \(M\) is set to control the total number of annotation of each iteration, in case that the active learning strategy fails to find another positive or negative sample. At the end of each iteration, a BERT model is fine-tuned over \(\mathcal{D}_{labeled}\). When iterations end, the BERT model with best performance over testset is employed in pipeline for inference. 
``` 0: Unlabeled word pool \(\mathcal{U}\), number of positive samples \(K_{1}\) and negative samples \(K_{2}\) and maximum annotated samples \(M\) per iteration, number of iterations \(T\) 1:\(\mathcal{D}_{labeled}=\{\}\) 2:\(t=0\) 3:while\(t<T\)do 4:\(\mathcal{D}_{pos},\mathcal{D}_{neg}=\{\},\{\}\) 5:\(m=0\) 6:while True do 7:if\(t=0\)then 8:\(w_{new}\leftarrow\) Randomly select a word from \(\mathcal{U}\) 9:else 10:\(w_{new}\leftarrow\) Select a word from \(\mathcal{U}\) by a specific strategy 11:endif 12: Annotate \(w_{new}\) with a class label \(C\) 13:\(\mathcal{U}=\mathcal{U}\setminus\{w_{new}\}\) 14:\(m=m+1\) 15:if\(C\) is positive and \(|\mathcal{D}_{pos}|<K_{1}\)then 16:\(\mathcal{D}_{pos}=\mathcal{D}_{pos}\cup\{(w_{new},C)\}\) 17:endif 18:if\(C\) is negative and \(|\mathcal{D}_{neg}|<K_{2}\)then 19:\(\mathcal{D}_{neg}=\mathcal{D}_{neg}\cup\{(w_{new},C)\}\) 20:endif 21:if (\(|\mathcal{D}_{pos}|=K_{1}\) and \(|\mathcal{D}_{neg}|=K_{2}\)) or \(m=M\)then 22: break 23:endif 24:endwhile 25:\(\mathcal{D}_{labeled}=\mathcal{D}_{labeled}\cup\mathcal{D}_{pos}\cup\mathcal{D}_{neg}\) 26: Fine-tune a BERT over \(\mathcal{D}_{labeled}\) 27:\(t=t+1\) 28:endwhile ``` **Algorithm 1** Active Learning Framework BERT fine-tuning and inference procedure is shown in Figure 2. \begin{table} \begin{tabular}{c c c c} \hline **Class** & **Total** & **Disagreement** & **Rate** \\ \hline _Mental_ & 26 & 3 & 12\% \\ _Physical_ & 74 & 4 & 5\% \\ \hline \end{tabular} \end{table} Table 2: Total word numbers, disagreement numbers and rate of disagreement of the two classes in the testset. The difference of disagreement rates indicates that _Mental_ words are more likely to be misclassified. As WordNet maps words into sets of cognitive synonyms, each expressing a distinct concept, more than one piece of definition text may be provided by WordNet for a given word.
For example, "shining" belongs to three clusters as an adjective, with three different definitions: (1) marked by exceptional merit, (2) made smooth and bright by or as if by rubbing; reflecting a sheen or glow, and (3) reflecting light. All of them are aggregated into one piece of text, serving as the input of BERT with a special token _[CLS]_ at the head. We use the final hidden state of _[CLS]_ as the BERT output, which is then connected to a dropout layer [29] and a linear layer. A sigmoid node is added after the linear layer to transform logits into the probability of the positive class. For fine-tuning, a standard cross entropy loss is computed to update all parameters of the BERT model and the subsequent linear layer.

## 5 Experiments and Results

We compare four active learning strategies, considering their classification performance and annotation resource consumption, to find which strategy is most suitable for the MPC task given a limited project budget. The _Mental_ class serves as the positive class and _Physical_ as the negative class in training. F1 scores of the _Mental_ and _Physical_ classes over the testset are computed respectively. The average number of labeled samples per iteration is recorded.

**ENTROPY.** Samples with the highest predicted entropy are selected [13]. For binary classification, the closer the prediction probability of a sample is to 0.5, the higher its entropy. Therefore, at the start of each iteration, the BERT model from the last iteration outputs the probability of every word in \(\mathcal{U}\), and the word whose probability is closest to 0.5 is selected.

**CORESET.** Samples that are furthest away from the samples selected in previous iterations are chosen to enlarge the semantic diversity [25]. FastText [3] is used to represent a word by an embedding vector, as fastText works well in word-level semantic textual similarity (STS) tasks [37].
In each iteration, a word is selected as follows: \[w_{new}=\arg\max_{w\in\mathcal{U}}\min_{v\in\mathcal{D}_{labeled}}L_{2}(\phi(w),\phi(v)), \tag{1}\] where \(L_{2}(\cdot,\cdot)\) computes the \(L_{2}\) distance between two vectors and \(\phi(\cdot)\) returns the embedding vector of a word.

**CAL.** The most contrastive sample to its nearest neighbors, measured by KL divergence, is chosen [16]. Given a word \(w\) in the unlabeled word pool \(\mathcal{U}\), the 10 nearest words in the labeled word pool \(\mathcal{D}_{labeled}\) by \(L_{2}\) distance are selected as neighbors. The average KL divergence between \(w\) and its neighbors is computed as a measure of contrastive degree. The word with the largest value of this measure is selected.

**Random.** A word \(w\) is selected from the unlabeled word pool \(\mathcal{U}\) uniformly at random.

All strategies share the same experimental settings: total iterations \(T\) = 5, number of positive samples \(K_{1}\) = 20, number of negative samples \(K_{2}\) = 20, maximum annotation number \(M\) = 120. In each iteration, BERT fine-tuning takes 20 epochs in total, with learning rate 2e-5 and batch size 32. The learning rate drops to 1/10 of the original level after 10 epochs. We split \(\mathcal{D}_{labeled}\) 80%-20% into trainset and devset. If BERT outputs a value greater than 0.5, the word is considered to belong to _Mental_, otherwise _Physical_. The winning model \(\mathcal{M}_{t}\) is the one with the maximum accuracy over the devset. We tuned BERT hyperparameters with different values of learning rate {1e-5, 2e-5, 1e-4} and batch size {32, 64, 128} for the ENTROPY strategy. The best result over the devset is achieved at learning rate 2e-5 and batch size 32.
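The CORESET rule of Eq. (1) and the ENTROPY criterion each reduce to a one-line selection. A minimal sketch, with toy 2-d embeddings standing in for fastText vectors and a toy probability table standing in for BERT outputs:

```python
import math

def l2(u, v):
    """L2 distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def coreset_pick(unlabeled, labeled, emb):
    """Eq. (1): pick the unlabeled word furthest from its nearest labeled word."""
    return max(unlabeled,
               key=lambda w: min(l2(emb[w], emb[v]) for v in labeled))

def entropy_pick(unlabeled, prob):
    """ENTROPY: pick the word whose predicted probability is closest to 0.5,
    i.e. the highest-entropy word for a binary classifier."""
    return min(unlabeled, key=lambda w: abs(prob[w] - 0.5))
```

Here `emb` and `prob` are plain dictionaries, so the sketch can be exercised without any trained model.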
Our BERT implementation is provided by Hugging Face; we choose the "bert-base-uncased" version, which contains 110M parameters and does not distinguish between lowercase and uppercase words.3 We use the Adam optimizer with 0.001 weight decay [15]. The size of the linear layer is 768, the same size as the BERT final hidden state. Dropout with a probability of 0.3 is applied in the network. The training framework is based on PyTorch Lightning (version 1.5.8), which greatly boosts training efficiency. All experiments use this network architecture.

Figure 2: Definitions of words are concatenated with _[CLS]_ at head as input of BERT. A dropout and a linear layer are connected to BERT sequentially. At last, a sigmoid node outputs the probability of being positive. The output probability is consumed for inference, or as input of cross entropy loss for fine-tuning. We use the word "shining" with its definition text as an input example.

Footnote 3: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased)

Each strategy is run three times with different random seeds, and the averaged F1 scores over the testset after three, four, and five iterations are reported in Table 3. ENTROPY outperforms the other three, achieving the highest _Mental_ F1 of 0.72 and _Physical_ F1 of 0.87 at iteration 4. The reason CAL fails might be that we fail to find semantically similar neighbors, as the size of \(\mathcal{D}_{labeled}\) is too small. Table 5 shows annotation resource consumption. ENTROPY requires 60-70 labeled words per iteration, meaning only about 300 labeled words in total are needed to deliver an applicable classifier. CORESET and Random need more annotations than ENTROPY. For some iterations, CAL could not provide enough positive and negative samples after 120 words were annotated. Precision and recall scores are presented in Appendix B.
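The head on top of BERT (a linear layer over the `[CLS]` hidden state followed by a sigmoid; dropout is the identity at inference time, so it is omitted) can be sketched at inference time in pure Python. The 4-dimensional vectors below are toy stand-ins for BERT's 768-dimensional hidden state and learned weights.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def head_inference(cls_hidden, weight, bias):
    """Linear layer + sigmoid on the [CLS] hidden state.
    Returns the probability of the positive (Mental) class."""
    logit = sum(h * w for h, w in zip(cls_hidden, weight)) + bias
    return sigmoid(logit)

def bce_loss(p, y):
    """Binary cross entropy used for fine-tuning; y is 0 or 1."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))
```

With the decision rule of the paper, a word is predicted _Mental_ when `head_inference(...)` exceeds 0.5.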
## 6 Comparison with SentiWordNet

As the notion of _Mental_ and _Physical_ is to some extent similar to "subjective" and "objective" in the subjectivity classification task [14] of sentiment analysis, we would like to investigate the difference between them. We choose to compare our result with SentiWordNet [24; 2], which is the most used lexicon in social opinion mining studies [5]. SentiWordNet is a lexical resource which labels each synset from WordNet [19] as "positive", "negative" or "neutral". We use SentiWordNet 3.0, which is based on WordNet 3.0. SentiWordNet 3.0 associates each synset with three numerical scores, _PosScore_, _NegScore_ and _ObjScore_, which show how positive, negative, and neutral the words contained in the synset are [2]. All three scores range from 0 to 1 and their sum is 1. We focus on adjective synsets and classify each of them into two classes: _SubSyn_, if the maximum of the three scores is _PosScore_ or _NegScore_; otherwise, _ObjSyn_. An adjective that belongs to more than one synset owns different senses, perhaps having both subjective and objective meanings. Therefore, at the lexical level, an adjective is labeled by this rule: _Subjective_, if it only belongs to _SubSyn_ synsets; _Objective_, if it only belongs to _ObjSyn_ synsets; _Dual_, if it belongs to both _SubSyn_ and _ObjSyn_ synsets. Table 6 shows the distribution of _Subjective_, _Objective_ and _Dual_ adjectives in the _Mental_ and _Physical_ classes. We find that 43% of the _Mental_ adjectives are labeled as _Objective_. This indicates that the notions of _Mental/Physical_ are different from _Subjective/Objective_. In fact, many _Objective_ adjectives bear mental functionalities. Some adjective examples illustrating this point are listed in Table 4, in six categories: Emotion, Need, Perceiving, Reasoning, Planning and Personality.
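The two-step labeling rule above can be sketched directly. The score triples used below are illustrative, not actual SentiWordNet entries, and ties between scores (which the rule leaves unspecified) are resolved to _ObjSyn_ here.

```python
def synset_label(pos_score, neg_score, obj_score):
    """SubSyn if PosScore or NegScore is the maximum of the three scores,
    else ObjSyn. Ties are resolved to ObjSyn (an assumption of this sketch)."""
    return "SubSyn" if max(pos_score, neg_score) > obj_score else "ObjSyn"

def word_label(synset_scores):
    """Lexical-level rule over all synsets of an adjective:
    Subjective / Objective / Dual."""
    labels = {synset_label(*s) for s in synset_scores}
    if labels == {"SubSyn"}:
        return "Subjective"
    if labels == {"ObjSyn"}:
        return "Objective"
    return "Dual"
```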
## 7 Conclusion

Aiming to explicitly reveal reasoning paths in commonsense scenarios, our first step is to classify a word as _Mental_ or _Physical_. We provide clear definitions of these two categories and a simple criterion for judging them. An active learning algorithm is implemented to fine-tune a BERT model, reducing the required annotation resources. The BERT model automatically infers which class an adjective belongs to. We release the inferred tags publicly to facilitate future research. We also compare our result with SentiWordNet, and find that the notions of _Mental/Physical_ are different from _Subjective/Objective_ in sentiment analysis. Many _Objective_ adjectives bear mental functionalities under the MPC definition. Future work will focus on designing more fine-grained tags and training models to infer them automatically over words. Links will be built between tags, manually or in a machine-learning style, to form an applicable reasoning graph. We hope to translate what humans know about the world and about themselves into a graph that will improve the intelligence of machines. Large language models (LLMs) provide powerful techniques to extract data patterns in natural language, which makes it possible to associate words with all kinds of human-designed tags. However, at the level of reasoning, relying on LLMs is not necessarily feasible, and painstaking manual work may be essential.

| **Iteration** | **ENTROPY** | **CORESET** | **CAL** | **Random** |
| --- | --- | --- | --- | --- |
| 3 | 0.61 | 0.69 | 0.62 | 0.61 |
| 4 | **0.72** | **0.71** | 0.64 | 0.64 |
| 5 | **0.70** | 0.69 | 0.68 | 0.64 |

Table 3: Averaged F1 scores after 3, 4, and 5 iterations. ENTROPY outperforms the other three, achieving the highest _Mental_ F1 0.72 and _Physical_ F1 0.87 at iteration 4.
### Limitations

Although the model trained with ENTROPY achieves acceptable F1 scores, there is still a lot of room to improve classification precision and recall, for example by using more annotated words for fine-tuning or trying other deep learning algorithms. We leave this optimization for the future, after we verify that the whole research plan is feasible and that classification performance is a bottleneck for commonsense reasoning ability. As a word has different meanings in different contexts, the best granularity for MPC is the gloss level rather than the lexical level; that is, using each piece of gloss text as BERT input instead of merging all glosses of a word into one piece of text, so that the output shows whether a gloss belongs to _Mental_ or _Physical_. However, the lexical level facilitates the development of the reasoning graph, as there is no need to consider context. We will switch to the gloss level once it is verified that context is a bottleneck and should be integrated into the reasoning graph.

## Acknowledgements

We appreciate valuable suggestions for this work from every reviewer.
2310.12879
Active Solids Model: Rigid Body Motion and Shape-changing Mechanisms
Active solids such as cell collectives, colloidal clusters, and active metamaterials exhibit diverse collective phenomena, ranging from rigid body motion to shape-changing mechanisms. The nonlinear dynamics of such active materials remains however poorly understood when they host zero-energy deformation modes and when noise is present. Here, we show that stress propagation in a model of active solids induces the spontaneous actuation of multiple soft floppy modes, even without exciting vibrational modes. By introducing an adiabatic approximation, we map the dynamics onto an effective Landau free energy, predicting mode selection and the onset of collective dynamics. These results open new ways to study and design living and robotic materials with multiple modes of locomotion and shape-change.
Claudio Hernández-López, Paul Baconnier, Corentin Coulais, Olivier Dauchot, Gustavo Düring
2023-10-19T16:32:55Z
http://arxiv.org/abs/2310.12879v2
# Active Solids: Rigid Body Motion and Shape-changing Mechanisms ###### Abstract Active solids such as cell collectives, colloidal clusters, and active metamaterials exhibit diverse collective phenomena, ranging from rigid body motion to shape-changing mechanisms. The nonlinear dynamics of such active materials remains however poorly understood when they host zero-energy deformation modes and when noise is present. Here, we show that stress propagation in active solids induces the spontaneous actuation of multiple soft floppy modes, even without exciting vibrational modes. By introducing an adiabatic approximation, we map the dynamics onto an effective Landau free energy, predicting mode selection and the onset of collective dynamics. These results open new ways to study and design living and robotic materials with multiple modes of locomotion and shape-change. Polar active matter is composed of self-driven units that convert energy into directed motion or forces. Aligning interactions among the active units lead to large scale collective motion in various forms, from polar flocks of birds [1; 2], motile colloids [3], vibrated disks [4], interacting robots [5; 6; 7], and vortex flows of fish [8], bacteria [9], or colloids [10; 11], to only quote a few. The large scale physics of these flows has been the topic of intensive research and is well described by the so-called Toner-Tu equations [12; 13]. When the density of active units is large because of confinement [14] or cohesion [15; 16], the structure of the assembly may remain frozen on long time scales, and the system exhibits elastic rather than viscous properties. When cohesive interactions are large enough, as is the case for dense biofilms [17], keratocyte swarms [18] or Epithelial monolayers [19], active units can be considered embedded in an elastic network, in a way similar to artificially designed active elastic metamaterials [20; 21; 22; 23]. 
A natural starting point is then to analyze the dynamics in terms of the vibrational modes of the elastic medium or structure. It was shown that correlated noise generated by an active matter bath can actuate a non-trivial zero mode while suppressing harmonic modes to a degree dependent on temporal correlations [21]. Self-propulsion is further able to mobilize solid body motion [20] or a free-moving mechanism, even in topologically complex cases [21]. Notably, observations on the _Placozoa_ phylum [24], a living active solid, have revealed global rotation and translations under various conditions. Finally, in the presence of a non-linear feedback of the elastic stress on the orientation of the active forces, self-propulsion can also actuate a few selected harmonic modes [22]. Yet the selection mechanism remains unclear. More generally, in the presence of several actuable modes, whether trivially associated with solid body motion or more complex mechanisms, several dynamics coexist in phase space, and there is to date no general principle to characterize their metastability. In this Letter, we provide a general formalism to describe the statistical evolution of collective motion in the case where several zero modes are present, as illustrated in Fig. 1, using the hexbug elastic network (Fig. 1-a) introduced in [22]. When the network is pinned in the center and translational solid body motion is forbidden, the only remaining zero mode is rotation (Fig. 1-b). The dynamics break chiral symmetry by spontaneously selecting one direction of rotation, which eventually reverses in the presence of noise.
When the network is not pinned, there are two translational and one rotational zero modes, which also spontaneously break the continuous rotational and chiral symmetry, respectively. Translational (Fig. 1-c) and rotational motion (Fig. 1-d) are observed depending on the initial conditions. Transitions between the two types of motion are observed numerically (Fig. 1-e). Finally, Fig. 1-f shows the dynamics of a non-trivial active mechanism with two zero modes: an ideal auxetic network [26; 27; 28; 29] (with a Poisson ratio of -1) pinned at the center that can freely rotate and contract. As observed here, the motion along zero modes can alter the shape of the lattice, and the modes themselves. When the timescales of the dynamics are much longer than the elastic relaxation time, as is the case here, the harmonic modes of the solid are barely excited and the network can be considered as rigid. We show first that in this limit stress propagation is enough to induce collective motion: at small enough noise, the symmetric phase is spontaneously broken and the active rigid body motion and/or mechanism folding follows a specific path along the floppy mode space.

Figure 1: **Active rigid body motion and active mechanisms.** (a) Zoom on the experimental active elastic lattice introduced in [22], with self-propelling units - Hexbugs - trapped in 3d-printed annuli, connected by springs in a triangular lattice. (b) Experimental rotational dynamics observed under central pinning. Scale bar: 10 cm. (c,d) Experimental translational and rotational dynamics observed for a free structure. Scale bar: 20 cm. (e) Alternating translational and rotation dynamics obtained numerically for the same free structure. (f) A rotational-auxetic regime observed numerically for a non-pinned auxetic square system (See text, Fig. 3, and Sup. Movie 1 [25]). Trajectories color-coded from blue to red by increasing time.
This evolution introduces a new timescale, which competes with the timescale of reorientation of the active particles. We prove that within an adiabatic approximation, the dynamics is governed by an effective Landau free energy, from which the mode selection and the metastability following the successive symmetry breaking can be easily understood. Our results pave the way towards active metamaterials with multiple modes of actuation and locomotion [30]. We consider active systems described by the overdamped dynamics of \(N\) self-aligning units, which were introduced independently in several contexts [14; 15; 18; 22; 31]. Written in non-dimensional units (See Sup. Mat. [25], section I for details), the equations read: \[\dot{\mathbf{x}}_{i} =\mathbf{\hat{n}}_{i}+\mathbf{F}_{i}, \tag{1}\] \[\dot{\theta}_{i} =\frac{1}{\pi_{r}}(\mathbf{\hat{n}}_{i}^{\perp}\cdot\mathbf{F}_{i})+ \sqrt{2\pi_{\theta}/\pi_{r}}\xi_{i}, \tag{2}\] with \(\mathbf{x}_{i}\), respectively \(\mathbf{\hat{n}}_{i}=(\cos\theta_{i},\sin\theta_{i})\), the position and the polarization unit vector of active unit \(i\). \(\mathbf{F}_{i}=\sum_{j\in\partial_{i}}f_{ij}\mathbf{e_{ij}}\) is the sum of pairwise radially symmetric interaction forces, where \(\mathbf{\hat{e}}_{ij}\) is a unit vector pointing from particle \(i\) to \(j\). The dimensionless self-alignment length \(\pi_{r}=\ell_{a}/\ell_{0}\) is the ratio between the self-alignment length \(\ell_{a}\) and the characteristic agent-agent distance \(\ell_{0}\). The dimensionless noise coefficient corresponds to \(\pi_{\theta}=D_{\theta}\ell_{a}/v_{0}\), with \(v_{0}\) the speed of a free agent, and \(D_{\theta}\) the angular diffusion coefficient. We define \(\xi_{i}\) as a delta-correlated Gaussian white noise process. The self-aligning torque, on the right-hand side of Eq. 2, emerges from non-symmetric dissipative forces with respect to \(\mathbf{\hat{n}}_{i}\), when it is misaligned with \(\dot{\mathbf{x}}_{i}\)[31]. 
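As a concrete illustration, Eqs. (1)-(2) can be integrated with a simple Euler-Maruyama scheme. The two-node spring example below, with unit stiffness and arbitrary parameter values, is a toy sketch and not the lattice simulated in the paper.

```python
import math, random

def step(x, theta, springs, pi_r, pi_theta, dt, rng=random):
    """One Euler-Maruyama step of Eqs. (1)-(2) for nodes x = [(x, y), ...],
    headings theta = [...], and springs = [(i, j, rest_length), ...].
    Harmonic pair forces with unit stiffness (a toy choice)."""
    n = len(x)
    F = [[0.0, 0.0] for _ in range(n)]
    for i, j, l0 in springs:
        dx, dy = x[j][0] - x[i][0], x[j][1] - x[i][1]
        d = math.hypot(dx, dy)
        f = (d - l0) / d                      # spring tension / distance
        F[i][0] += f * dx; F[i][1] += f * dy
        F[j][0] -= f * dx; F[j][1] -= f * dy
    new_x, new_theta = [], []
    for i in range(n):
        nx, ny = math.cos(theta[i]), math.sin(theta[i])
        new_x.append((x[i][0] + dt * (nx + F[i][0]),
                      x[i][1] + dt * (ny + F[i][1])))
        # self-aligning torque (1/pi_r) n_perp . F, with n_perp = (-sin, cos)
        torque = (-ny * F[i][0] + nx * F[i][1]) / pi_r
        noise = math.sqrt(2 * pi_theta / pi_r * dt) * rng.gauss(0.0, 1.0)
        new_theta.append(theta[i] + dt * torque + noise)
    return new_x, new_theta
```

With zero noise and both polarities aligned along an unstretched spring, the pair translates rigidly at unit speed, as expected from Eq. (1).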
It was shown to be the key ingredient for the onset of collective motion in active disks [4], and collective actuation in active elastic networks [22]. One can show (see Sup. Mat. [25], section I) that the network can be safely considered as rigid as long as \[\frac{\ell_{e}}{\ell_{a}}\ll\omega_{q}^{2},\quad\forall q\in\{1,\ldots,2N\}, \tag{3}\] where \(\ell_{e}\) is the amplitude of the typical displacement of the nodes resulting from the active forces, and \(\omega_{q}\) are the non-zero vibrational frequencies of the network. In the following, we shall perform all simulations with parameters such that the rigid approximation is satisfied.

Figure 2: **Second order transitions to solid body motion and mechanisms for 1-mode systems.** (a) Periodic boundary condition (translation only) network. (b) Time series of the magnitude of the polarization vector \(\mathbf{P}\) for a 10x10 translating system; \(\pi_{\theta}=0.4,\pi_{r}=0.001\). (c) Phase diagram \(\mu_{P}\) vs. \(\pi_{\theta}\) of a 100x100 translating system as a function of noise. (S) for simulations, (T) for theoretical predictions. (d) Landau free energy diagram of the translating system as a function of the x and y polarization order parameters \(\mu_{x},\mu_{y}\); left: \(\pi_{\theta}=0.1\); right: \(\pi_{\theta}=0.6\). (e) A 2-layer hexagonal ring with the definition of the rotational angle \(\phi\). (f) Time series of the angular velocity \(\dot{\phi}\) of a 9-layer hexagonal ring; \(\pi_{\theta}=0.4,\pi_{r}=0.001\). (g) Phase diagram \(\mu_{\phi}\) vs. \(\pi_{\theta}\) of a 9-layer hexagonal rotating system (left, simulation data considers \(\pi_{r}=0.071\)) and \(\mu_{\phi}\) vs. \(\pi_{r}\), when \(\pi_{\theta}=0\) (right). (h) Landau free energy diagram of the rotational system as a function of \(\mu_{\phi}\); blue: \(\pi_{\theta}=0.25\); orange: \(\pi_{\theta}=0.6\). (i) A 1-layer auxetic system with the definition of the auxetic angle \(\gamma\). (k) Time series of the auxetic angular velocity \(\dot{\gamma}\) of an 8-layer auxetic system; \(\pi_{r}=0.001,\pi_{\theta}=0.4\). (l) Phase diagram \(\mu_{\gamma}\) vs. \(\pi_{\theta}\) of an 8-layer auxetic system (left, simulation data considers \(\pi_{r}=0.001\)) and \(\mu_{\gamma}\) vs. \(\pi_{r}\), when \(\pi_{\theta}=0\) (right). (m) Landau free energy diagram of the auxetic system as a function of \(\mu_{\gamma}\); blue: \(\pi_{\theta}=0.25\); orange: \(\pi_{\theta}=0.6\).

We start by simulating three systems with a single zero mode (Fig. 2): (i) a crystalline triangular lattice with periodic boundary conditions (PBC), (ii) a triangular lattice pinned at its center, and (iii) an ideal auxetic network pinned at its center, illustrated in Fig. 2-a,e,i respectively. All of them exhibit collective motion along their single zero mode at small noise. The triangular lattice with PBC translates uniformly, with a non-zero magnitude of the global polarization \(\mathbf{P}=(1/N)\sum_{i}\hat{\mathbf{n}}_{i}\) (Fig. 2-b). The network pinned at the center freely rotates, with an angular speed \(\dot{\phi}\), that randomly switches from counterclockwise to clockwise rotation (Fig. 2-f). The auxetic network freely compresses with a finite auxetic angular speed \(\dot{\gamma}\) (Fig. 2-k). In all cases, collective motion emerges from a spontaneous symmetry breaking of the disordered phase, when \(\pi_{\theta}<1/2\) (Fig. 2-c,g,k). The situation becomes more interesting when the network of interest has more than one zero mode, see Fig. 3. An active network free of PBC has both translational and rotational zero modes and can break both the continuous rotational symmetry - in a way similar to the ferromagnetic transition of the XY model - and the chiral, Ising-like, symmetry associated with the direction of rotation.
The simulations reveal that, at low \(\pi_{\theta}\), the collective dynamics switches between pure translations and pure rotation, a behavior that is reminiscent of the existence of different metastable states (Fig. 3-a). The case of an auxetic network that is also free to rotate is even more complex. As we shall see, this is because the rotational mode depends on the distance of the particles to the center, which actually varies while the system evolves along the auxetic mode. The visual inspection of the auxetic and rotation rates as a function of time for different values of \(\pi_{\theta}\) indicates that in the limit of vanishing noise, \(\dot{\phi}\) remains constant, and \(\dot{\gamma}\) fluctuates periodically. Increasing the noise, transitions between two states with different \(\dot{\phi}\) signs can be achieved, and even larger noise values lead to a state where such transitions occur constantly (Fig. 3-d). We now come to the theoretical analysis of the above observations. For a rigid network, the bond elongations are null. Imposing such a distance-preserving condition on Eq. 1, one finds after some algebra (See Sup. Mat. [25], section II): \[\mathbf{F}_{i}=-\hat{\mathbf{n}}_{i}+\sum_{q\in\Re}\langle\mathbf{\varphi}^{q}|\hat{\mathbf{n}}\rangle\mathbf{\varphi}_{i}^{q}, \tag{4}\] where \(\mathbf{\varphi}_{i}^{q}\) is the vector associated with particle \(i\) in the \(q\)-th eigenmode of the dynamical matrix of the elastic network, and \(\Re\) is the set of zero modes.
Then, replacing the force in Equations (1) and (2) the dynamics can be completely described by the floppy modes of the structure : \[\dot{\mathbf{x}}_{i}=\sum_{q\in\Re}\langle\mathbf{\varphi}^{q}|\hat{\mathbf{n}}\rangle\bm {\varphi}_{i}^{q}\quad\text{and}\quad\dot{\theta}_{i}=-\frac{1}{\pi_{r}}\frac{ \partial V}{\partial\theta_{i}}+\sqrt{\frac{2\pi_{\theta}}{\pi_{r}}}\xi_{i}, \tag{5}\] where the angular equation can be recast as a potential dynamics with \(V=-\frac{1}{2}\sum_{q\in\Re}\langle\mathbf{\varphi}^{q}|\hat{\mathbf{n}}\rangle^{2}\). The stochastic dynamics is then described by the time-dependent probability density \(Q(\mathbf{x}_{1},\dots,\mathbf{x}_{N};\theta_{1},\dots,\theta_{N};t)\), which evolves according to the Fokker-Planck equation: \[\frac{\partial Q}{\partial t}=\frac{1}{\pi_{r}}\frac{\partial}{\partial\theta _{i}}\left(\frac{\partial V}{\partial\theta_{i}}Q+\pi_{\theta}\frac{\partial Q }{\partial\theta_{i}}\right)-\mathbf{\nabla}_{\mathbf{x}_{i}}\left(\langle\mathbf{\varphi} ^{q}|\hat{\mathbf{n}}\rangle\mathbf{\varphi}_{i}^{q}Q\right) \tag{6}\] with implicit summation on the indices. Note that the modes will, in general, depend on the positional information, and the complete dynamics will be subject to the interplay between the polarity field alignment and the positional rearrangement of the structure. The evolution of the zero modes can be described with a set of angles, distances, or more general coordinates, which we denote \(\mathbf{\alpha}=\{\alpha_{m}\}_{m=1}^{M}\), with \(M\) the number of zero modes. To make further progress, we proceed to an adiabatic approximation, assuming that the dynamics of the zero modes is much slower than that of the orientation of the active units, which is expected to hold in the \(\pi_{r}\ll 1\) regime. At zeroth-order, this approximation amounts to considering that the probability density function \(Q\) is different from zero only for combinations of \(\mathbf{x}_{i}\) which preserve the same zero modes. 
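In the rigid limit, Eq. (5) states that the node velocities are projections of the polarity field on the zero modes. For the two uniform translation modes \(\mathbf{\varphi}^{x}_{i}=(1/\sqrt{N})\,\mathbf{e}_{x}\) and \(\mathbf{\varphi}^{y}_{i}=(1/\sqrt{N})\,\mathbf{e}_{y}\), every node then moves with the mean polarization \(\mathbf{P}\); a few lines of Python can check this (a sketch, not the paper's code):

```python
import math

def rigid_velocities_translation(thetas):
    """Eq. (5) restricted to the two uniform translation zero modes:
    v_i = sum_q <phi^q|n> phi^q_i, with phi^x_i = (1/sqrt(N)) e_x, etc."""
    N = len(thetas)
    n = [(math.cos(t), math.sin(t)) for t in thetas]
    cx = sum(v[0] for v in n) / math.sqrt(N)   # <phi^x | n-hat>
    cy = sum(v[1] for v in n) / math.sqrt(N)   # <phi^y | n-hat>
    # every node gets the same velocity: the mean polarization P
    return [(cx / math.sqrt(N), cy / math.sqrt(N)) for _ in range(N)]
```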
In such a case, the Fokker-Planck equation for the reduced density probability function \(\mathcal{Q}=\int_{-\infty}^{\infty}Qd\mathbf{x}_{1}\dots d\mathbf{x}_{N}\), admits a steady state solution given by the Gibbs measure: \[\mathcal{Q}=\frac{\exp(-\beta V)}{\mathcal{Z}} \tag{7}\] with \(\beta=1/\pi_{\theta}\) and \(\mathcal{Z}=\int_{-\pi}^{\pi}e^{-\beta V}\ \mathrm{d}\theta_{1}\dots\mathrm{d}\theta_{N}\). Collective motion is achieved when the normalized projections of the polarity vectors over the zero modes, namely the order parameters \(\mu_{q}=\left\langle\left(\sum_{i}\mathbf{\varphi}_{i}^{q}\cdot\mathbf{\hat{n}}_{i} \right)\right\rangle/\sqrt{N}\) are \(\mathcal{O}(1)\). In the thermodynamic limit, and considering the case of extended floppy modes, such as translations, rotations, or auxetic modes [28], we find that the mode selection is governed by the minimum of the Landau free energy: \[f[\mathbf{\mu},\mathbf{\alpha}]=\sum_{q\in\Re}\frac{\mu_{q}^{2}}{2}-\frac{1}{\beta N} \sum_{i}\log\left(I_{0}\left(\beta\mathcal{D}_{i}\right)\right), \tag{8}\] where \(I_{0}\) is the modified Bessel function of the first kind and: \[\mathcal{D}_{i}=\left(N\sum_{q,l\in\Re}\mu_{q}\mu_{l}\left(\mathbf{\varphi}_{i}^{q }(\mathbf{\alpha})\cdot\mathbf{\varphi}_{i}^{l}(\mathbf{\alpha})\right)\right)^{1/2} \tag{9}\] couples the different zero modes. Finally, having found the order parameters for a particular system configuration, which in general depend on \(\mathbf{\alpha}\), we can follow the adiabatic evolution prescription and evolve every \(\alpha_{m}\) as \(\dot{\alpha}_{m}=L_{m}(\mathbf{\alpha},\mathbf{\mu})\), where \(L_{m}\) is a structure-dependent operator. Using the above formulation we can prove that, within the adiabatic approximation, for any system with extended floppy modes, there is a continuous phase transition from a disordered phase to some form of collective motion, taking place at \(\pi_{\theta}^{c}=1/2\). 
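For the purely translational case, minimizing the free energy (8) yields the mean-field self-consistency \(\mu=I_{1}(\beta\mu)/I_{0}(\beta\mu)\) with \(\beta=1/\pi_{\theta}\), whose non-trivial solution appears exactly at \(\pi_{\theta}^{c}=1/2\). A quick numerical check, using series Bessel functions and fixed-point iteration (a sketch, not the paper's code):

```python
import math

def bessel_i(n, x, terms=40):
    """Modified Bessel function I_n(x) via its power series."""
    return sum((x / 2.0) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def order_parameter(pi_theta, iters=2000):
    """Fixed point of mu = I1(beta*mu)/I0(beta*mu), beta = 1/pi_theta.
    Converges to 0 for pi_theta > 1/2 and to a finite mu below."""
    beta, mu = 1.0 / pi_theta, 0.5
    for _ in range(iters):
        mu = bessel_i(1, beta * mu) / bessel_i(0, beta * mu)
    return mu
```

Below the transition (e.g. \(\pi_{\theta}=0.25\)) the iteration settles on a finite order parameter, while above it (e.g. \(\pi_{\theta}=0.6\)) it decays to the disordered solution.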
This description, valid only in the large \(N\) limit, can be extended to small \(N\) in terms of simple integrals, and higher-order corrections in powers of \(\pi_{r}\) can also be found (See Sup. Mat. [25], sections IV and VI). Extending it to the case of localized floppy modes is a challenging issue. We shall now apply the above procedure, from the simplest case of the translational network with PBC to the intricate case of the rotational-auxetic network. The simplest scenario corresponds to pure translational motion. If no other floppy mode is allowed, as is the case for an unpinned lattice with PBC, the zeroth-order solution (Eq. 7) is exact because the translational zero modes are position-independent. The order parameter is \(\mathbf{\mu}_{P}=(\mu_{x},\mu_{y})=\left<\frac{1}{N}\sum_{i}\hat{\mathbf{n}}_{i}\right>\). The angular potential reads \(V=-\frac{1}{2N}\sum_{i,j}\cos(\theta_{i}-\theta_{j})\) and therefore exactly maps onto the 2D mean-field XY model. The minima of the corresponding free energy, shown in Fig. 2-d for two noise amplitudes, perfectly capture the transition. In the thermodynamic limit, a phase transition at \(\pi_{\theta}=1/2\) with the mean-field critical exponents is obtained (See Sup. Mat. [25], section IV). Remarkably, the mean-field behavior does not arise from an uncontrolled approximation: true long-range order emerges from system-wide stress propagation, resulting from rigidity (See Sup. Movie 2 [25]). In the purely rotational or auxetic cases, i.e. pinned systems, the adiabatic approximation is not exact because the associated zero modes depend on the instantaneous structure, prescribed by \(\phi\), the rotational angle, and \(\gamma\), the compression angle (See Fig. 2-e and Fig. 2-i). However, in the presence of a single zero mode, an adequate choice of reference frame leaves the Landau free energy independent of the structure parameter \(\alpha_{m}\).
As a result, the adiabatic evolution simplifies enormously, with \(L_{m}\) being structure-independent. Here also we find perfect agreement between the simulation data (Fig. 2-g,k) and the order parameters \(\mu_{\phi}\) and \(\mu_{\gamma}\) extracted from the minimization of the free energy, shown in Fig. 2-h,l above and below \(\pi_{\theta}^{c}=1/2\) (See Sup. Mat. [25], sections VI and VII). The dependence of these so-defined order parameters on \(\pi_{r}\), when \(\pi_{\theta}=0\), is also perfectly well captured (see Fig. 2-g,k and Sup. Mat. [25], sections VI and VII). Two-mode settings lead to more intricate dynamics and complex energy landscapes. First, we consider a translational-rotational system, whose Landau free energy is independent of the structure parameter, an exception due to the structure-invariant nature of the translational floppy modes. The order parameters are the ones defined for the pure translational and rotational cases. The free energy, however, displays a richer behavior when \(\pi_{\theta}<\pi_{\theta}^{c}=1/2\): the space isotropy for translation and the chiral symmetry for rotation are simultaneously broken (See Sup. Movie 3), yet the translational solution is always the global minimum (see Fig. 3-c and Sup. Mat. [25], section VIII), a non-trivial result. Interestingly, mixed translational/rotational states are not steady state solutions. The mean-field nature of the systems leads to minima that are separated by an energy barrier proportional to \(N\). For a finite system, transitions between the two states shall be given by Kramers' escape time, which diverges in the thermodynamic limit. Finally, considering a network where the two zero modes are the rotation and the auxetic one allows us to demonstrate the efficiency of our approach, while stressing its limitations. In this case, there is no choice of coordinate frame that can eliminate the dependence of the modes on the two structure parameters \(\phi\) and \(\gamma\) simultaneously. Our prescription eliminates the dependence on \(\phi\) and defines the two order parameters \(\mu_{A}\) and \(\mu_{B}\) as functions of \(\langle\dot{\phi}\rangle\), \(\langle\dot{\gamma}\rangle\) and \(\gamma\) (See Sup. Mat. [25], section IX). Depending on the value of \(\pi_{\theta}\), the free energy has 4 local minima, 2, or just one, i.e. the disordered solution. As shown in Fig. 3-f, these minima move in phase space as \(\gamma\) evolves.

Figure 3: **2-mode systems phenomenology; switching between modes of actuation.** (a) Time series of the magnitude of the polarization vector \(\mathbf{P}\) (blue) and of the angular velocity \(\dot{\phi}\) for a 2-ring non-pinned triangular lattice; from left to right: \(\pi_{\theta}=0.10,0.35\); \(\pi_{r}=0.1\). (b) Phase diagram \(\mu_{P}\) and \(\mu_{\phi}\) vs. \(\pi_{\theta}\) (simulation data considers \(\pi_{r}=0.001\)) of a 30-ring non-pinned triangular lattice. (c) Landau free energy for a non-pinned 2-ring triangular lattice as a function of \(\mu_{\phi}\) and \(\mu_{z}\); from left to right: \(\pi_{\theta}=0.10,0.35\). (d) Time series of the auxetic angular velocity \(\dot{\gamma}\) (blue) and rotational angular velocity \(\dot{\phi}\) (red) of an 8-layer rotational-auxetic network; from top to bottom: \(\pi_{\theta}=0.001,0.006,0.014\); \(\pi_{r}=0.001\). (e) Phase diagram \(\mu_{\gamma}\) (top) and \(\mu_{\phi}\) (bottom) vs. \(\pi_{\theta}\) of the 8-layer rotational-auxetic system; simulation data considers \(\pi_{r}=0.001\). The theoretical values are the time average of each solution obtained from following the free energy minima as \(\gamma\) varies. Inset: zoom-in to the \(\dot{\phi}\neq 0\) region. (f) Landau free energy for an 8-layer rotational-auxetic system as it varies with \(\gamma\) (\(\gamma=0.2+n\pi/2\), with \(n=0,1,2,3\)); \(\pi_{\theta}=0.001\), and the evolution time step \(\Delta t=0.01\).
For a given set \(\pi_{\theta},\pi_{r}\), we can follow the evolution of each different solution from the adiabatic prescription \(\dot{\alpha}_{m}=L_{m}(\mathbf{\alpha},\mathbf{\mu})\) (See Sup. Mat. [25], section IX). This adiabatic evolution converges to a well-defined phase diagram (See Fig. 3-e) that remarkably displays two transitions: the aforementioned one at \(\pi_{\theta}=\pi_{\theta}^{c}=1/2\), where the auxetic contraction is activated, and a second one at much smaller values, \(\pi_{\theta}\approx 0.02\), where rotation is activated. Numerical simulations of the auxetic-rotation systems show good agreement in most of the phase diagram. Although we perfectly describe the zero-noise limit and the intermediate-noise regime, the low-noise regime, below the onset of the rotation dynamics, is not well captured (See inset of Fig. 3-e). The reason is that the energy barrier separating both minima is very low, eventually vanishing for large \(N\). Hence the system does not evolve following a single minimum, invalidating the adiabatic prescription. An appropriate numerical procedure to follow the evolution of the structure, hence of the modes, should be able to resolve this discrepancy.

In this letter, we have studied the dynamics of active solid-body motion and mechanism folding through a general theoretical framework in the rigid limit. Our formalism allows for the design and tuning of a wide range of materials where elastic deformations are negligible, with the interaction between different modes giving rise to rich, complex dynamics. Future work will explore structures with multiple shape-changing modes [32], interactions between these active solids and environmental obstacles or one another, and consider our dynamical system outside the \(\pi_{r}\ll 1\) and perfectly rigid limits, potentially capturing more subtle effects such as the selection of a particular translational motion direction based on the geometry of the network [5].

###### Acknowledgements.

C.H-L.
was supported by a Ph.D. grant from ED564 'Physique en Ile de France'. G.D. acknowledges support from Fondecyt Grant No. 1210656. P.B. was supported by a Ph.D. grant from ED564 'Physique en Ile de France'. C.C. acknowledges funding from the European Research Council under grant agreement 852587 and from the Netherlands Organisation for Scientific Research under grant agreement VIDI 2131313.
2306.09201
Tensor BM-Decomposition for Compression and Analysis of Video Data
Given tensors $\boldsymbol{\mathscr{A}}, \boldsymbol{\mathscr{B}}, \boldsymbol{\mathscr{C}}$ of size $m \times 1 \times n$, $m \times p \times 1$, and $1\times p \times n$, respectively, their Bhattacharya-Mesner (BM) product will result in a third-order tensor of dimension $m \times p \times n$ and BM-rank of 1 (Mesner and Bhattacharya, 1990). Thus, if an arbitrary $m \times p \times n$ third-order tensor can be written as a sum of a small number, relative to $m,p,n$, of such BM-rank 1 terms, this BM-decomposition (BMD) offers an implicitly compressed representation of the tensor. In this paper, we first show that grayscale surveillance video can be accurately captured by a low BM-rank decomposition and give methods for efficiently computing this decomposition. To this end, we first give results that connect rank-revealing matrix factorizations to the BMD. Next, we present a generative model that illustrates that spatio-temporal video data can be expected to have low BM-rank. We combine these observations to derive a regularized alternating least squares (ALS) algorithm to compute an approximate BMD of the video tensor. The algorithm itself is highly parallelizable since the bulk of the computations break down into relatively small regularized least squares problems that can be solved independently. Extensive numerical results compared against the state-of-the-art matrix-based DMD for surveillance video separation show our algorithms can consistently produce results with superior compression properties while simultaneously providing better separation of stationary and non-stationary features in the data. We then introduce a new type of BM-product suitable for color video and provide an algorithm that shows an impressive ability to extract important temporal information from color video while simultaneously compressing the data.
Fan Tian, Misha E. Kilmer, Eric Miller, Abani Patra
2023-06-15T15:37:11Z
http://arxiv.org/abs/2306.09201v3
# Tensor BM-Decomposition for Compression and Analysis of Spatio-Temporal Third-Order Data

###### Abstract

Given tensors \(\mathcal{A},\mathcal{B},\mathcal{C}\) of size \(m\times 1\times n\), \(m\times p\times 1\), and \(1\times p\times n\), respectively, their Bhattacharya-Mesner (BM) product will result in a third-order tensor of dimension \(m\times p\times n\) and BM-rank of 1 (Mesner and Bhattacharya, 1990). Thus, if a third-order tensor can be written as a sum of a small number of such BM-rank 1 terms, this BM-decomposition (BMD) offers an implicitly compressed representation of the tensor. Therefore, in this paper, we give a generative model which illustrates that spatio-temporal video data can be expected to have low BM-rank. Then, we discuss non-uniqueness properties of the BMD and give an improved bound on the BM-rank of a third-order tensor. We present and study properties of an iterative algorithm for computing an approximate BMD, including convergence behavior and appropriate choices for starting guesses that allow for the decomposition of our spatio-temporal data into stationary and non-stationary components. Several numerical experiments show the impressive ability of our BMD algorithm to extract important temporal information from video data while simultaneously compressing the data. In particular, we compare our approach with the dynamic mode decomposition (DMD): first, we show how the matrix-based DMD can be reinterpreted in tensor BMP form; then we explain why the low BM-rank decomposition can produce results with superior compression properties while simultaneously providing better separation of stationary and non-stationary features in the data. We conclude with a comparison of our low BM-rank decomposition to two other tensor decompositions, CP and the t-SVDM.

## 1 Introduction

Low-order tensor decomposition methods have provided domain-specific insight into large, multidimensional data sets as well as a means of compressing these data [22].
Many such methods have been proposed, including the CANDECOMP/PARAFAC or canonical polyadic (CP) decomposition [18, 31], the Tucker model or the higher-order SVD (HOSVD) method [40, 4], the tensor-train decomposition [33], the t-SVD [20] and its more general form, the \(\star_{M}\) tensor SVD (t-SVDM) [19]. The use of a specific method on a particular problem depends heavily on the underlying application (i.e., the properties of the data) as well as the processing objectives (compression, information extraction, etc.). Of interest here are the compression and the decomposition of video into stationary background and moving foreground components. In [17], a regularized CP decomposition was used for video background estimation. Our approach builds on recent interest in the tensor Bhattacharya-Mesner (BM) product [29, 30] and the associated BM-algebra [9, 10, 11]. The fundamental difference between factoring a tensor using the BM-product representation and the tensor CP-decomposition lies in the order of the set of "factor tensors" into which a given third-order tensor (the case of interest here) is decomposed. The CP approach is based on outer products generated from triplets of vectors, while a BM-decomposition (BMD) employs triplets of matrices, as we show in Section 2. Theoretical studies of third-order tensor spectral decomposition and singular value decomposition in terms of the BM-product have been discussed in [10] and [12], respectively. However, no numerical algorithms for computing tensor BM-decompositions had been proposed until recently. In 2022, we first presented an alternating least squares (ALS) algorithm to factor a third-order tensor into an unconstrained BMD [37], which served as motivation for the present work. Independently, in 2023, Luo et al. [28] proposed an unconstrained tensor factorization framework based on the third-order tensor BMP, which they rename as the matrix outer-product (MOP).
They, too, propose an ALS algorithm for an application in Bayesian inference, but do not address issues including the starting guess or convergence. In our current work, we rigorously study the mathematical and algorithmic aspects of computing a low BM-rank tensor approximation with application to spatiotemporal data. As we outline more fully below, our specific contributions include:

* An upper bound on the BM-rank in terms of slice-wise matrix rank, which can be tighter than the known bound of \(\text{BM-rank}(\mathbf{\mathfrak{X}})\leqslant\min\{m,p,n\}\) for a third-order \(m\times p\times n\) tensor [11].
* A discussion of the non-uniqueness of the BMD and its relation to other decompositions.
* An alternating least squares algorithm for computing the BMD (BMD-ALS).
* A study of the convergence properties of the algorithm, showing that, similar to the ALS algorithm for the CP decomposition [26, 41, 43, 42], the BMD-ALS algorithm is the exact block nonlinear Gauss-Seidel method [13].
* A generative low BM-rank BMP model for surveillance-type videos with a stationary background and a simple moving foreground, which illustrates the superior compressive power of the BMD for video separation and compression.
* A proposal of two starting guesses that enable the separation of the stationary and non-stationary portions of the video in the BMD while ensuring compression. The first, the spatiotemporal slice-based SVD (SS-SVD), was proposed for video background initialization in [16]. The second method considered is the dynamic mode decomposition (DMD), commonly employed to evaluate the dynamics of complex systems and first utilized in video processing in [14].
* A reinterpretation of the DMD method in the context of the BMP model and a comparison to the BMD-ALS video reconstruction results using the SS-SVD initial guess.
* A comparison of the BMD to the CP and t-SVDM decompositions, where the latter also reveals why the BMD is more powerful than matrix-based dimensionality reduction.
The remainder of this paper is organized as follows. Background notation, tensor definitions, and the basics of the BM-algebra are provided in Section 2. In Section 3, we provide a new upper bound on the third-order tensor BM-rank via slice-wise SVDs. Section 4 discusses the generative spatiotemporal video model, which illustrates the inherently low BM-rank property of (gray-scale) video tensors. We also show that the SS-SVD method with low-rank truncation is capable of capturing spatial features of the video, which makes it a good candidate as an initial guess for computing the BMD. In Section 5, we discuss the unconstrained low BM-rank approximation problem and an ALS algorithm for computing the tensor BMD. By drawing connections between the BMD-ALS algorithm and the block nonlinear Gauss-Seidel method, we provide a convergence analysis for the ALS algorithm. Section 6 reviews the DMD algorithm and describes our method of extending the DMD factors to a conformable tensor BMP triplet. Section 7 contains a numerical study of the BMD method for the application of compressible video background/foreground reconstructions. In Section 8, we relate the tensor BMD to the CP and \(\star_{M}\)-product based representations.

## 2 Background and Notation

As shown in Fig. (2), given a third-order tensor \(\mathfrak{X}\in\mathbb{R}^{m\times p\times n}\), there are three ways of slicing \(\mathfrak{X}\):

* The **frontal slices** of \(\mathfrak{X}\) are given by \(\mathfrak{X}_{:,:,k}\in\mathbb{R}^{m\times p\times 1}\), \(1\leqslant k\leqslant n\), which are obtained by slicing the tensor front to back.
* The **lateral slices** of \(\mathfrak{X}\) are given by \(\mathfrak{X}_{:,j,:}\in\mathbb{R}^{m\times 1\times n}\), \(1\leqslant j\leqslant p\), which are obtained by slicing the tensor left to right.
* The **horizontal slices** are denoted as \(\mathfrak{X}_{i,:,:}\in\mathbb{R}^{1\times p\times n}\), \(1\leqslant i\leqslant m\), and are obtained by slicing the tensor top to bottom.
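As a quick (hypothetical) illustration, the three slicings correspond directly to NumPy indexing; note that NumPy drops the singleton dimension automatically, so each slice comes back already "squeezed":

```python
import numpy as np

# Small m x p x n tensor; axes ordered as (row, column, depth) to match the text.
m, p, n = 2, 3, 4
X = np.arange(m * p * n).reshape(m, p, n)

frontal_k = X[:, :, 0]     # frontal slice X_{:,:,k}: an m x p matrix
lateral_j = X[:, 0, :]     # lateral slice X_{:,j,:}: an m x n matrix (already squeezed)
horizontal_i = X[0, :, :]  # horizontal slice X_{i,:,:}: a p x n matrix (already squeezed)
tube_ij = X[0, 0, :]       # tube fiber X_{i,j,:}: a length-n vector

assert frontal_k.shape == (m, p)
assert lateral_j.shape == (m, n)
assert horizontal_i.shape == (p, n)
assert tube_ij.shape == (n,)
```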
**Tube fibers** are denoted as \(\mathfrak{X}_{i,j,:}\in\mathbb{R}^{1\times 1\times n}\), \(1\leqslant i\leqslant m,1\leqslant j\leqslant p\), and are obtained by holding both the first and the second indices fixed and varying the third index. Throughout the paper, we may also use the **vec** and **reshape** operators. The **vec** operator maps a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) to a vector \(\mathbf{a}\in\mathbb{R}^{mn\times 1}\) by column stacking of \(\mathbf{A}\). In Matlab notation, we have \(\mathbf{vec}(\mathbf{A})\equiv\mathbf{A}(:)\). The \(\mathbf{reshape}(\mathbf{a},[m,n])\) operation folds a given vector \(\mathbf{a}\in\mathbb{R}^{mn\times 1}\) into a matrix \(\mathbf{A}\) of size \(m\times n\) by filling this matrix one column at a time. We also define the **Tvec** and the **Tfold** operations. Given a tensor \(\mathfrak{X}\in\mathbb{R}^{m\times p\times n}\), we have

\[\mathbf{x}=\texttt{Tvec}(\mathfrak{X})=\left[\begin{array}{c}\mathbf{x}^{(1,1)}\\ \vdots\\ \mathbf{x}^{(i,j)}\\ \vdots\\ \mathbf{x}^{(m,p)}\end{array}\right]=\left[\begin{array}{c}\mathrm{vec}\left(\mathfrak{X}_{1,1,:}\right)\\ \vdots\\ \mathrm{vec}\left(\mathfrak{X}_{i,j,:}\right)\\ \vdots\\ \mathrm{vec}\left(\mathfrak{X}_{m,p,:}\right)\end{array}\right], \tag{1}\]

where \(\mathbf{x}^{(i,j)}=\mathrm{vec}(\mathfrak{X}_{i,j,:})\in\mathbb{R}^{n\times 1}\). This operation is also equivalent to storing the three-dimensional array in sequential memory locations using row-major ordering [21]. The Tfold operation is defined to be the inverse action of Tvec, i.e. \(\texttt{Tfold}\left(\texttt{Tvec}(\mathfrak{X})\right)=\mathfrak{X}.\) To make these definitions more concrete, we can think of the operation \(\texttt{Tvec}(\mathfrak{X})\) as stacking the tube fibers of \(\mathfrak{X}\) into a long vector \(\mathbf{x}\), and the Tfold operator as reversing the flattening back into a third-order tensor (referring to Fig. (2) for a visual illustration).
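The Tvec/Tfold pair can be sketched in NumPy (the function names `tvec` and `tfold` are ours, not the paper's): since Tvec stacks the tube fibers in row-major \((i,j)\) order and each fiber runs along the last axis, it coincides with a C-order flatten.

```python
import numpy as np

def tvec(X):
    # Stack the tube fibers X[i, j, :] in row-major (i, j) order.
    # In NumPy's default C (row-major) layout this is exactly a flatten.
    return X.reshape(-1)

def tfold(x, m, p, n):
    # Inverse action of tvec: refill the tube fibers in the same order.
    return x.reshape(m, p, n)

m, p, n = 2, 3, 4
X = np.random.default_rng(0).standard_normal((m, p, n))
x = tvec(X)
assert x.shape == (m * p * n,)
assert np.array_equal(x[:n], X[0, 0, :])     # first block is vec(X_{1,1,:})
assert np.array_equal(tfold(x, m, p, n), X)  # Tfold(Tvec(X)) = X
```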
Figure 2: Slices and tube fibers of a third-order tensor of size \(m\times p\times n\) and corresponding indexing in Matlab notation.

The **Mat** operation flattens two tensors into a block-diagonal matrix (see Fig. (3)). Given \(\mathbf{\mathcal{A}}\in\mathbb{R}^{m\times\ell\times n}\) and \(\mathbf{\mathcal{B}}\in\mathbb{R}^{\ell\times p\times n}\), \(\texttt{Mat}(\mathbf{\mathcal{A}},\mathbf{\mathcal{B}})\) yields \(\mathbf{H}\in\mathbb{R}^{mpn\times mp\ell}\) defined as

\[\mathbf{H}=\texttt{Mat}(\mathbf{\mathcal{A}},\mathbf{\mathcal{B}})=\underset{\begin{subarray}{c}1\leqslant i\leqslant m\\ 1\leqslant j\leqslant p\end{subarray}}{\oplus}\mathbf{H}^{(i,j)}, \tag{2}\]

where each diagonal block \(\mathbf{H}^{(i,j)}\in\mathbb{R}^{n\times\ell}\) is given entry-wise by \(\mathbf{H}^{(i,j)}_{k,t}=\mathbf{\mathcal{A}}_{i,t,k}\mathbf{\mathcal{B}}_{t,j,k}\). The Matlab squeeze command, \(\mathbf{A}=\mathbf{squeeze}(\mathbf{\mathcal{A}})\), takes a lateral slice \(\mathbf{\mathcal{A}}\in\mathbb{R}^{m\times 1\times n}\) to an \(m\times n\) matrix \(\mathbf{A}\)[19], while the Matlab permute operator rearranges the dimensions of an array. In particular, tensor transposes are defined based on the cyclic permutations of the indices of each entry. Details are given in Section 2.1. The Frobenius norm [22] of a tensor \(\mathbf{\mathcal{X}}\in\mathbb{R}^{m\times p\times n}\) is defined analogously to the matrix case as \(\|\mathbf{\mathcal{X}}\|_{F}=\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{p}\sum_{k=1}^{n}|\mathbf{\mathcal{X}}_{i,j,k}|^{2}}\).

### Overview of BM-algebra

The BM-product on a third-order tensor triplet was first introduced by Mesner and Bhattacharya [29, 30] and later generalized to tensors of arbitrary orders by Gnang and Filmus [10, 11]. In this section, we will focus on the third-order case.
**Definition 1**: _For a third-order conformable tensor triplet \(\mathbf{\mathcal{A}}\in\mathbb{R}^{m\times\ell\times n}\), \(\mathbf{\mathcal{B}}\in\mathbb{R}^{m\times p\times\ell}\), and \(\mathbf{\mathcal{C}}\in\mathbb{R}^{\ell\times p\times n}\), the BM-product \(\mathbf{\mathcal{X}}=\texttt{bmp}\left(\mathbf{\mathcal{A}},\mathbf{\mathcal{B}},\mathbf{\mathcal{C}}\right)\in\mathbb{R}^{m\times p\times n}\) is given entry-wise by_

\[\mathbf{\mathcal{X}}_{i,j,k}=\sum_{1\leqslant t\leqslant\ell}\mathbf{\mathcal{A}}_{i,t,k}\mathbf{\mathcal{B}}_{i,j,t}\mathbf{\mathcal{C}}_{t,j,k}. \tag{3}\]

Figure 3: Illustration of the tensor \(\texttt{Mat}\) operation that flattens the two tensors \(\mathbf{\mathcal{A}}\in\mathbb{R}^{m\times\ell\times n},\mathbf{\mathcal{B}}\in\mathbb{R}^{\ell\times p\times n}\) into a block-diagonal matrix \(\mathbf{H}\in\mathbb{R}^{mpn\times mp\ell}\).

Figure 2: Illustration of the tensor flattening and folding operations \(\texttt{Tvec}(\mathbf{\mathcal{X}})\) and \(\texttt{Tfold}(\mathbf{\mathcal{X}})\). The vectorization and the reshaping of the first three tube fibers \(\mathbf{\mathcal{X}}_{1,1,:},\mathbf{\mathcal{X}}_{1,2,:}\), and \(\mathbf{\mathcal{X}}_{1,3,:}\) are shown in this figure, and the rest of the tube fibers are omitted. In general, the tube fibers are flattened by rows when a \(\texttt{Tvec}\) operator is applied to the tensor \(\mathbf{\mathcal{X}}\) and restored in the same way when a \(\texttt{Tfold}\) operator is applied to the flattened vector to recover \(\mathbf{\mathcal{X}}\).

The BMP also conveniently expresses the notion of the BM-outer product. When \(\ell=1\), a BM-outer product corresponds to the BMP of the conformable order-2 tensor (matrix) slices, i.e. a lateral slice \(\mathbf{\mathcal{A}}\in\mathbb{R}^{m\times 1\times n}\), a frontal slice \(\mathbf{\mathcal{B}}\in\mathbb{R}^{m\times p\times 1}\), and a horizontal slice \(\mathbf{\mathcal{C}}\in\mathbb{R}^{1\times p\times n}\).
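Definition 1 can be sketched in a few lines of NumPy (the helper name `bmp` and the `einsum` formulation are our illustration, not the authors' code); the final check confirms that the entry-wise triple sum agrees with summing \(\ell\) BM-outer products of conformable matrix slices:

```python
import numpy as np

def bmp(A, B, C):
    # Entry-wise BM-product of Definition 1:
    # X[i,j,k] = sum_t A[i,t,k] * B[i,j,t] * C[t,j,k].
    return np.einsum('itk,ijt,tjk->ijk', A, B, C)

rng = np.random.default_rng(1)
m, p, n, ell = 4, 5, 6, 2
A = rng.standard_normal((m, ell, n))
B = rng.standard_normal((m, p, ell))
C = rng.standard_normal((ell, p, n))

X = bmp(A, B, C)
assert X.shape == (m, p, n)

# Spot-check one entry against the triple sum.
i, j, k = 1, 2, 3
assert np.isclose(X[i, j, k],
                  sum(A[i, t, k] * B[i, j, t] * C[t, j, k] for t in range(ell)))

# The BMP equals a sum of ell BM-outer products of matrix slices (the ell = 1 case).
X_sum = sum(bmp(A[:, t:t + 1, :], B[:, :, t:t + 1], C[t:t + 1, :, :])
            for t in range(ell))
assert np.allclose(X, X_sum)
```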
Consequently, the BM-product \(\mathbf{\mathcal{X}}=\texttt{bmp}\left(\mathbf{\mathcal{A}},\mathbf{\mathcal{B}},\mathbf{\mathcal{C}}\right)\) given in Eq. (3) can be written equivalently as a sum of BM-outer products of matrix slices

\[\mathbf{\mathcal{X}}=\sum_{1\leq t\leq\ell}\texttt{bmp}\left(\mathbf{\mathcal{A}}_{:,t,:},\mathbf{\mathcal{B}}_{:,:,t},\mathbf{\mathcal{C}}_{t,:,:}\right). \tag{2.4}\]

A figure illustration is shown in Fig. (2.4).

**Remark 1**: _Every BM-product expresses a sum of BM-outer products and vice versa._

The BM-outer product in Eq. (2.4) induces a natural notion of the tensor BM-rank.

**Definition 2** **(BM-rank [11])**: _The BM-rank of \(\mathbf{\mathcal{X}}\in\mathbb{R}^{m\times p\times n}\) is the minimum number of BM-outer products of conformable matrix slices that sum up to \(\mathbf{\mathcal{X}}\)._

Tensor transposes are introduced analogously to the matrix transpose [10, 11].

**Definition 3**: _Suppose \(\mathbf{\mathcal{X}}\) is a third-order tensor of size \(m\times p\times n\); then \(\mathbf{\mathcal{X}}\) has the following transpose operations, which are given by cyclic permutations of the indices of each entry:_

\[\mathbf{\mathcal{X}}^{\top} =\texttt{permute}(\mathbf{\mathcal{X}},[2,3,1]);\;\mathbf{\mathcal{X}}^{\top}\in\mathbb{R}^{p\times n\times m}.\] \[\mathbf{\mathcal{X}}^{\top^{2}} =\left(\mathbf{\mathcal{X}}^{\top}\right)^{\top}=\texttt{permute}(\mathbf{\mathcal{X}},[3,1,2]);\;\mathbf{\mathcal{X}}^{\top^{2}}\in\mathbb{R}^{n\times m\times p}. \tag{2.5}\]

_As a result, \(\mathbf{\mathcal{X}}^{\top^{3}}=\mathbf{\mathcal{X}}\)._

**Definition 4**: _When \(\mathbf{\mathcal{X}}\) is a BM-product of tensors \(\mathbf{\mathcal{A}},\mathbf{\mathcal{B}}\) and \(\mathbf{\mathcal{C}}\), i.e.
\(\mathbf{\mathcal{X}}=\texttt{bmp}(\mathbf{\mathcal{A}},\mathbf{\mathcal{B}},\mathbf{\mathcal{C}})\), then the transpose of the BM-product is a BM-product of transposes such that_

\[\mathbf{\mathcal{X}}^{\top}=\texttt{bmp}(\mathbf{\mathcal{A}},\mathbf{\mathcal{B}},\mathbf{\mathcal{C}})^{\top}=\texttt{bmp}\left(\mathbf{\mathcal{B}}^{\top},\mathbf{\mathcal{C}}^{\top},\mathbf{\mathcal{A}}^{\top}\right). \tag{2.6}\]

**Theorem 2**: _The BM-rank of a third-order tensor is equal to the BM-rank of its transpose._

Proof. Assume \(\mathfrak{X}\in\mathbb{R}^{m\times p\times n}\) has BM-rank \(r\leq\min\{m,p,n\}\)[11]. Suppose that \(\mathfrak{X}^{\top}\in\mathbb{R}^{p\times n\times m}\) has BM-rank \(s\neq r\). By Definition 2, there exists a tensor triplet \(\mathbf{\mathcal{A}}\in\mathbb{R}^{p\times s\times m}\), \(\mathbf{\mathcal{B}}\in\mathbb{R}^{p\times n\times s}\), and \(\mathbf{\mathcal{C}}\in\mathbb{R}^{s\times n\times m}\) such that

\[\mathfrak{X}^{\top}=\sum_{1\leq t\leq s}\texttt{bmp}\left(\mathbf{\mathcal{A}}_{:,t,:},\mathbf{\mathcal{B}}_{:,:,t},\mathbf{\mathcal{C}}_{t,:,:}\right). \tag{2.7}\]

Figure 2.4: Illustration of the BM-product of a conformable tensor triplet \(\mathbf{\mathcal{A}},\mathbf{\mathcal{B}}\), and \(\mathbf{\mathcal{C}}\) as a sum of BM-outer products of matrix slices.

Then by taking the transpose of \(\mathfrak{X}^{\top}\) twice, we have

\[\left(\mathfrak{X}^{\top}\right)^{\top^{2}} =\sum_{1\leqslant t\leqslant s}\mathsf{bmp}\left(\mathbf{\mathcal{A}}_{:,t,:},\mathbf{\mathcal{B}}_{:,:,t},\mathbf{\mathcal{C}}_{t,:,:}\right)^{\top^{2}} \tag{8}\] \[\implies\mathfrak{X}^{\top^{3}}=\mathfrak{X}=\sum_{1\leqslant t\leqslant s}\mathsf{bmp}\left(\mathbf{\mathcal{C}}^{\top^{2}}_{:,t,:},\mathbf{\mathcal{A}}^{\top^{2}}_{:,:,t},\mathbf{\mathcal{B}}^{\top^{2}}_{t,:,:}\right).\]

Clearly, taking transposes does not change the total number of BM-outer products of matrix slices, so Eq. (8) expresses \(\mathfrak{X}\) as a sum of \(s\) BM-outer products.
Moreover, since \(\mathfrak{X}\) has BM-rank \(r\), which is by definition the minimum number of BM-outer products of matrix slices that sum up to \(\mathfrak{X}\), \(s\) cannot be smaller than \(r\). Transposing a minimal BM-decomposition of \(\mathfrak{X}\) likewise expresses \(\mathfrak{X}^{\top}\) as a sum of \(r\) BM-outer products, so \(s\) cannot be bigger than \(r\). Hence, the BM-rank of \(\mathfrak{X}^{\top}\) must equal the BM-rank of \(\mathfrak{X}\).

## 3 BM-rank Upper Bound by Slicewise SVDs

In [11], the BM-rank of a generic tensor \(\mathfrak{X}\in\mathbb{R}^{m\times p\times n}\) is said to be bounded above by \(\min\{m,p,n\}\). In this section, we give a bound on the BM-rank of a given tensor which is possibly smaller, depending on properties of the tensor. This also lays the groundwork for an algorithm to compute an approximate BMP decomposition. We first show how matrix SVDs of the frontal slices (recall Fig. (2)) can be reorganized into BMP form. This gives one upper bound on the BM-rank of the tensor. Taking transposes, we then apply the argument to all three modes of slicing to get a final upper bound on the BM-rank of the tensor. For an \(m\times p\times n\) tensor \(\mathfrak{X}\), let \(r=\min(m,p)\). Express the SVD of each frontal slice as

\[\mathfrak{X}_{:,:,k}=\sum_{t=1}^{r}\left(\mathbf{u}_{t}^{(k)}\sigma_{t}^{(k)}\right)\left(\mathbf{v}_{t}^{(k)}\right)^{\top},\quad\forall\ 1\leqslant k\leqslant n. \tag{9}\]

The rank of \(\mathfrak{X}_{:,:,k}\) is \(r_{k}\leqslant r\). Note that if \(r_{k}<r\), then \(\sigma_{r_{k}+1}^{(k)},\ldots,\sigma_{r}^{(k)}=0\). Set \(\mathbf{\mathcal{A}}_{:,t,k}=\mathbf{u}_{t}^{(k)}\sigma_{t}^{(k)}\) and \(\mathbf{\mathcal{C}}_{t,:,k}=\left(\mathbf{v}_{t}^{(k)}\right)^{\top}\) for \(1\leqslant t\leqslant r\), and let \(\mathbf{\mathcal{B}}_{:,:,t}=\mathbf{e}\mathbf{e}^{\top}\) for all \(t\), with \(\mathbf{e}\) the vector of ones. The following theorem shows that the frontal slice SVDs can be combined into a BMP of this triplet.
**Theorem 3.1**: _Given the setup above with \(\mathbf{\mathcal{A}},\mathbf{\mathcal{C}}\) defined using the frontal slice SVDs, and the frontal slices of \(\mathbf{\mathcal{B}}\) as the rank-1 matrix \(\mathbf{e}\mathbf{e}^{\top}\), we have_

\[\mathfrak{X}=\mathsf{bmp}(\mathbf{\mathcal{A}},\mathbf{\mathcal{B}},\mathbf{\mathcal{C}}). \tag{10}\]

Proof. Based on the definition of the BMP given in Eq. (3), we have

\[\mathfrak{X}_{i,j,k}=\sum_{t=1}^{r}\mathbf{\mathcal{A}}_{i,t,k}\mathbf{\mathcal{B}}_{i,j,t}\mathbf{\mathcal{C}}_{t,j,k}=\sum_{t=1}^{r}\mathbf{\mathcal{A}}_{i,t,k}\mathbf{\mathcal{C}}_{t,j,k},\]

since \(\mathbf{\mathcal{B}}\) is a tensor of all ones. Then holding the \(k\)-th index fixed, we have

\[\mathfrak{X}_{:,:,k}=\sum_{t=1}^{r}\mathbf{\mathcal{A}}_{:,t,k}\mathbf{\mathcal{C}}_{t,:,k}=\sum_{t=1}^{r}\left(\mathbf{u}_{t}^{(k)}\sigma_{t}^{(k)}\right)\left(\mathbf{v}_{t}^{(k)}\right)^{\top}. \tag{11}\]

**Corollary**: _If \(r_{k}<r\) for all \(1\leq k\leq n\), the BM-rank of \(\mathfrak{X}\) is bounded above by \(\max\limits_{1\leq k\leq n}(r_{k})\)._

**Corollary**: _Because we can repeat the above argument for \(\mathfrak{X}^{\top}\) and \(\mathfrak{X}^{\top^{2}}\), the BM-rank cannot exceed the smallest of the maximum slicewise ranks of the \(n+m+p\) matrix slices given by_

\[\mathfrak{X}_{:,:,k},\quad\forall\ 1\leq k\leq n;\] \[\mathtt{squeeze}(\mathfrak{X}_{i,:,:}),\quad\forall\ 1\leq i\leq m;\] \[\mathtt{squeeze}(\mathfrak{X}_{:,j,:}),\quad\forall\ 1\leq j\leq p.\]

**Non-uniqueness.** The grouping of the slicewise singular values with the corresponding left singular vectors was made arbitrarily in order to preserve the scaling constants of the left singular vectors. This illustrates a non-uniqueness property of the BMD. To see this more generally, let \(\ell=1\), \(\mathbf{A}\) be \(m\times n\), \(\mathbf{B}\) be \(m\times p\), \(\mathbf{C}\) be \(p\times n\).
Then define two tensor triplets

\[\begin{split}\boldsymbol{\mathcal{A}}_{:,1,:}=\mathbf{A},\qquad\boldsymbol{\mathcal{B}}_{:,:,1}&=\mathbf{B},\qquad\boldsymbol{\mathcal{C}}_{1,:,:}=\mathbf{C},\\ \tilde{\boldsymbol{\mathcal{A}}}_{:,1,:}=\mathbf{D}_{1}\mathbf{A},\qquad\tilde{\boldsymbol{\mathcal{C}}}_{1,:,:}&=\mathbf{D}_{2}\mathbf{C},\qquad\tilde{\boldsymbol{\mathcal{B}}}_{:,:,1}=\mathbf{D}_{1}^{-1}\mathbf{B}\mathbf{D}_{2}^{-1},\end{split}\]

where \(\mathbf{D}_{1}\) and \(\mathbf{D}_{2}\) are invertible \(m\times m\) and \(p\times p\) diagonal matrices, respectively. Then

\[\mathtt{bmp}(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{B}},\boldsymbol{\mathcal{C}})=\mathtt{bmp}(\tilde{\boldsymbol{\mathcal{A}}},\tilde{\boldsymbol{\mathcal{B}}},\tilde{\boldsymbol{\mathcal{C}}}),\]

showing that the factors are non-unique.

## 4 Compressed Background/Foreground Modeling

The video background and foreground separation task is one of the important applications of video-based computer vision [1, 25, 38]. Background subtraction is often used in this task in order to detect moving foreground objects. Hence, accurate modeling of the video background under complex, diverse, and cluttered conditions is of paramount importance for most background/foreground separation methods. A comprehensive review of recent challenges and different models to deal with specific video conditions in background subtraction applications is given in [8]. One of the research directions in the field focuses on utilizing matrix decomposition methods. By vectorizing video frames and stacking them column-wise into a matrix, decomposition methods such as robust principal component analysis (RPCA) can be used to separate the video data matrix into low-rank (background) and sparse (foreground) components [1]. However, flattening three-dimensional video data into a matrix presents obvious disadvantages. For instance, vectorizing the video frames destroys the intrinsic spatial structure within frames.
Moreover, complex disturbances within the background can be ignored after flattening, which would potentially lead to a lower background reconstruction quality [27]. To overcome these difficulties of matrix-based methods, an increasing number of tensor decomposition methods have been proposed for background subtraction tasks [2, 27, 17, 16, 35]. Similar to the matrix-based methods, the tensor-based techniques model the background video with a low tensor-rank component and obtain the sparse foreground by subtracting the background from the video data. Although for most of the aforementioned methods the low-rank property of the background suggests that it is also possible to compress the stationary background for more efficient data storage, the step of background subtraction does not guarantee a compressed representation of the foreground video, even though detecting the foreground objects is usually the main focus in real-life applications. In this section, we discuss a generative spatiotemporal video model with a low tensor BM-rank. Since in our model the stationary background image and the moving foreground object can be incorporated into two separate BM-rank 1 components, a low BM-rank tensor decomposition separates the video data into compressed representations of both the background and the foreground simultaneously.

### Generative Spatiotemporal Model With Low BM-rank

Suppose we have a static background image \(\mathbf{X}\in\mathbb{R}^{m\times n}\) and an object (of constant intensity \(\alpha\) and constant rectangular size \(r_{1}\times r_{2}\) for simplicity) moving across the background over \(p\) time steps. At time \(k\), \(1\leqslant k\leqslant p\), the object is located at \(\mathcal{I}_{k}\times\mathcal{J}_{k}:=[i_{k},i_{k}+r_{1}]\times[j_{k},j_{k}+r_{2}]\) with \(1\leqslant i_{k}\leqslant m-r_{1}\) and \(1\leqslant j_{k}\leqslant n-r_{2}\).
We think of the rectangle as a binary image, \(\mathbf{1}_{\mathcal{I}_{k},\mathcal{J}_{k}}\), where

\[\mathbf{1}_{\mathcal{I}_{k},\mathcal{J}_{k}}(i,j)=\begin{cases}\alpha,\text{ if }(i,j)\in\mathcal{I}_{k}\times\mathcal{J}_{k}\\ 0,\text{ otherwise}\end{cases}. \tag{10}\]

Next, define a vector \(\mathbf{b}^{(k)}\in\mathbb{R}^{m}\) such that \(\mathbf{b}^{(k)}_{i}=1\) when \(i\in\mathcal{I}_{k}\) and is 0 otherwise. Similarly, define \(\mathbf{c}^{(k)}\in\mathbb{R}^{n}\) such that \(\mathbf{c}^{(k)}_{j}=1\) when \(j\in\mathcal{J}_{k}\) and is 0 otherwise. Finally, let \(\mathbf{E}\in\mathbb{R}^{m\times n}\) be a matrix of all ones. Then the rectangle image at time \(k\) can be expressed as

\[\mathbf{1}_{\mathcal{I}_{k},\mathcal{J}_{k}}=\alpha\operatorname{diag}(\mathbf{b}^{(k)})\cdot\mathbf{E}\cdot\operatorname{diag}(\mathbf{c}^{(k)}), \tag{11}\]

where \(\operatorname{diag}(\mathbf{v})\) means the square, diagonal matrix with entries provided by the vector argument \(\mathbf{v}\). Hence, the \(k\)-th video frame that captures both the background image and the moving object, denoted as \(\mathbf{T}^{(k)}\), can be expressed as

\[\mathbf{T}^{(k)} :=\mathbf{X}-\operatorname{diag}(\mathbf{b}^{(k)})\cdot\mathbf{X}\cdot\operatorname{diag}(\mathbf{c}^{(k)})+\mathbf{1}_{\mathcal{I}_{k},\mathcal{J}_{k}} \tag{12}\] \[=\operatorname{diag}(\texttt{ones}(m))\mathbf{X}\operatorname{diag}(\texttt{ones}(n))+\operatorname{diag}(\mathbf{b}^{(k)})\,(-\mathbf{X}+\alpha\mathbf{E})\operatorname{diag}(\mathbf{c}^{(k)}).\]

The term \(-\operatorname{diag}(\mathbf{b}^{(k)})\mathbf{X}\operatorname{diag}(\mathbf{c}^{(k)})\) "zeros out" the entries in the stationary image where the object is living in this frame, and the term \(\mathbf{1}_{\mathcal{I}_{k},\mathcal{J}_{k}}\) puts the constant-value rectangle over those pixels. Both are necessary to ensure that the object retains its constant value. Define the third-order tensor triplet:

1. an \(m\times 2\times n\) tensor \(\boldsymbol{\mathcal{A}}\) with \(\boldsymbol{\mathcal{A}}_{:,1,:}=\mathbf{X}\) and \(\boldsymbol{\mathcal{A}}_{:,2,:}=-\mathbf{X}+\alpha\mathbf{E}\),
2. an \(m\times p\times 2\) tensor \(\boldsymbol{\mathcal{B}}\) with \(\boldsymbol{\mathcal{B}}_{:,:,1}=\texttt{ones}(m,p)\) and \(\boldsymbol{\mathcal{B}}_{:,k,2}=\mathbf{b}^{(k)}\),
3. a \(2\times p\times n\) tensor \(\boldsymbol{\mathcal{C}}\) with \(\boldsymbol{\mathcal{C}}_{1,:,:}=\texttt{ones}(p,n)\) and \(\boldsymbol{\mathcal{C}}_{2,k,:}=\mathbf{c}^{(k)}\).

The video tensor is then a sum of two BM-rank 1 tensors: the stationary background tensor and the foreground moving-object tensor, i.e.

\[\mathfrak{X}=\mathfrak{X}^{\text{bg}}+\mathfrak{X}^{\text{fg}}:=\texttt{bmp}(\boldsymbol{\mathcal{A}}_{:,1,:},\boldsymbol{\mathcal{B}}_{:,:,1},\boldsymbol{\mathcal{C}}_{1,:,:})+\texttt{bmp}(\boldsymbol{\mathcal{A}}_{:,2,:},\boldsymbol{\mathcal{B}}_{:,:,2},\boldsymbol{\mathcal{C}}_{2,:,:}). \tag{13}\]

The \(k\)-th lateral slice of the tensor \(\mathfrak{X}\) is given exactly by Eq. (12), i.e. \(\texttt{squeeze}(\mathfrak{X}_{:,k,:})=\mathbf{T}^{(k)}\). We can augment this generative spatiotemporal video model to add various complexities. For example, if we wanted to model multiple constant objects moving at a time, this is also possible in the BM-rank 2 model if the index sets for the objects do not overlap in a frame. If we wanted our object to have different pixel values as opposed to a constant value, we could do this with a BM-rank 3 construction by changing the values of the entries in \(\mathbf{b}^{(k)}\), \(\mathbf{c}^{(k)}\) on (only) the third term in the expression to be something other than 1, though this will not be able to capture every possible pixel pattern in a rectangle. The point is that our model illustrates why we might reasonably expect to be able to capture the foreground and the background with the compressive power of a low BM-rank approximation.
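Under the setup above, the BM-rank-2 generative model can be sketched in NumPy (the helper `bmp`, the random background, and the one-pixel-per-frame object trajectory are our illustrative assumptions, not the authors' code):

```python
import numpy as np

def bmp(A, B, C):
    # BM-product of Definition 1: X[i,j,k] = sum_t A[i,t,k]*B[i,j,t]*C[t,j,k].
    return np.einsum('itk,ijt,tjk->ijk', A, B, C)

rng = np.random.default_rng(2)
m, n, p = 8, 10, 5           # frame height, frame width, number of frames
r1, r2, alpha = 2, 3, 1.0    # object size and constant intensity
Xbg = rng.random((m, n))     # static background image X

A = np.zeros((m, 2, n)); B = np.zeros((m, p, 2)); C = np.zeros((2, p, n))
A[:, 0, :] = Xbg             # background term
A[:, 1, :] = -Xbg + alpha    # -X + alpha*E
B[:, :, 0] = 1.0
C[0, :, :] = 1.0
for k in range(p):           # object slides one pixel per frame (our choice)
    i0, j0 = k % (m - r1), k % (n - r2)
    B[i0:i0 + r1, k, 1] = 1.0   # indicator b^(k)
    C[1, k, j0:j0 + r2] = 1.0   # indicator c^(k)

V = bmp(A, B, C)             # BM-rank-2 video tensor, frames as lateral slices
k = 2
frame = V[:, k, :]
i0, j0 = k % (m - r1), k % (n - r2)
assert np.allclose(frame[i0:i0 + r1, j0:j0 + r2], alpha)   # object pixels
mask = np.ones((m, n), dtype=bool); mask[i0:i0 + r1, j0:j0 + r2] = False
assert np.allclose(frame[mask], Xbg[mask])                 # background elsewhere
```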
Moreover, in one of the numerical experiments, we will use this generative model to illustrate the superiority of our low BM-rank model vs. a state-of-the-art DMD-based approach.
### Spatiotemporal Slice-based SVD (SS-SVD)
In the recent study by Kajo et al. [16], the slice-wise SVD method was applied to the spatiotemporal slices of an input video tensor to extract the background information. This method was thus given the name spatiotemporal slice-based SVD (SS-SVD). Since it has a connection to our initialization step in the BMD-ALS algorithm, we will briefly describe their method here. Given a video of \(p\) frames with size \(m\times n\), we order the frames as lateral slices to form a third-order tensor \(\mathfrak{X}\) of size \(m\times p\times n\). Each frontal slice, \(\mathfrak{X}_{:,:,k},1\leq k\leq n\), or horizontal slice \(\mathfrak{X}_{i,:,:},1\leq i\leq m\), is called a spatiotemporal slice containing both space and time information. For consistency purposes, we use the frontal spatiotemporal slices in the following. The SS-SVD method then applies a low-rank approximation to each spatiotemporal slice using truncated SVDs with a target matrix rank \(\ell\), \(1\leq\ell\leq p\): \[\mathfrak{X}_{:,:,k}\approx\sum_{t=1}^{\ell}\mathbf{u}_{t}^{(k)}\sigma_{t}^{(k)}\left(\mathbf{v}_{t}^{(k)}\right)^{\top},1\leq k\leq n. \tag{4.5}\] The first rank-1 matrix reconstruction corresponds to the largest singular value of each slice, and they argue that it captures mainly the dominant background scene across slices1. This set of tensor slices \(\mathbf{\hat{\mathfrak{X}}}_{:,:,k}^{\text{bg}}\) for all \(k=1,\ldots,n\) is given by Footnote 1: We note that if the data is non-negative, the first rank-1 triples will be non-negative by the Perron-Frobenius theorem. \[\mathbf{\hat{\mathfrak{X}}}_{:,:,k}^{\text{bg}}=\mathbf{u}_{1}^{(k)}\sigma_{1}^{(k)}\left(\mathbf{v}_{1}^{(k)}\right)^{\top}.
\tag{4.6}\] The foreground in [16] is taken to be the difference between the given video data and the reconstructed background, i.e. \(\mathbf{\hat{\mathfrak{X}}}^{\text{fg}}=\mathfrak{X}-\mathbf{\hat{\mathfrak{X}}}^{\text{bg}}\), which is not a compressed representation. Next, using our generative model for insight, we will explain why we expect to be able to exploit the SS-SVD for our purposes.
### SS-SVD as BMD Analysis
First, we observe that a truncated SS-SVD can be reinterpreted as a low-rank BMD by Theorem 3.1. Now, consider our generative model in which a single pixel-sized object moves left to right in row \(i\) of the background image \(\mathbf{X}\), one pixel per time step. This will have BM-rank two. Each frame is a lateral slice \(\boldsymbol{\mathcal{X}}_{:,j,:}\). Then it is easy to see that the rank of each frontal slice \(\boldsymbol{\mathcal{X}}_{:,:,k}\) is either \(1\) or \(2\), because each column \(\boldsymbol{\mathcal{X}}_{:,j,k}\) is either \(\mathbf{X}_{:,k}\) or \(\mathbf{X}_{:,k}+(-x_{ik}+\alpha)\mathbf{e}_{i}\). Thus, indeed, multiples of \(\mathbf{u}_{1}^{(k)}\) will accurately approximate \(\mathbf{X}_{:,k}\). However, by orthogonality and the fact that \(c^{(k)}\mathbf{u}_{1}^{(k)}\approx\mathbf{X}_{:,k}\) for some scalar \(c^{(k)}\), \(\mathbf{u}_{2}^{(k)}\) is approximately some multiple of \((-x_{ik}+\alpha)\mathbf{e}_{i}-\kappa_{k}\mathbf{X}_{:,k}\). In sum, the \(t=1\) terms approximate \(\boldsymbol{\mathcal{X}}^{\mathrm{bg}}\) in (13) and the \(t=2\) terms approximate \(\boldsymbol{\mathcal{X}}^{\mathrm{fg}}\) in (13). So both the foreground and the background have compressed representations in BMD form. We will utilize this observation in our choice of starting guess for our algorithm.
## 5 Low BM-rank Tensor Approximation
We now introduce the third-order tensor BM-decomposition (BMD) and an alternating least-squares approach for computing a low BM-rank approximation.
We will also provide ways to make reasonable starting guesses for application-specified BMP approximation. Given a third-order tensor \(\boldsymbol{\mathcal{X}}\in\mathbb{R}^{m\times p\times n}\) with BM-rank \(r\), our goal is to compute a decomposition with BM-rank \(\ell\), \(1\leqslant\ell\leqslant r\), which best approximates \(\boldsymbol{\mathcal{X}}\) such that \[\min_{\tilde{\boldsymbol{\mathcal{X}}}}\|\boldsymbol{\mathcal{X}}-\tilde{\boldsymbol{\mathcal{X}}}\|_{F}^{2}\ \text{with}\ \tilde{\boldsymbol{\mathcal{X}}}=\sum_{t=1}^{\ell}\texttt{bmp}\left(\boldsymbol{\mathcal{A}}_{:,t,:},\boldsymbol{\mathcal{B}}_{:,:,t},\boldsymbol{\mathcal{C}}_{t,:,:}\right). \tag{5.1}\] Since the BM-product is a ternary multiplication of the factor tensors \(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{B}}\), and \(\boldsymbol{\mathcal{C}}\), finding a decomposition of \(\boldsymbol{\mathcal{X}}\) in terms of the factor tensors would require solving a nonlinear least-squares problem. However, while the Jacobian of the residual vector, \(\operatorname{vec}(\boldsymbol{\mathcal{X}}-\texttt{bmp}(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{B}},\boldsymbol{\mathcal{C}}))\) (see the Supplementary notes for details), has \(3\ell\) non-zeros per row, \(mnp\) rows and \(\ell(mn+mp+np)\) columns, it can be rank deficient. When \(mnp>mn+mp+np\), the rank of the Jacobian cannot exceed \(\ell(mn+mp+np-m-n-p+1)\). Levenberg-Marquardt is one nonlinear least-squares solver option that ensures we can compute a search direction. We could add constraints to the cost functional to make the problem better posed. However, the high dimension and matrix (sparsity) structure mean that considerable care would need to go into the implementation to render this a computationally feasible approach for large tensors. To avoid these difficulties, we focus on deriving an alternating least-squares (ALS) algorithm to solve the BMD problem, which we call BMD-ALS.
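The rank deficiency is easy to observe numerically. The sketch below (Python; a hypothetical small example with the Jacobian assembled entry-by-entry from the elementwise BM-product formula and our own flattening convention) builds the residual Jacobian at a random point for \(\ell=1\) and checks that its rank does not exceed \(\ell(mn+mp+np-m-n-p+1)\), even though it has more rows than columns.

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, n, ell = 4, 4, 4, 1            # mnp = 64 > mn + mp + np = 48
A = rng.random((m, ell, n)) + 0.5    # keep entries away from zero
B = rng.random((m, p, ell)) + 0.5
C = rng.random((ell, p, n)) + 0.5

rows, cols = m * p * n, ell * (m * n + m * p + n * p)
J = np.zeros((rows, cols))
offB = m * ell * n
offC = offB + m * p * ell
for i in range(m):
    for j in range(p):
        for k in range(n):
            r = (i * p + j) * n + k                                       # row of entry X[i,j,k]
            for t in range(ell):
                J[r, (i * ell + t) * n + k] = B[i, j, t] * C[t, j, k]     # d/dA[i,t,k]
                J[r, offB + (i * p + j) * ell + t] = A[i, t, k] * C[t, j, k]  # d/dB[i,j,t]
                J[r, offC + (t * p + j) * n + k] = A[i, t, k] * B[i, j, t]    # d/dC[t,j,k]

bound = ell * (m * n + m * p + n * p - m - n - p + 1)
rank = np.linalg.matrix_rank(J)
assert rank <= bound < cols          # rank deficient despite 64 rows vs 48 columns
```

The deficiency reflects the scaling invariances of the ternary product: rescaling fibers of one factor and compensating in another leaves \(\texttt{bmp}(\mathcal{A},\mathcal{B},\mathcal{C})\) unchanged, producing exact null directions of the Jacobian.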
We show the work involved per iteration can be decoupled into several small problems that can be solved independently of one another. The ALS algorithm has been widely used for computing the tensor CP decomposition [3, 22] and the tensor block term decomposition [5]. Despite the limitations of the algorithm, such as slow convergence, the dependency on the starting guesses, the swamp effect [32] and more [26, 41], the simplicity of understanding and implementing the ALS algorithm with superior quality of results still marks it as today's "workhorse" algorithm for CP decomposition [22]. We will show in the following subsections that our ALS algorithm for BMD is also straightforward to implement, that the small size of the subproblems allows us to find the unique minimum norm least-squares solution of each, and that their independent nature allows for parallelizability. Moreover, we will show that the slicewise SVDs serve as an excellent starting guess for BMD-ALS, particularly for our video application (see Sec. 4), for which we can compute the initial error easily.
### Phase I - Starting Guess
Suppose the third-order tensor \(\boldsymbol{\mathcal{X}}\in\mathbb{R}^{m\times p\times n}\) has BM-rank \(r\). Given an integer \(\ell\), \(1\leqslant\ell\leqslant r\), as the target BM-rank, we compute the slicewise SVDs and truncate them to \(\ell\) terms. Using the same setup as in Theorem 3.1, this can be expressed in BMD form. We call this BM-rank \(\ell\) tensor \(\widehat{\mathfrak{X}}\). The Eckart-Young theorem gives us an expression for the quality of this starting guess: \[\|\mathfrak{X}-\widehat{\mathfrak{X}}\|_{F}^{2}=\sum_{k=1}^{n}\sum_{t=\ell+1}^{r_{k}}\left(\sigma_{t}^{(k)}\right)^{2}, \tag{5.2}\] where \(r_{k}\) is the matrix rank of the \(k\)-th frontal slice of \(\mathfrak{X}\).
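A minimal sketch of this starting guess follows (Python; we assume, consistent with the SS-SVD discussion in Sec. 4, that the Theorem 3.1 construction places the scaled left singular vectors of each frontal slice in \(\boldsymbol{\mathcal{A}}\), the right singular vectors in \(\boldsymbol{\mathcal{C}}\), and takes \(\boldsymbol{\mathcal{B}}\) to be all-ones), together with a check of the error formula (5.2):

```python
import numpy as np

def bmp(A, B, C):
    # X[i,j,k] = sum_t A[i,t,k] * B[i,j,t] * C[t,j,k]
    return np.einsum('itk,ijt,tjk->ijk', A, B, C)

rng = np.random.default_rng(0)
m, p, n, ell = 5, 6, 4, 2
X = rng.random((m, p, n))

A = np.zeros((m, ell, n))
B = np.ones((m, p, ell))     # all-ones middle factor: slicewise SVDs as a BMD
C = np.zeros((ell, p, n))
tail = 0.0                   # truncated energy, right-hand side of (5.2)
for k in range(n):
    U, s, Vt = np.linalg.svd(X[:, :, k], full_matrices=False)
    A[:, :, k] = U[:, :ell] * s[:ell]      # scaled left singular vectors
    C[:, :, k] = Vt[:ell, :]               # right singular vectors
    tail += np.sum(s[ell:] ** 2)

Xhat = bmp(A, B, C)
err = np.linalg.norm(X - Xhat) ** 2
assert np.isclose(err, tail)   # Eckart-Young error formula, Eq. (5.2)
```

With \(\boldsymbol{\mathcal{B}}\) all-ones, the BM-product collapses to an independent rank-\(\ell\) truncated SVD of each frontal slice, which is exactly the SS-SVD starting guess.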
The discussion in Section 4.2 leads us to believe that the tensor resulting from the \(\ell=1\) term will be representative of the stationary component of the image, and the remainder will capture motion components.
### Phase II - Linear Least-Squares Problem
Next, we optimize for a better choice of the middle tensor \(\boldsymbol{\mathcal{B}}\), holding the pair \(\boldsymbol{\mathcal{A}}\) and \(\boldsymbol{\mathcal{C}}\) fixed. That is, we find \[\hat{\boldsymbol{\mathcal{B}}}=\min_{\boldsymbol{\mathcal{B}}\in\mathbb{R}^{m\times p\times\ell}}\|\mathfrak{X}-\texttt{bmp}(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{B}},\boldsymbol{\mathcal{C}})\|_{F}^{2}. \tag{5.3}\] We now show that this step can be calculated in a straightforward way. **Theorem 5.1**: _The tensor least-squares problem_ \[\min_{\boldsymbol{\mathcal{B}}\in\mathbb{R}^{m\times p\times\ell}}\|\mathfrak{X}-\texttt{bmp}(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{B}},\boldsymbol{\mathcal{C}})\|_{F}^{2}, \tag{5.4}\] _can be equivalently written as the following matrix least-squares problem_ \[\min_{\mathbf{b}\in\mathbb{R}^{mp\ell\times 1}}\|\mathbf{x}-\mathbf{H}\mathbf{b}\|_{F}^{2}, \tag{5.5}\] _where \(\mathbf{H}=\mathtt{Mat}(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{C}})\) is a direct sum of \(mp\) matrices, each of size \(n\times\ell\),_ \[\text{i.e.}\ \mathbf{H}=\bigoplus_{1\leqslant i\leqslant m,\,1\leqslant j\leqslant p}\mathbf{H}^{(i,j)};\ \text{where}\ \mathbf{H}^{(i,j)}_{k,t}=\boldsymbol{\mathcal{A}}_{i,t,k}\boldsymbol{\mathcal{C}}_{t,j,k}, \tag{5.6}\] _\(\forall\,1\leqslant i\leqslant m,\ 1\leqslant j\leqslant p,\ 1\leqslant k\leqslant n,\ 1\leqslant t\leqslant\ell\), and_ \[\mathbf{x}=\mathtt{Tvec}(\mathfrak{X})=\left[\begin{array}{c}\vdots\\ \mathbf{x}^{(i,j)}\\ \vdots\end{array}\right];\quad\mathbf{b}=\mathtt{Tvec}(\boldsymbol{\mathcal{B}})=\left[\begin{array}{c}\vdots\\ \mathbf{b}^{(i,j)}\\ \vdots\end{array}\right]\] _with \(\mathbf{x}^{(i,j)}\in\mathbb{R}^{n\times 1}\) and \(\mathbf{b}^{(i,j)}\in\mathbb{R}^{\ell\times 1}\)._ _Furthermore, the matrix least-squares problem decouples into \(mp\) least-squares subproblems_ \[\min_{\mathbf{b}^{(i,j)}\in\mathbb{R}^{\ell\times 1}}\left\|\mathbf{x}^{(i,j)}-
\mathbf{H}^{(i,j)}\mathbf{b}^{(i,j)}\right\|_{F}^{2}. \tag{5.7}\] The proof is included in the supplementary materials. By Theorem 5.1, solving the objective function in Eq. (5.4) is equivalent to solving \(mp\) smaller matrix least-squares problems whose solutions \(\hat{\mathbf{b}}^{(i,j)}\) are the tube fibers of \(\hat{\boldsymbol{\mathcal{B}}}\). The total cost of implementing the matrix SVD for a matrix of size \(m\times p\) with \(m\leq p\) is \(\mathcal{O}(mp^{2})\) [39]. So the total cost in phase I would be \(\mathcal{O}(nmp^{2})\) for computing the \(n\) frontal slice matrix SVDs, and these can be done in parallel. Importantly, the individual matrices do not have to be full rank, a point that is not addressed in [28]. Therefore, we compute the unique, minimum norm least-squares solution to each of the subproblems. Thus, in phase II, solving the least-squares problem for a single small \(n\times\ell\) block matrix has the same time complexity as computing the SVD of the matrix, i.e. \(\mathcal{O}(n\ell^{2})\). The total cost of solving the \(mp\) least-squares problems is then \(\mathcal{O}(mpn\ell^{2})\). Overall, the total cost of phases I and II is \(\mathcal{O}(nmp^{2})+\mathcal{O}(mpn\ell^{2})\). The least-squares solutions are obtained using, e.g., Matlab's lsqminnorm function for the minimum norm solution. We also emphasize that both phase I and phase II computations can be done in parallel: the matrix SVDs can be computed simultaneously on the individual frontal slices of the input tensor, and the \(mp\) smaller least-squares problems in the second phase can be solved independently and concurrently. So parallel computation methods can potentially improve the execution time significantly, though a detailed discussion is beyond the scope of the current work.
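The decoupling in Theorem 5.1 can be sketched directly (Python; a hypothetical small example using the blocks \(\mathbf{H}^{(i,j)}_{k,t}=\boldsymbol{\mathcal{A}}_{i,t,k}\boldsymbol{\mathcal{C}}_{t,j,k}\) and the tube fibers \(\mathbf{x}^{(i,j)}=\boldsymbol{\mathcal{X}}_{i,j,:}\); NumPy's `lstsq` plays the role of Matlab's `lsqminnorm`, returning the minimum norm solution when a block is rank deficient):

```python
import numpy as np

def bmp(A, B, C):
    # X[i,j,k] = sum_t A[i,t,k] * B[i,j,t] * C[t,j,k]
    return np.einsum('itk,ijt,tjk->ijk', A, B, C)

rng = np.random.default_rng(0)
m, p, n, ell = 4, 5, 6, 2
A = rng.random((m, ell, n))
C = rng.random((ell, p, n))
B_true = rng.random((m, p, ell))
X = bmp(A, B_true, C)            # consistent data, so an exact fit exists

# Solve the mp independent n-by-ell subproblems for the tube fibers of B.
B_hat = np.zeros((m, p, ell))
for i in range(m):
    for j in range(p):
        H_ij = A[i, :, :].T * C[:, j, :].T      # H[k,t] = A[i,t,k] * C[t,j,k]
        x_ij = X[i, j, :]                       # tube fiber of length n
        B_hat[i, j, :] = np.linalg.lstsq(H_ij, x_ij, rcond=None)[0]

assert np.allclose(bmp(A, B_hat, C), X)
```

Each subproblem touches a disjoint block of \(\mathbf{H}\) and a disjoint piece of \(\mathbf{x}\), which is why the loop iterations are trivially parallelizable.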
### Alternating Least-Squares (ALS) Algorithm
Given the third-order tensor \(\boldsymbol{\mathfrak{X}}\in\mathbb{R}^{m\times p\times n}\), let \(\boldsymbol{\mathcal{A}}^{0}\in\mathbb{R}^{m\times\ell\times n}\), \(\boldsymbol{\mathcal{B}}^{0}\in\mathbb{R}^{m\times p\times\ell}\) and \(\boldsymbol{\mathcal{C}}^{0}\in\mathbb{R}^{\ell\times p\times n}\) be the tensor triplet obtained from phase I (5.1) and phase II (5.2). Writing \(\boldsymbol{\mathcal{T}}=\boldsymbol{\mathfrak{X}}\) for the data tensor, the Alternating Least-Squares (ALS) algorithm takes the given initial factor tensors and solves the following least-squares subproblems via tensor transposes for iterations \(k=0,1,2,\ldots\) \[\begin{split}\left(\boldsymbol{\mathcal{A}}^{\top^{2}}\right)^{k+1}&=\min_{\boldsymbol{\mathcal{A}}^{\top^{2}}\in\mathbb{R}^{n\times m\times\ell}}\left\|\boldsymbol{\mathcal{T}}^{\top^{2}}-\text{bmp}\left(\left(\boldsymbol{\mathcal{C}}^{\top^{2}}\right)^{k},\boldsymbol{\mathcal{A}}^{\top^{2}},\left(\boldsymbol{\mathcal{B}}^{\top^{2}}\right)^{k}\right)\right\|_{F}^{2};\\ \boldsymbol{\mathcal{B}}^{k+1}&=\min_{\boldsymbol{\mathcal{B}}\in\mathbb{R}^{m\times p\times\ell}}\left\|\boldsymbol{\mathcal{T}}-\text{bmp}\left(\boldsymbol{\mathcal{A}}^{k+1},\boldsymbol{\mathcal{B}},\boldsymbol{\mathcal{C}}^{k}\right)\right\|_{F}^{2};\\ \left(\boldsymbol{\mathcal{C}}^{\top}\right)^{k+1}&=\min_{\boldsymbol{\mathcal{C}}^{\top}\in\mathbb{R}^{p\times n\times\ell}}\left\|\boldsymbol{\mathcal{T}}^{\top}-\text{bmp}\left(\left(\boldsymbol{\mathcal{B}}^{\top}\right)^{k+1},\boldsymbol{\mathcal{C}}^{\top},\left(\boldsymbol{\mathcal{A}}^{\top}\right)^{k+1}\right)\right\|_{F}^{2}.\end{split} \tag{5.8}\] In each subproblem, we hold the first and the third factor tensors fixed and solve for the middle tensor. Thus each is of the same, decoupled form as Eq. (5.3). We can therefore solve all three using the same flattening scheme described in phase II of Sec. 5.2.
We vectorize the factor tensors \(\boldsymbol{\mathcal{A}}^{\top^{2}}\in\mathbb{R}^{n\times m\times\ell}\), \(\boldsymbol{\mathcal{B}}\in\mathbb{R}^{m\times p\times\ell}\) and \(\boldsymbol{\mathcal{C}}^{\top}\in\mathbb{R}^{p\times n\times\ell}\), the data tensor \(\boldsymbol{\mathcal{T}}\) and its transposes by stacking the tube fibers to obtain \[\mathbf{a}=\texttt{Tvec}(\boldsymbol{\mathcal{A}}^{\top^{2}});\quad\mathbf{b}=\texttt{Tvec}(\boldsymbol{\mathcal{B}});\quad\mathbf{c}=\texttt{Tvec}(\boldsymbol{\mathcal{C}}^{\top}), \tag{5.10}\] \[\mathbf{y}_{\boldsymbol{\mathcal{T}}^{\top^{2}}}=\texttt{Tvec}(\boldsymbol{\mathcal{T}}^{\top^{2}});\quad\mathbf{y}_{\boldsymbol{\mathcal{T}}}=\texttt{Tvec}(\boldsymbol{\mathcal{T}});\quad\mathbf{y}_{\boldsymbol{\mathcal{T}}^{\top}}=\texttt{Tvec}(\boldsymbol{\mathcal{T}}^{\top}), \tag{5.9}\] where \(\mathbf{a}\in\mathbb{R}^{nm\ell\times 1}\), \(\mathbf{b}\in\mathbb{R}^{mp\ell\times 1}\) and \(\mathbf{c}\in\mathbb{R}^{pn\ell\times 1}\). Moreover, we matricize the factor tensors that are held fixed by \[\begin{split}\mathbf{H}_{\mathcal{CB}}&=\texttt{Mat}\left((\boldsymbol{\mathcal{C}}^{k})^{\top^{2}},(\boldsymbol{\mathcal{B}}^{k})^{\top^{2}}\right)\in\mathbb{R}^{nmp\times nm\ell};\\ \mathbf{H}_{\mathcal{AC}}&=\texttt{Mat}\left(\boldsymbol{\mathcal{A}}^{k+1},\boldsymbol{\mathcal{C}}^{k}\right)\in\mathbb{R}^{mpn\times mp\ell};\\ \mathbf{H}_{\mathcal{BA}}&=\texttt{Mat}\left((\boldsymbol{\mathcal{B}}^{k+1})^{\top},(\boldsymbol{\mathcal{A}}^{k+1})^{\top}\right)\in\mathbb{R}^{pnm\times pn\ell}.\end{split} \tag{5.11}\] Thus the tensor least-squares subproblems given in Eq.
(5.8) become \[\begin{split}\mathbf{a}^{k+1}&=\min_{\mathbf{a}\in\mathbb{R}^{nm\ell\times 1}}\left\|\mathbf{y}_{\boldsymbol{\mathcal{T}}^{\top^{2}}}-\mathbf{H}_{\mathcal{CB}}\mathbf{a}\right\|_{F}^{2}\\ \mathbf{b}^{k+1}&=\min_{\mathbf{b}\in\mathbb{R}^{mp\ell\times 1}}\left\|\mathbf{y}_{\boldsymbol{\mathcal{T}}}-\mathbf{H}_{\mathcal{AC}}\mathbf{b}\right\|_{F}^{2}\\ \mathbf{c}^{k+1}&=\min_{\mathbf{c}\in\mathbb{R}^{pn\ell\times 1}}\left\|\mathbf{y}_{\boldsymbol{\mathcal{T}}^{\top}}-\mathbf{H}_{\mathcal{BA}}\mathbf{c}\right\|_{F}^{2}.\end{split} \tag{5.12}\] The implementation of the ALS algorithm for computing the tensor BMD is illustrated in Algorithm 5.1. The termination criterion for the algorithm is chosen to be either reaching the maximum number of iterations \(K\) or the relative change in successive iterates becoming sufficiently small, i.e. \(\|\hat{\boldsymbol{\mathfrak{X}}}^{k+1}-\hat{\boldsymbol{\mathfrak{X}}}^{k}\|_{F}/\|\hat{\boldsymbol{\mathfrak{X}}}^{k}\|_{F}<\epsilon\) for some tolerance parameter \(\epsilon>0\). A complete iteration requires computing all three factor tensors \(\boldsymbol{\mathcal{A}}\), \(\boldsymbol{\mathcal{B}}\), and \(\boldsymbol{\mathcal{C}}\).
```
1: procedure \([\boldsymbol{\mathcal{A}}^{K},\boldsymbol{\mathcal{B}}^{K},\boldsymbol{\mathcal{C}}^{K}]\) = BMD-ALS\((\boldsymbol{\mathfrak{X}},\boldsymbol{\mathcal{A}}^{0},\boldsymbol{\mathcal{B}}^{0},\boldsymbol{\mathcal{C}}^{0},K,\epsilon)\)
2: for \(k=0,1,2,\ldots,K\) do
3:   \(\mathbf{a}^{k+1}\leftarrow\min\left\|\mathbf{y}_{\boldsymbol{\mathcal{T}}^{\top^{2}}}-\mathbf{H}_{\mathcal{CB}}\mathbf{a}\right\|_{F}^{2}\)
4:   \(\boldsymbol{\mathcal{A}}^{k+1}=\texttt{permute}(\texttt{Tfold}(\mathbf{a}^{k+1}),[2,3,1])\)
5:   \(\mathbf{b}^{k+1}\leftarrow\min\left\|\mathbf{y}_{\boldsymbol{\mathcal{T}}}-\mathbf{H}_{\mathcal{AC}}\mathbf{b}\right\|_{F}^{2}\)
6:   \(\boldsymbol{\mathcal{B}}^{k+1}=\texttt{Tfold}(\mathbf{b}^{k+1})\)
7:   \(\mathbf{c}^{k+1}\leftarrow\min\left\|\mathbf{y}_{\boldsymbol{\mathcal{T}}^{\top}}-\mathbf{H}_{\mathcal{BA}}\mathbf{c}\right\|_{F}^{2}\)
8:   \(\boldsymbol{\mathcal{C}}^{k+1}=\texttt{permute}(\texttt{Tfold}(\mathbf{c}^{k+1}),[3,1,2])\)
9:   \(\hat{\boldsymbol{\mathfrak{X}}}^{k+1}=\texttt{bmp}(\boldsymbol{\mathcal{A}}^{k+1},\boldsymbol{\mathcal{B}}^{k+1},\boldsymbol{\mathcal{C}}^{k+1})\)
10:  if \(\|\hat{\boldsymbol{\mathfrak{X}}}^{k+1}-\hat{\boldsymbol{\mathfrak{X}}}^{k}\|_{F}/\|\hat{\boldsymbol{\mathfrak{X}}}^{k}\|_{F}<\epsilon\) then
11:    \(k\gets K\)
```
**Algorithm 5.1** Tensor BMD-ALS
### ALS Convergence Analysis
The alternating least-squares algorithm has been widely used for computing the tensor CP decomposition [22] and the block term decomposition [32, 5], among others. The local and global convergence of the ALS algorithm for the tensor CP decomposition has been studied in [41, 43, 26] based on its connection to the block nonlinear Gauss-Seidel (GS) method. In this section, we will show that the ALS algorithm for computing a low BM-rank tensor approximation is also closely connected to the nonlinear GS method, and hence several convergence results follow directly from its framework.
Recall the nonlinear GS method solves the following minimization problem \[\begin{split}\min&\,f\left(\mathbf{x}\right)\\ \text{subject to}&\,\,\mathbf{x}\in X=X_{1}\times X_{2}\times\cdots\times X_{M}\subset\mathbb{R}^{N\times 1},\end{split} \tag{5.13}\] where \(f\) is a continuously differentiable function from \(\mathbb{R}^{N\times 1}\) to \(\mathbb{R}\) and \(X\) is a Cartesian product of closed, nonempty and convex subsets \(X_{i}\subset\mathbb{R}^{N_{i}\times 1}\), for \(i=1,\ldots,M\) with \(\sum_{i=1}^{M}N_{i}=N\). If the vector \(\mathbf{x}\in\mathbb{R}^{N}\) is partitioned into \(M\) component vectors \(\mathbf{x}_{i}\in\mathbb{R}^{N_{i}\times 1}\), then we can consider \(f\) as a function from \(\mathbb{R}^{N_{1}\times 1}\times\mathbb{R}^{N_{2}\times 1}\times\cdots\times\mathbb{R}^{N_{M}\times 1}\) to \(\mathbb{R}\) with \(f\left(\mathbf{x}\right)=f\left(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{M}\right)\). The solution to the nonlinear optimization problem can then be found by the block Gauss-Seidel method via the following iteration in a cyclic order, \[\mathbf{x}_{i}^{k+1}=\min_{\mathbf{y}_{i}\in X_{i}}f\left(\mathbf{x}_{1}^{k+1},\ldots,\mathbf{x}_{i-1}^{k+1},\mathbf{y}_{i},\mathbf{x}_{i+1}^{k},\ldots,\mathbf{x}_{M}^{k}\right),\] which updates the components of \(\mathbf{x}\). The iterative technique starts from a given initial guess \(\mathbf{x}^{0}=\left(\mathbf{x}_{1}^{0},\mathbf{x}_{2}^{0},\ldots,\mathbf{x}_{M}^{0}\right)\) and generates a sequence \(\{\mathbf{x}^{k}\}=\left\{\left(\mathbf{x}_{1}^{k},\mathbf{x}_{2}^{k},\ldots,\mathbf{x}_{M}^{k}\right)\right\}\). The connection between the nonlinear block Gauss-Seidel method and the ALS-BMD algorithm in Eq.
(5.12) is made evident by noting that the cost function we want to minimize, \[\left\|\boldsymbol{\mathfrak{X}}-\hat{\boldsymbol{\mathfrak{X}}}\right\|_{F}^{2}=\sum_{i,j,k}\left(\boldsymbol{\mathfrak{X}}_{i,j,k}-\sum_{t=1}^{\ell}\boldsymbol{\mathcal{A}}_{i,t,k}\boldsymbol{\mathcal{B}}_{i,j,t}\boldsymbol{\mathcal{C}}_{t,j,k}\right)^{2}=f\left(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{B}},\boldsymbol{\mathcal{C}}\right), \tag{5.14}\] is a function \(f:\mathbb{R}^{(mn+np+mp)\ell\times 1}\to\mathbb{R}\). By letting \(\mathbf{v}=\left[\mathbf{a};\mathbf{b};\mathbf{c}\right]\in\mathbb{R}^{(nm+mp+pn)\ell\times 1}\), where \(\mathbf{a},\mathbf{b}\) and \(\mathbf{c}\) are the vectorized factor tensors given in Eq. (5.10), we can see that \[f\left(\mathbf{v}\right)=f\left(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{B}},\boldsymbol{\mathcal{C}}\right). \tag{5.15}\] The BMD problem therefore can be reformulated as the following problem \[\begin{split}\min&\,f\left(\mathbf{v}\right)\\ \text{subject to}&\,\,\mathbf{v}\in\mathbb{R}^{nm\ell\times 1}\times\mathbb{R}^{mp\ell\times 1}\times\mathbb{R}^{pn\ell\times 1}.\end{split} \tag{5.16}\] The ALS algorithm updates the components of \(\mathbf{v}\) by \[\begin{split}\mathbf{a}^{k+1}&=\min_{\mathbf{y}\in\mathbb{R}^{nm\ell\times 1}}f\left(\mathbf{y},\mathbf{b}^{k},\mathbf{c}^{k}\right),\\ \mathbf{b}^{k+1}&=\min_{\mathbf{y}\in\mathbb{R}^{mp\ell\times 1}}f\left(\mathbf{a}^{k+1},\mathbf{y},\mathbf{c}^{k}\right),\\ \mathbf{c}^{k+1}&=\min_{\mathbf{y}\in\mathbb{R}^{pn\ell\times 1}}f\left(\mathbf{a}^{k+1},\mathbf{b}^{k+1},\mathbf{y}\right).\end{split} \tag{5.17}\] This is exactly the nonlinear block Gauss-Seidel method. Let \(\mathbf{v}^{k}\) denote the \(k\)-th solution vector, i.e. \(\mathbf{v}^{k}=\left(\mathbf{a}^{k},\mathbf{b}^{k},\mathbf{c}^{k}\right)\). By Proposition 2.1 in [43], when the normal equations matrix in each of the linear least-squares subproblems given in (5.12) is positive definite, i.e.
\(\mathbf{H}^{\top}\mathbf{H}>0\) for any \(\mathbf{H}\in\{\mathbf{H}_{\mathcal{CB}},\mathbf{H}_{\mathcal{AC}},\mathbf{H}_{\mathcal{BA}}\}\), then each least-squares subproblem has a unique solution and the whole sequence \(\{\mathbf{v}^{k}\}\) generated by ALS converges to a limit point. In particular, the chain of inequalities \[f\left(\mathbf{v}^{k+1}\right)\leq f\left(\mathbf{a}^{k+1},\mathbf{b}^{k+1},\mathbf{c}^{k}\right)\leq f\left(\mathbf{a}^{k+1},\mathbf{b}^{k},\mathbf{c}^{k}\right)\leq f\left(\mathbf{v}^{k}\right)\leq\cdots\leq f\left(\mathbf{v}^{0}\right)\] shows that ALS monotonically reduces the cost function. Since \(f\) is bounded below, \(\{f(\mathbf{v}^{k})\}\) has a limit point \(f^{*}\geq 0\). Moreover, by Theorem 3.1 in [43], if \(\{\mathbf{v}^{k}\}\) is bounded, then the limit point of the sequence is a stationary point of the problem. As discussed in [26, 32], there is no guarantee that the ALS algorithm for the CP decomposition converges to a stationary point when the coefficient matrices in the subproblems are rank deficient, in which case the objective function is not strictly quasiconvex. This is also the case for the ALS subproblems for the third-order tensor BM-decomposition. The coefficient matrices given in (5.11) may not have full column rank in general. As a result, each of the least-squares subproblems will not be strictly quasiconvex, and hence the objective function \(f(\mathbf{v})\) is not strictly quasiconvex with respect to each component of \(\mathbf{v}\). Therefore, similar to the CP decomposition, the BMD-ALS algorithm may produce a sequence whose limit points are not critical points of the original problem, resulting in slow convergence. Remedies for the convergence issues of the ALS algorithm for the CP decomposition have been discussed in several studies [13, 26, 32, 43] and can potentially be adapted for the BMD-ALS algorithm. We will consider this in future work.
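The monotone decrease is straightforward to observe in practice. The sketch below (Python; a direct, hypothetical implementation that solves each subproblem of Eq. (5.17) fiber-by-fiber with minimum norm least squares rather than through the transpose and matricization bookkeeping of Eq. (5.12)) runs a few BMD-ALS sweeps on a random tensor and checks that the cost never increases:

```python
import numpy as np

def bmp(A, B, C):
    # X[i,j,k] = sum_t A[i,t,k] * B[i,j,t] * C[t,j,k]
    return np.einsum('itk,ijt,tjk->ijk', A, B, C)

def als_sweep(X, A, B, C):
    m, p, n = X.shape
    for i in range(m):                           # update A: (i,k) subproblems
        for k in range(n):
            G = B[i, :, :] * C[:, :, k].T        # G[j,t] = B[i,j,t]*C[t,j,k]
            A[i, :, k] = np.linalg.lstsq(G, X[i, :, k], rcond=None)[0]
    for i in range(m):                           # update B: (i,j) subproblems
        for j in range(p):
            H = A[i, :, :].T * C[:, j, :].T      # H[k,t] = A[i,t,k]*C[t,j,k]
            B[i, j, :] = np.linalg.lstsq(H, X[i, j, :], rcond=None)[0]
    for j in range(p):                           # update C: (j,k) subproblems
        for k in range(n):
            M = A[:, :, k] * B[:, j, :]          # M[i,t] = A[i,t,k]*B[i,j,t]
            C[:, j, k] = np.linalg.lstsq(M, X[:, j, k], rcond=None)[0]
    return A, B, C

rng = np.random.default_rng(0)
m, p, n, ell = 5, 6, 4, 2
X = rng.random((m, p, n))
A = rng.random((m, ell, n)); B = rng.random((m, p, ell)); C = rng.random((ell, p, n))

costs = [np.linalg.norm(X - bmp(A, B, C)) ** 2]
for _ in range(10):
    A, B, C = als_sweep(X, A, B, C)
    costs.append(np.linalg.norm(X - bmp(A, B, C)) ** 2)

assert all(c1 <= c0 + 1e-10 for c0, c1 in zip(costs, costs[1:]))  # monotone decrease
```

Each block update exactly minimizes the cost over that factor with the other two held fixed, which is what guarantees the Gauss-Seidel-style descent regardless of whether the iterates converge to a critical point.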
## 6 Dynamic Mode Decomposition (DMD) Connection
Though initially introduced in the fluid dynamics community for extracting spatiotemporal patterns [23], DMD has also been effective in the video context for separating the stationary background from foreground motions by differentiating between the DMD modes with near-zero frequency and the remaining modes with frequencies bounded away from the origin [14]. The excellent visual results of the DMD method and its superior computational efficiency compared to the Robust-PCA algorithm have drawn great attention in the computer vision community, with new algorithms developed based on DMD for improved accuracy and efficiency. A few examples include the compressed DMD [6], randomized DMD [7], DMD via dictionary learning [15], and multi-resolution DMD for object tracking with varying motion rates [24]. More recently, the connection between the general DMD method and the CP decomposition has been studied [34]. When multiple experiments are conducted, the data matrices generated from the experiments are collected and ordered as frontal slices of a third-order tensor. The authors show that performing DMD on each frontal slice of the data tensor decomposes it into a sum of vector outer products of the DMD vector triplets consisting of the DMD modes, DMD eigenfunctions, and corresponding eigenvalues. This decomposition is equivalent to taking a CP decomposition of the third-order tensor when the experimental data are collected from distinct sources with a single exponential growth or decay and/or oscillatory dynamics. In the present study, we will show that the DMD results obtained from decomposing a single data matrix can also be viewed as a BMD. This is particularly meaningful to our video application since the DMD method for video background/foreground separation, as originally developed, is applied to a single data matrix flattened from a third-order video tensor [14].
Moreover, we will show that due to the nonlinearity of the foreground motion in general, the BMD factor tensor triplet does not directly convert back to DMD modes and eigenvalues. In the following, we first provide a brief review of the DMD method and then describe a way of turning the DMD video reconstruction results into a BM-product of a third-order tensor triplet. We next show that the DMD results can also serve as an initial guess for the ALS algorithm to find an optimal BM-decomposition. Finally, we discuss the advantages and disadvantages of using the DMD method as an initial guess over using the SS-SVD initialization.
### DMD Algorithm
Given video data of \(p\) frames, each of size \(m\times n\), let \(\mathbf{x}_{k}\in\mathbb{R}^{mn\times 1}\) be the vectorized \(k\)-th video frame for \(1\leq k\leq p\). The DMD method groups the data into matrices \(\mathbf{X}_{1},\mathbf{X}_{2}\in\mathbb{R}^{mn\times(p-1)}\) as follows \[\mathbf{X}_{1}=\left[\mathbf{x}_{1}\ \mathbf{x}_{2}\ \mathbf{x}_{3}\cdots\mathbf{x}_{p-1}\right];\quad\mathbf{X}_{2}=\left[\mathbf{x}_{2}\ \mathbf{x}_{3}\ \mathbf{x}_{4}\cdots\mathbf{x}_{p}\right]. \tag{6.1}\] The method then finds a linear map \(\mathbf{A}\in\mathbb{R}^{mn\times mn}\) such that \(\mathbf{x}_{k+1}=\mathbf{A}\mathbf{x}_{k}\), so that \(\mathbf{X}_{2}\approx\mathbf{A}\mathbf{X}_{1}\). The truncated SVD of the matrix \(\mathbf{X}_{1}\) can be used for dimensionality reduction, i.e. \(\mathbf{X}_{1}\approx\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\), where \(\mathbf{U}\in\mathbb{R}^{mn\times\ell}\) and \(\mathbf{V}\in\mathbb{R}^{(p-1)\times\ell}\) have orthonormal columns and \(\mathbf{\Sigma}\in\mathbb{R}^{\ell\times\ell}\) is diagonal. The parameter \(\ell\) is a chosen rank truncation of \(\mathbf{X}_{1}\).
After projecting \(\mathbf{A}\) onto the left-singular vector matrix \(\mathbf{U}\) as \(\tilde{\mathbf{A}}=\mathbf{U}^{\top}\mathbf{A}\mathbf{U}=\mathbf{U}^{\top}\mathbf{X}_{2}\mathbf{V}\mathbf{\Sigma}^{-1}\), \(\tilde{\mathbf{A}}\in\mathbb{R}^{\ell\times\ell}\), we compute the eigen-decomposition of \(\tilde{\mathbf{A}}\) such that \(\tilde{\mathbf{A}}\mathbf{W}=\mathbf{W}\mathbf{\Lambda}\), with \(\mathbf{\Lambda}=\operatorname{diag}\left(\lambda_{t}\right)\in\mathbb{C}^{\ell\times\ell}\), where \(\lambda_{t},1\leq t\leq\ell\), are the DMD eigenvalues, which can be converted to Fourier frequencies via \(\omega_{t}=\frac{\ln\left(\lambda_{t}\right)}{\Delta t}\). Assuming video frames equally spaced in time with \(\Delta t=1\), \(e^{\omega_{t}}=\lambda_{t}\). The DMD modes \(\mathbf{\Phi}\in\mathbb{C}^{mn\times\ell}\) are obtained by \[\mathbf{\Phi}=\mathbf{X}_{2}\mathbf{V}\mathbf{\Sigma}^{-1}\mathbf{W}. \tag{6.2}\] Taking the DMD modes and the DMD frequencies, the original vectorized video frame at time \(k=1,2,\ldots,p\) can be reconstructed by \[\mathbf{x}_{k}\approx\sum_{1\leq t\leq\ell}b_{t}\boldsymbol{\varphi}_{t}e^{\omega_{t}(k-1)}=\sum_{1\leq t\leq\ell}b_{t}\boldsymbol{\varphi}_{t}\lambda_{t}^{k-1}, \tag{6.3}\] with \(\boldsymbol{\varphi}_{t}=\mathbf{\Phi}_{:,t}\) the \(t\)-th DMD mode and \(\mathbf{b}=\left[b_{1}\,b_{2}\,\cdots\,b_{\ell}\right]^{\top}\) the vector of initial amplitudes for the modes, obtained by solving \(\mathbf{\Phi}\mathbf{b}=\mathbf{x}_{1}\) with \(\mathbf{x}_{1}\) the vectorized first frame. Assume that some of the DMD frequencies, indexed by \(\alpha\), satisfy \(\|\omega_{\alpha}\|\approx 0\), i.e. \(\|\lambda_{\alpha}\|=\|e^{\omega_{\alpha}}\|\approx 1\).
Then the background and the foreground video sequence \(\mathbf{X}\) reconstructed with the DMD technique for the time vector \(\boldsymbol{\theta}=[0,1,\ldots,p-1]\) are given respectively by \[\mathbf{X}^{\mathrm{bg}}\approx\sum_{t=\alpha}b_{t}\boldsymbol{\varphi}_{t}\lambda_{t}^{\boldsymbol{\theta}};\quad\mathbf{X}^{\mathrm{fg}}\approx\sum_{t\neq\alpha}b_{t}\boldsymbol{\varphi}_{t}\lambda_{t}^{\boldsymbol{\theta}}. \tag{6.4}\] As discussed in [14], since the video sequence should satisfy \(\mathbf{X}=\mathbf{X}^{\mathrm{bg}}+\mathbf{X}^{\mathrm{fg}}\), a real-valued foreground approximation can alternatively be obtained by \[\mathbf{X}^{\mathrm{fg}}\approx\mathbf{X}-|\mathbf{X}^{\mathrm{bg}}|, \tag{6.5}\] where \(|\cdot|\) yields the modulus of each element within the matrix. However, since the expression given by the second approximation in (6.5) is not a compressible representation of the foreground video sequence, we focus our discussion on comparing the DMD method and our BMD method using the expression given in (6.4).
### From DMD to Tensor BMP
Next, we will show that the video reconstruction by the DMD method given in Eq. (6.3) can be equivalently written in a tensor BM-product form. Let us first define the DMD mode matrix \(\mathbf{M}_{t}\in\mathbb{C}^{m\times n}\) to be \[\mathbf{M}_{t}=\mathtt{reshape}(b_{t}\boldsymbol{\varphi}_{t},[m,n]) \tag{6.6}\] for all \(t=1,\ldots,\ell\). Each DMD mode \(\boldsymbol{\varphi}_{t}\) is scaled by the corresponding initial amplitude \(b_{t}\) and converted into an \(m\times n\) matrix. Next, we define a third-order tensor triplet 1. \(\boldsymbol{\mathcal{A}}\): an \(m\times\ell\times n\) DMD mode tensor with lateral slices \(\boldsymbol{\mathcal{A}}_{:,t,:}=\mathbf{M}_{t}\), 2. \(\boldsymbol{\mathcal{B}}\): an \(m\times p\times\ell\) tensor of ones, 3. \(\boldsymbol{\mathcal{C}}\): an \(\ell\times p\times n\) DMD eigenvalue tensor with row vectors \(\boldsymbol{\mathcal{C}}_{t,:,k}=\lambda_{t}^{\boldsymbol{\theta}}\), \(1\leq k\leq n\).
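A small numerical sketch of this conversion follows (Python; a hypothetical two-mode example, with `reshape` in column-major order as one concrete vectorization convention): exact DMD is computed on synthetic linear data, the triplet above is formed, and its BM-product is checked against the DMD reconstruction formula, frame by frame.

```python
import numpy as np

def bmp(A, B, C):
    # X[i,j,k] = sum_t A[i,t,k] * B[i,j,t] * C[t,j,k]
    return np.einsum('itk,ijt,tjk->ijk', A, B, C)

rng = np.random.default_rng(0)
m, n, p, ell = 3, 4, 8, 2
# Synthetic data with two modes: a static one (lambda = 1) and a decaying one.
lam_true = np.array([1.0, 0.7])
Phi_true = rng.random((m * n, ell))
frames = np.stack([Phi_true @ (lam_true ** k) for k in range(p)], axis=1)  # mn x p

# Exact DMD.
X1, X2 = frames[:, :-1], frames[:, 1:]
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
U, s, V = U[:, :ell], s[:ell], Vt[:ell, :].T
Atil = U.T @ X2 @ V @ np.diag(1.0 / s)
lam, W = np.linalg.eig(Atil)
Phi = X2 @ V @ np.diag(1.0 / s) @ W                 # DMD modes
b = np.linalg.lstsq(Phi, frames[:, 0], rcond=None)[0]

# Triplet: A holds the scaled mode matrices M_t, B is all-ones,
# and C holds the eigenvalue powers lam_t^theta, theta = [0, ..., p-1].
A = np.stack([(b[t] * Phi[:, t]).reshape((m, n), order='F') for t in range(ell)], axis=1)
B = np.ones((m, p, ell))
C = np.stack([np.tile(lam[t] ** np.arange(p), (n, 1)).T for t in range(ell)], axis=0)

V_hat = bmp(A, B, C)                                # m x p x n video tensor
for k in range(p):
    xk = sum(b[t] * Phi[:, t] * lam[t] ** k for t in range(ell))   # DMD reconstruction
    assert np.allclose(V_hat[:, k, :], xk.reshape((m, n), order='F'))
    assert np.allclose(V_hat[:, k, :], frames[:, k].reshape((m, n), order='F'))
```

Because the data here are exactly linear, the DMD reconstruction matches the original frames; for real video with nonlinear foreground motion, only the first assertion (the BM-product identity) would continue to hold.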
Ordering the video frames as lateral slices to form a video tensor \(\boldsymbol{\mathfrak{X}}\in\mathbb{R}^{m\times p\times n}\), the DMD reconstruction of Eq. (6.3) can be written in the tensor BM-product form as follows \[\boldsymbol{\mathfrak{X}}\approx\mathtt{bmp}\left(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{B}},\boldsymbol{\mathcal{C}}\right). \tag{6.7}\] We can update \(\boldsymbol{\mathcal{B}}\) by solving (5.3) as before and then subsequently optimize each factor tensor using ALS to improve the low BM-rank approximation. The BMD reconstructed background and foreground tensors are given by \(\boldsymbol{\mathfrak{X}}^{\mathrm{bg}}\approx\sum_{t=\alpha}\mathtt{bmp}\left(\boldsymbol{\mathcal{A}}_{:,t,:},\boldsymbol{\mathcal{B}}_{:,:,t},\boldsymbol{\mathcal{C}}_{t,:,:}\right)\) and \(\boldsymbol{\mathfrak{X}}^{\mathrm{fg}}\approx\sum_{t\neq\alpha}\mathtt{bmp}\left(\boldsymbol{\mathcal{A}}_{:,t,:},\boldsymbol{\mathcal{B}}_{:,:,t},\boldsymbol{\mathcal{C}}_{t,:,:}\right)\), respectively.
### DMD factor interpretation
We note that the first step of the DMD method, which applies a truncated SVD to \(\mathbf{X}_{1}\), is similar to taking the SVDs of the spatiotemporal slices of the video tensor (with one frame less). Instead of applying the SVD to individual spatiotemporal slices, DMD applies the SVD to all slices at the same time. As a result, the left-singular matrix \(\mathbf{U}\) similarly captures the spatial information of the video. Particularly, as discussed in Sec. 4.3, the first left-singular vector captures the dominant background scene. However, the DMD mode that models the static background image in fact models a weighted linear combination of all the left-singular vectors (spatial information).
From the eigendecomposition of \(\tilde{\mathbf{A}}\), we have \(\tilde{\mathbf{A}}\mathbf{W}=\mathbf{U}^{\top}\mathbf{X}_{2}\mathbf{V} \boldsymbol{\Sigma}^{-1}\mathbf{W}=\mathbf{W}\boldsymbol{\Lambda}\), which then gives \(\boldsymbol{\Phi}=\mathbf{X}_{2}\mathbf{V}\boldsymbol{\Sigma}^{-1}\mathbf{W}= \mathbf{U}\mathbf{W}\boldsymbol{\Lambda}\). Therefore, the DMD modes given in Eq. (6.2) can be re-written as \[\boldsymbol{\Phi}_{i,j}=\sum_{1\leq s\leq\ell}\lambda_{j}\mathbf{U}_{i,s} \mathbf{W}_{s,j}\implies\boldsymbol{\Phi}_{:,j}=\sum_{1\leq s\leq\ell} \lambda_{j}\mathbf{U}_{:,s}\mathbf{W}_{s,j}. \tag{6.8}\] Thus, the background DMD mode, represented by \(\boldsymbol{\Phi}_{:,\alpha}\) for an index \(\alpha\), \(1\leq\alpha\leq\ell\), with \(|\lambda_{\alpha}|\approx 1\), is given by \[\boldsymbol{\Phi}_{:,\alpha}=\sum_{1\leq s\leq\ell}\lambda_{\alpha}\mathbf{U}_ {:,s}\mathbf{W}_{s,\alpha}. \tag{6.9}\] From (6.9), we see that the DMD reconstruction of the stationary mode is a weighted linear combination of the spatial information captured by the left-singular vectors of the spatiotemporal video matrix. This could potentially explain the observations made in the numerical experiments of the videos in [14]: the DMD reconstructed background scene tends to include spurious foreground pixels. Since in the DMD method the SVD step is applied to vectorized video frames across all \(p\) frames, taking the rank truncation with \(\ell=p-1\) almost recovers the original video matrix. The spatial information of the foreground objects moving through time is captured by the set of left-singular vectors \(\mathbf{U}_{:,t}\) with \(2\leq t\leq\ell\), which are also contained in the background DMD mode \(\boldsymbol{\Phi}_{:,\alpha}\). This could also explain another phenomenon about the foreground motions in [14]: the moving objects in the foreground create movement trails extrapolating both past and future motions of the objects. 
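The identity \(\boldsymbol{\Phi}=\mathbf{U}\mathbf{W}\boldsymbol{\Lambda}\) behind (6.8) is easy to confirm numerically in the exact-truncation case, where \(\mathbf{X}_{2}\mathbf{V}\boldsymbol{\Sigma}^{-1}=\mathbf{U}\tilde{\mathbf{A}}\); the small example below is our own:

```python
import numpy as np

rng = np.random.default_rng(0)
m, ell = 20, 3
U, _ = np.linalg.qr(rng.standard_normal((m, ell)))  # orthonormal left-singular basis
Atil = rng.standard_normal((ell, ell))              # projected operator A-tilde
lam, W = np.linalg.eig(Atil)
# In the exact-truncation case X2 V Sigma^{-1} = U Atil, so
# Phi = X2 V Sigma^{-1} W = U Atil W = U W diag(lam).
Phi = U @ Atil @ W
```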
Since the uncompressed foreground video in the DMD algorithm is obtained by subtracting the background from the original video, the erroneous foreground motion trails contained in the background DMD reconstruction persist in the foreground video. Importantly, the DMD method does not provide an approximation to the generative video model from Sec. 4.1. In addition to the background reconstruction with spurious object motion discussed above, the DMD compressed foreground video frames exhibit another issue. Since the foreground modes \(\boldsymbol{\varphi}_{t}\), \(1\leq t\neq\alpha\leq\ell\), change over time linearly, scaled by \(\lambda_{t}^{j-1}\) at time \(j\), the non-linear foreground object motions cannot be exactly modeled by this linear change. We also observe the following: * The DMD method requires processing video segments rather than computing the decomposition of the whole video at once. One reason for taking smaller segments is that it helps with keeping the processing time shorter than the data-acquisition time. Longer videos also produce longer erroneous movement trails on the approximated background. * DMD modes and frequencies are complex-valued. In [14], the real-valued video pixels are obtained by taking the modulus of the complex values. The code from [23] uses the real part of the complex-valued video frame entries for display. In our BMD video reconstruction method with DMD initialization, we will also use real-part solutions. * The DMD method has the potential of a compressed reconstruction of the background scene by setting \(\ell=1\) in the SVD step [14]. In our work, by allowing \(\ell\) to be small, i.e. \(1<\ell\ll p\), we are interested in obtaining compressed representations of both the background scene and the foreground object video. 
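The \(\ell=1\) background idea from the last bullet is easy to prototype: the leading singular triple of the pixels-by-frames matrix tends to capture a dominant static background. A hedged sketch (our own construction, not the code of [14] or [23]):

```python
import numpy as np

def rank1_background(X):
    # X: (pixels x frames) video matrix. The leading singular triple
    # captures the dominant (static) background direction.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return s[0] * np.outer(U[:, 0], Vt[0])   # rank-1 approximation
```

On a static background perturbed by a brief, small foreground event, the rank-1 term stays closely aligned with the background direction.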
## 7 Numerical Results In this section, we illustrate the performance of video background and foreground separation with SS-SVD, DMD, and BMD using both the slice-wise SVD initial guess and the DMD initial guess on the following 3 video datasets: 1. The first is a simulated video based on the BM-rank 2 generative spatiotemporal video model discussed in Section 4.1. We use a \(50\times 50\) cloud image [36] and simulate a square object of size \(5\times 5\) with intensity \(\alpha=10\) moving over 30 time steps across the background. The object moves from the top-left corner to the right-bottom corner across the image with an initial position \([i_{1},j_{1}]=[5,2]\) and a constant velocity \([\mathbf{v}_{i},\mathbf{v}_{j}]=[1,2]\) in the first 14 frames and the last 10 frames. Between frames 15 and 20, the object traverses back with the negative velocity \([-1,-2]\). That is, \([i_{t},j_{t}]=[i_{t-1}+\Delta t,j_{t-1}+2\Delta t]\) for \(2\leq t<15\) and \(20<t\leq 30\), and \([i_{t},j_{t}]=[i_{t-1}-\Delta t,j_{t-1}-2\Delta t]\) for \(15\leq t\leq 20\). We set \(\Delta t=1\). Frames 5 and 15 are selected for display (first two images in Fig. (23)). 2. The second, "car" video is from a surveillance video of moving vehicles on a highway. The video data is available in Matlab and can be loaded using the command VideoReader('traffic.mj2'). This video consists of 120 grayscale frames each of size \(120\times 160\). The camera is pointing at an entrance of a highway where the cars are traveling from the top right corner to the bottom of the image as frames progress. Frames 54 and 110 are selected for display (middle two images in Fig. (23)). 3. The third video, referred to as the "escalator" video, is taken from another surveillance video of escalators and people. This video consists of 200 frames in grayscale and each frame is of size \(130\times 160\). The camera is pointing directly from above at three parallel escalators. 
Among them, the staircases of the left two escalators are moving upwards and the staircase of the right escalator is moving downwards. The staircases of the escalators are moving periodically while people either walk across the platforms above the escalators, stand on the escalators or walk down the escalator on the right. Frames 54 and 110 are selected for display (last two images in Fig. (23)). ### Video Background/Foreground Separation The quality of separating the background and foreground is compared for the following methods: (1) the SS-SVD method, (2) BMD-ALS with spatiotemporal slice-wise SVD initial guess (BMD-ALS\({}_{SVD}\)), (3) the DMD method, and (4) BMD-ALS with DMD initial guess (BMD-ALS\({}_{DMD}\)). For the DMD method, the video streams are broken into segments of 30 frames, and the rank truncation parameter is taken as \(\ell=30-1=29\) as suggested in [14] in order to produce the best quality results. For all other methods, we apply the decomposition to the entire video dataset, and the matrix slice-wise rank or tensor BM-rank is chosen to be \(\ell=2\) for the simulated video, \(\ell=3\) for the car video and \(\ell=5\) for the escalator video. Moreover, for the BMD methods, a total number of 150 iterations is used and the tolerance parameter is chosen to be \(10^{-5}\) for the relative error of consecutive iterates, i.e. \(\|\hat{\boldsymbol{\mathcal{X}}}^{k+1}-\hat{\boldsymbol{\mathcal{X}}}^{k}\|_{F}/\|\hat{ \boldsymbol{\mathcal{X}}}^{k}\|_{F}\). For the following numerical experiments, all video frames displayed are re-scaled to the pixel range of \([0,255]\) on grayscale. In Fig. (7.2), the background and foreground frames (5 and 15) are displayed for the simulated BM-rank 2 video. From top to bottom are respectively: the original simulated video, the BMD reconstruction using the spatiotemporal slice-wise SVD initial guess, and the BMD reconstruction with DMD initial guess. 
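For reference, the simulated data set described in item 1 above can be regenerated with a short script; the 0-based indexing and function name below are our own reading of the stated motion rule:

```python
import numpy as np

def simulate_video(bg, alpha=10.0, steps=30, start=(5, 2), vel=(1, 2)):
    # bg: (m x n) background image. Returns an m x steps x n tensor with
    # frames as lateral slices; a 5x5 square of intensity alpha moves with
    # velocity vel, reversed between frames 15 and 20 (1-indexed).
    m, n = bg.shape
    video = np.empty((m, steps, n))
    i, j = start
    for t in range(1, steps + 1):          # t is the 1-indexed frame number
        if t > 1:
            sign = -1 if 15 <= t <= 20 else 1
            i, j = i + sign * vel[0], j + sign * vel[1]
        frame = bg.copy()
        frame[i:i + 5, j:j + 5] = alpha
        video[:, t - 1, :] = frame
    return video
```

Tracing the recurrence, the square sits at \([5,2]\) in frame 1, reaches \([18,28]\) in frame 14, backs up to \([12,16]\) by frame 20, and ends at \([22,36]\) in frame 30.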
As we can see from the images, the BMD method with both initial guesses obtained an almost perfect separation of the stationary background and the moving foreground objects compared to the ground-truth frames. In particular, the BMD-ALS\({}_{SVD}\) results are almost identical to the original frames.

Figure 23: Specific frames selected from the three testing videos: the simulated video based on the BM-rank 2 generative spatiotemporal model, the car video with cars traveling on a highway, and the escalator video with passengers.

Furthermore, as discussed in Section 4.1, the first BM-rank 1 tensor of the generative video model consists of the background image captured by \(\boldsymbol{\mathcal{A}}_{:,1,:}\). The second BM-rank 1 tensor consists of the left motion of the object captured by columns of the slice \(\texttt{squeeze}(\boldsymbol{\mathcal{B}}_{:,:,2})\) and the right motion of the object captured by columns of the slice \(\texttt{squeeze}(\boldsymbol{\mathcal{C}}_{2,:,:})^{\top}\). In Fig. (7.3.a), these corresponding slices of the factor tensors \(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{B}}\), and \(\boldsymbol{\mathcal{C}}\) are also displayed for the ground-truth video, the BMD-ALS\({}_{SVD}\), and the BMD-ALS\({}_{DMD}\) results. We note that in the tensor BMD-ALS\({}_{SVD}\) results, the background slice \(\boldsymbol{\mathcal{A}}_{:,1,:}\) has inverted pixel intensities compared to the ground-truth image and hence is multiplied by \(-1\) for display purposes. To make the overall BM-product positive, the factor tensor \(\boldsymbol{\mathcal{B}}\) is multiplied by \(-1\) as well. Overall, as we can see in Fig. (7.3.a), the BMD results with both initial guesses recover the stationary background image captured in \(\boldsymbol{\mathcal{A}}_{:,1,:}\). 
However, regarding the foreground motion, the decomposition with the DMD initial guess fails to obtain information depicting the trajectory of the motion, while the approximation results using the SS-SVD initial guess capture both the left and right motions. The ALS convergence plot shown in Fig. (7.3.b) also suggests that the BMD-ALS\({}_{SVD}\) algorithm outperforms the BMD-ALS\({}_{DMD}\) algorithm with a much smaller relative error when the stopping criteria are met.

Figure 7.2: _simvideo_

In Fig. (7.4), the reconstructions of the car background and foreground are compared for all four methods. The DMD reconstructed background exhibits spurious pixels of the cars moving forward and backward in time relative to the current frame. The foreground reconstruction using the SS-SVD method lacks clarity of the moving cars. After updating the slice-wise SVD results with BMD-ALS iterations, the BMD-ALS\({}_{SVD}\) foreground objects are more visible, with many more details of the traveling vehicles captured. In Fig. (7.5), we also look at the reconstruction results of the background and foreground of the escalator video for the four methods of interest. For the escalator video, apart from pedestrian motions, the stairs of the escalators are also moving periodically. In the SS-SVD and the DMD reconstructed background frames, the staircase is clearly absent. However, it is captured in the BM-rank 1 approximated background tensor frames with either initial guess. Moreover, the movement trail of the pedestrians is again captured in the background for the DMD method, while the BMD-ALS\({}_{DMD}\) method eliminated the extraneous motion pixels built from the DMD initial guess. Table (7.1) summarizes the _compression ratio_ (CR) and the _relative error_ (RE) for the specific rank-truncation parameters \(\ell\) selected for each video. 
For convenience, we refer to the slice-wise matrix SVD truncation parameter, the DMD dimensionality reduction parameter, and the tensor BM-rank as rank \(\ell\) for all methods compared in this table. Moreover, the metrics CR and RE are defined respectively to be \[\text{CR}=\frac{\text{uncompressed size}}{\text{compressed size}};\quad\text{RE}=\frac{\|\boldsymbol{\mathcal{X}}-\hat{\boldsymbol{\mathcal{X}}}\|_{F}}{\|\boldsymbol{ \mathcal{X}}\|_{F}}. \tag{7.1}\]

Figure 7.3: Comparison of the BM-rank 2 approximation to the simulated video with spatiotemporal slice-wise SVD initial guess (BMD-ALS\({}_{SVD}\)) and the DMD initial guess (BMD-ALS\({}_{DMD}\)) respectively. (a) The ground-truth factor tensor slices with \(\mathsf{squeeze}(\boldsymbol{\mathcal{A}}_{:,1,:})\) capturing the stationary background image, \(\boldsymbol{\mathcal{B}}_{:,:,2}\) depicting the left motion of the object and \(\mathsf{squeeze}(\boldsymbol{\mathcal{C}}_{2,:,:})^{\top}\) depicting the right motion of the object. (b) The ALS convergence comparisons for the two methods. The relative error is computed for the overall video tensor, \(\frac{\|\hat{\boldsymbol{\mathcal{X}}}-\boldsymbol{\mathcal{X}}\|_{F}}{\|\boldsymbol{\mathcal{X}}\|_{F}}\).

In the compressed-size formulas of Table 7.1, the parameters \(m,p\), and \(n\) are the row, column, and depth dimensions of the video tensor, respectively. In the DMD case, \(s\) is the number of video segments based on the number of frames in each segment, i.e. \(s=\left\lceil\frac{p}{\text{no. of frames}}\right\rceil\). ### Video Compression In Fig. (7.6), we show the relative error comparison using BMD with SVD and DMD initializations for different compression ratios. 
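As a back-of-the-envelope check of (7.1) and Table 7.1 (our own snippet, not the paper's code), the BMD storage count \(\ell(mn+mp+np)\) for the car video at \(\ell=3\) gives a compression ratio of about 14.5:

```python
import numpy as np

def bmd_size(m, p, n, ell):
    # Storage for the three BMD factor tensors.
    return ell * (m * n + m * p + n * p)

def compression_ratio(m, p, n, compressed_size):
    return (m * p * n) / compressed_size

def relative_error(X, X_hat):
    return np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

For the car video, \(m,p,n=120,120,160\), so \(\text{CR}=2{,}304{,}000/158{,}400\approx 14.55\), consistent with the 14.546 reported.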
In the results of the car video and the escalator video, both the BMD-ALS\({}_{SVD}\) and BMD-ALS\({}_{DMD}\) methods performed equivalently well. As the rank truncation parameter \(\ell\) increases, the amount of improvement in the approximation quality decreases, since the differences between the REs become smaller for larger \(\ell\). This result also suggests that surveillance videos inherently have a low BM-rank.

\begin{table} \begin{tabular}{|l|c|l|c|c|c|c|} \hline & \(m\times p\times n\) & & SS-SVD & BMD-ALS\({}_{SVD}\) & DMD & BMD-ALS\({}_{DMD}\) \\ \hline & & Compressed size & \(\ell(mn+np+n)\) & \(\ell(mn+mp+np)\) & \(s\cdot\ell(mn+2)\) & \(\ell(mn+mp+np)\) \\ \hline \multirow{3}{*}{Car} & \multirow{3}{*}{\(120\times 120\times 160\)} & rank \(\ell\) & 3 & 3 & 29 & 3 \\ \cline{3-7} & & CR & 19.917 & 14.546 & 1.034 & 14.546 \\ \cline{3-7} & & RE & 0.122 & 0.0608 & 3.059 & 0.0591 \\ \hline \multirow{3}{*}{Escalator} & \multirow{3}{*}{\(130\times 200\times 160\)} & rank \(\ell\) & 5 & 5 & 29 & 5 \\ \cline{3-7} & & CR & 15.71 & 10.5584 & 1.0344 & 10.5584 \\ \cline{3-7} & & RE & 0.0845 & 0.057 & 4.64 & 0.0553 \\ \hline \end{tabular} \end{table} Table 7.1: Video reconstruction results using 1. the SS-SVD method; 2. BM-decomposition with SS-SVD initial guess (BMD-ALS\({}_{SVD}\)); 3. the DMD method, where \(s=\left\lceil\frac{p}{\text{no. of frames}}\right\rceil\) is the total number of video segments; 4. BM-decomposition with DMD initial guess (BMD-ALS\({}_{DMD}\)).

Figure 7.4: Car video: background/foreground separation, frames 54 and 110.

## 8 Comparison to other tensor methods ### CP form Just as other non-CP tensor approximations can be expressed as a sum of rank-one outer products of tensors, so too can the BMD. We give the result in the supplementary materials, but note that it is not particularly informative as a CP approximation unless the slices of the tensor factors are themselves low-rank matrices. 
On the other hand, if we have a CP decomposition of \(\boldsymbol{\mathcal{X}}\in\mathbb{C}^{m\times p\times n}\), we can get a bound on the BM-rank by looking at the ranks of the factor matrices. **Theorem 8.1**: _Let \(\boldsymbol{\mathcal{X}}\) have the CP decomposition \(\boldsymbol{\mathcal{X}}=\sum_{1\leq t\leq r}\mathbf{A}_{:,t}\circ\mathbf{B}_{:,t} \circ\mathbf{C}_{:,t}\)2, where \(\mathbf{A}\in\mathbb{C}^{m\times r}\), \(\mathbf{B}\in\mathbb{C}^{p\times r}\), and \(\mathbf{C}\in\mathbb{C}^{n\times r}\) are the factor matrices and \(r\) is the (real) CP tensor rank._ Footnote 2: The symbol “\(\circ\)” represents the vector outer product [22].

Figure 7.5: Escalator video: background/foreground separation, frames 54 and 110.

_Suppose the factor matrices have ranks \(\rho_{A},\rho_{B}\), and \(\rho_{C}\) respectively, with \(\rho:=\min\{\rho_{A},\rho_{B},\rho_{C}\}\). Then the BM-rank is bounded above by \(\rho\)._ Proof.: If \(\rho=\min\{m,p,n\}\), this is trivial, since it is no different from the previous upper bound. Thus, let \(\rho<\min\{m,p,n\}\). Due to the orientation independence of the CP decomposition, we assume without loss of generality that \(\rho_{A}=\rho\). The CP decomposition of \(\boldsymbol{\mathcal{X}}\) admits an expression of its \(k\)th frontal slice [22] as \(\boldsymbol{\mathcal{X}}_{:,:,k}=\mathbf{A}\,\mathtt{diag}(\mathbf{C}_{k,:})\mathbf{B}^{\top}\), where \(\mathtt{diag}(\mathbf{C}_{k,:})\in\mathbb{R}^{r\times r}\). Then we can take a rank-revealing factorization of \(\mathbf{A}\) such that \(\mathbf{A}=\mathbf{U}_{A}\mathbf{V}_{A}^{\top}\) with \(\mathbf{U}_{A}\in\mathbb{R}^{m\times\rho_{A}}\) and \(\mathbf{V}_{A}\in\mathbb{R}^{r\times\rho_{A}}\), each having \(\rho_{A}=\rho\) columns. 
Setting the tensors \(\boldsymbol{\mathcal{A}}\in\mathbb{R}^{m\times\rho\times n}\), \(\boldsymbol{\mathcal{B}}\in\mathbb{R}^{m\times p\times\rho}\), and \(\boldsymbol{\mathcal{C}}\in\mathbb{R}^{\rho\times p\times n}\) as \[\boldsymbol{\mathcal{A}}_{:,:,k}=\mathbf{U}_{A};\qquad\boldsymbol{\mathcal{C}}_{:,:,k}=\mathbf{V}_ {A}^{\top}\mathtt{diag}(\mathbf{C}_{k,:})\mathbf{B}^{\top};\qquad\boldsymbol{\mathcal{B}}= \mathtt{ones}\left(m,p,\rho\right),\] for \(k=1,\ldots,n\), then \(\boldsymbol{\mathcal{X}}=\mathtt{bmp}\left(\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{B}},\boldsymbol{\mathcal{C}}\right)\). ### Comparison to tensor SVD under \(\star_{M}\) In [19], the authors describe a tensor SVD for third-order tensors under a specific type of product between pairs of tensors of appropriate dimension. What is needed to define the tensor-tensor \(\star_{M}\) product is a unitary or orthogonal \(n\times n\) matrix \(\mathbf{M}\). Under the resulting multiplication, truncating the tensor SVD gives an optimal approximation in the Frobenius norm. Here, we take \(\mathbf{M}=\mathbf{I}\), so the starting guess \(\widehat{\boldsymbol{\mathcal{X}}}\) defined by the SS-SVD will be exactly the \(\ell\)-term approximation under the \(\star_{M}\) product. Importantly, they showed for any choice of orthogonal/unitary \(\mathbf{M}\) that, if \(\boldsymbol{\mathcal{X}}_{\ell}\) is the truncated t-SVDM approximation under \(\star_{M}\), then \(\|\boldsymbol{\mathcal{X}}-\boldsymbol{\mathcal{X}}_{\ell}\|_{F}\leq\|\mathbf{X}-\mathbf{X}_{\ell}\|_ {F}\), where \(\mathbf{X}\) here is the unstacked \(\boldsymbol{\mathcal{X}}\) as an \(mn\times p\) matrix and \(\mathbf{X}_{\ell}\) denotes the \(\ell\)-term truncated matrix SVD approximation. It was shown that strict inequality can be achieved. 
Therefore, \[\|\boldsymbol{\mathcal{X}}-\mathtt{bmp}(\boldsymbol{\mathcal{A}}^{(k)},\boldsymbol{\mathcal{B}}^{(k)}, \boldsymbol{\mathcal{C}}^{(k)})\|_{F}<\|\boldsymbol{\mathcal{X}}-\boldsymbol{\mathcal{X}}_{\ell}\|_{F}\leq\| \mathbf{X}-\mathbf{X}_{\ell}\|_{F},\] where the superscripts indicate the ALS iteration count of the \(\ell\)-term BMD approximation and \(\widehat{\boldsymbol{\mathcal{X}}}=\boldsymbol{\mathcal{X}}_{\ell}\) is the \(\ell\)-term t-SVDM approximation with \(\mathbf{M}=\mathbf{I}\).

Figure 7.6: Comparison of relative error for different compression ratios of the two real videos.

## 9 Conclusions and future work We have demonstrated that the third-order tensor BM-decomposition can be used for video background/foreground separation and video compression. Using our generative spatiotemporal video model, we have shown that videos with a stationary background are naturally low BM-rank tensors. As a result, significant image quality improvements are gained for highly compressed foreground videos compared to the existing methods of tensor SS-SVD and DMD. Moreover, we have shown that the alternating least-squares algorithm for computing the BMD is closely connected to the block nonlinear Gauss-Seidel (GS) method, from which the convergence analysis of the algorithm follows naturally. We also gave new theoretical insight into the BM-rank bounds, non-uniqueness, and relationships to other tensor decompositions. In the future, we will investigate the use of constraints for a better-posed problem and the feasibility of extending the computation to higher dimensions, and we will investigate the utility of the BMD applied to other types of spatiotemporal data.
2308.15384
Hedging Forecast Combinations With an Application to the Random Forest
This paper proposes a generic, high-level methodology for generating forecast combinations that would deliver the optimal linearly combined forecast in terms of the mean-squared forecast error if one had access to two population quantities: the mean vector and the covariance matrix of the vector of individual forecast errors. We point out that this problem is identical to a mean-variance portfolio construction problem, in which portfolio weights correspond to forecast combination weights. We allow negative forecast weights and interpret such weights as hedging over and under estimation risks across estimators. This interpretation follows directly as an implication of the portfolio analogy. We demonstrate our method's improved out-of-sample performance relative to standard methods in combining tree forecasts to form weighted random forests in 14 data sets.
Elliot Beck, Damian Kozbur, Michael Wolf
2023-08-29T15:24:52Z
http://arxiv.org/abs/2308.15384v2
# Hedging Forecast Combinations ###### Abstract This paper proposes a generic, high-level methodology for generating forecast combinations that would deliver the optimal linearly combined forecast in terms of the mean-squared forecast error if one had access to two population quantities: the mean vector and the covariance matrix of the vector of individual forecast errors. We point out that this problem is related to a mean-variance portfolio construction problem, in which portfolio weights correspond to forecast combination weights. We allow negative forecast weights and interpret such weights as hedging over- and under-estimation risks across estimators. This interpretation follows directly as an implication of the portfolio analogy. We demonstrate our method's improved out-of-sample performance relative to standard methods in combining tree forecasts to form weighted random forests in 14 data sets. KEY WORDS: Forecast combinations, nonlinear shrinkage, random forest. JEL classification codes: C21, C53. ## 1 Introduction We visit the well-known and well-studied problem of forecast combinations with a new angle. In this problem, one combines individual forecasts of a univariate response variable using a vector-valued set of regressors (or attributes) as input. The individual forecasts are obtained by a given number (or ensemble) of forecasting methods, and the combination is generally taken to be a linear combination with weights summing up to one. Based on years of hands-on experience, the consensus in the literature is that simple averaging (or equal weighting) of the individual forecasts is hard to beat in practice. This paper proposes a generic, high-level methodology that would deliver the optimal linearly combined forecast in terms of the mean-squared forecast error if one had access to two population quantities: the mean vector and the covariance matrix of the vector of individual forecast errors. 
We point out that this problem is related to the finance problem of portfolio selection, in which portfolio weights correspond to forecast-combination weights. In practice, the quantities needed above are unknown and must be estimated based on a set of available (training) data. One of our contributions is to suggest the use of nonlinear shrinkage in order to estimate the covariance matrix, as opposed to the standard (or canonical) choice in the related literature, the sample covariance matrix. Another contribution is that we allow for negative weights in the linear combination of individual forecasts, whereas the standard in the literature is to enforce the weights to be non-negative. In order to protect against weights that are unduly large in absolute value, we borrow an idea from the finance literature and enforce a "gross-exposure constraint" on the weight vector, that is, an upper bound on its \(L_{1}\) norm (given by the sum of the absolute values of the entries). In this way, we arrive at what we call "hedged forecast combinations". As an application, we consider the random forest, which is one of the most popular and widely used supervised machine learning methods and can be used for two main purposes: regression and classification. For the purpose of regression, the random forest is a special case of an equal-weighted forecast combination where the individual forecasting methods are regression trees. We demonstrate empirically on a collection of 14 benchmark data sets that our methodology applied to the random forest, called the "hedged random forest", improves the forecasting performance of the standard random forest, especially for smaller training sets. In the remainder of this paper, Section 2 presents the general methodology, Section 3 provides an application to the random forest, and the appendix contains various robustness checks. 
## 2 Methodology ### General Description The goal is to forecast (or predict)1 a random variable \(y\in\mathbb{R}\) based on a set of variables (or attributes) \(x\in\mathbb{R}^{d}\). Denote a generic forecast by \(\hat{f}\). Then its mean-squared error (MSE) is given by Footnote 1: In this paper, the terms “forecast” and “prediction” are used interchangeably. Arguably, some people associate with “forecast” a time series setting but we do not. \[\operatorname{MSE}(\hat{f})\coloneqq\mathbb{E}\big{(}y-\hat{f}(x)\big{)}^{2}\.\] Here and below the moments are, of course, obtained under the joint distribution governing the random vector \(v\coloneqq(y,x^{\prime})^{\prime}\in\mathbb{R}^{1+d}\) and assumed to exist. Letting \[\operatorname{Bias}(\hat{f})\coloneqq\mathbb{E}\big{(}y-\hat{f}(x)\big{)} \quad\text{and}\quad\operatorname{Var}(\hat{f})\coloneqq\mathbb{V}ar\big{(}y- \hat{f}(x)\big{)}=\mathbb{E}\big{(}y-\hat{f}(x)\big{)}^{2}-\big{[} \mathbb{E}\big{(}y-\hat{f}(x)\big{)}\big{]}^{2}\,\] there exists the well-known decomposition \[\operatorname{MSE}(\hat{f})=\operatorname{Bias}^{2}(\hat{f})+\operatorname{ Var}(\hat{f}). \tag{2.1}\] The oracle that minimizes the MSE is given by the conditional expectation \(\hat{f}_{\text{or}}(x)\coloneqq\mathbb{E}(y|x)\) but is not available in practice; for example, see Hayashi (2000, Section 2.9). This paper considers combinations of a given set of \(p\) forecasting methods (or forecasting models), denoted by \(\{\mathcal{M}_{j}\}_{j=1}^{p}\). The number of methods, \(p\), is assumed to be exogenous and fixed. Although we do not make this explicit in the notation, methods may be data-dependent in the sense that, for example, certain parameters are fitted based on observed data (such as regression parameters). There exists an extensive literature on forecast combinations; for example, see Elliott and Timmermann (2016, Chapter 14), Wang et al. (2022), and the references therein. 
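The decomposition (2.1) can be checked numerically on simulated errors, since the mean of squared errors equals the squared mean plus the variance; a toy example with a deliberately biased constant forecast (our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
y = 2.0 + rng.standard_normal(10_000)   # response with mean 2
f_hat = 1.5                             # constant forecast, so true bias is 0.5
err = y - f_hat
mse = np.mean(err ** 2)
bias_sq = np.mean(err) ** 2
var = np.var(err)                       # population-style variance of the errors
# mse == bias_sq + var up to floating-point rounding, mirroring (2.1)
```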
The consensus seems to be that simple averaging (or equal weighting), given by \[\hat{f}_{\text{AV}}(x)\coloneqq\frac{1}{p}\sum_{j=1}^{p}\mathcal{M}_{j}(x)\,\] is hard to beat by more general linear combinations of the kind \[\hat{f}_{w}(x)\coloneqq\sum_{j=1}^{p}w_{j}\mathcal{M}_{j}(x)\quad\text{with} \quad w\coloneqq(w_{1},\dots,w_{p})^{\prime}\quad\text{and}\quad\sum_{j=1}^{p} w_{j}=1. \tag{2.2}\] Nevertheless, our aim is to find a method for selecting a set of weights \(w\) that does improve the (out-of-sample) MSE of simple averaging, at least 'on balance'. Denote by \(e_{j}\coloneqq y-\mathcal{M}_{j}(x)\) the forecast error made by model \(\mathcal{M}_{j}\) and collect these errors into the vector \(e\coloneqq(e_{1},\ldots,e_{p})^{\prime}\) with expectation (vector) and covariance matrix \[\mu\coloneqq\mathbb{E}(e)\quad\text{and}\quad\Sigma\coloneqq\mathbb{V}ar(e)\.\] The MSE of the forecast (2.2) is then given by \[\text{MSE}(\hat{f}_{w})=(w^{\prime}\mu)^{2}+w^{\prime}\Sigma w\.\] Therefore, the optimal (in terms of the MSE) forecast in the class (2.2) is the solution of the following optimization problem: \[\min_{w}\,(w^{\prime}\mu)^{2}+w^{\prime}\Sigma w \tag{2.3}\] \[\text{s.t.}\quad w^{\prime}\mathbb{1}=1\, \tag{2.4}\] where \(\mathbb{1}\) denotes a conformable vector of ones. Problem (2.3)-(2.4) is a convex optimization problem and can, in principle, be solved quickly with readily available software, even for large dimensions \(p\). The problem in practice is that the inputs \(\mu\) and \(\Sigma\) are unknown. A feasible solution is to replace them with sample-based estimates \(\hat{\mu}\) and \(\hat{\Sigma}\), which is an application of the general "plug-in method". 
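Problem (2.3)-(2.4) even admits a closed-form solution, since the objective can be written as \(w^{\prime}(\mu\mu^{\prime}+\Sigma)w\); the same formula with \(\hat{\mu}\) and \(\hat{\Sigma}\) plugged in gives a feasible version. A minimal sketch (function name ours, not the paper's code):

```python
import numpy as np

def optimal_weights(mu, sigma):
    # Solves min_w (w'mu)^2 + w'Sigma w  s.t.  w'1 = 1, i.e.
    # min_w w'(mu mu' + Sigma)w, whose solution is proportional to
    # (mu mu' + Sigma)^{-1} 1, rescaled to sum to one (assuming the
    # matrix is positive definite).
    A = np.outer(mu, mu) + sigma
    x = np.linalg.solve(A, np.ones(len(mu)))
    return x / x.sum()
```

For unbiased, independent forecast errors, the formula downweights the noisier methods; with equal variances it recovers simple averaging.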
Being agnostic, for the time being, about the nature of the estimators \(\hat{\mu}\) and \(\hat{\Sigma}\), we then solve the feasible optimization problem \[\min_{w}\,(w^{\prime}\hat{\mu})^{2}+w^{\prime}\hat{\Sigma}w \tag{2.5}\] \[\text{s.t.}\quad w^{\prime}\mathbb{1}=1\quad\text{and}\] (2.6) \[||w||_{1}\leq\kappa\, \tag{2.7}\] where \(||w||_{1}\coloneqq\sum_{j=1}^{p }|w_{j}|\) denotes the \(L_{1}\) norm of \(w\) and \(\kappa\in[1,\infty]\) is a constant chosen by the user. Assuming implicitly that the estimator \(\hat{\Sigma}\) is symmetric and positive semi-definite, the optimization problem (2.5)-(2.7) is still of convex nature and can be solved easily and quickly in practice, even for large dimensions \(p\). We shall denote the solution to this optimization problem by \(\hat{w}\). The addition of the constraint (2.7) is motivated by the related problem of _portfolio selection_ in finance, in which context the constraint is called a "gross-exposure constraint". Adding this type of constraint to the infeasible problem (2.3)-(2.4) clearly would result in a (weakly) worse solution for any value \(\kappa\in[1,\infty)\). But in the feasible problem, which must use estimated instead of true inputs, the constraint typically helps. The intuition here is that replacing \(\mu\) and \(\Sigma\) with respective estimates \(\hat{\mu}\) and \(\hat{\Sigma}\) can lead to unstable and underdiversified solutions that look good in sample (or in the training set) but perform badly out of sample, especially when the number of models, \(p\), is not (exceedingly) small relative to the sample size relevant to the estimation of \(\mu\) and \(\Sigma\); for example, see Jobson and Korkie (1980), Michaud (1989), and Jagannathan and Ma (2003). In the extreme case \(\kappa=1\), the weights are forced to be non-negative, that is, \(w_{j}\geq 0\), which is called a "no-short-sales constraint" in finance. 
Imposing this constraint is standard in the forecast-combination literature, but it might well lead to sub-optimal performance because of not giving enough flexibility to the solution of the problem (2.5)-(2.7). At the other end of the spectrum, the choice \(\kappa=\infty\) corresponds to removing the constraint (2.7), which may also lead to sub-optimal performance for the reasons mentioned above. Staying away from either extreme, there is ample evidence in the finance literature that choosing \(\kappa\in[1.5,2.5]\) typically results in improved forecasting performance, and that the exact choice in this interval is not overly critical; for example, see DeMiguel et al. (2009). Because the constraint (2.7) protects the user against extreme "positions", that is, against weights \(\hat{w}_{j}\) that are unduly large in absolute value, we call our approach "hedging forecast combinations".2 Footnote 2: For example, according to Merriam-Webster (online version) the verb “to hedge against” means “to protect oneself from (something)”. ### Theory The solution to the convex optimization problem (2.5)-(2.7) is continuous in the inputs \(\hat{\mu}\) and \(\hat{\Sigma}\). Therefore, with the choice \(\kappa:=\infty\), its solution, denoted by \(\hat{w}\), would lead to an asymptotically optimal forecast combination \(\hat{f}_{\hat{w}}\) based on consistent estimators \(\hat{\mu}\) and \(\hat{\Sigma}\). Stating this fact in a theorem is possible, but as this is a routine matter we find it outside the scope of the basic research content of this paper. First, this fact has been recognized before. 
Furthermore, in practical applications, the relevant property is the finite-sample performance of the forecast \(\hat{f}_{\hat{w}}\) and, so far, the evidence based on simulation studies and empirical applications to real-life data sets indicates that such forecast combinations, on balance, do not outperform \(\hat{f}_{\text{AV}}\), that is, simple averaging; again, see Elliott and Timmermann (2016, Chapter 14), Wang et al. (2022), and the references therein. Therefore, our goal is limited to finding a forecast combination \(\hat{f}_{\hat{w}}\) that, on balance, outperforms \(\hat{f}_{\text{AV}}\) in empirical applications to commonly used benchmark data sets.3 Footnote 3: On the other hand, we shall abstain from any simulation studies, since we could tilt the data-generating process arbitrarily in our favor. Our preceding high-level description is agnostic about (i) the nature of the forecasting methods \(\{\mathcal{M}_{j}\}_{j=1}^{p}\) and (ii) the estimation of the mean (vector) \(\mu\) and the covariance matrix \(\Sigma\) of the corresponding vector of forecast errors \(e\in\mathbb{R}^{p}\). This estimation is crucial to the performance of the proposed forecast-combination method. We assume the existence of a data set \(\{v_{i}\}_{i=1}^{n}\) with \(v_{i}:=(y_{i},x_{i}^{\prime})^{\prime}\) and consider two cases. First, consider the case when \(\{v_{i}\}_{i=1}^{n}\) is an independent and identically distributed (i.i.d.) sample, where the distribution of \(v_{i}\) is equal to the distribution of \(v\). In this case, there exists a well-established literature on how to generate pseudo-out-of-sample forecast errors, with the most popular technique being cross-validation; for example, see Efron and Hastie (2022, Chapter 12) and Hastie et al. (2017, Chapter 7). 
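Given such pseudo-out-of-sample errors (for example, the held-out errors from cross-validation) stacked into an \(n\times p\) matrix, sample-counterpart estimates of \(\mu\) and \(\Sigma\) are immediate; the layout and function name below are our own, and shrinkage estimators could be substituted:

```python
import numpy as np

def plugin_inputs(errors):
    # errors: (n x p) matrix of forecast errors, where row i holds the
    # p methods' errors on observation i. Returns sample estimates of
    # the mean vector mu and the covariance matrix Sigma.
    mu_hat = errors.mean(axis=0)
    sigma_hat = np.cov(errors, rowvar=False)  # p x p sample covariance
    return mu_hat, sigma_hat
```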
Another option is to use in-sample errors, or residuals; this option has a bad reputation because in-sample errors tend to be systematically smaller (in magnitude) compared to (actual) out-of-sample errors due to the well-known phenomenon of "overfitting". However, for our purposes this may not be a serious problem; see Remark 2.1 below. Whether one computes pseudo-out-of-sample or in-sample errors, the results are collected in a matrix \(R\) based on which one estimates \(\mu\) and \(\Sigma\). To this end, one can use sample counterparts (that is, the sample mean and the sample covariance matrix), shrinkage methods, penalized estimation schemes, etc. Having said this, we shall restrict attention to shrinkage methods as an alternative to sample counterparts. For shrinkage estimation of a mean vector, see Hansen (2016), Bodnar et al. (2019), and the references therein; for shrinkage estimation of a covariance matrix, see Ledoit and Wolf (2022) and the references therein. Note that other data settings are also possible, for instance when \(\{v_{1},\ldots,v_{n},v\}\) is a stationary time series. Also in this case the recommendation is to base the estimation of \(\mu\) and \(\Sigma\) on a collection of pseudo-out-of-sample or in-sample forecast errors \(R\). How to generate those is less well established compared to the i.i.d. case, but proposals do exist; for example, see Bergmeir and Benitez (2012) and Bergmeir et al. (2018). If one prefers shrinkage estimation over sample counterparts, ideally one should use methods designed for time-series data; for example, see Sancetta (2008) and Engle et al. (2019) for shrinkage estimation of the covariance matrix. Having said this, the case of stationary time series is, arguably, more difficult in practice. Compared to an i.i.d.
sample, it would generally take a larger sample size to have a similar chance of outperforming \(\hat{f}_{\text{AV}}\), that is, simple averaging; but macroeconomic time series, especially, only have relatively small sample sizes. Furthermore, many real-life time series (even after detrending and deseasonalizing) may still not be stationary because of structural breaks, for example.

**Remark 2.1** (Scale invariance).: The solution \(\hat{w}\) to the optimization problem (2.5)-(2.7) remains unchanged if \(\hat{\mu}\) and \(\hat{\Sigma}\) are replaced by \(c\hat{\mu}\) and \(c^{2}\hat{\Sigma}\), respectively, for any constant \(c\in(0,\infty)\). Therefore, it is not important that the estimators \(\hat{\mu}\) and \(\hat{\Sigma}\) get the 'levels' of the true quantities \(\mu\) and \(\Sigma\) right. In particular, the use of in-sample (or training-set) errors in the construction of \(\hat{\mu}\) and \(\hat{\Sigma}\) can still lead to favorable performance of the forecast combination \(\hat{f}_{\hat{w}}\) even if such errors are systematically smaller (in magnitude) compared to out-of-sample errors because of in-sample (or training-set) overfitting. Instead of approximating the actual entries of \(\mu\) and \(\Sigma\), the corresponding estimators \(\hat{\mu}\) and \(\hat{\Sigma}\) only need to approximate the entries relative to each other in order for \(\hat{f}_{\hat{w}}\) to outperform \(\hat{f}_{\text{AV}}\). That may still not be a trivial task, but it is certainly an easier one.

## Empirical Application: The Random Forest

### Background

The random forest is one of the most popular tree-based methods in supervised machine learning; at the time of this writing the original paper Breiman (2001) already has more than 110,000 Google Scholar citations. If the variable \(y\) is categorical, the random forest is used for _classification_; if the variable \(y\) is numerical, the random forest is used for _regression_. In this application, we focus on regression only.
A review of the mechanics of the random forest is as follows. First, one grows an ensemble of _decorrelated_ trees; these correspond to the forecasting methods \(\{\mathcal{M}_{j}\}_{j=1}^{p}\) in our context. Second, one uses the simple average of the trees or, alternatively put, the equal-weighted ensemble. This means the individual forecasts of the trees are simply averaged to arrive at a final, combined forecast; this corresponds to the forecast combination \(\hat{f}_{\text{AV}}\) in our context. For a textbook treatment of the random forest the reader is referred to, for example, Hastie et al. (2017, Chapter 15). In our analysis below we shall study whether it is possible to find a forecast combination \(\hat{f}_{\hat{w}}\) that can, on balance, outperform the (standard) random forest \(\hat{f}_{\text{AV}}\). To this end we will use a number of benchmark data sets (some artificial, some real) from the literature. Importantly, we shall restrict focus to cross-sectional (or i.i.d.) data sets.

### Data

To assess the performance of our proposed method relative to the standard (equal-weighted) random forest, we use 14 data sets sourced from the Penn Machine Learning Benchmarks (PMLB) database; see Romano et al. (2021). PMLB serves as a comprehensive repository of benchmark data sets, designed specifically for the evaluation and comparison of supervised machine learning methods. Each data set is available on the website www.openml.org, which also provides access to metadata and descriptions. In particular, we select all data sets whose number of observations ranges from 6,000 to 100,000.4 The lower bound of 6,000 was chosen to ensure a test set of size at least 1,000 in all scenarios, as will become apparent below. Note that our selection contains both artificial and real-world data sets.
For example, the data set 201_pol is a real-world telecommunication data set first used by Weiss and Indurkhya (1995); on the other hand, 564_fried is an artificial data set introduced by Friedman (1991) and also described in Breiman (1996). Table 1 lists the data sets used in this analysis together with the corresponding numbers of observations and numbers of attributes (or \(x\)-variables).

### Implementation

Throughout, we consider a fixed range of training-set (sample) sizes \(200\leq n\leq 5,000\). For a given data set and a given training-set size, we then draw \(n\) observations at random (without replacement) as the training set and the remaining observations constitute the test set; as a consequence, when \(n=5,000\) we need at least 6,000 observations in the original data set to obtain a test set of size 1,000. We next train a random forest on the training set using the "ranger" library implemented in the programming language R, where the various hyperparameters are set to the defaults recommended by Wright and Ziegler (2017); in particular, the number of trees is set to \(p=500\) as per default. After training the random forest, we extract the forecasts of each tree on the entire training set and thus obtain a residual (or in-sample error) matrix of size \(n\times p\). We do not, for a given tree, extract predictions on the corresponding out-of-bag observations only (that is, on the subset of the training set not used in growing the particular tree) because in this way we would not obtain a full \(n\times p\) matrix of residuals but instead a matrix that would contain a large number of missing values.5 But as explained below, we need a full matrix \(R\) for the estimation of \(\Sigma\), if not necessarily for the estimation of \(\mu\).

Footnote 5: On average, there would be about \(1-1/e\approx 63.2\%\) missing values.

The various inputs to the feasible optimization problem (2.5)-(2.7) are chosen as follows.
First, for the estimation of \(\mu\), we use the (column-wise) sample mean of \(R\); we also experimented with some shrinkage estimators instead, but the results remained virtually unchanged. Second, for the estimation of \(\Sigma\), we apply nonlinear shrinkage to \(R\); in particular, we use the quadratic inverse shrinkage (QIS) estimator of Ledoit and Wolf (2022b).6 Note here that nonlinear shrinkage requires a full matrix \(R\) as an input. Third, for the gross-exposure constraint \(\kappa\) in (2.7), we use \(\kappa\coloneqq 2\). Appendix A runs robustness checks that consider (i) the sample covariance matrix based on \(R\), rather than nonlinear shrinkage, as the estimator \(\hat{\Sigma}\) and (ii) alternative values of the gross-exposure constraint \(\kappa\). Thus all inputs to the optimization problem (2.5)-(2.7) are now in place. The solution \(\hat{w}\) assigns weight \(\hat{w}_{j}\) to tree \(\mathcal{M}_{j}\) rather than weight \(1/p\) as for the standard random forest (RF). We call this weighted random forest the "hedged random forest" (HRF). For each method, RF and HRF, fitted on the training set we obtain an MSE on the test set, denoted by \(\text{MSE}_{\text{RF}}\) and \(\text{MSE}_{\text{HRF}}\), respectively. In order to eliminate, or at least mitigate, randomness due to the random choice of \(n\) observations as the training set, we then repeat this process (independently) \(B\) times.

\begin{table} \begin{tabular}{l r r} \hline Name & \# Observations & \# Attributes \\ \hline 197\_cpu\_act & 8,192 & 21 \\ 201\_pol & 15,000 & 48 \\ 215\_2dplanes & 40,768 & 10 \\ 218\_house\_8L & 22,874 & 8 \\ 225\_puma8NH & 8,192 & 8 \\ 227\_cpu\_small & 8,192 & 12 \\ 344\_mv & 40,768 & 10 \\ 537\_houses & 20,640 & 8 \\ 562\_cpu\_small & 8,192 & 12 \\ 564\_fried & 40,768 & 10 \\ 573\_cpu\_act & 8,192 & 21 \\ 574\_house\_16H & 22,784 & 16 \\ 1193\_BNG\_lowbwt & 31,104 & 9 \\ 1199\_BNG\_echoMonths & 17,496 & 9 \\ \hline \end{tabular} \end{table}
Table 1: Data sets used.
As the final performance measure, we report the following root-mean-squared-error (RMSE) ratio:

\[\text{RMSE}_{\text{HRF}/\text{RF}}:=\frac{\sqrt{\frac{1}{B}\sum_{b=1}^{B}\text{MSE}_{\text{HRF},b}}}{\sqrt{\frac{1}{B}\sum_{b=1}^{B}\text{MSE}_{\text{RF},b}}}. \tag{3.1}\]

This means that, for each method, we average the MSE values over the \(B\) repetitions and then take the root to arrive at individual RMSE values. Finally, we take the ratio of the two RMSE values. Values of this ratio greater than one speak in favor of RF, whereas values smaller than one speak in favor of HRF. Our results below are all based on \(B=100\) repetitions; larger values of \(B\) leave the results virtually unchanged. In this way, for any training-set size \(n\), we get 14 RMSE ratios (3.1), one for each data set listed in Table 1. We convert the 14 ratios into a boxplot and then line up the boxplots for \(n\in\{200,400,600,800,1000,2000,3000,4000,5000\}\) in Figure 1.

Figure 1: Boxplots of RMSE ratios (3.1). For each training-set size \(n\), the boxplot is based on the 14 ratios corresponding to the data sets listed in Table 1.

The results can be summarized as follows:

* On balance, HRF clearly outperforms RF.
* The gains are most pronounced for small training-set sizes \(n\) but 'live on' up to the largest size considered, \(n=5000\).
* Out of the 14 data sets, there is one on which HRF performs worse than RF, but the loss is never more than 6% and always below 5% for \(n\geq 400\).
* On the other hand, there are two data sets for which HRF reduces the RMSE compared to RF by more than 25% for \(n\geq 400\); for one of these data sets, HRF actually reduces the RMSE by more than 40% for all \(n\).
* Summing up, based on the 14 data sets considered, there is little to lose but potentially much to gain by upgrading from RF to HRF.
* HRF outperforms RF particularly convincingly for smaller \(n\), and thus for larger \(d\) relative to \(n\).
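In code, the performance measure (3.1) amounts to nothing more than:

```python
import numpy as np

def rmse_ratio(mse_hrf, mse_rf):
    """RMSE ratio (3.1): root of the average MSE of HRF over that of RF.

    Each argument is the length-B vector of test-set MSE values across
    the B independent train/test splits.
    """
    return np.sqrt(np.mean(mse_hrf)) / np.sqrt(np.mean(mse_rf))
```

For example, `rmse_ratio([1.0, 1.0], [4.0, 4.0])` returns 0.5, that is, HRF halving the RMSE of RF.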
The intuition for this finding is that both RF and HRF are 'consistent' forecasting methods for sequences of data with \(d\) independent of \(n\) (when data are i.i.d.), and hence the difference between the two forecasts tends to decrease for larger \(n\).

**Remark 3.1** (Cross-validation).: At this point, some readers may wonder why we do not use cross-validation to build up an \(n\times p\) matrix of pseudo-out-of-sample errors \(R\). The reason lies in the special nature of the random forest, since the individual forecasting methods, namely the trees, depend on the underlying training set. If we used ten-fold cross-validation, say, we would obtain ten different tree ensembles, none of which would coincide with the ensemble used at the end for forecasting, namely the ensemble based on the entire training set. This problem would not arise with other forecasting methods, such as linear regression models; of course, estimated parameters in a given regression model would change as a function of the underlying training set, but not the 'characteristics' of that model (such as the number and the constitution of the regressors).

**Remark 3.2** (Time series data).: We have previously demonstrated that the hedged random forest, on balance, provides superior forecasting performance compared to the standard random forest (based on equally weighted trees). To be fair, all this evidence has been for cross-sectional (or i.i.d.) data sets. We also experimented with time series data sets but failed to outperform, on balance, the standard random forest. There could be several reasons for this finding. On the one hand, with time series data it is more difficult to estimate \(\mu\) and \(\Sigma\).
On the other hand, many real-life time series may not be stationary but suffer from time-varying parameters or structural breaks; in such cases, the estimates of \(\mu\) and \(\Sigma\) based on observations in a past window (including today) could simply be noticeably off-target for what is actually coming in the future, and not weighting the trees would be more robust. Having said this, it may well be that for other applications (different from the random forest) our generic high-level methodology can also provide gains over simple averaging for time series data. This topic is left to future research.

### Related Literature

Weighted versions of the random forest, that is, versions that do not use equal weighting, have, of course, been considered before in the literature. One strand of this literature does not apply to our setting, since it considers the random forest in the context of classification rather than regression; for example, see Kouloumpris and Vlahavas (2022). In this case, the goal is to minimize the classification error rather than the MSE which, in principle, does not involve a covariance matrix and leads to optimization problems of a different nature compared to those considered in Section 2.1. In the context of regression, the various proposals often are problem-specific rather than generic, that is, they are designed specifically for the random forest (only) rather than using a high-level methodology such as the one of Section 2.1 applied to the random forest as a special case (but not exploiting specific features of it). Such proposals can be rather complex (to understand) and difficult to implement (in terms of coding). As an example, the reader is referred to Chen et al. (2023). It would have been interesting to compare with their proposal, but both their Algorithms 1 and 2 are cumbersome to implement and the authors do not provide (so far) any corresponding code.
Furthermore, their proposal requires the choice of tuning parameters which, in addition to a training set, also requires a validation set, resulting in a loss of information in practice; see their Section 4. On the other hand, we can compare to an earlier weighted random forest which is quite easy to implement, namely the proposal of Winham et al. (2013). For each tree \(\mathcal{M}_{j}\), they only consider the corresponding out-of-bag (OOB) errors in the training set to compute \[t\mathrm{PE}_{j}\coloneqq\frac{1}{|\mathrm{OOB}_{j}|}\sum_{i\in\mathrm{OOB}_{j}}\bigl{|}y_{i}-\mathcal{M}_{j}(x_{i})\bigr{|}\,\] where \(\mathrm{OOB}_{j}\subset\{1,\ldots,n\}\) denotes the OOB subset of the training set corresponding to tree \(\mathcal{M}_{j}\). Next, they compute 'relative' weights according to one of the three following formulas: \[\hat{w}_{j,\mathrm{rel}} \coloneqq 1-t\mathrm{PE}_{j} \tag{3.2}\] \[\hat{w}_{j,\mathrm{rel}} \coloneqq \exp\left(\frac{1}{t\mathrm{PE}_{j}}\right)\] (3.3) \[\hat{w}_{j,\mathrm{rel}} \coloneqq \left(\frac{1}{t\mathrm{PE}_{j}}\right)^{\lambda}\quad\text{for some $\lambda>0$} \tag{3.4}\] Finally, the weights \(\{\hat{w}_{j}\}\) are forced to sum up to one by defining \[\hat{w}_{j}\coloneqq\frac{\hat{w}_{j,\mathrm{rel}}}{\sum_{l=1}^{p}\hat{w}_{l,\mathrm{rel}}}\.\] Note that, by construction, these weights \(\hat{w}_{j}\) are all strictly positive. Following the advice of Winham et al. (2013), we tried as the leading candidates version (3.3) and version (3.4) with \(\lambda\coloneqq 5\), of which the former performed somewhat better. Calling the resulting method WRF (for Winham-et-al Random Forest), we can construct as an analog to (3.1) the ratio \[\mathrm{RMSE}_{\mathrm{HRF/WRF}}\coloneqq\frac{\sqrt{\frac{1}{B}\sum_{b=1}^{B}\mathrm{MSE}_{\mathrm{HRF,b}}}}{\sqrt{\frac{1}{B}\sum_{b=1}^{B}\mathrm{MSE}_{\mathrm{WRF,b}}}}. \tag{3.5}\] As an analog to Figure 1 we then obtain Figure 2, which demonstrates that HRF clearly outperforms WRF.
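The Winham et al. weighting scheme above is straightforward to code up. The sketch below implements the exponential version (3.3) together with the normalization step; the OOB error vector `tpe` is assumed to be precomputed:

```python
import numpy as np

def winham_weights(tpe):
    """Weights (3.3): w_rel_j = exp(1 / tPE_j), then normalize to sum to one.

    tpe: length-p vector of per-tree out-of-bag mean absolute errors.
    Trees with smaller OOB error receive (strictly positive) larger weight.
    """
    w_rel = np.exp(1.0 / np.asarray(tpe, dtype=float))
    return w_rel / w_rel.sum()
```

By construction the weights are strictly positive and sum to one, so in the notation of (2.7) they satisfy \(\sum_{j}|\hat{w}_{j}|=1\); unlike HRF, WRF therefore never 'shorts' a tree.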
**Remark 3.3** (Importance of negative weights).: Notably, all previous proposals for weighting the random forest that we are aware of, not only in the context of regression but also in the context of classification, impose the "no-short-sales constraint" \(\kappa=1\), that is, \(w_{j}\geq 0\ \forall j\). As shown in the robustness checks of Appendix A, allowing for negative weights generally improves performance, and for certain data sets by a pronounced margin. In the context of finance, a "no-short-sales constraint" can be motivated by legislation (for example, mutual funds are not allowed to short stocks) or by practical considerations (for example, shorting certain assets may not be possible or may be prohibitively expensive). On the other hand, "short-selling" individual forecast methods \(\mathcal{M}_{j}\) by assigning them a negative weight is always possible and does not incur any monetary costs. We, therefore, hope that our paper will serve as motivation for the scientific community to allow for negative weights not only in the random forest but also in other forecast-combination applications.

Figure 2: Boxplots of RMSE ratios (3.5). For each training-set size \(n\), the boxplot is based on the 14 ratios corresponding to the data sets listed in Table 1.
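To see why \(\kappa=1\) is precisely the no-short-sales constraint, write the gross-exposure constraint (2.7) in its standard form \(\sum_{j}|w_{j}|\leq\kappa\) (an assumption on our part, as the display of (2.7) is not reproduced here) and combine it with the sum-to-one constraint:

\[\sum_{j=1}^{p}|w_{j}|\leq 1=\sum_{j=1}^{p}w_{j}\quad\Longrightarrow\quad\sum_{j=1}^{p}\bigl(|w_{j}|-w_{j}\bigr)\leq 0.\]

Since each term \(|w_{j}|-w_{j}\) is nonnegative, all terms must vanish, that is, \(w_{j}=|w_{j}|\geq 0\) for all \(j\). Conversely, any nonnegative weights summing to one satisfy \(\sum_{j}|w_{j}|=1\), so the two formulations are equivalent.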
2306.15628
Machine-learning based noise characterization and correction on neutral atoms NISQ devices
Neutral atoms devices represent a promising technology that uses optical tweezers to geometrically arrange atoms and modulated laser pulses to control the quantum states. A neutral atoms Noisy Intermediate Scale Quantum (NISQ) device is developed by Pasqal with rubidium atoms that will allow to work with up to 100 qubits. All NISQ devices are affected by noise that have an impact on the computations results. Therefore it is important to better understand and characterize the noise sources and possibly to correct them. Here, two approaches are proposed to characterize and correct noise parameters on neutral atoms NISQ devices. In particular the focus is on Pasqal devices and Machine Learning (ML) techniques are adopted to pursue those objectives. To characterize the noise parameters, several ML models are trained, using as input only the measurements of the final quantum state of the atoms, to predict laser intensity fluctuation and waist, temperature and false positive and negative measurement rate. Moreover, an analysis is provided with the scaling on the number of atoms in the system and on the number of measurements used as input. Also, we compare on real data the values predicted with ML with the a priori estimated parameters. Finally, a Reinforcement Learning (RL) framework is employed to design a pulse in order to correct the effect of the noise in the measurements. It is expected that the analysis performed in this work will be useful for a better understanding of the quantum dynamic in neutral atoms devices and for the widespread adoption of this class of NISQ devices.
Ettore Canonici, Stefano Martina, Riccardo Mengoni, Daniele Ottaviani, Filippo Caruso
2023-06-27T17:08:52Z
http://arxiv.org/abs/2306.15628v1
# Machine-learning based noise characterization and correction on neutral atoms NISQ devices

###### Abstract

Neutral atoms devices represent a promising technology that uses optical tweezers to geometrically arrange atoms and modulated laser pulses to control the quantum states. A neutral atoms Noisy Intermediate Scale Quantum (NISQ) device is developed by Pasqal with rubidium atoms that will allow to work with up to 100 qubits. All NISQ devices are affected by noise that has an impact on the computation results. Therefore it is important to better understand and characterize the noise sources and possibly to correct them. Here, two approaches are proposed to characterize and correct noise parameters on neutral atoms NISQ devices. In particular, the focus is on Pasqal devices, and Machine Learning (ML) techniques are adopted to pursue those objectives. To characterize the noise parameters, several ML models are trained, using as input only the measurements of the final quantum state of the atoms, to predict the laser intensity fluctuation and waist, the temperature and the false positive and negative measurement rates. Moreover, an analysis is provided of the scaling with the number of atoms in the system and with the number of measurements used as input. Also, we compare on real data the values predicted with ML with the a priori estimated parameters. Finally, a Reinforcement Learning (RL) framework is employed to design a pulse in order to correct the effect of the noise in the measurements. It is expected that the analysis performed in this work will be useful for a better understanding of the quantum dynamics in neutral atoms devices and for the widespread adoption of this class of NISQ devices.

## I Introduction

In the last few years we are witnessing a revolution in the field of quantum computing. The so-called Noisy Intermediate Scale Quantum (NISQ) devices [1] represent the state of the art in this field.
The intermediate scale of such devices refers to the fact that, at the best of our technologies, we are still capable of dealing with at most a few hundred qubits. Several error correction codes have been developed to deal with such noise [2; 3; 4], but they require the adoption of auxiliary qubits, further decreasing the resources available for the computation. _Pasqal_[5] has developed a NISQ device called _Fresnel_ based on a neutral atom quantum processor capable of using up to 100 qubits [6] and provides a _Python_ library called _Pulser_[7] that can be used to prepare a setting either to run it on the real machines or to simulate it on a built-in simulator. Machine Learning (ML) is a field in the context of Artificial Intelligence (AI) that deals with the study and realization of models that learn to make predictions after being trained with data [8; 9]. Artificial Neural Networks (ANNs) are ML methods organized in layers of artificial neurons that perform calculations with weighted summations of the inputs followed by non-linear activation functions. ML methods have already been developed in the context of quantum noise characterization [10; 11; 12; 13] and have already been adopted in the context of error estimation. In [14] the authors train a recurrent neural network to detect if certain errors happened in a quantum circuit and use the model to enhance a surface error correction code. Surface error correction codes allow a high error tolerance; however, to be implemented they need a high number of physical qubits [15]. By contrast, in our proposed approach for noise mitigation, no additional qubits are needed for error detection. In fact, our purpose is to learn how to modify the pulses in such a way as to minimize the effect of noise without implementing error correction codes. Moreover, we estimate the noise in devices with the analog interface and not with the digital one.
In fact, with neutral atoms devices it is possible to take advantage of analog and digital modes. With the former, laser pulses can be used to directly manipulate the Hamiltonian of the system: \[H=\frac{\hbar\Omega(t)}{2}\sum_{i}\sigma_{i}^{x}-\frac{\hbar\delta(t)}{2}\sum_{i}\sigma_{i}^{z}+\sum_{i<j}U_{ij}n_{i}n_{j}.\] With the digital mode, on the other hand, it is possible to evolve the state of the system through quantum gates, thus creating quantum circuits. In [16] the authors consider the noise to have the form of a Pauli channel and make the assumption that the error rate is modeled with a Gibbs random field (GRF). Those assumptions allow the authors to effectively learn the parameters of the GRF to characterize the noise of a real IBM NISQ device. As discussed below, in our work we use a different noise formalization; in fact, we rely on how the noise is implemented in the Pasqal simulator that we use to generate the data to train the deep learning model. RL is a ML methodology that requires the presence of a simulator of an environment where an agent operates [17]. The agent is usually implemented as a neural network that is trained to implement the policy that governs the actions of the agent. Initially, for each episode (the elementary phase of each RL algorithm that is repeated over time and consists of a series of actions of the agent and reactions of the environment), the agent and the environment are initialized in some initial state. Then, the agent perceives some information about the environment and, based on that, the policy gives a probability distribution over the possible next actions that the agent can perform to change the state of the environment or the state of the agent within the environment. The episode continues with the choice of the best action according to the policy, and with new steps until a predefined number of steps or some episode-ending condition is reached.
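As an illustration of the analog-mode Hamiltonian above, the following sketch builds its matrix for two atoms with constant \(\Omega\) and \(\delta\), setting \(\hbar=1\) and using the convention that index 0 is the ground state and index 1 the Rydberg state, so that \(n=|r\rangle\langle r|\). The function name and variables are ours, not Pulser's:

```python
import numpy as np

def rydberg_hamiltonian_2atoms(omega, delta, u12):
    """Matrix of H = (Omega/2)(sx1 + sx2) - (delta/2)(sz1 + sz2) + U12 n1 n2.

    hbar is set to 1; basis ordering is |gg>, |gr>, |rg>, |rr>.
    """
    I2 = np.eye(2)
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])  # |g> -> +1, |r> -> -1
    n = np.array([[0.0, 0.0], [0.0, 1.0]])    # Rydberg occupation
    H = (omega / 2) * (np.kron(sx, I2) + np.kron(I2, sx))
    H -= (delta / 2) * (np.kron(sz, I2) + np.kron(I2, sz))
    H += u12 * np.kron(n, n)
    return H
```

Only the doubly excited state \(|rr\rangle\) picks up the interaction term \(U_{12}\); this energy penalty is the blockade mechanism that generates entanglement between nearby atoms.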
RL has already been used in the context of state preparation and circuit optimization [18; 19; 20]. In the context of noise correction, RL has been adopted to correct the noise that degrades a state over time [21] or to optimize existing quantum correction codes [22]. In our work we instead focus on the task of correcting the effects of the noise of a defined quantum dynamics without modifying the base pulse.

## II Noise benchmarking protocol

A setting consists of the topological arrangement of atoms and the description of the laser pulses that interact with them. Then, the computation on Quantum Processing Units (QPUs) is structured in cycles of three phases: (i) the preparation of the register, (ii) the quantum processing and (iii) the register readout. In particular, on neutral atoms devices, the preparation of the register is obtained using arrays of optical tweezers [23]. Initially the register is initialized with atoms in random positions and afterwards the single atoms are moved to the desired positions. The quantum computation is performed analogically using laser pulses that interact with the register atoms and can excite them. The laser pulses are characterized by the values and shapes of the Rabi frequency \(\Omega(t)\) and detuning \(\delta(t)\). Finally, the register readout is performed by taking a fluorescence image to capture the energy levels of the atoms. In Pasqal NISQ devices, it is possible to prepare registers of maximum 100 atoms with a minimum distance of \(4\mu m\) between them, arranged in bidimensional structures in an area of maximum radius \(50\mu m\). NISQ devices, as the name suggests, are affected by several noise effects that limit their applicability and the operations that can be reliably executed on them.
The devices used in the realization of a quantum computer are not ideal, as they are affected by noise: for example, lasers are not exactly monochromatic, and atoms are cooled by lasers to very low, but still non-zero, temperatures. These imperfections have an impact by introducing errors during the preparation of the system, its evolution over time, and the measurement. The effect is that the measured occupation probabilities are different from those we would have obtained in an ideal environment. In general, there are different parameters that can be used to indicate different sources of noise in the device [24]. In the present work we will focus on five parameters that are considered predominant for their effects: the laser intensity fluctuation \(\sigma_{R}\) indicates the standard deviation of the fluctuation of the desired Rabi frequency of the laser pulse; the laser waist \(w\) is the diameter of the Gaussian laser beam; the temperature \(T\) is the (non-zero) temperature of the cooled atoms; the false positive measurement rate \(\varepsilon\) represents the probability of wrongly measuring as excited an atom that was in the ground state; the false negative measurement rate \(\varepsilon^{\prime}\) is the probability of measuring an excited atom in the ground state. Table 1 shows those sources of noise and their estimated values provided informally by Pasqal. The objective of our work is the implementation of ML models to: (i) provide a quantitative estimate of the noise; (ii) mitigate the effects of the noise. We decided to formulate a supervised regression task to quantitatively estimate the noise [16] and to use a Reinforcement Learning (RL) framework [17] to mitigate the noise effect. Regarding the noise characterization, our aim is to show that it is possible to estimate the noise parameters in the form of mean values and error intervals. As depicted in fig.
1, the workflow begins with the simulation of various executions, with different noise parameters, of a quantum dynamic where a global pulse irradiates all the \(n\) atoms of a register. Afterwards, the atom occupation probabilities, which we call \(\mathbf{\mathcal{P}}=\mathcal{P}_{1},\ldots,\mathcal{P}_{2^{n}}\), are collected and used to train ANN models to predict the noise parameters that were used to perturb the dynamics: _temperature_, laser _waist_, false positive measurement rate \(\varepsilon\), false negative measurement rate \(\varepsilon^{\prime}\) and intensity fluctuation \(\sigma_{R}\). At the end, the trained models are used for prediction on the real data, obtaining an estimation of the noise parameters. For the simulations used in the generation of the data and for the training of the models, we use our servers with Nvidia TITAN RTX and GeForce RTX 3090 GPUs. Moreover, we could also make use of the CINECA Marconi100 supercomputer. The rest of the paper is structured as in the following. First, in section III.1 we consider the simpler problem of characterizing only a single noise parameter, then in section III.2 we show the results of the characterization of all the aforementioned parameters. In section IV we illustrate the RL error correction protocol that we adopt.

\begin{table} \begin{tabular}{c|c|c} **Description** & **Parameter** & **Value** \\ \hline Laser intensity fluctuation & \(\sigma_{R}\) & \(3\%\) \\ Laser waist & \(w\) & \(68\mu m\) \\ Temperature & \(T\) & \(30\mu K\) \\ False positive measurement & \(\varepsilon\) & \(3\%\) \\ False negative measurement & \(\varepsilon^{\prime}\) & \(8\%\) \\ \hline \end{tabular} \end{table}
Table 1: Summary of the main noise parameters with their respective values. We considered the parameters that are expected to have a predominant effect.

## III Noise characterization

### Single parameter scenario

In this section we consider the estimation of a single noise parameter.
After preliminary analysis, we decided to focus on the noise effects that comes from the laser intensity fluctuations \(\sigma_{R}\). Before describing the used methods, let us introduce the notation. We will denote by \(s_{i}\) the system composed of \(i\) qubits. Globally, we consider systems with a number of qubits from 2 to 5 and in the case of 4-qubit systems we denote 6 different topologies with an extra alphanumeric index from \(a\) to \(f\). Specifically, \(s_{4a},s_{4b},\ldots,s_{4f}\). Globally, we collected the measurements of nine different runs on the real Pasqal NISQ devices (6 different topologies with 4 atoms and single topologies with 2,3 and 5 atoms) characterized by a pulse with constant Rabi frequency \(2\,\pi\,rad/\mu s\) of duration \(660\,ns\) and null detuning but with different number and positions of the atoms. In order to train the ML models to predict the values of \(\sigma_{R}\), we simulate the data for computation on the nine registers with different amount of simulated noise Figure 1: Scheme of the noise estimation pipeline. A global pulse is defined by the shapes of Rabi frequency \(\Omega\) and detuning \(\delta\) (a). A register is prepared with the positions of a set of \(n\) atoms (6 in the specific case) that are irradiated by the laser pulse (b). When the pulse ends, the excitation states of the atoms are measured and the process is repeated to gather statistics on the occupation probabilities \(\mathbf{\mathcal{P}}=\mathcal{P}_{1},\ldots,\mathcal{P}_{2^{n}}\) (c). The probabilities are used as input to an Artificial Neural Network (ANN) that predicts the noise parameters (d). The ANN is trained collecting a simulated dataset of probabilities labelled with the corresponding values of noise. The depicted setting is for the more general multiple parameters estimation. 
The difference for the single parameter estimation is that the neural network has only one output, for \(\sigma_{R}\), and the adopted pulses and atom registers are different. In detail, we preliminarily generate a sequence of \(10\,000\) \(\sigma_{R}\) values extracted from a uniform distribution \(\mathcal{U}(0,0.15)\). These values are used to add noise in an equal number of simulations, whose results are occupation probability vectors. Therefore, in the end, \(10\,000\) samples are obtained. This procedure is repeated for each of the 9 quantum systems we mentioned above. The occupation probabilities associated with the corresponding values of \(\sigma_{R}\) for the 9 systems are used to evaluate two different scalings: (i) in the quantum register size, comparing increasingly larger systems of 2, 3, 4 and 5 qubits, and (ii) in the number of measurements of multiple systems with 4 qubits, where the occupation probabilities of all the systems simulated with the same values of \(\sigma_{R}\) contribute to gathering information on the noise effects during the training of the ML models. In detail, we decided to use as input to the ML models the concatenation of the probabilities of the systems and, for two systems \(s_{A}\) and \(s_{B}\), we indicate the latter with the notation \(s_{A}\oplus s_{B}=\mathcal{P}_{1,A},\ldots,\mathcal{P}_{2^{n},A},\mathcal{P}_ {1,B},\ldots,\mathcal{P}_{2^{n},B}\). In both scalings, the procedure is always the same: 20 models are trained on each dataset through a 20-fold cross validation. From the 20 predicted parameter values, the average value and the standard deviation can be obtained to account for the variability of the models' predictions. Both analyses are performed with linear regression as the baseline model and with ANNs. Regarding ANNs, they are trained for 150 epochs with the Adam optimizer and with hyperparameter optimization.
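The concatenation \(s_{A}\oplus s_{B}\) and the 20-fold training loop can be sketched with plain numpy. This is an illustrative reconstruction, not the paper's implementation: the actual models are ANNs trained with Adam, while here a closed-form least-squares fit stands in, and all function names are hypothetical.

```python
import numpy as np

def concat_inputs(*prob_vectors):
    """s_A (+) s_B: concatenate the occupation-probability vectors of several registers."""
    return np.concatenate(prob_vectors, axis=-1)

def kfold_linear_models(X, y, k=20, seed=0):
    """Train k least-squares models, each on a different (k-1)/k split of the data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    models = []
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        A = np.column_stack([X[train], np.ones(len(train))])  # add a bias column
        w, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        models.append(w)
    return models

def predict_mean_std(models, x):
    """Average prediction and its spread across the k trained models."""
    a = np.append(x, 1.0)
    preds = np.array([a @ w for w in models])
    return preds.mean(), preds.std()
```

The mean and standard deviation returned by `predict_mean_std` play the role of the averaged prediction with its uncertainty described above.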
For a more in-depth discussion of the technical details related to model design and hyperparameter optimization, refer to section VI.1. In the following, the ML models are trained and validated on the simulated data, and subsequently they are also tested on the real measurements. Using the simulated validation data, it is possible to monitor how well the model generalizes to unseen measurements. In this regard, we report in fig. 2 (the scaling (i) in fig. 2(a) and (ii) in fig. 2(b)) the Mean Absolute Error (MAE), averaged over all the samples of the validation set, between the predicted values of \(\sigma_{R}\) and the ground truth, which we recall is the value of \(\sigma_{R}\) used to perform the simulation. Again, having 20 estimates (one for each model), we calculate the mean value and standard deviation of the MAE to provide more robust results with associated uncertainty. Regarding the estimation on the real data, we show in fig. 3 (fig. 3(a) for the scaling (i) and fig. 3(b) for the scaling (ii)) the mean values and standard deviations over the 20 models of the predicted values of \(\sigma_{R}\). In both fig. 2 and fig. 3, the results of the linear regression models are depicted in black and the results of the ANNs in blue. Additionally, in fig. 2(b) and fig. 3(b) we highlight, in green for the linear regression and in red for the ANN, a specific case: the concatenation of the measurements of two peculiar settings with four atoms, \(s_{4a}\) and \(s_{4b}\), that have not only the same number of atoms but also exactly the same topology. Therefore, the latter can be seen as a special case of the scaling (ii) where multiple measurements of the same system are performed. Moreover, for the real measurements we consider both orderings \(s_{4a}\oplus s_{4b}\) and \(s_{4b}\oplus s_{4a}\), whose prediction results are reported with two pairs of green and red points in fig. 3(b) (not clearly visible in the plot because they are almost overlapping).
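The per-model MAE aggregation into a mean with an associated standard deviation, as described above, amounts to a short computation (a sketch with illustrative names):

```python
import numpy as np

def mae_mean_std(preds_per_model, truth):
    """preds_per_model: (n_models, n_samples) predicted sigma_R values;
    truth: (n_samples,) ground-truth values used in the simulations.
    Returns the mean and standard deviation of the per-model MAE."""
    errs = np.abs(np.asarray(preds_per_model) - np.asarray(truth))
    maes = errs.mean(axis=1)          # one MAE per trained model
    return float(maes.mean()), float(maes.std())
```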
As expected, the prediction error decreases with the number of atoms in the system, because we get more information on the dynamics and thus on the noise influencing it. In fig. 2 we can also observe that ANNs are in general more powerful than linear regression models (at the cost of more resource-intensive computations). In fact, the errors for the ANN models are always lower than those of the linear regression models, and the difference becomes more pronounced as the number of atoms and measurements increases. This can be explained by the better capacity of ANNs to model complex dynamics. Overall, comparing fig. 2(a) with 5 atoms and fig. 2(b) with a number of measurements equal to 2, it seems more convenient to consider more measurements than to increase the number of atoms in the setting. Also, comparing the green and red points with the black and blue ones for the same number of measurements in fig. 2(b), we can observe that it can be slightly better to consider multiple measurements of the same setting with the same topology than to collect measurements of a different setting with the same number of atoms. We observe in fig. 3 that the values of \(\sigma_{R}\) predicted for the measurements of settings with 2 and 5 atoms are close to the estimated value of 3%; however, the prediction for the setting with 3 atoms is lower and the predictions for all the settings with 4 atoms, and concatenations of them, are around 7%. An explanation for this mismatch can be that the real data used for the experiments was collected when the device was still under development. Moreover, the predictions consider only \(\sigma_{R}\) as a variable source of the noise, thus variations of the other noise parameters in the real machine influence the predictions of \(\sigma_{R}\).
Nevertheless, it is remarkable that the trained models have low standard deviations for the predictions, which, even if this does not exclude a high bias error, still suggests a low variance error for the models. We can also observe that the order of the measurements for the settings \(s_{4a}\) and \(s_{4b}\) does not influence the predicted values - in fact the two green circles and the two red circles in fig. 3 are almost overlapping. To summarize, noise estimation based on supervised learning is possible. The protocol we presented seems to suggest merging data from multiple similar registers instead of using larger registers directly. This may be useful because of the difficulty of simulating larger systems. In addition, the estimates obtained are derived by averaging the estimates from 20 models. Moreover, the associated standard deviation is small relative to the predicted value, so all 20 models converge to very similar values. Finally, we stress that, having neglected several noise sources, the parameter values found could be effective values. ### Multiple parameters characterization In this section we train a deep learning model in a multioutput regression setting to estimate the values of all the noise parameters in table 1. We simulated a dataset of \(54\,000\) labelled samples for the 6-qubit system whose topology can be observed in fig. 1(b). The pulse sequence that defines the dynamics is shown in fig. 1(a). Analogously to the scaling experiments in the previous section, the measurement for each simulation is obtained by sampling \(500\) runs. The values used in the simulations for each parameter are: \(\sigma_{R}=\mathcal{U}(0,0.15)\), \(w(\mu m)=\mathcal{U}(0,200)\), \(T(\mu K)=\mathcal{U}(0,100)\), \(\varepsilon=\mathcal{U}(0,0.15)\) and \(\varepsilon^{\prime}=\mathcal{U}(0,0.15)\).
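The label-sampling step for the multioutput dataset can be sketched as follows. The simulator call that maps each label vector to an occupation-probability vector is omitted, and all names are illustrative:

```python
import numpy as np

# Uniform ranges used for the simulated training labels (section III.2).
PARAM_RANGES = {
    "sigma_R": (0.0, 0.15),   # laser intensity fluctuation
    "w_um":    (0.0, 200.0),  # laser waist (micrometers)
    "T_uK":    (0.0, 100.0),  # temperature (microkelvin)
    "eps":     (0.0, 0.15),   # false positive measurement rate
    "eps_p":   (0.0, 0.15),   # false negative measurement rate
}

def sample_noise_labels(n_samples, seed=0):
    """Draw n_samples label vectors, one independent uniform draw per parameter."""
    rng = np.random.default_rng(seed)
    return {k: rng.uniform(lo, hi, n_samples) for k, (lo, hi) in PARAM_RANGES.items()}
```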
After finding the best set of hyper-parameters, \(20\) models are trained using the cross validation procedure to exploit the entire dataset and to obtain the standard deviations of the predictions. Each one of the \(20\) models is trained with early stopping for a maximum of \(150\) epochs. Further technical details related to ANN design and hyperparameter optimization can be found in section VI.2. In table 2 we show the resulting estimation of the main noise factors. Each reported value is the average of the \(20\) models trained on different splits, with the corresponding standard deviation. We observe that the predicted values do not match those estimated by Pasqal, although all \(20\) models always converge to very similar values of the predictions. In this regard, the same considerations expressed at the end of section III.1 are also valid for multi-parameter estimation: i.e., the parameter predictions obtained could be effective values that incorporate other neglected effects (noise sources, influence of other neighboring atoms, etc.). Another possible factor could be that the measurements came from a prototype NISQ device, just as in the case of those used in section III.1. Therefore, we can expect more agreement in the future as a result of technical improvements. Moreover, it is worth noting that, even if for the experiments in this section \begin{table} \begin{tabular}{r|c c|c} **Parameter** & **Predicted value** & **Estimated value** \\ \hline \(\sigma_{R}\) & \(0.079\)\(\pm\)\(0.005\) & \(0.03\) \\ \(w\) & \(122\mu m\)\(\pm\)\(6\) & \(68\mu m\) \\ \(T\) & \(56\mu K\)\(\pm\)\(4\) & \(30\mu K\) \\ \(\varepsilon\) & \(0.082\)\(\pm\)\(0.010\) & \(0.03\) \\ \(\varepsilon^{\prime}\) & \(0.078\)\(\pm\)\(0.005\) & \(0.08\) \\ \hline \end{tabular} \end{table} Table 2: Predicted values on real data, expressed as average and standard deviation of \(20\) models trained with cross validation. The last column reports, for convenience, the same estimated values as table 1.
Figure 2: Scaling of single measurements for systems with an increasing number of atoms (a) and scaling in the number of measurements for systems with four atoms (b). We report the average absolute errors and standard deviations for \(20\) linear regression (in black and green) and \(20\) ANN (in blue and red) models in the predictions of \(\sigma_{R}\) on the synthetic validation set. The models in (a) use as input the measurements of \(s_{2}\), \(s_{3}\), \(s_{4a}\) and \(s_{5}\). The models in (b) use as input one or more concatenated measurements of runs of the settings with four atoms (the fourth pair of points in (a) is equal to the first pair in (b)). Indicating with \(\cdot\oplus\cdot\) the concatenation of the measurements of the settings, we report in (b) in black and blue \(s_{4a},s_{4a}\oplus s_{4c},s_{4a}\oplus s_{4c}\oplus s_{4d},s_{4a}\oplus s_{4c}\oplus s_{4d}\oplus s_{4e},s_{4a}\oplus s_{4c}\oplus s_{4d}\oplus s_{4e}\oplus s_{4f}\) and in green and red \(s_{4a}\oplus s_{4b}\). the setting and the pulse are different from the ones used in section III.1, the predicted value for \(\sigma_{R}\) is comparable to those obtained for the estimation of the same parameter in the settings with four atoms previously illustrated. ## IV Error Correction Many techniques have been developed in the theory of classical error-correcting codes [25; 26]. The key idea on which they are based is mainly redundancy. Nonetheless, the addition of redundancy is not immediate in NISQ devices because of the no cloning theorem [27]. However, some sort of redundancy can be achieved in quantum devices by expanding the system to more qubits [28]. In fact, all the most used quantum error correction techniques require the use of more qubits than the ones strictly necessary for the computation [29], but this is not feasible with NISQ devices. Therefore, we propose to verify that it is possible to mitigate the effects of quantum noise without extra qubits through the use of RL techniques.
RL is an ML area where an agent learns which actions to perform in order to maximize a reward [17]. Schematically, we can say that this is a closed-loop problem because the actions of the learning system influence its subsequent inputs. In addition, the learner does not know a priori which actions to perform and has to find out for itself, through trial and error, which actions lead to larger rewards. Actions can influence not only the immediate reward but also future rewards. RL, unlike Supervised Learning, does not require labelled input-output pairs, but focuses on finding a balance between exploration of the action space in an environment and exploitation of the acquired knowledge. The agent must exploit what it already knows in order to obtain reward, but it must also explore in order to make better action selections in the future. The trade-off is that neither exploration nor exploitation can be exclusively pursued without failing in the task. The agent must try a variety of actions and progressively favour those that seem to be the best. Any problem of learning goal-oriented behaviour can be reduced to three signals that are exchanged between an agent and its environment: a signal to represent the choices made by the agent (the actions), a signal to represent the basis on which the choices are made (the states) and a signal to define the agent's goal (the rewards). Figure 3: Predictions on real data of the value of \(\sigma_{R}\) for the models trained for the scaling in the number of atoms (a) and in the number of measurements (b) reported in fig. 2. We report the average values and standard deviations for the 20 linear regression (in black and green) and the 20 ANN (in blue and red) models in the predictions of \(\sigma_{R}\) using a set of real measurements of the settings described in table 3 run on the Pasqal NISQ devices. The models in (a) use as input the measurements of \(s_{2}\), \(s_{3}\), \(s_{4a}\) and \(s_{5}\). The models in (b) use as input one or more concatenated measurements of runs of the settings with four atoms (the fourth pair of points in (a) is equal to the first pair in (b)). We report in (b) in black and blue the incremental concatenation of \(s_{4a}\), \(s_{4c}\), \(s_{4d}\), \(s_{4e}\) and \(s_{4f}\). In green and red we report the concatenation of \(s_{4a}\) and \(s_{4b}\). The order of the real measurements for the latter concatenation is irrelevant, thus we report two green and two red points (almost overlapping and not clearly discernible) to consider the two possible concatenations. The horizontal red line indicates the value of 3% for \(\sigma_{R}\) estimated by Pasqal. In detail, for each action of the agent at time \(t\), its effects on the environment are quantified by a reward \(r_{t}\). The objective of the training is then to maximize the discounted cumulative reward \(R_{t_{0}}=\sum_{t=t_{0}}^{\infty}\gamma^{t-t_{0}}r_{t}\), where the discount \(\gamma\in(0,1)\) is a hyperparameter that controls the importance of rewards far in the future relative to those immediately after \(t_{0}\). This objective is implemented with the idea that, if we had a function \(Q^{*}:State\times Action\rightarrow\mathbb{R}\) that, given a state and an action performed in that state, returns the cumulative discounted reward, then the policy could be implemented as \(\pi^{*}(s)=\arg\max_{a}Q^{*}(s,a)\). In general, \(Q^{*}\) is unknown and is approximated by a neural network. For a defined policy \(\pi\), the \(Q\) function obeys the Bellman equation \(Q^{\pi}(s,a)=r+\gamma Q^{\pi}(s^{\prime},\pi(s^{\prime}))\), where \(r\) and \(s^{\prime}\) are respectively the reward and the next state obtained after taking action \(a\) in state \(s\).
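As a concrete illustration of these quantities, a minimal sketch (function names are ours) of the discounted return \(R_{t_0}\) and of the greedy policy \(\pi^{*}(s)=\arg\max_{a}Q^{*}(s,a)\):

```python
def discounted_return(rewards, gamma=0.99):
    """R_{t0} = sum_t gamma^(t - t0) * r_t, accumulated backwards for efficiency."""
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R

def greedy_action(q_values):
    """pi*(s) = argmax_a Q*(s, a) for a finite action set."""
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

For example, with rewards \([1,1,1]\) and \(\gamma=0.5\) the return is \(1+0.5+0.25=1.75\).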
The neural network that defines \(Q\), and thus the agent, is trained by minimizing, over a batch of transitions, the Huber loss \(\mathcal{L}(\delta)\) of the temporal difference error \(\delta=Q(s,a)-(r+\gamma\max_{a}Q(s^{\prime},a))\). We choose to correct the standard pulse \(P\) depicted in fig. 4(a), applied to a single qubit. \(P\) has a Gaussian profile in the Rabi frequency \(\Omega\) of duration \(T=500\ ns\) and area \(\pi/2\), and a ramp profile in the detuning \(\delta\) of duration \(T=500\ ns\) with \(\delta_{0}=-20\ \mathrm{rad}/\mu s\) and \(\delta_{T}=20\ \mathrm{rad}/\mu s\). The chosen approach to correct the noise is to apply the correction pulse of fig. 4(b), placed after the pulse to be corrected and having the same characteristics and length of \(T=500\ ns\). In detail, we choose a Gaussian profile in the Rabi frequency with variable area \(a\) and a ramp profile in the detuning \(\delta\) with variable initial \(\delta_{i}\) and final \(\delta_{f}\). In this way, the final atom occupation probabilities after the application of the correction pulse, \(\mathbf{\mathcal{P}}^{noisy}_{P+P^{\prime}}\), and after the ideal pulse, \(\mathbf{\mathcal{P}}^{ideal}_{P}\), are _closer_ than \(\mathbf{\mathcal{P}}^{noisy}_{P}\) and \(\mathbf{\mathcal{P}}^{ideal}_{P}\). By the notation \(\mathbf{\mathcal{P}}^{i}_{j}\) we denote the measurement \(\mathbf{\mathcal{P}}\) obtained after running a simulation with the pulse \(j\) with or without noise (respectively, \(i=noisy\) or \(i=ideal\)). The training allows us to find the three optimal parameters \(a\), \(\delta_{i}\) and \(\delta_{f}\) for the correction pulse \(P^{\prime}\). In our RL framework, the state is represented by the occupation probabilities, which are estimated from the average of 10 independent noisy simulations whose probabilities are extracted from the amplitudes of 25 quantum states uniformly sampled along the simulated dynamics.
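The temporal-difference training objective described at the start of this section can be sketched in a few lines (a pure-Python illustration; a real implementation would batch this inside an ML framework):

```python
def huber(delta, kappa=1.0):
    """Huber loss L(delta): quadratic near zero, linear in the tails."""
    a = abs(delta)
    return 0.5 * a * a if a <= kappa else kappa * (a - 0.5 * kappa)

def td_error(q_sa, reward, q_next, gamma=0.99):
    """delta = Q(s, a) - (r + gamma * max_a Q(s', a)) for a list of next-state Q-values."""
    return q_sa - (reward + gamma * max(q_next))
```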
At the beginning of each episode we choose \(a=\pi/20\) and \(\delta_{i}=\delta_{f}=0\), and they can take values in the ranges \(a\in[0,\pi/2]\) and \(\delta_{i},\delta_{f}\in[-20,20]\). The agent, implemented with an ANN that has an input layer of 50 units (2 basis amplitudes for each of the 25 intermediate states), two ReLU hidden layers of 128 neurons and an output layer of 6 neurons, selects one among six possible actions: \(a^{t}=a^{t-1}+\Delta a\), \(a^{t}=a^{t-1}-\Delta a\), \(\delta_{i}^{t}=\delta_{i}^{t-1}+\Delta\delta_{i}\), \(\delta_{i}^{t}=\delta_{i}^{t-1}-\Delta\delta_{i}\), \(\delta_{f}^{t}=\delta_{f}^{t-1}+\Delta\delta_{f}\), \(\delta_{f}^{t}=\delta_{f}^{t-1}-\Delta\delta_{f}\). We choose fixed values \(\Delta a=\pi/200\) and \(\Delta\delta_{i}=\Delta\delta_{f}=0.2\). Each episode consists of a series of steps at increasing values of \(t\). For each step, the chosen action is applied, a correction pulse \(P^{\prime}_{t}\) characterized by \(a^{t}\), \(\delta_{i}^{t}\) and \(\delta_{f}^{t}\) is generated and used in a new simulation, obtaining a new probability vector \(\mathbf{\mathcal{P}}^{noisy}_{P+P^{\prime}_{t}}\) for the final quantum state of the corrected noisy simulation and the reward \(r(t)\), before proceeding with the next step. The episode ends when the action causes \(a\), \(\delta_{i}\) or \(\delta_{f}\) to go out of bounds, or after 100 steps. The reward is defined as: \[r(t)=\begin{cases}1&\text{if}\quad\left|\mathbf{\mathcal{P}}^{noisy}_{P+P^{\prime}_{t}}-\mathbf{\mathcal{P}}^{ideal}_{P}\right|_{1}\;<\;\left|\mathbf{\mathcal{P}}^{noisy}_{P+P^{\prime}_{t-1}}-\mathbf{\mathcal{P}}^{ideal}_{P}\right|_{1}\;,\\ 0&\text{otherwise},\end{cases} \tag{1}\] where \(|\cdot|_{1}\) is the \(\ell_{1}\) norm. Specifically, the reward is 1 if the last action at step \(t\) makes the corrected noisy simulation closer to the ideal one than at the previous step \(t-1\), and 0 otherwise.
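Eq. (1), together with the KL-divergence monitor of eq. (2) used during training, can be sketched as follows (array shapes and names are illustrative):

```python
import numpy as np

def reward(p_corr_t, p_corr_prev, p_ideal):
    """Eq. (1): 1 if the new correction pulse brought the noisy occupation
    probabilities closer (in l1 norm) to the ideal ones, 0 otherwise."""
    d_t = np.abs(np.asarray(p_corr_t) - np.asarray(p_ideal)).sum()
    d_prev = np.abs(np.asarray(p_corr_prev) - np.asarray(p_ideal)).sum()
    return 1 if d_t < d_prev else 0

def kl_divergence(p, q):
    """Eq. (2): D_KL(p || q) for discrete distributions (2 outcomes for one qubit)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```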
During the training we monitor the Kullback-Leibler (KL) divergence between \(\mathbf{\mathcal{P}}^{noisy}_{P+P^{\prime}_{t}}\) and \(\mathbf{\mathcal{P}}^{ideal}_{P}\): \[D_{KL}(\mathbf{\mathcal{P}}^{noisy}_{P+P^{\prime}_{t}},\mathbf{\mathcal{P}}^{ideal}_{P})=\sum_{i=1}^{2}\left(\mathbf{\mathcal{P}}^{noisy}_{P+P^{\prime}_{t}}\right)_{i}\log\left(\frac{\left(\mathbf{\mathcal{P}}^{noisy}_{P+P^{\prime}_{t}}\right)_{i}}{\left(\mathbf{\mathcal{P}}^{ideal}_{P}\right)_{i}}\right), \tag{2}\] averaged over all the steps \(t\) within each episode. The evolution of the averaged KL divergence over the \(1\,000\) training episodes is reported in fig. 5, where we can observe that it effectively decreases below the reference value of \(D_{KL}(\mathbf{\mathcal{P}}^{noisy}_{P},\mathbf{\mathcal{P}}^{ideal}_{P})=0.0011\), reported with the red line and calculated as the average over 100 noisy simulations without the correction pulse. ## V Conclusions and Outlooks We presented two applications of ML in the context of quantum noise characterization and correction. To characterize the noise, we collected a dataset of multiple simulated noisy measurements of different settings of Pasqal quantum machines to train ML models, and we tested them on real data. For the noise correction, we trained an RL model to find a correction pulse that counteracts the effects of the noise affecting a simulated test setting. Regarding the noise characterization, we compared ANNs with linear regression models in predicting the value of the laser intensity fluctuation \(\sigma_{R}\), scaling the number of qubits in the register and the number of measurements of the system. We found that ANNs perform better than linear regression and that the model accuracies increase both with the number of qubits and with the number of measurements. Moreover, we have indications that, in order to better characterize the noise parameters, it is more effective to increase the number of measurements than the number of qubits.
When we tried to predict the noise parameters on real NISQ devices we found that, for every set of measurements, 40 different models (ANN and linear regression trained independently in a 20-fold cross validation setting) agree on the predictions, and therefore the variance error is low. Finally, we trained 20 ANN models in a multioutput regression setting to predict five different noise parameter values, and also in this case the models agree with one another when tested on real data. Regarding the noise correction, the proposed approach successfully learns to correct a simulated noisy pulse and to make the measured probabilities closer to the ideal ones. We believe that the results presented in this work can be used to better quantify the effects of the noise affecting the Pasqal, and in general neutral atom, NISQ devices and to counteract those effects. The presented techniques depend on the atom topology and the pulse shape. Thus, the ML models can be trained to characterize and correct the noise of single quantum gates that compose more complex Hamiltonians. Figure 4: Standard pulse \(P\) (a) to be corrected with a correction pulse \(P^{\prime}\) (b) added after \(P\) to counteract the effects of the noise. The Rabi frequency \(\Omega\) is depicted in green and the detuning \(\delta\) in purple. \(P\) is a pulse of duration \(T=500ns\), with a Gaussian Rabi profile with area equal to \(\pi/2\) and a detuning in the form of a ramp from \(\delta_{0}=-20\) rad\(/\mu s\) to \(\delta_{T}=20\) rad\(/\mu s\). \(P^{\prime}\) is a pulse with the same duration and characteristics of \(P\) but with variable Rabi area \(a\), initial detuning \(\delta_{i}\) and final detuning \(\delta_{f}\). Figure 5: Evolution of the KL divergence between the corrected noisy simulation and the ideal one, averaged for each episode. The red line is the reference value of 0.0011 for the KL divergence between the uncorrected noisy simulation and the ideal one, averaged over 100 simulations.
The accuracy of the predicted noise parameters depends on the accuracy of the simulation, and in particular on the accuracy of the simulator noise model. In previous works [30, 31, 13, 11] and in preliminary experiments using the Pasqal simulator, there is evidence that the noise characterization improves when more temporal statistics are collected. We adopted this strategy in this paper for the noise correction, where the occupation probabilities are obtained from the amplitudes of the intermediate quantum states sampled at regular steps within the simulated dynamics. However, in real NISQ devices, intermediate measurements of the dynamics are less straightforward because of the impossibility of observing a system without changing it. We can obtain the same effect by independently measuring incremental subdynamics from \(t=0\) to subsequent time steps of the full dynamics. To implement this approach on Pasqal machines, we can design a full pulse that is subsequently split into sub-pulses at times \([t_{0},t_{1}],[t_{1},t_{2}],\ldots,[t_{n-1},t_{n}]\). The measurement at time \(t_{k}\) for \(k=1,\ldots,n\) can be obtained by always initialising the register to the same initial setting and performing the computation considering the effects of all the sub-pulses spanning the times \([t_{0},t_{k}]\), from the first to the one before \(t_{k}\). The ML models can then process all the measurements obtained at times \(t_{1},\ldots,t_{k}\), and in that way we expect to obtain better results for the characterization of the noise. Moreover, we can also use ANNs more suitable for data organized in temporal sequences, i.e., Recurrent Neural Networks (RNNs). Finally, in the context of Quantum Machine Learning (QML) [32, 33], our work is framed as a classical ML approach to process quantum data.
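The incremental sub-dynamics measurement scheme described above reduces to a simple window construction (the function name is illustrative):

```python
def subdynamics_windows(t_steps):
    """Given sub-pulse boundaries [t0, t1, ..., tn], return the incremental
    windows [t0, tk]: each window is run from the same initial register state
    and measured at its end, emulating a mid-dynamics measurement at tk."""
    t0 = t_steps[0]
    return [(t0, tk) for tk in t_steps[1:]]
```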
Future research lines may include the design of QML models for noise characterization and correction implemented directly within the quantum dynamics of neutral atom devices or of other NISQ devices. For instance, pattern-matching QML techniques [34] can be adapted for the identification of noise patterns [13] characteristic of the neutral atom dynamics. **Acknowledgements** This work was financially supported by the European Union's Horizon 2020 research and innovation programme under FET-OPEN GA n. 828946-PATHOS. We acknowledge the CINECA award under the ISCRA initiative for the availability of high performance computing resources, such as the _Marconi100_ supercomputer, and their support. S.M. acknowledges financial support from the PNRR MUR project PE0000023-NQSTI. Finally, we are also thankful to Pasqal for the provided data that we have used to test our protocol.
2301.06195
Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees
We consider the task of training machine learning models with data-dependent constraints. Such constraints often arise as empirical versions of expected value constraints that enforce fairness or stability goals. We reformulate data-dependent constraints so that they are calibrated: enforcing the reformulated constraints guarantees that their expected value counterparts are satisfied with a user-prescribed probability. The resulting optimization problem is amenable to standard stochastic optimization algorithms, and we demonstrate the efficacy of our method on a fairness-sensitive classification task where we wish to guarantee the classifier's fairness (at test time).
Songkai Xue, Yuekai Sun, Mikhail Yurochkin
2023-01-15T21:41:40Z
http://arxiv.org/abs/2301.06195v1
# Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees ###### Abstract We consider the task of training machine learning models with data-dependent constraints. Such constraints often arise as empirical versions of expected value constraints that enforce fairness or stability goals. We reformulate data-dependent constraints so that they are _calibrated_: enforcing the reformulated constraints guarantees that their expected value counterparts are satisfied with a user-prescribed probability. The resulting optimization problem is amenable to standard stochastic optimization algorithms, and we demonstrate the efficacy of our method on a fairness-sensitive classification task where we wish to guarantee the classifier's fairness (at test time). ## 1 Motivation In machine learning (ML) practice, accuracy is often only one of many training objectives. For example, algorithmic fairness considerations may require a credit scoring system to perform comparably on men and women. Here are a few other examples. **Churn rate and stability.** The churn rate of an ML model compared to another model is the fraction of samples on which the predictions of the two models differ [21, 30]. In ML practice, one may wish to control the churn rate between a new model and its predecessor because a high churn rate can disorient users and downstream system components. One way of training models with small churn is to enforce a churn rate constraint during training. **Precision, recall, _etc._** Classification and information retrieval models must often balance precision and recall. To train such models, practitioners carefully trade off one metric for the other by optimizing for one metric subject to constraints on the other. **Resource constraints.** Practitioners sometimes wish to control how often a classifier predicts a certain class due to budget or resource constraints.
For example, a company that uses ML to select customers for a targeted offer may wish to constrain the fraction of customers selected for the offer. Another prominent example of a stochastic optimization problem with resource constraints is the newsvendor problem, which we come back to in section 4. Unlike constraints on the structure of model parameters (_e.g._, sparsity), the constraints encoding the preceding training objectives are _data-dependent_. This leads to the issue of _constraint generalization_: whether the constraints _generalize_ out-of-sample. For example, if a classifier is trained to have comparable accuracy on two subpopulations in the training data, will it also have comparable accuracy on samples from the two subpopulations at test time? In this paper, we consider the out-of-sample generalization of _expected-value_ constraints. To keep things simple, consider a stochastic optimization problem with a single _expected-value_ constraint: \[\theta^{\star}\in\left\{\begin{aligned} &\operatorname*{arg\,min}_{\theta\in\Theta}& \quad\mathbb{E}_{P_{0}}\big{[}f(\theta;Z)\big{]}=\int_{Z}f(\theta;z)dP_{0}(z) \\ &\text{subject to}&\quad\mathbb{E}_{P_{0}}\big{[}g( \theta;Z)\big{]}=\int_{Z}g(\theta;z)dP_{0}(z)\leq 0\end{aligned}\right\}, \tag{1.1}\] where \(\Theta\) is a (finite-dimensional) parameter space, \(f,g:\Theta\times\mathcal{Z}\to\mathbb{R}\) are (known) cost and constraint functions, and \(Z\in\mathcal{Z}\) is a random variable that represents a sample. The distribution of \(Z\) is unknown, so we cannot solve (1.1) directly. Instead, we obtain IID training samples \(\{Z_{i}\}_{i=1}^{n}\) from the true underlying distribution \(P_{0}\) and solve the empirical version of (1.1): \[\widehat{\theta}_{n}\in\left\{\begin{aligned} &\operatorname*{arg\,min}_{\theta\in \Theta}&\quad\frac{1}{n}\sum_{i=1}^{n}f(\theta;Z_{i})\\ &\text{subject to}&\quad\frac{1}{n}\sum_{i=1}^{n}g( \theta;Z_{i})\leq 0\end{aligned}\right\}. 
\tag{1.2}\] The estimator \(\widehat{\theta}_{n}\) (of \(\theta^{\star}\)) is guaranteed to satisfy the empirical constraint (_i.e._, \(\frac{1}{n}\sum_{i=1}^{n}g(\widehat{\theta}_{n};Z_{i})\leq 0\)), but it is unclear whether \(\widehat{\theta}_{n}\) satisfies the actual (population) constraint \(\mathbb{E}_{P_{0}}\big{[}g(\theta;Z)\big{]}\leq 0\). As we shall see, under standard assumptions on (1.1), \(\widehat{\theta}_{n}\) only satisfies the actual constraint with probability approaching \(\frac{1}{2}\) (see corollary 2.2). This is especially problematic for constraints that encode algorithmic fairness goals. For example, the 80% rule published by the US Equal Employment Opportunity Commission, interpreted in the machine learning context, requires the rate at which a classifier predicts the advantaged label in minority groups to be at least 80% of the rate at which the classifier predicts the advantaged label in the majority group [3]. In this paper, we propose a distributionally robust version of (1.2) that _guarantees_ the actual constraint \(\mathbb{E}_{P_{0}}\big{[}g(\theta;Z)\big{]}\leq 0\) will be satisfied with probability \(1-\alpha\): \[\widehat{\theta}_{n}\in\left\{\begin{aligned} &\operatorname*{arg\,min}_{\theta\in\Theta}&\quad\frac{1}{n}\sum_{i=1}^{n}f(\theta;Z_{i})\\ &\text{subject to}&\quad\sup_{P:D_{\varphi}(P\|\widehat{P}_{n})\leq\frac{\rho_{\alpha}}{n}}\mathbb{E}_{P}\big{[}g(\theta;Z)\big{]}\leq 0\end{aligned}\right\}, \tag{1.3}\] where \(D_{\varphi}\) is a \(\varphi\)-divergence (see section 2 for details), \(\widehat{P}_{n}\) is the empirical distribution of the training samples, and \(\sqrt{\rho_{\alpha}}\) is the \(1-\alpha\) quantile of a standard normal random variable. More concretely, we show that \(\widehat{\theta}_{n}\) achieves _asymptotically exact constraint satisfaction_ \[\lim_{n\to\infty}\mathbb{P}\left\{\mathbb{E}_{P_{0}}\big{[}g(\widehat{\theta}_{n};Z)\big{]}\leq 0\right\}=1-\alpha.
\tag{1.4}\] Here the inner expectation is with respect to \(Z\); the outer probability is with respect to the training samples \(\{Z_{i}\}_{i=1}^{n}\). Three desirable properties of (1.3) are 1. **exact constraint satisfaction:** If the actual probability of constraint satisfaction exceeds \(1-\alpha\), then the method is too conservative. This may (unnecessarily) increase the cost of the model. By picking \(\rho_{\alpha}\) in (1.3) carefully, we ensure that constraints are satisfied with asymptotically exact probability \(1-\alpha\). 2. **computationally efficient:** As we shall see, the computational cost of solving (1.3) is comparable to the cost of solving distributionally robust sample average approximation (SAA) problems. 3. **pivotal:** There are no nuisance parameters to estimate (_e.g._, asymptotic variances) in (1.3). The user merely needs to look up the correct quantile of the standard normal distribution for their desired level of constraint generalization. The rest of this paper is organized as follows. In Section 2, we develop the method, theory, and algorithm for stochastic optimization problems with a single constraint. In Section 3, we extend our method, theory, and algorithm to stochastic optimization problems with multiple constraints. In Section 4, we validate our theory by simulating a resource-constrained newsvendor problem. In Section 5, we demonstrate the efficacy of our method by using it to train an algorithmically fair income classifier. In addition, we show how to apply our method to a fairness-constrained learning problem and discuss two practical considerations for fair ML application scenarios. Finally, we summarize our work in Section 6 and point out an interesting avenue of future work. ### Related work The closest work to our work is [27].
They seek to pick a (data-dependent) _uncertainty set_ \(\mathcal{U}\) such that \[\lim_{n\to\infty}\mathbb{P}\left\{\sup_{\theta}\left\{\mathbb{E}_{P_{0}}\big{[} g(\theta;Z)\big{]}-\sup_{P\in\mathcal{U}}\mathbb{E}_{P}\big{[}g(\theta;Z) \big{]}\right\}\leq 0\right\}=1-\alpha. \tag{1.5}\] This condition is stronger than necessary: we only require \[\lim_{n\to\infty}\mathbb{P}\left\{\mathbb{E}_{P_{0}}\big{[}g(\widehat{\theta}_{n}; Z)\big{]}-\sup_{P\in\mathcal{U}}\mathbb{E}_{P}\big{[}g(\widehat{\theta}_{n};Z) \big{]}\leq 0\right\}=1-\alpha \tag{1.6}\] where \(\widehat{\theta}_{n}\) is a (data-dependent) estimator (not necessarily (1.2) or (1.3)). [27] study (asymptotic) constraint satisfaction (1.4) for all deterministic objective functions (see [27], §1.1 for details). They advocate picking a KL divergence ball with radius that depends on the excursion probability of a certain \(\chi^{2}\) process. Another closely related line of work is on data-splitting approaches for ensuring constraint generalization [37, 7]. At a high level, they split the training data into training and validation subsets and use the validation subset to tune models trained on the training subset so that they satisfy the constraints. Although (computationally) simple and intuitive, their approach does not allow users to precisely control the constraint violation probability. [27] is the latest in a line of work on distributionally robust optimization (DRO) showing that the optimal values of DRO problems \[\min_{\theta\in\Theta}\sup_{P\in\mathcal{U}}\mathbb{E}_{P}\big{[}g(\theta;Z) \big{]}, \tag{1.7}\] where \(\mathcal{U}\) is a (data-dependent) uncertainty set of probability distributions, are upper confidence bounds for the optimal values of stochastic optimization problems. Common choices of uncertainty sets in DRO include uncertainty sets defined by moment or support constraints [6, 12, 22], \(\varphi\)-divergences [4, 26, 31], and Wasserstein distances [34, 5, 18, 28, 35].
This line of work is motivated by Owen's seminal work on empirical likelihood [32]. In recent work, [26, 15] show that the optimal value of DRO problems with empirical likelihood uncertainty sets leads to asymptotically exact upper confidence bounds for the optimal value of stochastic optimization problems ([15] consider more general \(\varphi\)-divergence uncertainty sets). [5] establish similar coverage results for Wasserstein uncertainty sets. Our work is also closely related to the work on the variance regularization properties of DRO [31], which uses DRO to approximate the variance regularization cost function (see (2.4)). [20] establish similar results for Wasserstein DRO. Lastly, we relate our work to the literature on chance constrained optimization (see [24] and the references therein). The general goal of chance constrained optimization is to minimize a loss function subject to the requirement that uncertain constraints are satisfied with probability above a prescribed level. In contrast, our method reformulates expected-value constraints, and we show that the solution of the reformulated problem enjoys an asymptotically exact probabilistic guarantee of constraint satisfaction. In addition, the data-dependent constraints in our work are also unknown in practice, which differs from the common setup in the chance constrained optimization literature. ## 2 Single expected value constraint We motivate (1.3) by considering a few alternatives. First, we note that the results later in this section show that (1.2) violates the actual constraint in (1.1) approximately half the time (see corollary 2.2). The most straightforward modification of (1.2) to ensure \(\widehat{\theta}_{n}\) satisfies the (actual) constraint \(\mathbb{E}_{P_{0}}\big{[}g(\theta;Z)\big{]}\leq 0\) is to add a "margin" in (1.2); _i.e._ enforce the constraint \[\tfrac{1}{n}\sum_{i=1}^{n}g(\theta;Z_{i})+\epsilon_{n}\leq 0 \tag{2.1}\] in (1.2).
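The "half the time" failure of the plain empirical constraint is easy to see in a toy Monte Carlo sketch (our own illustrative setup, not from the paper): take \(g(\theta;z)=\theta-z\) with \(Z\sim\mathcal{N}(0,1)\). The largest \(\theta\) satisfying the empirical constraint is the sample mean, and the population constraint \(\mathbb{E}[g(\theta;Z)]=\theta\leq 0\) then fails exactly when the sample mean is positive.

```python
import numpy as np

# Toy illustration: the tightest theta satisfying the empirical constraint
# (1/n) sum_i (theta - Z_i) <= 0 is theta_hat = mean(Z_i).  The population
# constraint E[theta_hat - Z] = theta_hat <= 0 then fails iff mean(Z_i) > 0,
# which for Z ~ N(0, 1) happens with probability 1/2.
rng = np.random.default_rng(0)
n, reps = 200, 5000
theta_hat = rng.normal(size=(reps, n)).mean(axis=1)  # one theta_hat per replicate
violation_freq = np.mean(theta_hat > 0.0)
print(violation_freq)  # close to 0.5
```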
If we pick the slack term \(\epsilon_{n}\) such that \[\mathbb{P}\left\{\sup_{\theta\in\Theta}\left\{\mathbb{E}_{P_{0}}\big{[}g( \theta;Z)\big{]}-\tfrac{1}{n}\sum_{i=1}^{n}g(\theta;Z_{i})\right\}>\epsilon_{ n}\right\}\leq\alpha,\] then it is not hard to check that the resulting \(\widehat{\theta}_{n}\) satisfies the (actual) constraint with probability greater than \(1-\alpha\) [36, 29]. However, this approach is most likely conservative because the constraint is unnecessarily stringent for \(\theta\)'s such that \(\tfrac{1}{n}\sum_{i=1}^{n}g(\theta;Z_{i})\) is less variable. It is also not pivotal: \(\epsilon_{n}\) is often set using bounds from (uniform) concentration inequalities, which typically depend on unknown problem parameters. To relax the empirical constraint in a way that adapts to the variability of the empirical constraints, we replace the uniform margin in (2.1) with a parameter-dependent margin: \[\tfrac{1}{n}\sum_{i=1}^{n}g(\theta;Z_{i})+z_{\alpha}\tfrac{\widehat{\boldsymbol {\sigma}}(\theta)}{\sqrt{n}}\leq 0, \tag{2.2}\] where \(z_{\alpha}\) is the \(1-\alpha\) quantile of a standard normal random variable and \(\widehat{\mathbf{\sigma}}^{2}(\theta)\) is an estimate of the asymptotic variance of \(g(\theta;Z)\). We recognize the (parameter-dependent) margin as (a multiple of) the standard error of the empirical constraint. It is possible to show that enforcing (2.2) achieves asymptotically exact constraint generalization (1.4) [27]. The main issue with this method is that it is not amenable to standard stochastic optimization algorithms. In particular, even if the original constraint in (1.2) is convex, (2.2) is generally non-convex. Another issue is that it is not pivotal: the user must estimate the asymptotic variance of \(g(\theta;Z)\).
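As a sketch (helper names are ours; the quantile comes from the standard library's `statistics.NormalDist`), the left-hand side of the standard-error margin constraint (2.2) can be evaluated as:

```python
import numpy as np
from statistics import NormalDist

def margin_constraint_value(g_vals, alpha):
    """Left-hand side of (2.2): mean of g plus z_alpha * sigma_hat / sqrt(n)."""
    n = len(g_vals)
    z_alpha = NormalDist().inv_cdf(1.0 - alpha)  # 1 - alpha standard normal quantile
    sigma_hat = np.std(g_vals, ddof=1)           # plug-in estimate of the std. dev.
    return np.mean(g_vals) + z_alpha * sigma_hat / np.sqrt(n)

rng = np.random.default_rng(1)
g_vals = rng.normal(loc=-0.5, scale=1.0, size=400)  # g(theta; Z_i) at a fixed theta
print(margin_constraint_value(g_vals, alpha=0.05))
```

The margin shrinks at the \(1/\sqrt{n}\) rate and inflates with the empirical variability of \(g\), which is exactly why it is less conservative than a uniform slack \(\epsilon_{n}\).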
To overcome these two issues, we consider a distributionally robust version of (1.2); _i.e._ enforcing \[\sup_{P:D_{\varphi}(P\|\widehat{P}_{n})\leq\frac{\rho_{\alpha}}{n}}\mathbb{E} _{P}\big{[}g(\theta;Z)\big{]}\leq 0, \tag{2.3}\] where \(D_{\varphi}(P\|Q)\triangleq\int\varphi(\frac{dP}{dQ})dQ\) is a \(\varphi\)-divergence. Common choices of \(\varphi\) include \(\varphi(t)=(t-1)^{2}\) (which leads to the \(\chi^{2}\)-divergence) and \(\varphi(t)=-\log t+t-1\) (which leads to the Kullback-Leibler divergence). Although there are many other choices for the uncertainty set in (2.3), we pick a \(\varphi\)-divergence ball because (i) (2.3) with a \(\varphi\)-divergence ball is asymptotically equivalent to (2.2): \[\sup_{P:D_{\varphi}(P\|\widehat{P}_{n})\leq\frac{\rho_{\alpha}}{n}}\mathbb{E} _{P}\big{[}g(\theta;Z)\big{]}\approx\tfrac{1}{n}\sum_{i=1}^{n}g(\theta;Z_{i}) +z_{\alpha}\tfrac{\widehat{\mathbf{\sigma}}(\theta)}{\sqrt{n}}, \tag{2.4}\] and (ii) it leads to pivotal uncertainty sets. For theoretical analysis, we always use \(\varphi(t)=(t-1)^{2}\), _i.e._ the \(\chi^{2}\)-divergence, in the remainder of this paper. Before we state the asymptotically exact constraint satisfaction property of (1.3) rigorously, we describe our assumptions on the problem. 1. **smoothness and concentration:**\(f\) and \(g\) are twice continuously differentiable with respect to \(\theta\), and \(f(\theta^{\star};Z)\), \(\nabla f(\theta^{\star};Z)\), \(g(\theta^{\star};Z)\), \(\nabla g(\theta^{\star};Z)\) are sub-Gaussian random variables. 2. **uniqueness:** the stochastic optimization problem with a single expected value constraint (1.1) has a unique optimal primal-dual pair \((\theta^{\star},\lambda^{\star})\), and \(\theta^{\star}\) belongs to the interior of the compact set \(\Theta\). 3. **strict complementarity:**\(\lambda^{\star}>0\). 4. **positive definiteness:** The Hessian of the Lagrangian evaluated at \((\theta^{\star},\lambda^{\star})\) is positive definite.
The preceding assumptions are not the most general, but they are easy to interpret. The smoothness conditions on \(f\) and \(g\) with respect to \(\theta\), the concentration conditions of \(f(\theta^{\star};Z)\) and \(g(\theta^{\star};Z)\), and the uniqueness condition facilitate the use of standard tools from asymptotic statistics to study the large sample properties of the constraint value. The strict complementarity condition rules out problems in which the constraint is extraneous; _i.e._ problems in which the unconstrained minimum coincides with the constrained minimum. We are ready to state the asymptotically exact constraint satisfaction property of (1.3) rigorously. The main technical result characterizes the limiting distribution of the constraint value. **Theorem 2.1**.: _Let \(\widehat{\theta}_{n}\) be an optimal solution of (1.3) converging in probability as \(n\to\infty\) to \(\theta^{\star}\). Under the standing assumptions, we have_ \[\sqrt{n}\left(\mathbb{E}_{P_{0}}\big{[}g(\widehat{\theta}_{n};Z)\big{]}- \mathbb{E}_{P_{0}}\big{[}g(\theta^{\star};Z)\big{]}\right)\overset{d}{\to} \mathcal{N}\left(-\sqrt{\rho_{\alpha}\operatorname{Var}_{P_{0}}[g(\theta^{ \star};Z)]},\operatorname{Var}_{P_{0}}[g(\theta^{\star};Z)]\right).\] We translate this result on the constraint value to a result on constraint generalization. **Corollary 2.2**.: _Let \(\sqrt{\rho_{\alpha}}\) be the \(1-\alpha\) quantile of a standard normal random variable. Under the conditions of theorem 2.1, we have_ \[\lim_{n\to\infty}\mathbb{P}\left\{\mathbb{E}_{P_{0}}\big{[}g(\widehat{\theta }_{n};Z)\big{]}\leq 0\right\}=\mathbb{P}\left\{U\leq\sqrt{\rho_{\alpha}} \right\}=1-\alpha,\] _where \(U\sim\mathcal{N}(0,1)\) is a standard Gaussian random variable._ From theorem 2.1 and corollary 2.2 (see proofs in Appendix A), we find that 1. 
picking \(\rho_{\alpha}=0\) (_i.e._, equivalently solving (1.2)) leads to a constraint violation probability that approaches \(\frac{1}{2}\) in the large sample limit. 2. the relation between the mean and variance of the limiting distribution of the constraint value in Theorem 2.1 allows us to pick \(\rho_{\alpha}\) in a pivotal way (_i.e._ does not depend on nuisance parameters). ### Stochastic approximation for (1.3) In the rest of this section, we derive a stochastic optimization algorithm to solve (1.3) efficiently. As we shall see, the computational cost of this algorithm is comparable to the cost of solving a DRO problem. The key insight is that the robust constraint function has a dual form (see Appendix J): \[\sup_{P:D_{\varphi}(P\|\widehat{P}_{n})\leq\rho}\mathbb{E}_{P}\big{[}g( \theta;Z)\big{]}=\inf_{\mu\geq 0,\nu\in\mathbb{R}}\left\{\frac{1}{n}\sum_{i=1}^{ n}\mu\varphi^{*}\big{(}\tfrac{g(\theta;Z_{i})-\nu}{\mu}\big{)}+\mu\rho+\nu\right\}, \tag{2.5}\] where \(\varphi^{*}(s)\triangleq\sup_{t}\{st-\varphi(t)\}\) is the convex conjugate of \(\varphi\). Since we use the \(\chi^{2}\)-divergence with \(\varphi(t)=(t-1)^{2}\), the corresponding conjugate is \(\varphi^{*}(s)=\frac{s^{2}}{4}+s\). The Lagrangian of (1.3) is \[L(\theta,\lambda) \triangleq\tfrac{1}{n}\sum_{i=1}^{n}f(\theta;Z_{i})+\lambda\sup _{P:D_{\varphi}(P\|\widehat{P}_{n})\leq\frac{\rho_{\alpha}}{n}}\mathbb{E}_{P}\big{[}g( \theta;Z)\big{]}\] \[=\tfrac{1}{n}\sum_{i=1}^{n}f(\theta;Z_{i})+\lambda\inf_{\mu\geq 0,\nu\in\mathbb{R}}\left\{\frac{1}{n}\sum_{i=1}^{n}\mu\varphi^{*}\big{(}\tfrac{ g(\theta;Z_{i})-\nu}{\mu}\big{)}+\mu\frac{\rho_{\alpha}}{n}+\nu\right\}.\] We see that evaluating the dual function \(\inf_{\theta}L(\theta,\lambda)\) (at a fixed \(\lambda\)) entails solving a stochastic optimization problem that is suitable for stochastic approximation. This suggests a dual ascent algorithm for solving (1.3): 1. evaluate the dual function at \(\lambda_{t}\) by solving a stochastic optimization problem. 2.
update \(\lambda_{t}\) with a dual ascent step. We summarize this algorithm in Algorithm 1. The main cost of Algorithm 1 is incurred in the third line: evaluating the dual function. Fortunately, this step is suitable for stochastic approximation, so we can leverage recent advances in the literature to reduce the (computational) cost of this step. The total cost of this algorithm is comparable to that of distributionally robust optimization. ``` 1:Input: starting dual iterate \(\lambda_{0}\geq 0\) 2:repeat 3: Evaluate dual function: \[(\theta_{t},\mu_{t},\nu_{t})\leftarrow\arg\min_{\theta,\mu\geq 0,\nu}\tfrac{1}{n} \sum_{i=1}^{n}f(\theta;Z_{i})+\lambda_{t}\left\{\frac{1}{n}\sum_{i=1}^{n}\mu \varphi^{*}\big{(}\tfrac{g(\theta;Z_{i})-\nu}{\mu}\big{)}+\mu\frac{\rho_{\alpha}}{ n}+\nu\right\}\] 4: Dual ascent update: \(\lambda_{t+1}\leftarrow\left[\lambda_{t}+\eta_{t}\left\{\frac{1}{n}\sum_{i=1 }^{n}\mu_{t}\varphi^{*}(\tfrac{g(\theta_{t};Z_{i})-\nu_{t}}{\mu_{t}})+\mu_{t} \frac{\rho_{\alpha}}{n}+\nu_{t}\right\}\right]_{+}\) 5:until converged ``` **Algorithm 1** Dual ascent algorithm for (1.3) ## 3 Multiple expected value constraints In this section, we extend the results from the preceding section to stochastic optimization problems with multiple data-dependent constraints.
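Before turning to multiple constraints, the dual form (2.5) can be sanity-checked numerically. The following sketch is our own derivation for the \(\chi^{2}\) case, where \(\varphi^{*}(s)=s+s^{2}/4\) for \(s\geq-2\) (and \(-1\) below): the dual is minimized at \(\nu^{*}=\bar{g}\) and \(\mu^{*}=\widehat{\sigma}/(2\sqrt{\rho/n})\), at which point the robust constraint value reduces to the standard-error form in (2.4).

```python
import numpy as np

def phi_star(s):
    # conjugate of phi(t) = (t - 1)^2 restricted to t >= 0
    return np.where(s >= -2.0, s + s**2 / 4.0, -1.0)

def dual_objective(mu, nu, g_vals, rho_over_n):
    # right-hand side of (2.5) at a candidate (mu, nu)
    return np.mean(mu * phi_star((g_vals - nu) / mu)) + mu * rho_over_n + nu

rng = np.random.default_rng(2)
g_vals = rng.normal(size=50)          # g(theta; Z_i) at a fixed theta
rho, n = 1.645**2, len(g_vals)        # e.g. rho_alpha = z_{0.05}^2

nu_star = np.mean(g_vals)             # closed-form dual minimizers (our derivation)
sigma_hat = np.std(g_vals)            # ddof = 0
mu_star = sigma_hat / (2.0 * np.sqrt(rho / n))

robust_value = dual_objective(mu_star, nu_star, g_vals, rho / n)
approx = np.mean(g_vals) + np.sqrt(rho) * sigma_hat / np.sqrt(n)
print(robust_value - approx)  # essentially zero
```

When \(\rho/n\) is small, \(\mu^{*}\) is large, all dual arguments stay above \(-2\), and the two expressions coincide exactly; otherwise they differ slightly.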
Consider a stochastic optimization problem with \(K\) expected value constraints \[\theta^{\star}\in\begin{Bmatrix}\arg\min_{\theta\in\Theta}& \mathbb{E}_{P_{0}}\big{[}f(\theta;Z)\big{]}\\ \text{subject to}&\left\{\mathbb{E}_{P_{0}}\big{[}g_{k}(\theta;Z)\big{]}\leq 0\right\}_{k=1}^{K}\end{Bmatrix}, \tag{3.1}\] Following the development in Section 2, we enforce the expected value constraints with robust versions of the sample average constraints: \[\widehat{\theta}_{n}\in\begin{Bmatrix}\arg\min_{\theta\in\Theta}& \tfrac{1}{n}\sum_{i=1}^{n}f(\theta;Z_{i})\\ \text{subject to}&\left\{\sup_{P:D_{\varphi}(P\|\widehat{P}_{n})\leq\frac{\rho_{k}}{n}} \mathbb{E}_{P}\big{[}g_{k}(\theta;Z)\big{]}\leq 0\right\}_{k=1}^{K}\end{Bmatrix}, \tag{3.2}\] where \(\boldsymbol{\rho}=(\rho_{1},\ldots,\rho_{K})^{\top}\) is the vector of uncertainty set radii for the constraints. There are other approaches to enforcing multiple constraints that result in constraint generalization; we focus on (3.2) here because it allows the user to adjust the constraint generalization probability for different constraints. First, we extend theorem 2.1 and corollary 2.2 to problems with multiple (expected value) constraints. We assume 1. **smoothness and concentration:** for \(k\in[K]\), \(f,g_{k}\) are twice continuously differentiable with respect to \(\theta\), and \(f(\theta^{\star};Z),\nabla f(\theta^{\star};Z)\), \(g_{k}(\theta^{\star};Z),\nabla g_{k}(\theta^{\star};Z)\) are sub-Gaussian random variables. 2. **uniqueness:** the stochastic optimization problem with \(K\) expected value constraints (3.1) has a unique optimal primal-dual pair \((\theta^{\star},\mathbf{\lambda}^{\star})\), and \(\theta^{\star}\) belongs to the interior of the compact set \(\Theta\). 3. **strict complementarity:**\(\mathbf{\lambda}^{\star}\in\operatorname{int}(\mathbb{R}_{+}^{K})\), _i.e._, each component of \(\mathbf{\lambda}^{\star}\) is strictly positive. 4.
**positive definiteness:** The Hessian of the Lagrangian evaluated at \((\theta^{\star},\mathbf{\lambda}^{\star})\) is positive definite. The strict complementarity assumption seems especially strong here because it requires all the constraints to be active. It is possible (with extra notational overhead) to state the result in terms of just the active constraints. We refer to Section 5.1 for more information about the unknown active set. Further, as long as the sample size is large enough, the active constraints in (3.2) coincide with the active constraints in (3.1). To keep things simple, we assume all the constraints are active. **Theorem 3.1**.: _Let \(\widehat{\theta}_{n}\) be an optimal solution of (3.2) converging in probability as \(n\to\infty\) to \(\theta^{\star}\). Under the standing assumptions, we have_ \[\sqrt{n}\begin{bmatrix}\mathbb{E}_{P_{0}}\big{[}g_{1}(\widehat{\theta}_{n};Z )\big{]}\\ \vdots\\ \mathbb{E}_{P_{0}}\big{[}g_{K}(\widehat{\theta}_{n};Z)\big{]}\end{bmatrix} \overset{d}{\to}\mathcal{N}\left(-\begin{bmatrix}\sqrt{\rho_{1}\operatorname{ Var}_{P_{0}}[g_{1}(\theta^{\star};Z)]}\\ \vdots\\ \sqrt{\rho_{K}\operatorname{Var}_{P_{0}}[g_{K}(\theta^{\star};Z)]}\end{bmatrix},\operatorname{Var}_{P_{0}}\begin{bmatrix}g_{1}(\theta^{\star};Z)\\ \vdots\\ g_{K}(\theta^{\star};Z)\end{bmatrix}\right).\] **Corollary 3.2**.: _Under the conditions of theorem 3.1, we have_ \[\lim_{n\to\infty}\mathbb{P}\left\{\begin{bmatrix}\mathbb{E}_{P_{0}}\big{[}g_ {1}(\widehat{\theta}_{n};Z)\big{]}\\ \vdots\\ \mathbb{E}_{P_{0}}\big{[}g_{K}(\widehat{\theta}_{n};Z)\big{]}\end{bmatrix} \in-\mathbb{R}_{+}^{K}\right\}=\mathbb{P}\{\mathbf{U}\leq\sqrt{\mathbf{\rho}}\},\] _where \(\sqrt{\mathbf{\rho}}=(\sqrt{\rho_{1}},\ldots,\sqrt{\rho_{K}})^{\top}\), and \(\mathbf{U}\) is a Gaussian random vector with mean zero and covariance_ \[\operatorname{Corr}_{P_{0}}\begin{bmatrix}g_{1}(\theta^{\star};Z )\\ \vdots\\ g_{K}(\theta^{\star};Z)\end{bmatrix} \triangleq
D^{-\frac{1}{2}}\operatorname{Cov}_{P_{0}}\begin{bmatrix}g _{1}(\theta^{\star};Z)\\ \vdots\\ g_{K}(\theta^{\star};Z)\end{bmatrix}D^{-\frac{1}{2}}, \tag{3.3}\] \[D \triangleq\operatorname{diag}\left(\{\operatorname{Var}_{P_{0}}[g _{k}(\theta^{\star},Z)]\}_{k=1}^{K}\right).\] From theorem 3.1 and corollary 3.2 (see proofs in Appendices B and C), we find that the probability of constraint satisfaction decreases _exponentially_ as the number of constraints increases. We also see that our method is no longer pivotal for multiple expected value constraints: the uncertainty set radii depend on the (unknown) correlation structure among the constraint values. Fortunately, it is not hard to estimate this correlation structure. The most straightforward way is with the empirical correlation matrix. Let \(\widehat{\Sigma}_{n}\) be the empirical covariance matrix of the constraint values. The empirical correlation matrix is then given by \(\widehat{R}_{n}\triangleq\operatorname{diag}(\widehat{\Sigma}_{n})^{-\frac{1 }{2}}\widehat{\Sigma}_{n}\operatorname{diag}(\widehat{\Sigma}_{n})^{-\frac{1 }{2}}\). Finally, it is straightforward to extend the algorithm for solving (1.3) to (3.2). The Lagrangian of (3.2) is \[L(\theta,\mathbf{\lambda}) \triangleq\tfrac{1}{n}\sum_{i=1}^{n}f(\theta;Z_{i})+\sum_{k=1}^{K} \lambda_{k}\sup_{P:D_{\varphi}(P\|\widehat{P}_{n})\leq\frac{\rho_{k}}{n}}\mathbb{E}_{P} \big{[}g_{k}(\theta;Z)\big{]}\] \[=\tfrac{1}{n}\sum_{i=1}^{n}f(\theta;Z_{i})+\sum_{k=1}^{K}\lambda_ {k}\inf_{\mu_{k}\geq 0,\nu_{k}\in\mathbb{R}}\left\{\frac{1}{n}\sum_{i=1}^{n}\mu_{k} \varphi^{*}\big{(}\tfrac{g_{k}(\theta;Z_{i})-\nu_{k}}{\mu_{k}}\big{)}+\mu_{k}\frac{\rho_ {k}}{n}+\nu_{k}\right\},\] where we recalled the dual form of the robust constraint function (2.5) in the second step. We see that evaluating the dual function \(\inf_{\theta}L(\theta,\mathbf{\lambda})\) (at a fixed \(\mathbf{\lambda}\)) entails solving a stochastic optimization problem that is suitable for stochastic approximation.
This suggests a similar dual ascent algorithm for solving (3.2); we skip the details here (see Algorithm 2 in Appendix D). ## 4 Simulations We simulate the frequency of constraint satisfaction for the following multi-item newsvendor problem: \[\begin{array}{ll}\max_{\theta\in\Theta}&\mathbb{E}_{P_{0}}\big{[}p^{\top} \min\{Z,\theta\}-c^{\top}\theta\big{]}\\ \text{subject to}&\mathbb{E}_{P_{0}}[(\|Z^{(1)}\|_{2}^{2}-\|\theta^{(1)}\|_{2}^{ 2})_{+}]\leq\varepsilon_{1}\\ &\mathbb{E}_{P_{0}}[(\|Z^{(2)}\|_{2}^{2}-\|\theta^{(2)}\|_{2}^{2})_{+}]\leq \varepsilon_{2}\end{array} \tag{4.1}\] where \(c\in\mathbb{R}_{+}^{d}\) is the manufacturing cost, \(p\in\mathbb{R}_{+}^{d}\) is the sell price, \(\theta\in\Theta=[0,100]^{d}\) is the number of items in stock, \(Z\in\mathbb{R}^{d}\) is a random variable with probability distribution \(P_{0}\) representing the demand, and there are \(d\) items in total. The distribution \(P_{0}\) is unknown but we observe IID samples \(Z_{1},\ldots,Z_{n}\) from \(P_{0}\). The items are partitioned into two groups so that the corresponding demand and stock can be written as \(Z=(Z^{(1)},Z^{(2)})\) and \(\theta=(\theta^{(1)},\theta^{(2)})\). The constraints in the problem exclude stock levels that underestimate the demand too much for each group of items, where \(\varepsilon_{1},\varepsilon_{2}>0\) indicate the tolerance levels for such underestimation. The goal of the problem is to maximize the profit while satisfying the constraints. It is easy to rewrite the maximization problem (4.1) as a minimization problem with expected value constraints in the form of (3.1) so that we can apply our method (3.2). We pick \(P_{0}\) as a multivariate Gaussian with independent components so that the two constraints are generally uncorrelated with each other (see Appendix E for details). Throughout the simulations, we solve (3.2) with \(\boldsymbol{\rho}=(z_{\alpha}^{2},z_{\alpha}^{2})^{\top}\) for \(\alpha\in\{0.4,0.25,0.1,0.05,0.005\}\).
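For concreteness, a small sketch (hypothetical demand parameters, stock levels, and tolerance chosen by us) of how the group-1 constraint in (4.1) is evaluated on samples:

```python
import numpy as np

# Hypothetical instance of the group-1 newsvendor constraint in (4.1).
rng = np.random.default_rng(3)
d1, n = 2, 1000
Z1 = rng.normal(loc=5.0, scale=1.0, size=(n, d1))  # demand samples, group 1
theta1 = np.array([5.5, 5.5])                      # candidate stock levels
eps1 = 5.0                                         # assumed tolerance epsilon_1

def underestimation(Z, theta):
    """Samples of the constraint integrand (||Z||^2 - ||theta||^2)_+."""
    gap = np.sum(Z**2, axis=1) - np.sum(theta**2)
    return np.maximum(gap, 0.0)

g_vals = underestimation(Z1, theta1) - eps1        # constraint samples g(theta; Z_i)
print(np.mean(g_vals))  # empirical constraint value; (3.2) adds a robust margin on top
```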
As suggested by our asymptotic theory in Section 3, the nominal probability of constraint satisfaction is \(1-\alpha\) for each constraint and \((1-\alpha)^{2}\) for both constraints due to the independence setup. In Figure 1, we plot frequencies of constraint satisfaction for each constraint and both constraints, all of which are averaged over \(1000\) replicates. As the sample size \(n\) grows, the frequency-versus-probability curve converges to the dashed line marking the theoretical limiting probability of constraint satisfaction, validating our theory in the large sample regime. For more simulations (_e.g._, a single constraint, two dependent constraints), we refer to Appendix E. ## 5 Application to fair machine learning As ML models are deployed in high-stakes decision making and decision support roles, the fairness of the models has come under increased scrutiny. In response, there is a flurry of recent work on mathematical definitions of algorithmic fairness [16; 23; 25] and algorithms to enforce the definitions [1; 38; 10]. A prominent class of fairness definitions is _group fairness_; such definitions require equality of certain metrics (_e.g._ false/true positive rates) among demographic groups. For example, consider a fair binary classification problem. Let \(\mathcal{X}\subset\mathbb{R}^{d}\) be the input space, \(\mathcal{Y}=\{0,1\}\) be the set of possible labels, and \(\mathcal{A}\) be the set of possible values of the protected/sensitive attribute. In this setup, training and test examples are tuples of the form \((X,A,Y)\in\mathcal{X}\times\mathcal{A}\times\mathcal{Y}\), and a classifier is a map \(f:\mathcal{X}\to\{0,1\}\). A popular definition of algorithmic fairness for binary classification is _equality of opportunity_ [23]. **Definition 5.1** (equality of opportunity).: _Let \(Y=1\) be the advantaged label that is associated with a positive outcome and \(\widehat{Y}\triangleq f(X)\) be the output of the classifier.
Equality of opportunity entails \(\mathbb{P}\{\widehat{Y}=1\mid A=a,Y=1\}=\mathbb{P}\{\widehat{Y}=1\mid A=a^{ \prime},Y=1\}\) for all \(a,a^{\prime}\in\mathcal{A}\)._ Equality of opportunity, or true positive rate parity, means that the prediction \(\widehat{Y}=f(X)\) conditioned on the advantaged label \(Y=1\) is statistically independent of the protected attribute \(A\). Furthermore, an approximate version of equality of opportunity can be readily defined. We say that \(\widehat{Y}=f(X)\) satisfies \(\varepsilon\)_-equality of opportunity_ if \(\mathbb{P}\{\widehat{Y}=1\mid A=a,Y=1\}-\mathbb{P}\{\widehat{Y}=1\mid A=a^{ \prime},Y=1\}\leq\varepsilon\) for all \(a,a^{\prime}\in\mathcal{A}\). In this case, \(\varepsilon>0\) represents a practitioner's _tolerance_ for fairness violations.

Figure 1: Frequency versus limiting probability of constraint satisfaction of the first constraint (left), the second constraint (middle), and both of the constraints (right).

Given a parametric model space \(\mathcal{H}=\{f_{\theta}(\cdot):\theta\in\Theta\}\) and loss function \(\ell:\Theta\times\mathcal{X}\times\mathcal{Y}\to\mathbb{R}_{+}\), an in-processing fair ML routine is to minimize the (empirical) risk \(\mathbb{E}\left[\ell(\theta;X,Y)\right]\) while satisfying some fairness constraints.
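As a concrete illustration (toy arrays and helper names are ours), the equality-of-opportunity gap behind these constraints can be estimated from predictions on held-out data:

```python
import numpy as np

def tpr_gap(Yhat, A, Y):
    """Largest pairwise difference of P(Yhat = 1 | A = a, Y = 1) across groups."""
    groups = np.unique(A)
    tprs = [np.mean(Yhat[(A == a) & (Y == 1)]) for a in groups]
    return max(tprs) - min(tprs)

# toy data: predictions Yhat, protected attribute A, labels Y
Yhat = np.array([1, 1, 0, 1, 0, 1, 1, 0])
A    = np.array([0, 0, 0, 0, 1, 1, 1, 1])
Y    = np.array([1, 1, 1, 0, 1, 1, 0, 1])
print(tpr_gap(Yhat, A, Y))  # 1/3: group-0 TPR is 2/3, group-1 TPR is 1/3
```

On this sample, \(\varepsilon\)-equality of opportunity holds exactly when the printed gap is at most \(\varepsilon\).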
Most commonly, definitions of group fairness (including equality of opportunity, demographic parity, and more) can be written as special cases of a general set of linear constraints (Bickel and Rubin, 1985) of the form \(\mathbf{M}\boldsymbol{\mu}(\theta)\leq\mathbf{c}\), where matrix \(\mathbf{M}\in\mathbb{R}^{K\times T}\) and vector \(\mathbf{c}\in\mathbb{R}^{K}\) encode the constraints; \(\boldsymbol{\mu}(\theta):\Theta\to\mathbb{R}^{T}\) is a vector of (conditional) moments \(\mu_{t}(\theta)=\mathbb{E}\left[h_{t}(X,A,Y,\theta)\mid\mathcal{E}_{t}\right]\) for \(t\in[T]\); \(h_{t}:\mathcal{X}\times\mathcal{A}\times\mathcal{Y}\times\Theta\to\mathbb{R}\); event \(\mathcal{E}_{t}\) is defined with respect to \((X,A,Y)\). This framework fits into our methodology once we note that each (conditional) moment can be written as \[\mu_{t}(\theta)=\frac{\mathbb{E}_{(X,A,Y)\sim P_{0}}\big{[}h_{t}(X,A,Y,\theta )\times\mathbf{1}\left\{\mathcal{E}_{t}(X,A,Y)\right\}\big{]}}{\mathbb{E}_{(X,A,Y)\sim P_{0}}\big{[}\mathbf{1}\left\{\mathcal{E}_{t}(X,A,Y)\right\}\big{]}}. \tag{5.1}\] Here the indicator \(\mathbf{1}\left\{\mathcal{E}_{t}\right\}\) takes value \(1\) if the event \(\mathcal{E}_{t}\) happens, and \(0\) otherwise. Moreover, we use \(\mathcal{E}_{t}(X,A,Y)\) to emphasize that \(\mathcal{E}_{t}\) only depends on \((X,A,Y)\) but not on \(\theta\) in any way. Note that (5.1) is a ratio of expected values, which is a non-linear statistical functional of \(P_{0}\). To use our method, we first replace the denominator of \(\mu_{t}(\theta)\) with an estimator, such as the unbiased estimator \(\widehat{\mathbb{P}}(\mathcal{E}_{t})=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1} \left\{\mathcal{E}_{t}(X_{i},A_{i},Y_{i})\right\}\). The resulting plug-in estimate of \(\mu_{t}(\theta)\) then becomes linear in \(P_{0}\), allowing us to apply our method (see similar tricks in (Bickel and Rubin, 1985)).
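The plug-in estimate of (5.1) is just a ratio of two sample averages; a minimal sketch (helper names are ours):

```python
import numpy as np

def conditional_moment(h_vals, event_mask):
    """Plug-in estimate of (5.1): mean(h * 1{E}) / mean(1{E})."""
    p_hat = np.mean(event_mask)            # unbiased estimate of P(E_t)
    return np.mean(h_vals * event_mask) / p_hat

rng = np.random.default_rng(4)
Y = rng.integers(0, 2, size=1000)          # labels defining the event E_t = {Y = 1}
h_vals = rng.normal(size=1000)             # h_t(X, A, Y, theta) for each sample
print(conditional_moment(h_vals, Y == 1))  # estimate of E[h | Y = 1]
```

With the denominator frozen at its estimate, the remaining numerator is a plain sample average, which is the linearity needed to apply (3.2).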
We describe the application of our method to \(\varepsilon\)-equality of opportunity in Appendix F. ### A two-stage method for unknown active set In practice, it is probable that only a subset of the constraints are active. Furthermore, we do not know beforehand whether or not a constraint is active in the true population problem. To handle this scenario, we propose a two-stage method: 1. At the first stage, we solve the sample average approximation (SAA) problem (3.2) with \(\boldsymbol{\rho}=\mathbf{0}_{K}\). By doing so, we identify the active set of the SAA problem. 2. At the second stage, we solve (3.2) with \(\boldsymbol{\rho}\) such that \(\rho_{k}\) is a positive number only if the \(k\)-th constraint, \(k\in[K]\), was identified as active at the first stage. In Appendix G, we show that the two-stage method also enjoys the calibration property (similar to Theorem 3.1 and Corollary 3.2) under standard assumptions (_i.e._, strict complementarity). At a high level, the limiting probability of satisfying the true constraints depends solely on the correlation structure between active constraints and the uncertainty set radii for active constraints, as long as the SAA problem identifies active constraints with probability tending to \(1\). ### Proxy dual function for non-differentiable constraints Constraint functions in fair ML are often non-differentiable. For instance, fairness metrics are typically linear combinations of indicators that result in non-differentiable rate constraints (Bickel and Rubin, 1985). This prevents the use of any gradient-based optimization algorithms. Fortunately, only the dual function evaluation step in Algorithm 1 requires access to gradients.
Therefore, we can modify the algorithm by: (1) introducing a proxy dual function, which uses a differentiable surrogate \(\tilde{g}\) instead of the non-differentiable \(g\) in the dual function evaluation step; (2) keeping \(g\) in the dual ascent step. For an indicator function \(h(t)=\mathbf{1}\{t>0\}\), one can replace it by the sigmoid function \(h_{1}(t)=(1+e^{-at})^{-1}\) or the hinge upper bound \(h_{2}(t)=\max\{0,t+1\}\) to produce smooth surrogates for non-differentiable rate constraints (Bickel and Rubin, 1985). We summarize the proxy dual ascent algorithm in Appendix H. ### Adult experiments We compare the frequency of constraint satisfaction (at test time) of the sample average approximation and our methods with nominal probability \(0.60,0.75,0.90,0.95\) using the Adult dataset from UCI. The SAA baseline directly leads to a one-half chance of constraint violation, while our method's constraint satisfaction frequency matches its nominal value. The price of a higher chance of test-time fairness satisfaction is an increase in classification error rate as shown in the right panel. From the baseline to \(95\%\) chance of fairness satisfaction, we trade off roughly a \(2\%\) increase in error rate. We refer to Appendices I and K for details and more experiments. ## 6 Summary and discussion We explore the problem of exact constraint satisfaction probability in stochastic optimization with expected-value constraints. We propose a distributionally robust reformulation of data-dependent constraints and provide a theoretical guarantee of constraint satisfaction with an asymptotically exact probability specified by the user. To solve the reformulated problem, we propose a scalable dual ascent algorithm and its variants. The computational cost of our algorithm is comparable to that of a standard distributionally robust optimization problem.
Our theory on exact constraint satisfaction probability is validated via simulations on the resource-constrained newsvendor problem. The efficacy of our methods is empirically demonstrated on fair machine learning applications. Some data-dependent constraints are by nature _non-linear_ in the underlying probability measure. For example, (5.1) is a ratio of expected values. An intriguing direction for future research is to generalize the methods and theory developed in this work to constraints on non-linear functions of expected values. Such forms of constraints are known as _statistical functionals_ in the statistics literature [19]. The non-linear dependence of the constraint function on the probability measure precludes stochastic approximation as a general way of evaluating the dual function, as the constraint function no longer admits a dual form (2.5), calling for the development of a new algorithm. ## Acknowledgments and Disclosure of Funding This paper is based upon work supported by the National Science Foundation (NSF) under grants no. 1916271, 2027737, and 2113373.
2303.11290
An importance sampling method for Feldman-Cousins confidence intervals
In various high-energy physics contexts, such as neutrino-oscillation experiments, several assumptions underlying the typical asymptotic confidence interval construction are violated, such that one has to resort to computationally expensive methods like the Feldman-Cousins method for obtaining confidence intervals with proper statistical coverage. By construction, the computation of intervals at high confidence levels requires fitting millions or billions of pseudo-experiments, while wasting most of the computational cost on overly precise intervals at low confidence levels. In this work, a simple importance sampling method is introduced which reuses pseudo-experiments produced for all tested parameter values in a single mixture distribution. This results in a significant error reduction on the estimated critical values, especially at high confidence levels, and simultaneously yields a correct interpolation of these critical values between the parameter values at which the pseudo-experiments were produced. The theoretically calculated performance is demonstrated numerically using a simple example from the analysis of neutrino oscillations. The relationship to similar techniques applied in statistical mechanics and $p$-value computations is discussed.
Lukas Berns
2023-03-20T17:27:27Z
http://arxiv.org/abs/2303.11290v1
# An importance sampling method for Feldman-Cousins confidence intervals ###### Abstract In various high-energy physics contexts, such as neutrino-oscillation experiments, several assumptions underlying the typical asymptotic confidence interval construction are violated, such that one has to resort to computationally expensive methods like the Feldman-Cousins method for obtaining confidence intervals with proper statistical coverage. By construction, the computation of intervals at high confidence levels requires fitting millions or billions of pseudo-experiments, while wasting most of the computational cost on overly precise intervals at low confidence levels. In this work, a simple importance sampling method is introduced which reuses pseudo-experiments produced for all tested parameter values in a single mixture distribution. This results in a significant error reduction on the estimated critical values, especially at high confidence levels, and simultaneously yields a correct interpolation of these critical values between the parameter values at which the pseudo-experiments were produced. The theoretically calculated performance is demonstrated numerically using a simple example from the analysis of neutrino oscillations. The relationship to similar techniques applied in statistical mechanics and \(p\)-value computations is discussed. ## I Introduction An essential part of any experiment is the statistical analysis to extract information about the model parameters, such as physics constants, from the measurement outcome. As measurements inherently include statistical fluctuations, one often reports these constraints in the form of confidence intervals (or confidence regions in higher dimensions). 
These are intervals over the parameter space calculated from the observed data, which are constructed in such a way that for any true value of the parameters, at least a pre-defined percentage of the possible experimental outcomes would produce an interval that covers the true parameter value. The pre-defined percentage over possible experimental outcomes is called the confidence level (CL). For the rest of this paper we shall use the following notation: \(x\) denotes the experimental outcome, which can be a vector of many observations within the single experiment. \(\theta\) denotes the model parameters, which can contain one or higher dimensional continuous degrees of freedom, and may contain discrete degrees of freedom as well. \(p(x\mid\theta)\) denotes the probability distribution function for the experimental outcomes given some model parameters. \(p(x\mid\theta)\) seen as a function of \(\theta\) for a given experimental outcome is called the likelihood function and denoted \(L(\theta\mid x):=p(x\mid\theta)\). The parameter value for which the likelihood is maximized is denoted \(\hat{\theta}(x):=\arg\max_{\theta}L(\theta\mid x)\), and the difference of the log-likelihood at some parameter value to the maximum likelihood is denoted as \(\Delta\chi^{2}(\theta\mid x):=-2\log L(\theta\mid x)/L(\hat{\theta}(x)\mid x)\). The confidence level is denoted \(1-\alpha\). In many cases a useful theorem by Wilks [1] can be applied, which greatly simplifies the construction of such confidence intervals. The theorem says that in the asymptotic limit, \(\Delta\chi^{2}(\theta\mid x)\) evaluated at the true parameter value is distributed as a chi-squared distribution with \(k\) degrees of freedom, where \(k\) is the dimension of the parameter space \(\theta\), which has to be continuous. 
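For a single continuous parameter (\(k=1\)), the critical value implied by Wilks' theorem is just a chi-squared quantile. A minimal pure-Python sketch (helper names are mine; it uses the 1-dof identity \(P(X\leq c)=\operatorname{erf}(\sqrt{c/2})\) and bisection rather than a statistics library):

```python
import math

def chi2_cdf_1dof(c):
    # P(X <= c) for a chi-squared variable with 1 degree of freedom:
    # P(Z^2 <= c) = erf(sqrt(c / 2)) for a standard normal Z
    return math.erf(math.sqrt(c / 2.0))

def wilks_critical_value(alpha):
    # smallest c with P(X > c) <= alpha, found by bisection on the CDF
    lo, hi = 0.0, 200.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 1.0 - chi2_cdf_1dof(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, \(\alpha=0.05\) gives \(c\approx 3.84\), and the \(5\sigma\) threshold \(\alpha=5.7\times 10^{-7}\) gives \(c\approx 25\) for one degree of freedom.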
The theorem holds under suitable conditions which ensure that a maximum likelihood value can be found in the neighborhood of the true parameter value with a quadratic Taylor expansion of the likelihood. Given this asymptotic distribution, one can thus construct a confidence interval by all values of \(\theta\) that satisfy \(\Delta\chi^{2}(\theta\mid x)\leq\Delta\chi^{2}_{c}\), where the critical value \(\Delta\chi^{2}_{c}\) is easily computed from the quantile function of the chi-squared distribution. Due to the necessary assumptions, confidence intervals based on Wilks' theorem are not suitable if the number of observations is small, or the parameter space is unsuitable because of physical boundaries (such as \(\theta\geq 0\)), discrete degrees of freedom, or periodicities that cannot be captured by the quadratic expansion. Neutrino oscillation experiments for example suffer from all of these deficiencies, for which we will present an example later. In this situation, one has to resort to actually producing ensembles of pseudo-experiments for selected parameter values to study the distribution of a suitable statistic to be used for the construction of the confidence interval. A commonly used method is the Feldman-Cousins (FC) method [2], where for each pseudo-experiment \(x^{\prime}\) generated assuming a true value \(\theta_{t}\), the \(\Delta\chi^{2}(\theta_{t}\mid x^{\prime})\) value at the true parameter value is computed to obtain its distribution. Then the critical value \(\Delta\chi_{c}^{2}\) is obtained by the empirical \(1-\alpha\) percentile of this distribution. Since the distribution of \(\Delta\chi^{2}(\theta_{t}\mid x^{\prime})\) will in general be different for each true parameter value, the critical values are now a function of the true value at which they are computed, which we denote as \(\Delta\chi_{c}^{2}(\theta_{t})\). 
Finally, the confidence interval for the actually observed data \(x\) is constructed by choosing \(\Delta\chi^{2}(\theta\mid x)\leq\Delta\chi_{c}^{2}(\theta)\). In practice, it is only possible to compute \(\Delta\chi_{c}^{2}(\theta)\) at selected parameter values, which need to be interpolated, for example linearly, in order to compute the confidence intervals. The Feldman-Cousins method is very inefficient for obtaining high-CL intervals, because by definition, only a small fraction of pseudo-experiments contribute to the quantile computation. For example, in particle physics the threshold for "discovery" is commonly chosen at \(\alpha=5.7\times 10^{-7}\) (the "\(5\sigma\)" threshold), in which case only one in 1.7 million pseudo-experiments would (by definition) have a \(\Delta\chi^{2}(\theta_{t}\mid x^{\prime})\) value larger than the critical value. As a result, one easily ends up with millions of pseudo-experiments to be fitted in order to obtain the necessary critical values, while simultaneously "wasting" most of this computation time for over-precise critical values at lower CL. In practice, FC confidence intervals are often computed only up to \(2\sigma\) (\(\alpha=4.6\times 10^{-2}\)) or \(3\sigma\) CL (\(\alpha=2.7\times 10^{-3}\)) for such reasons. In this work, we show that it is actually extremely easy to introduce an alternative sampling distribution that generates high-CL pseudo-experiments much more frequently: one simply reuses the pseudo-experiments generated at the values of the parameters in the form of a mixture distribution. By appropriate reweighting, this results in an exponential reduction in the errors on critical values for high CL. The method also introduces a method for correctly interpolating the critical values between the subset of true parameter values, thus removing the need of naive interpolation methods that are commonly employed. The paper is organized as follows. First, we review the conventional FC method. 
Next we define the new method, deriving it from a discussion of an ideal importance sampling distribution. Bounds for the importance sampling weights are calculated, which are used to calculate the reduction of errors on the estimated critical values compared to the conventional FC method. The ability to interpolate critical values and the calculation of errors and other diagnostics are discussed. Next, a toy example from the analysis of neutrino oscillations is used to compare the two computation methods and the improvement is checked against the theoretical upper bounds from the previous section. Finally, we discuss the relationship to similar techniques in statistical mechanics and \(p\)-value calculations, the relationship to Bayesian marginalized likelihoods, and the limit of applicability in the presence of nuisance parameters. ## II Critical values in the conventional Feldman-Cousins method To prepare the notation, we briefly review the computation of critical values in the conventional Feldman-Cousins method. First, we make a choice of \(S\) points in the parameter space, which we denote \(\theta_{s}\) with \(s\) going from 1 to \(S\). At each \(\theta_{s}\), we now generate an ensemble of \(n_{\rm exp}\) pseudo-experiments \(\{x\}_{s}\) by sampling from \(p(x\mid\theta_{s})\). While all pseudo-experiments are assumed to live in the same space, the \(s\) suffix on the curly brackets representing the ensemble keeps track of the distribution that generated the experiments. For each pseudo-experiment \(x\in\{x\}_{s}\), we now compute \(\Delta\chi^{2}(\theta_{s}\mid x)\) and find the \(1-\alpha\) quantile \(\Delta\chi_{c,s}^{2}\) through any suitable estimator.
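In code, the empirical-quantile step for one ensemble \(\{x\}_{s}\) can be sketched as follows (my own helper, matching the counting convention of Eq. (1); it assumes \(\alpha\,n_{\rm exp}\geq 1\)):

```python
def empirical_critical_value(dchi2_values, alpha):
    # Take the floor(alpha * n_exp)-th largest Delta chi^2 as the critical
    # value, so that exactly that many pseudo-experiments lie at or above it.
    n_exp = len(dchi2_values)
    k = int(alpha * n_exp)  # floor; must be >= 1 for this sketch
    ordered = sorted(dchi2_values, reverse=True)
    return ordered[k - 1]
```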
For example, one may simply sort the \(\Delta\chi^{2}(\theta_{s}\mid x)\) values and take the \(\lfloor\alpha\times n_{\rm exp}\rfloor\)-th largest value as \(\Delta\chi_{c,s}^{2}\), in which case we have \[\sum_{x\in\{x\}_{s}}I\big{(}\Delta\chi^{2}(\theta_{s}\mid x)\geq\Delta\chi_{c,s}^{2}\big{)}=\lfloor\alpha\times n_{\rm exp}\rfloor. \tag{1}\] Here, \(I(\cdot)\) is the indicator function returning 1 if the logical statement in the parentheses is true, and 0 otherwise. \(\lfloor\cdot\rfloor\) denotes the floor function. Finally, the critical value function \(\Delta\chi_{c}^{2}(\theta)\) is obtained by some interpolation scheme. For example, one may set \(\Delta\chi_{c}^{2}(\theta_{s}):=\Delta\chi_{c,s}^{2}\) and linearly interpolate for any \(\theta\) values in between. To reduce the interpolation error, one typically has to either manually or automatically [3] adjust the choice of sampling parameter values \(\{\theta\}_{S}\) in an iterative scheme. The asymptotic variance on the critical values is proportional to the binomial error \(\alpha(1-\alpha)/n_{\rm exp}\), so high-CL (\(\alpha\ll 1\)) generally means that one needs \(n_{\rm exp}\gg 1/\alpha\) for reliable critical values. Since the whole process is repeated for all \(S\) points in the parameter space, the total number of generated (and fitted) pseudo-experiments is \(S\times n_{\rm exp}\gg S/\alpha\). ## III The mixture Feldman-Cousins method ### Definition Our new method, which we shall refer to as the "mixture Feldman-Cousins" method, differs from the conventional method mainly in the reuse of _all_ generated pseudo-experiments for the critical-value computation of _each_ target parameter space point \(\theta_{t}\) with an additional weight \[w(x\mid\theta_{t}):=\frac{p(x\mid\theta_{t})}{\frac{1}{S}\sum_{s=1}^{S}p(x\mid\theta_{s})}=\frac{1}{\frac{1}{S}\sum_{s=1}^{S}\exp[-\frac{1}{2}\{\Delta\chi^{2}(\theta_{s}\mid x)-\Delta\chi^{2}(\theta_{t}\mid x)\}]}.
\tag{2}\] The value in the denominator is the sampling probability distribution of \(x\in\{x\}_{\text{mix}}:=\bigcup_{s=1}^{S}\{x\}_{s}\), which is the mixture distribution of \(p(x\mid\theta_{s})\) for all \(\theta_{s}\) values. Since the weights are based on the sampling probabilities which are nothing but the likelihood function, they are computable using the same procedure that calculates the \(\Delta\chi^{2}(\theta\mid x)\) for each pseudo-experiment. Due to taking the difference of two \(\Delta\chi^{2}\) values, the contribution from the minimum \(\chi^{2}\) at \(\hat{\theta}(x)\), as well as any \(\theta\)-independent offsets (e.g. the \(n!\) factor in the Poisson likelihood), vanish in the denominator and hence do not need to be known accurately. While in the conventional method one only needs to compute \(\Delta\chi^{2}(\theta_{s}\mid x)\) for the \(\theta_{s}\) value at which the pseudo-experiment was generated, here we need it for all \(\theta_{s^{\prime}}\) (including \(s^{\prime}\neq s\)) and \(\theta_{t}\). Now we can define the critical value \(\Delta\chi^{2}_{c,t}\) as the \(w\)-weighted \(1-\alpha\) quantile of \(\Delta\chi^{2}(\theta_{t}\mid x)\) for \(x\sim\{x\}_{\text{mix}}\), for example \[\sum_{x\in\{x\}_{\text{mix}}}w(x\mid\theta_{t})I\big{(}\Delta\chi^{2}(\theta_{t}\mid x)\geq\Delta\chi^{2}_{c,t}\big{)}\lesssim\alpha\times Sn_{\text{exp}}, \tag{3}\] where the \(\lesssim\) is meant to represent that we take the smallest \(\Delta\chi^{2}_{c,t}\) that satisfies the inequality. ### Derivation In order to obtain more pseudo-experiments at large \(\Delta\chi^{2}\) values, which would yield more precise high-CL critical values, we use an importance sampling approach: instead of directly sampling from the target distribution \(p(x\mid\theta_{t})\), we sample from a different distribution and weight the sampled toys by the ratio of probability distributions to calculate the relevant quantities under the target distribution (the critical values).
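Eqs. (2) and (3) translate almost directly into code. A sketch (function names are mine) that takes, for each pooled pseudo-experiment, its \(\Delta\chi^{2}\) values on the grid and at the target point:

```python
import math

def mixture_weight(dchi2_grid, dchi2_target):
    # Eq. (2): w(x | theta_t) = S / sum_s exp(-(1/2)(dchi2_s - dchi2_target)).
    # Only Delta chi^2 differences enter; theta-independent offsets cancel.
    S = len(dchi2_grid)
    return S / sum(math.exp(-0.5 * (d - dchi2_target)) for d in dchi2_grid)

def mixture_critical_value(samples, alpha):
    # Eq. (3): smallest c such that the weighted count of {dchi2 >= c} does
    # not exceed alpha * S * n_exp (= alpha * len(samples)). `samples` are
    # (dchi2_target, weight) pairs over {x}_mix. Assumes the budget covers
    # at least one pseudo-experiment.
    budget = alpha * len(samples)
    cum, c = 0.0, float("inf")
    for d, w in sorted(samples, reverse=True):
        if cum + w > budget:
            break
        cum += w
        c = d
    return c
```

With unit weights this reduces to the conventional empirical quantile, and when \(\theta_{t}\) lies on the grid the weight is automatically bounded by \(S\) as in Eq. (10) below.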
The question therefore becomes: what is the ideal sampling distribution to generate the desired pseudo-experiments? Note that it is important to find a sampling distribution that is as close as possible to the target distribution apart from generating high \(\Delta\chi^{2}\) pseudo-experiments with higher probability. In particular, if each experiment \(x\) consists of \(m\) measurements, the experiments are points in an \(m\)-dimensional space and there are \(m\) dimensions in which we can stretch or shrink the sampling distribution. Instead of thinking about estimating quantiles, let's think of estimating the probability density \(p(Y(x)\mid\theta_{t})\) using histograms for \(Y(x):=\Delta\chi^{2}(\theta_{t}\mid x)\). When using reweighting, in addition to the binomial error \(n_{\text{exp}}p(1-p)\) for the number of pseudo-experiments falling into a bin, there will be an additional contribution due to the variance of weights: given the estimator \[\hat{P}_{b}:=\frac{1}{n_{\text{exp}}}\sum_{i=1}^{n_{\text{exp}}}w(x_{i})I(y_{ b}\leq Y(x_{i})<y_{b+1}) \tag{4}\] and using \(I(\cdot)^{2}=I(\cdot)\) we get \[\mathbb{E}[\hat{P}_{b}] =\pi_{b}\mathbb{E}_{b}[w] \tag{5}\] \[\text{Var}[\hat{P}_{b}] =\frac{1}{n_{\text{exp}}}\left(\pi_{b}(1-\pi_{b})\mathbb{E}_{b}[ w]^{2}+\pi_{b}\text{Var}_{b}[w]\right)\] (6) \[\pi_{b} :=\mathbb{E}[I(y_{b}\leq Y(x)<y_{b+1})]\] (7) \[\mathbb{E}_{b}[w^{k}] :=\frac{1}{\pi_{b}}\mathbb{E}[w(x)^{k}I(y_{b}\leq Y(x)<y_{b+1})]\] (8) \[\text{Var}_{b}[w] :=\mathbb{E}_{b}[w^{2}]-\mathbb{E}_{b}[w]^{2} \tag{9}\] where in Eq. (6) the first term is the usual binomial error due to the number of pseudo-experiments falling into the bin, and the second is the additional term due to the variance of weights among pseudo-experiments falling into the bin. 
We therefore want to increase the number (\(n_{\text{exp}}\pi_{b}\)) of pseudo-experiments falling into a high-\(Y(x)\) bin to reduce the binomial error, while at the same time keeping the weight-variance within the bin as small as possible. This means the ideal case of 0-variance would be for the weights to depend on \(x\) through \(Y(x)\) alone. Or equivalently, since the weights are the ratio of the target and sampling distribution, we want to use a sampling distribution that differs from the target distribution only by a functional factor of \(\Delta\chi^{2}(\theta_{t}\mid x)\). The key idea is to think about the meaning of a high \(\Delta\chi^{2}(\theta_{t}\mid x)\) value. The likelihood \(L(\theta\mid x)\) is the probability to sample the given pseudo-experiment \(x\) from \(\theta\). A high \(\Delta\chi^{2}(\theta_{t}\mid x)=-2\log L(\theta_{t}\mid x)/L(\hat{\theta}(x)\mid x)\) means there exists a value \(\hat{\theta}(x)\) where it's more likely to sample the given pseudo-experiment than at the "target" \(\theta_{t}\) value. Thus by using pseudo-experiments generated at \(\theta\neq\theta_{t}\), we can more efficiently obtain ones with high \(\Delta\chi^{2}(\theta_{t}\mid x)\). The naive choice of simply using pseudo-experiments generated at some \(\theta^{\prime}\) (\(\neq\theta_{t}\)) weighted by the ratio of sampling probabilities \(p(x\mid\theta_{t})/p(x\mid\theta^{\prime})\) however will do worse than before. This is because \(\hat{\theta}(x)\) depends on the pseudo-experiment \(x\), such that for some pseudo-experiments it may be preferable to sample \(x\) from \(p(x\mid\theta_{t})\) than from \(p(x\mid\theta^{\prime})\), resulting in an exponentially large (often unbounded) variance of weights.
The solution is simple: by using a mixture distribution \(p_{\text{sample}}(x)=\frac{1}{S}\sum_{s=1}^{S}p(x\mid\theta_{s})\) over a set \(\{\theta\}_{S}:=\{\theta_{1},\theta_{2},\cdots,\theta_{S}\}\) which includes \(\theta_{t}\), we can guarantee the weights to be bounded from above, because \(p_{\text{sample}}(x)\geq\frac{1}{S}p(x\mid\theta_{t})\) and hence \[w(x\mid\theta_{t})\leq S. \tag{10}\] ### Bounds on pseudo-experiment weights for a good grid If we choose the grid \(\{\theta\}_{S}\) dense _and_ wide enough (a _good_ grid) such that we may assume to have a good minimum \(\hat{\theta}_{S}(x)\) on \(\{\theta\}_{S}\) in the sense of \[\Delta\chi^{2}(\hat{\theta}_{S}(x)\mid x) \leq\begin{cases}\epsilon&\text{if }\Delta\chi^{2}(\theta_{t}\mid x)\leq \Delta\chi^{2}_{\text{max}}\\ \Delta\chi^{2}(\theta_{t}\mid x)&\text{otherwise}\end{cases} \tag{11}\] \[\Delta\chi^{2}(\hat{\theta}_{S}(x)\mid x) :=\min_{s}\Delta\chi^{2}(\theta_{s}\mid x) \tag{12}\] for all \(x\), we can put a much stricter bound on the weights than Eq. (10). Here, \(\epsilon\lesssim 1\) will be smaller for denser spacing of \(\{\theta\}_{S}\) and \(\Delta\chi^{2}_{\text{max}}\) will be larger for a wider range covered by \(\{\theta\}_{S}\). Since this additional condition can deal with the case of \(\theta_{t}\notin\{\theta\}_{S}\) as well, let us define a symbol \(C\) which is \(1\) if \(\theta_{t}\in\{\theta\}_{S}\) and \(0\) otherwise. Note that to guarantee Eq. (11) under \(C=0\) one generally needs to have parameter values in \(\{\theta\}_{S}\) that surround \(\theta_{t}\) sufficiently well. For example, with a \(1\)-dimensional continuous \(\theta\) parameter, one needs \(\min_{s}\theta_{s}\leq\theta_{t}\leq\max_{s}\theta_{s}\). 
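Once the \(\Delta\chi^{2}(\theta_{s}\mid x)\) values are available for all grid points, condition (11) is cheap to check per pseudo-experiment. A sketch, with \(\epsilon\) and \(\Delta\chi^{2}_{\rm max}\) as tuning constants (the defaults here are illustrative assumptions, not prescribed values):

```python
def good_grid_condition(dchi2_grid, dchi2_target, eps=1.0, dchi2_max=35.0):
    # Eqs. (11)-(12): the best grid point must fit the pseudo-experiment well
    # (within eps) whenever dchi2_target <= dchi2_max, and at least as well
    # as the target point otherwise.
    best = min(dchi2_grid)  # Delta chi^2 at hat{theta}_S(x)
    if dchi2_target <= dchi2_max:
        return best <= eps
    return best <= dchi2_target
```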
First, we focus on the pseudo-experiments with \(\Delta\chi^{2}(\theta_{t}\mid x)\leq\Delta\chi^{2}_{\text{max}}\), which are our primary interest, and note that \[\frac{p(x\mid\hat{\theta}_{S}(x))}{p(x\mid\theta_{t})}=\exp\big{[}\tfrac{1}{2}\{\Delta\chi^{2}(\theta_{t}\mid x)-\Delta\chi^{2}(\hat{\theta}_{S}(x)\mid x)\}\big{]}\geq\exp\big{[}\tfrac{1}{2}\Delta\chi^{2}(\theta_{t}\mid x)-\tfrac{\epsilon}{2}\big{]}. \tag{13}\] The sum of probability ratios is now bounded from below by \[\sum_{s=1}^{S}\frac{p(x\mid\theta_{s})}{p(x\mid\theta_{t})}\geq C\times\frac{p(x\mid\theta_{t})}{p(x\mid\theta_{t})}+\frac{p(x\mid\hat{\theta}_{S}(x))}{p(x\mid\theta_{t})}\geq C+\exp\big{[}\tfrac{1}{2}\Delta\chi^{2}(\theta_{t}\mid x)-\tfrac{\epsilon}{2}\big{]} \tag{14}\] because \(\{\theta\}_{S}\) includes both \(\hat{\theta}_{S}(x)\) and (if \(C=1\)) \(\theta_{t}\). This means for any pseudo-experiment with \(\epsilon<\Delta\chi^{2}(\theta_{t}\mid x)\leq\Delta\chi^{2}_{\text{max}}\), it is more likely to be sampled in \(Sn_{\text{exp}}\) samples from \(\frac{1}{S}\sum_{s=1}^{S}p(x\mid\theta_{s})\) than in \(n_{\text{exp}}\) samples from the target distribution \(p(x\mid\theta_{t})\). The sum of probability ratios is further bounded from above by \[\sum_{s=1}^{S}\frac{p(x\mid\theta_{s})}{p(x\mid\theta_{t})}=C+\sum_{s(\neq t)}\frac{L(\theta_{s}\mid x)}{L(\theta_{t}\mid x)}\leq C+(S-C)\exp\big{[}\tfrac{1}{2}\Delta\chi^{2}(\theta_{t}\mid x)\big{]} \tag{15}\] because \(L(\theta_{s}\mid x)\leq L(\hat{\theta}(x)\mid x)\). This means the weights are bounded by \[\frac{S}{C+(S-C)\exp\big{[}\tfrac{1}{2}\Delta\chi^{2}(\theta_{t}\mid x)\big{]}}\leq w(x\mid\theta_{t})\leq\frac{S}{C+\exp\big{[}\tfrac{1}{2}\Delta\chi^{2}(\theta_{t}\mid x)-\tfrac{\epsilon}{2}\big{]}}.
\tag{16}\] We see that the bounds depend on the pseudo-experiments through \(\Delta\chi^{2}(\theta_{t}\mid x)\) only, and also note that for sufficiently large \(\Delta\chi^{2}(\theta_{t}\mid x)\) the ratio of upper \(w_{\text{max}}\) to lower bound \(w_{\text{min}}\) converges to \[\frac{w_{\text{max}}}{w_{\text{min}}}\rightarrow(S-C)e^{\epsilon/2}, \tag{17}\] which indicates a small relative variance of weights as long as the number of grid points \(S\) is not a very large number. For pseudo-experiments with \(\Delta\chi^{2}(\theta_{t}\mid x)\) above the threshold \(\Delta\chi^{2}_{\max}\), we have \[\frac{p(x\mid\hat{\theta}_{S}(x))}{p(x\mid\theta_{t})}=\exp\big{[}\tfrac{1}{2}\{\Delta\chi^{2}(\theta_{t}\mid x)-\Delta\chi^{2}(\hat{\theta}_{S}(x)\mid x)\}\big{]}\geq 1 \tag{18}\] by Eq. (11), and hence an upper bound on the weights \[w(x\mid\theta_{t})\leq\frac{S\times p(x\mid\theta_{t})}{C\times p(x\mid\theta_{t})+p(x\mid\hat{\theta}_{S}(x))}\leq\frac{S}{C+1}. \tag{19}\] ### Critical value estimator performance with a good grid Since quantiles (the critical values) are just the inverse function of the cumulative distribution function (CDF), we can estimate the relative reduction of the quantile estimation variance by the reduction of the CDF estimation variance. The relationship for an observable \(y\sim f(y)\) is given by \(\text{Var}[\hat{y}(P)]=f(y)^{-2}\text{Var}[\hat{P}(y)]\) where \(\hat{y}(P)\) is the quantile function estimator, \(\hat{P}(y)\) the CDF estimator, and \(f(y)\) the probability distribution function. Following Eq. (3), and using the shorthand notation \(Y(x):=\Delta\chi^{2}(\theta_{t}\mid x)\), our CDF estimator is \[\hat{P}(y\mid\theta_{t})=\frac{1}{Sn_{\text{exp}}}\sum_{x\in\{x\}_{\text{mix}}}w(x\mid\theta_{t})I(Y(x)\geq y) \tag{20}\] with \(x\sim\frac{1}{S}\sum_{s=1}^{S}p(x\mid\theta_{s})\).
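The estimator (20) is just a weighted tail fraction over the pooled ensemble; a one-function sketch (my own helper; `samples` are \((\Delta\chi^{2}(\theta_{t}\mid x),\,w(x\mid\theta_{t}))\) pairs over \(\{x\}_{\text{mix}}\)):

```python
def cdf_tail_estimator(samples, y):
    # \hat{P}(y | theta_t) per Eq. (20): weighted fraction of the S * n_exp
    # pooled pseudo-experiments with Y(x) >= y
    return sum(w for d, w in samples if d >= y) / len(samples)
```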
This is an unbiased estimator for the target CDF \(P(y\mid\theta_{t})\) \[\mathbb{E}\big{[}\hat{P}(y\mid\theta_{t})\big{]} =\mathbb{E}\big{[}w(x\mid\theta_{t})I(Y(x)\geq y)\big{]} \tag{21}\] \[=\mathbb{E}\big{[}I(Y(x)\geq y)\mid\theta_{t}\big{]}\] (22) \[=P(y\mid\theta_{t}). \tag{23}\] where \(\mathbb{E}[\,\cdot\,]\) means to take the expectation with \(x\sim\frac{1}{S}\sum_{s=1}^{S}p(x\mid\theta_{s})\), and \(\mathbb{E}[\,\cdot\,\mid\theta_{t}]\) to take the expectation with \(x\sim p(x\mid\theta_{t})\). Now defining \[y_{\max}:=\max\{\Delta\chi^{2}_{\max},y\}, \tag{24}\] the variance from a single pseudo-experiment is \[\text{Var}\big{[}w(x\mid\theta_{t})I(Y(x)\geq y)\big{]} \tag{25}\] \[=\mathbb{E}\big{[}w(x\mid\theta_{t})^{2}I(Y(x)\geq y)^{2}\big{]}- \mathbb{E}\big{[}w(x\mid\theta_{t})I(Y(x)\geq y)\big{]}^{2}\] (26) \[=\mathbb{E}\big{[}w(x\mid\theta_{t})I(Y(x)\geq y)\mid\theta_{t} \big{]}-P(y\mid\theta_{t})^{2}\] (27) \[\leq\mathbb{E}\bigg{[}\frac{S\times I(y\leq Y(x)\leq y_{\max})}{C +\exp\big{[}\tfrac{1}{2}\big{(}Y(x)-\epsilon\big{)}\big{]}}\biggm{|}\theta_{ t}\bigg{]}+\frac{S}{C+1}\mathbb{E}\big{[}I(Y(x)\geq y_{\max})\bigm{|}\theta_{ t}\big{]}-P(y\mid\theta_{t})^{2}\] (28) \[\leq\frac{S}{C+\exp\big{[}\tfrac{1}{2}(y-\epsilon)\big{]}} \mathbb{E}\left[I(y\leq Y(x)\leq y_{\max})\mid\theta_{t}\right]+\frac{S}{C+1} P(y_{\max}\mid\theta_{t})-P(y\mid\theta_{t})^{2}\] (29) \[=S\times\left[\frac{P(y\mid\theta_{t})-P(y_{\max}\mid\theta_{t})} {C+\exp\big{[}\tfrac{1}{2}(y-\epsilon)\big{]}}+\frac{P(y_{\max}\mid\theta_{t} )}{C+1}\right]-P(y\mid\theta_{t})^{2} \tag{30}\] where in going from the second to the third line we used \(I(\cdot)^{2}=I(\cdot)\), and in going to the fourth line we used the upper bound from Eq. (16) and Eq. (19), and in going to the fifth line we used \(Y(x)\geq y\) from the argument of the indicator function. 
The variance of the CDF estimator is therefore \[\text{Var}\big{[}\hat{P}(y\mid\theta_{t})\big{]} =\frac{1}{Sn_{\text{exp}}}\text{Var}\big{[}w(x\mid\theta_{t})I( \Delta\chi^{2}(\theta_{t}\mid x)\geq y)\big{]} \tag{31}\] \[\leq\frac{1}{n_{\text{exp}}}\left(\frac{P(y\mid\theta_{t})-P(y_{ \max}\mid\theta_{t})}{C+\exp\big{[}\tfrac{1}{2}(y-\epsilon)\big{]}}+\frac{P(y_ {\max}\mid\theta_{t})}{C+1}-\frac{P(y\mid\theta_{t})^{2}}{S}\right) \tag{32}\] where we note that the \(S\) factors in the first two terms were cancelled thanks to being able to reuse the pseudo-experiments generated at all \(S\) values for the CDF estimation of each \(\theta_{t}\) value. For reference, the variance on the CDF estimator in the conventional FC method (denoted in the following equations by "conv") is given by the binomial error \[\text{Var}[\hat{P}_{\text{conv}}(y)\mid\theta_{t}]=\frac{1}{n_{\text{exp}}} \left(P(y\mid\theta_{t})-P(y\mid\theta_{t})^{2}\right), \tag{33}\] so the variance on the estimated critical values \(\hat{y}(P\mid\theta_{t})\) in the mixture-FC method is smaller by the factor \[\gamma :=\frac{\text{Var}[\hat{y}(P\mid\theta_{t})]}{\text{Var}[\hat{y}_{ \text{conv}}(P\mid\theta_{t})\mid\theta_{t}]} \tag{34}\] \[=\frac{\text{Var}[\hat{P}(y\mid\theta_{t})]}{\text{Var}\big{[} \hat{P}_{\text{conv}}(y\mid\theta_{t})\mid\theta_{t}\big{]}}\] (35) \[\leq\frac{A(y)+B(y)P(y_{\text{max}}\mid\theta_{t})/P-\frac{1}{S }P}{1-P}\] (36) \[A(y) :=\frac{1}{C+\exp\big{[}\frac{1}{2}(y-\epsilon)\big{]}}\] (37) \[B(y) :=\frac{1}{C+1}-A(y). \tag{38}\] where \(y\) is the true \(P\)-quantile satisfying \(P(y\mid\theta_{t})=P\). The typical functional shape of the upper bound is shown in Fig. 1a. Let us first consider the case of \(P\ll P(y_{\text{max}}\mid\theta_{t})\). For the small \(P\leq 1/2\) values one is typically interested in, the mixture model method obtains more precise critical values than the conventional method (i.e. 
\(\gamma\leq 1\)) for all \(y\geq\epsilon\) if \(\theta_{t}\in\{\theta\}_{S}\) (\(C=1\)), or all \(y\geq\epsilon+2\log 2\) if \(\theta_{t}\notin\{\theta\}_{S}\) (\(C=0\)). As \(y\) increases, the relative variance first decreases linearly, and for \(y\gtrsim 2+\epsilon\) it starts to decrease exponentially as \(\gamma\lesssim\exp(-y/2)\). As \(y\) further increases toward \(\Delta\chi^{2}_{\text{max}}\), and \(P\ll P(y_{\text{max}}\mid\theta_{t})\) fails to hold anymore, the \(B(y)P(y_{\text{max}}\mid\theta_{t})/P\) term becomes dominant, which saturates to \(P(y_{\text{max}}\mid\theta_{t})/P=1\) for \(y\geq\Delta\chi^{2}_{\text{max}}\). Hence the improvement flattens out to \(\gamma\leq\frac{1}{C+1}\) for \(y\geq\Delta\chi^{2}_{\text{max}}\), which is still at least as good as the conventional FC method. By choosing suitable parameter points \(\{\theta\}_{S}\) and thus a suitable \(\Delta\chi^{2}_{\text{max}}\), critical values of the desired precision can be calculated. As the exponential reduction in variance cancels the typically exponential dependence of the CDF on the test-statistic (\(\exp(-y/2)\) in the case of a chi-squared distribution), the relative error on the estimated CDF becomes approximately flat over a wide range of test-statistic values (Fig. 1b), which is much more efficient than for the conventional FC where low-CL become over-precise with more pseudo-experiments, while high-CL still suffer from large errors. ### Interpolation While for the conventional FC method one can only compute the critical values at the parameter value \(\theta_{s}\) where the pseudo-experiments were generated, in the mixture-FC method it is sufficient to guarantee that the target parameter value \(\theta_{t}\) is sufficiently close and surrounded by the sampling points \(\{\theta\}_{S}\) such that condition Eq. (11) holds. 
Considering that for a typical setup the toys to be generated are the same as those used in the conventional FC method, this means that the mixture-FC method not only reduces the uncertainty on the critical values at the sampling points \(\{\theta\}_{S}\), but also allows interpolating the critical values between these points with similar performance. ### Diagnostics and error estimation As the mixture-FC method exploits the relationship of the \(\Delta\chi^{2}\) statistic to the probability of sampling pseudo-experiments, it is essential that the calculation of \(\Delta\chi^{2}\) matches the process used to generate the pseudo-experiments. It is for example not allowed to sample from a Poisson random number generator while using an approximation like Pearson's \(\chi^{2}\) for \(\Delta\chi^{2}\). A simple diagnostic is to calculate the average weight across all pseudo-experiments in \(\{x\}_{\text{mix}}\) and check that this is equal to 1 up to statistical fluctuations. Since the same pseudo-experiments will be used for all target parameter values \(\theta_{t}\) (which the weights are a function of), the statistical fluctuations of these average weights will be correlated for different \(\theta_{t}\) values. To estimate the error of the computed critical values, we recommend using resampling methods such as the non-parametric bootstrap [4] or jackknife [5] instead of simple methods like binomial errors, in order to capture not only the statistical fluctuations in the number of pseudo-experiments that fall into a range of \(\Delta\chi^{2}\) values, but also the statistical fluctuations in their weights. ## IV Example with a single cyclic parameter We consider a simple example that uses a binned-Poisson model, inspired by the search for CP violation in a long-baseline neutrino oscillation experiment, here in particular the T2K experiment [6].
The model has a single angular parameter called the "CP violation phase" \(\delta_{\rm CP}\in[-\pi,\pi]\) which is constrained by \(B=10\) Poisson-distributed observations \(n_{b}\sim\text{Poisson}(\lambda_{b})\) with the predicted event rate \[\lambda_{b}(\delta_{\rm CP}) :=10\times(1-\phi_{b}^{2})\times\left(1-\frac{1}{4}\sin(\delta_{\rm CP}+\phi_{b})\right) \tag{39}\] \[\phi_{b} :=\frac{b-5.5}{10} \tag{40}\] for each bin with index \(b=1,2,\cdots,10\) (Fig. 2). The main feature of this model is that one is mostly sensitive to \(\sin\delta_{\rm CP}\) through the overall normalization of approximately 100 total observations (\(\sum_{b}n_{b}\)), and weakly sensitive to the \(\cos\delta_{\rm CP}\) component through the "shape" of the observations as a function of \(b\) (meant to represent bins of increasing neutrino energy). Deviations from Wilks' theorem are caused by \(\sin\delta_{\rm CP}\) having physical boundaries at \(\pm 1\) (resulting in reduced critical values around \(\sin\delta_{\rm CP}=\pm 1\)), the sign of \(\cos\delta_{\rm CP}\) acting as an effectively discrete degree of freedom (resulting in increased critical values at some \(\sin\delta_{\rm CP}\neq\pm 1\) values), as well as the Poisson nature of the observations. In an actual experiment, one would have further continuous and discrete physics parameters degenerate with \(\delta_{\rm CP}\) as well as various systematic uncertainties treated as nuisance parameters. For simplicity and clarity, however, we focus on \(\delta_{\rm CP}\) alone, which for continuity with the earlier sections will be referred to as \(\theta=(\delta_{\rm CP})\), and the observations as \(x=(n_{1},n_{2},\cdots,n_{10})\).

Figure 1: (a) Example functional shape of the upper bound on the ratio of estimated critical value variances from Eq. (36). The red line indicates the error contribution from pseudo-experiments with \(y\leq Y(x)<\Delta\chi^{2}_{\rm max}\) (first term with \(A(y)\)), which is responsible for the exponential reduction of the total uncertainty until the contribution from pseudo-experiments with \(y\geq\Delta\chi^{2}_{\rm max}\) (second term with \(B(y)\), shown by the green line) takes over for very high CL critical values. (b) The relative error on the calculated CDF estimator \(\hat{P}(y\mid\theta_{t})\) assuming \(n_{\rm exp}=10,000\) pseudo-experiments at each sampling value \(\theta_{s}\). A reference 10% error threshold is indicated by the dotted line. The example used for both plots is constructed assuming \(\epsilon=1\), \(S=10\), \(C=1\), \(\Delta\chi^{2}_{\rm max}=35\), and the true \(Y(x)\)-CDF is assumed to be chi-squared with 1 degree of freedom. In (a), the exponential growth factor for the green line depends on the assumed CDF, unlike the red line whose decay factor is given by Eq. (36).

We generate \(n_{\rm exp}=10,000\) pseudo-experiments at each of \(S=16\) values of \(\theta\) evenly distributed in the parameter range \([-\pi,\pi]\). We first focus on the target value of \(\theta_{t}=-\pi/2\). Fig. 3a shows the distribution of \(\Delta\chi^{2}(\theta_{t}\mid x)\) obtained for pseudo-experiments \(x\) sampled from different \(\theta_{s}\) values. In the conventional FC method, only those generated at \(\theta_{s}=\theta_{t}\) are used, which correspond to the black histogram, which falls off quickly for large \(\Delta\chi^{2}(\theta_{t}\mid x)\). In the mixture-FC method we further make use of the pseudo-experiments generated at all other \(\theta_{s}\) values, of which \(\theta_{s}=0\) and the other extreme of \(\theta_{s}=\pi/2\) are shown by the red and green histograms respectively.
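The toy model of Eqs. (39)–(40) and the \(\Delta\chi^{2}\) statistic can be sketched in a few lines of Python; the dense grid scan used here to find the best-fit \(\delta_{\rm CP}\) is an assumption of this sketch, not necessarily the minimizer used by the authors:

```python
import math

def rates(delta_cp):
    # Eq. (39)-(40): predicted event rate per bin, b = 1..10
    out = []
    for b in range(1, 11):
        phi = (b - 5.5) / 10.0
        out.append(10.0 * (1.0 - phi ** 2) * (1.0 - 0.25 * math.sin(delta_cp + phi)))
    return out

def neg2loglik(delta_cp, obs):
    # -2 log L(delta_cp | x) for independent Poisson-distributed bins
    return -2.0 * sum(n * math.log(l) - l - math.lgamma(n + 1)
                      for n, l in zip(obs, rates(delta_cp)))

def delta_chi2(delta_cp, obs, ngrid=721):
    # Delta chi^2(theta | x): profile the single parameter by a dense grid scan
    grid = [-math.pi + 2.0 * math.pi * i / (ngrid - 1) for i in range(ngrid)]
    return neg2loglik(delta_cp, obs) - min(neg2loglik(g, obs) for g in grid)

# A rounded "Asimov-like" observation at delta_CP = -pi/2 sits near the best fit
obs = [round(l) for l in rates(-math.pi / 2)]
dc2 = delta_chi2(-math.pi / 2, obs)
print(0.0 <= dc2 < 1.0)
```

Sampling \(n_{b}\sim\text{Poisson}(\lambda_{b}(\theta_{s}))\) per bin at each grid value \(\theta_{s}\) then yields the pseudo-experiment ensembles described in the text.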
Clearly, the pseudo-experiments sampled from the shifted \(\theta_{s}\) values have a significantly higher fraction of large \(\Delta\chi^{2}(\theta_{t}\mid x)\) values. At the same time, one can see one of the problems arising from using only the pseudo-experiments generated at \(\theta_{s}=\pi/2\), in that one would need to apply very large weights for the small \(\Delta\chi^{2}(\theta_{t}\mid x)\) region where \(\theta_{s}=\pi/2\) has a very small sampling probability. The mixture of pseudo-experiments generated at all 16 \(\theta\) values, however, shown by the blue histogram, is able to provide more pseudo-experiments for all \(\Delta\chi^{2}(\theta_{t}\mid x)\) values, with the difference in slope compared to the black target histogram showing the exponential increase in pseudo-experiments for larger \(\Delta\chi^{2}(\theta_{t}\mid x)\) values. This is even clearer to see in Fig. 3b where the mixture distribution was reweighted using the assigned weights. Good agreement with the target distribution as simulated by the conventional FC method is seen, and the total number of unweighted pseudo-experiments in the mixture-FC method exceeds the theoretical lower bound. We now check some of the diagnostics for the mixture-FC method. The distribution of importance sampling weights \(w(x\mid\theta_{t})\) is shown in Fig. 4a and is found to be mostly a function of \(\Delta\chi^{2}(\theta_{t}\mid x)\) with small additional variance. The weights are found to be well contained by the theoretical bounds from Eq. (16), which were drawn assuming \(\epsilon=0.3\) by looking at the \(\Delta\chi^{2}(\hat{\theta}_{S}(x)\mid x)\) distributions in Fig. 4b. The average of the weights is found to be consistent with 1 (Fig. 5). Next, we look at the critical values. Fig. 6 shows the critical values as a function of the (true/target) parameter value \(\theta\) using both the standard FC method (black error bars) and the mixture-FC method.
Despite using the same set of pseudo-experiments, the critical values obtained with the mixture-FC method have significantly smaller uncertainty especially at higher CL, and also provide access to details of the functional shape between the 16 sampling values of \(\theta\). For the 1\(\sigma\) critical values (Fig. 6b) we see that despite the relatively fine spacing of sampling values, the interpolation error as indicated by the non-overlap of red and gray error bands next to the \(\theta=\pm\pi/2\) values is larger than the size of the binomial error band in the conventional method. As these binomial error bands do not capture the interpolation error, their smallness can be misleading, which renders the interpolation feature of the mixture-FC method very useful. For the 2\(\sigma\) (Fig. 6c) and 3\(\sigma\) critical values (Fig. 6d) we see good consistency between the two methods while also noting the significantly smaller errors in the mixture-FC calculation. For \(3\sigma\) CL (Fig. 6d) we see the errors in the conventional method are already so large that some of the features of the critical values are not recognizable, such as the bumps at \(\theta=0,\pi\) and the asymmetry of critical values for a flip of the \(\sin\delta_{\rm CP}\) sign, caused by Poisson statistics. For \(4\sigma\) and higher CL the conventional FC method is unable to determine the critical values except for a lower limit. The mixture-FC method on the other hand still produces critical values with comparable relative error sizes to the lower CL critical values. The estimated relative errors are plotted in Fig. 7 and are consistent with the typical shape from theoretical arguments (Fig. 1b). To draw the upper bound from \(\gamma\) in Eq. (36), we conservatively assume \(\Delta\chi^{2}_{\rm max}=32\) based on Fig. 4b, i.e. we will only assume \(\Delta\chi^{2}(\hat{\theta}_{S}(x)\mid x)\leq\epsilon\) up to \(\Delta\chi^{2}(\theta_{t}\mid x)\leq 32\).
In this example, the actual mixture-FC error estimated with the bootstrap is smaller than the theoretical upper limit from \(\gamma\) by about a factor of 2 for \(\Delta\chi^{2}_{t}<16\). This can be interpreted as more than one sampling value \(\theta_{s}\) contributing to the sampling of each pseudo-experiment, rather than the assumption in the theoretical upper limit that only \(\hat{\theta}_{S}(x)\) would contribute. For \(\Delta\chi^{2}_{t}>16=\Delta\chi^{2}_{\rm max}/2\) on the other hand the theoretical upper limit starts to increase significantly, whereas the actual error estimated with the bootstrap only grows slowly. This can be interpreted as our choice of \(\Delta\chi^{2}_{\rm max}=32\) being overly conservative: with the present example, the chosen sampling grid \(\{\theta\}_{S}\) appears to be effective up to significantly higher \(\Delta\chi^{2}\) values. This is partly due to the convenient situation of having a parameter \(\theta=(\delta_{\rm CP})\) with a bounded parameter space \(\delta_{\rm CP}\in[-\pi,\pi]\).

Figure 3: \(\Delta\chi^{2}_{t}:=\Delta\chi^{2}(\theta_{t}\mid x)\) distributions with target \(\theta_{t}=-\pi/2\) for (a) various sampling parameter values \(\theta_{s}\), and (b) comparison of estimated distributions at \(\theta_{t}\) obtained using standard FC and mixture-FC methods. In both plots, error bars indicate \(1\sigma\) binomial confidence intervals. Vertical dashed lines indicate \(1,2,3,4,5\sigma\) confidence level critical values obtained by the mixture-FC method. (a) Error bars are omitted for bins with zero entries for clarity. (b) The red “weighted mixture” histogram is also drawn with boxes representing the error from the number of pseudo-experiments in each bin and their weight variance, but these errors are smaller than the line width and not visible. The “theoretical lower limit” on the total number of pseudo-experiments in the mixture distribution is obtained by multiplying the lower bound in Eq.
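The bootstrap error estimate on a weighted critical value, as recommended in the diagnostics section, can be sketched generically as follows (an illustrative implementation; function names and the resampling size are our own choices):

```python
import random

def weighted_cv(pairs, cl):
    # Smallest y at which the weighted tail fraction drops to 1 - cl
    # (weights are assumed to average to ~1 by construction).
    pairs = sorted(pairs, reverse=True)
    tail, target = 0.0, (1.0 - cl) * len(pairs)
    for y, w in pairs:
        tail += w
        if tail > target:
            return y
    return pairs[-1][0]

def bootstrap_cv_error(dchi2, weights, cl, n_boot=200, seed=1):
    """Non-parametric bootstrap error on the weighted critical value.
    Resampling whole (dchi2, weight) pairs captures fluctuations in both
    the pseudo-experiment counts and their weights."""
    rng = random.Random(seed)
    pairs = list(zip(dchi2, weights))
    cvs = []
    for _ in range(n_boot):
        resample = [pairs[rng.randrange(len(pairs))] for _ in pairs]
        cvs.append(weighted_cv(resample, cl))
    mean = sum(cvs) / len(cvs)
    return (sum((c - mean) ** 2 for c in cvs) / (len(cvs) - 1)) ** 0.5

# Unweighted sanity check: error on the 90% CL value of 1000 uniform draws
err = bootstrap_cv_error(list(range(1000)), [1.0] * 1000, 0.90)
print(round(err, 1))
```

For unit weights this reproduces the ordinary quantile standard error; for the mixture weights it additionally propagates their variance, which the binomial error of the conventional method cannot do.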
(14) to the red “weighted mixture” histogram assuming \(C=1\) and \(\epsilon=0.3\).

Figure 5: The sample mean of mixture-FC pseudo-experiment weights \(w(x\mid\theta_{t})\) as a function of the target \(\theta_{t}\) value. The error bands indicate the \(1\sigma\) standard error on the mean, which are correlated between different target \(\theta_{t}\) values.

Figure 6: \(1,2,3,4,5\sigma\) confidence level critical values (given as \(\sqrt{\Delta\chi_{c}^{2}}\)) obtained from the same set of pseudo-experiments with the standard FC (black error bars) and mixture-FC method (red error bands). In both cases the error bars/bands indicate the \(1\sigma\) error on the critical values obtained with binomial/bootstrap errors for the standard/mixture FC method respectively. Dashed lines indicate critical values by Wilks’ theorem, which are not valid here, but still drawn for reference. (a) The black error bars for 4 and \(5\sigma\) have been slightly offset to prevent overlap. (b,c,d) The gray error bands indicate the linear interpolation of the error bar end-points in the standard FC method.

## V Discussion

### Relation to techniques in statistical mechanics

The presented method is similar in spirit to the "Multiple Histogram Reweighting" ("multi-histogram") method [7] in statistical mechanics, where statistical ensembles are simulated for various parameter values and combined by reweighting to the desired parameter value. In the multi-histogram method, the ensembles are combined with an additional per-ensemble weight, which is adjusted to minimize the overall error on the variable to be estimated. A similar per-ensemble weighting could be applied in the presented mixture-FC method as well, where these additional weights would additionally be allowed to depend on the target \(\Delta\chi_{t}^{2}:=\Delta\chi^{2}(\theta_{t}\mid x)\) value, in order to reduce the variance on the critical value estimator as much as possible.
One difference from the multi-histogram method, however, is that because we do not resort to Markov-Chain Monte-Carlo techniques to sample the pseudo-experiments, the sampling distribution of pseudo-experiments \(x\) at each parameter value \(\theta\) is known exactly, including the normalization constant. Hence the iterative procedure that is required at the end of the multi-histogram method to self-consistently determine these normalization constants (the free energies) is not necessary in the mixture-FC method.

### Relation to the marginal distribution

The sampling distribution constructed as a mixture over several parameter values \(\{\theta\}_{S}\) can be considered a marginal probability distribution with prior \(\pi(\theta)=\frac{1}{S}\sum_{s=1}^{S}\delta(\theta-\theta_{s})\), where \(\delta(\cdot)\) is the Dirac delta function. Additional per-ensemble weights as discussed in the previous paragraph would correspond to an alternative prior \(\pi(\theta\mid\Delta\chi_{t}^{2})=\sum_{s=1}^{S}r_{s}(\Delta\chi_{t}^{2})\,\delta(\theta-\theta_{s})\) where \(r_{s}(\Delta\chi_{t}^{2})\) can be optimized to reduce errors subject to the condition \(\sum_{s}r_{s}(\Delta\chi_{t}^{2})=1\) for all \(\Delta\chi_{t}^{2}\). One can even generalize the discussion to continuous priors \(\pi(\theta\mid\Delta\chi_{t}^{2})\), where in order to preserve the arguments on efficiency reduction, we would need to extend the single-point condition from Eq. (11) to a condition on a finite-size region on \(\pi(\theta\mid\Delta\chi_{t}^{2})\). Unlike in the conventional FC method, where one needs a large number of pseudo-experiments at each target parameter value, it can be preferable in the mixture-FC method to generate fewer pseudo-experiments at each sampling value, but instead increase the number of considered sampling points \(S\).

Figure 7: Estimated relative errors on the CDF estimator \(\hat{P}(y\mid\theta_{t})\) with target \(\theta_{t}=-\pi/2\). For the standard FC method the standard error from the binomial distribution is shown (black solid line), where the more precise CDF estimate from the mixture-FC method was used in computing these errors. For the mixture-FC method the bootstrap error estimate (red solid line) is well below the theoretical upper limit of Eq. (36) calculated assuming \(\epsilon=0.3\) and \(\Delta\chi_{\max}^{2}=32\) (blue dashed line).

If \(Sn_{\rm exp}\) is held fixed, this results in a reduction of the variance of critical values by reducing the variance in the weights, which are bounded from above by \(\exp(\epsilon/2)\). Given this relation to the marginal distribution, let us now consider the computation of \(\Delta\chi^{2}(\theta_{t}\mid x)=-2\log L(x\mid\theta_{t})/L(x\mid\hat{\theta}(x))\) as being approximated by \(-2\log L(x\mid\theta_{t})/L_{m}(x)\), where in the denominator, the profiling operation is replaced by a marginalization over \(\theta\) with some prior over \(\theta\). We have therefore a simple likelihood ratio test between \(p(x\mid\theta_{t})\) and \(p_{m}(x):=\int\mathrm{d}\theta\,\pi(\theta)p(x\mid\theta)\), and it now becomes evident that in order to efficiently generate pseudo-experiments with small \(p\)-values under the null hypothesis \(p(x\mid\theta_{t})\), one should simply generate the pseudo-experiments from the alternative hypothesis \(p_{m}(x)\), which is what is being done in the mixture-FC method. In practice, it will be easier to use the discrete "prior" over \(\{\theta\}_{S}\) as was discussed in the text, because unless the likelihood is Gaussian, the numerical integration required for marginalization usually increases the computational cost and complexity. These similarities between profiling and marginalization can nevertheless be exploited to motivate an ideal spacing of \(\{\theta\}_{S}\) values.
Among the well-known objective priors, Jeffreys' prior [8] is uniform in the parameterization in which the likelihood is Gaussian, if such a parameterization exists. Since profiling and marginalization with a uniform prior over a Gaussian likelihood produce equivalent results up to a constant offset, Jeffreys' prior can be considered a good candidate for choosing the \(\{\theta\}_{S}\) values at which to generate pseudo-experiments. For example, in the CP-violation analysis that was discussed in the earlier section, it would be more suitable to choose a uniform spacing of parameter values not in \(\delta_{\rm CP}\) but in \(\sin\delta_{\rm CP}\), with equal probabilities for the sign of \(\cos\delta_{\rm CP}\), since the dominant constraint is due to the total number of events \(N\sim\mathrm{Poisson}(\lambda=A+B\sin\delta_{\rm CP})\) for some constants \(A\) and \(B\), resulting in an approximately Gaussian likelihood over \(\sin\delta_{\rm CP}\).

### Nuisance parameters

Because the significant error-reduction in the mixture-FC method exploits the specific relation of the \(\Delta\chi^{2}(\theta_{t}\mid x)\) statistic to the distribution that generates the pseudo-experiments, one cannot assume all features to directly translate to an analysis with nuisance parameters, or "systematic" parameters as they are often called in physics. Especially for the commonly used methods of profile-FC [9] or posterior Highland-Cousins methods [10], where the space of nuisance parameters from which to generate the pseudo-experiments is significantly reduced based on constraints by the observed experimental data, it is possible to have situations where the straightforward application of the mixture-FC method does not yield the exponential reduction of errors on the estimated critical values given by Eq. (36). One should therefore not rely on these to estimate the number of required pseudo-experiments.
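Returning to the grid-spacing suggestion from the previous subsection: a \(\delta_{\rm CP}\) grid uniform in \(\sin\delta_{\rm CP}\) with both signs of \(\cos\delta_{\rm CP}\) represented could be generated as in the following sketch (the exact construction is our own illustrative choice, not the authors' prescription):

```python
import math

def cp_sampling_grid(n_per_branch):
    """delta_CP sampling values spaced uniformly in sin(delta_CP), with both
    signs of cos(delta_CP) represented (illustrative construction)."""
    grid = []
    for i in range(n_per_branch):
        s = -1.0 + 2.0 * (i + 0.5) / n_per_branch  # uniform in sin(delta_CP)
        grid.append(math.asin(s))                              # cos(delta_CP) > 0 branch
        grid.append(math.copysign(math.pi, s) - math.asin(s))  # cos(delta_CP) < 0 branch
    return sorted(grid)

grid = cp_sampling_grid(8)
print(len(grid))  # -> 16
```

Compared to a grid uniform in \(\delta_{\rm CP}\) itself, this places sampling values more densely near \(\sin\delta_{\rm CP}=\pm 1\), where the approximately Gaussian likelihood in \(\sin\delta_{\rm CP}\) varies most rapidly with \(\delta_{\rm CP}\).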
In a relatively general setting, when the target distribution is directly a part of the mixture distribution (so \(C=1\)), one can show that even in the worst case, the variance on the critical values only increases very slightly compared to the conventional method, by a factor of \(1/(1-P(y))\) (see Appendix A). This factor is negligible considering that for high CL we have \(P(y)\ll 1\). The weights are bounded from above by a similar limit, which is important for well-defined importance sampling behavior. The naive application of the mixture-FC method to Feldman-Cousins confidence intervals is therefore still worth a try. In fact, certain situations may yield near-exponential reduction of errors as in the case without nuisance parameters, but due to the lack of theoretical guarantees it is suggested to carefully study the distribution of weights and the reliability of bootstrap error estimates in this situation. Because one cannot guarantee an exponential reduction of errors in a setting with nuisance parameters, the ability to interpolate critical values will be more interesting in this setting. Here it is important that the pseudo-experiments generated at neighboring \(\theta_{s}\) values (and suitable values of nuisance parameters) sufficiently overlap in the space of pseudo-experiments. Otherwise, the mismatch between pseudo-experiment generation and the statistical model behind the test statistic may quickly result in a large spread of weight values, which would make both the estimated critical values as well as their error estimates unreliable. This is because with nuisance parameters, there are significantly more dimensions in which the pseudo-experiments can differ, even if they have similar values for the test statistic. In one specific situation however, all properties discussed in earlier sections are directly applicable despite the presence of nuisance parameters.
This is when using the prior Highland-Cousins method in conjunction with a marginal-\(\Delta\chi^{2}\) statistic, where it is essential to use the same prior distribution \(\pi(\eta)\) for the nuisance parameters \(\eta\) in both cases. This is because here the effect of nuisance parameters is entirely absorbed by the probability model used to generate the pseudo-experiments, in the sense of \(p(x\mid\theta)=\int\mathrm{d}\eta\,\pi(\eta)\,p(x\mid\theta,\eta)\), such that as far as the mixture-FC method is concerned, no nuisance parameters exist. More detailed discussions with examples and possible modifications to the sampling distributions for pseudo-experiments will be discussed in a separate publication.

### Relation to similar techniques for statistical inference

Very similar importance sampling techniques have been used for the calculation of \(p\)-values under a null hypothesis with a likelihood ratio statistic. For example, Woodroofe [11] discusses the case with a continuous prior over the parameter of interest. In our notation, \[p_{\text{sample}}(x)=\int\text{d}\theta\,\pi(\theta)p(x\mid\theta) \tag{41}\] with only a lower bound on the weights \[w(x\mid\theta_{t}):=\frac{p(x\mid\theta_{t})}{p_{\text{sample}}(x)}\geq\frac{p(x\mid\theta_{t})}{p(x\mid\hat{\theta}(x))}=\exp\bigl{[}-\tfrac{1}{2}\Delta\chi^{2}(\theta_{t}\mid x)\bigr{]} \tag{42}\] given, rather than an upper bound, which would be essential for showing small errors on the estimated \(p\)-values. An asymptotic formula for the weights using the saddle-point method is also given. Ref. [12] (Sect. 5.6) describes a method developed in the search for the Higgs boson by the ATLAS experiment [13]. They point out the difficulty of performing the integral over the continuous prior in Woodroofe's method and instead use a set of discrete points \(\{\theta\}_{S}\) including \(\theta_{t}\), as we used for the mixture-FC method (with \(C=1\)).
The choice of weight function, however, is different in that a pseudo-experiment is used only if the value \[\omega_{s}(x\mid\theta_{t}):=\frac{p(x\mid\theta_{t})}{p(x\mid\theta_{s})}=\exp\bigl{[}-\tfrac{1}{2}\left(\Delta\chi^{2}(\theta_{t}\mid x)-\Delta\chi^{2}(\theta_{s}\mid x)\right)\bigr{]} \tag{43}\] at the parameter value \(\theta_{s}\) from which the pseudo-experiment was sampled is the smallest among all values in \(\{\theta\}_{S}\) -- i.e. \(\omega_{s}(x\mid\theta_{t})=\min_{s^{\prime}}\omega_{s^{\prime}}(x\mid\theta_{t})\) -- and discarded otherwise. If the pseudo-experiment is used, it is weighted by \(\omega_{s}(x\mid\theta_{t})\). Then by combining the pseudo-experiments sampled from all \(\{\theta\}_{S}\) values with their weights, the desired distribution \(p(x\mid\theta_{t})\) is attained with a higher probability to sample pseudo-experiments of large \(\Delta\chi^{2}(\theta_{t}\mid x)\). Since \(\theta_{t}\in\{\theta\}_{S}\), this procedure ensures that \(w(x\mid\theta_{t})\leq 1\) for well-behaved weights. One downside of this vetoing technique, as explained by the authors, is that the spacing of \(\{\theta\}_{S}\) must not be too dense in order not to reduce the efficiency of the method with a high vetoing probability. The mixture-FC method does not have this problem, because the weights are computed using the actual sampling probability, which is the sum of probabilities over \(\{\theta\}_{S}\), and no vetoing is necessary. While the claimed benefit of the vetoing technique is its independence from the exact normalization of the sampling probability distribution -- due to only using the probability _ratios_ \(\omega_{s}\) -- the same is true for the choice of weights in the mixture-FC method, whose weights from Eq. (2) can be written as \[w(x\mid\theta_{t})=\frac{1}{\frac{1}{S}\sum_{s=1}^{S}\bigl{[}\omega_{s}(x\mid\theta_{t})\bigr{]}^{-1}}.
\tag{44}\] For the problem of finding \(p\)-values under a null hypothesis with a likelihood-ratio statistic, the relevant part of the mixture-FC method can therefore be regarded as a slight improvement to the method by Ref. [12]. Furthermore, we have explicitly shown that under suitable conditions, which for a typical setup requires the absence of nuisance parameters, the variance on the estimated \(p\)-values is reduced exponentially for large values of the test statistic. Finally, we note some of the differences between computing \(p\)-values and the FC confidence interval construction in the context of importance sampling. When computing \(p\)-values, we are typically interested in the distribution of the test statistic under a _single_ null hypothesis. In contrast, in the FC method we need the test statistic distribution for _all_ plausible parameter values, which in practice is achieved by computing them for a finite set \(\{\theta\}_{S}\), and interpolating in between. The FC construction therefore benefits from the ability to interpolate critical values with importance sampling, which is not always of interest in the computation of \(p\)-values. In addition, the pseudo-experiments sampled from different parameter values as required for the construction of the mixture distribution are already available even in the conventional FC method, making the transition to the mixture-FC method straightforward.

## VI Summary

We presented a new method to compute critical values for Feldman-Cousins confidence intervals. The method is a simple extension of the conventional method in that the same sets of pseudo-experiments generated at different parameter values are simply combined with suitable weights. We showed that this results in a significant reduction of the errors on the critical values, with exponential reduction for high confidence level critical values, at almost no additional computational cost.
The method was further shown to enable accurate interpolation of critical values between the parameter values at which the pseudo-experiments were generated. The theoretically calculated performance was confirmed using a simple example for the analysis of neutrino oscillations. While the exponential reduction of errors is currently only guaranteed for analyses without nuisance parameters, the general technique is applicable to any analysis making use of the Feldman-Cousins method.

## Appendix A Analysis of critical value variances for generic mixtures

Let us denote the target distribution of pseudo-experiments at \(\theta_{t}\) by \(p_{t}(x)\). In a setting with nuisance parameters \(\eta\) with probability distribution \(p(x\mid\theta,\eta)\), this could for example be \(p\big{(}x\mid\theta_{t},\hat{\eta}(\theta_{t}\mid x_{\text{obs}})\big{)}\) for the profile-FC method, or \(\int\mathrm{d}\eta\,\pi(\eta\mid x_{\text{obs}},\theta_{t})\,p(x\mid\theta_{t},\eta)\) in the posterior HC method, with \(\hat{\eta}(\theta\mid x_{\text{obs}})=\text{arg min}_{\eta}\,\chi^{2}(\theta,\eta\mid x_{\text{obs}})\) the profile best-fit values and \(\pi(\eta\mid x_{\text{obs}},\theta)\) the posterior distribution for nuisance parameters conditioned by the target \(\theta\) value for a fit to the observed data \(x_{\text{obs}}\). The other pseudo-experiments are sampled from \(p_{a}(x)\), whose distribution we don't explicitly specify here, but could for example be a mixture over different \(\theta\) and \(\eta\) values.
The mixture of \(N_{t}\) pseudo-experiments sampled from \(p_{t}(x)\) and \(N_{a}\) pseudo-experiments sampled from \(p_{a}(x)\), weighted by \(w(x)=(N_{t}+N_{a})p_{t}(x)/\big{[}N_{t}p_{t}(x)+N_{a}p_{a}(x)\big{]}\), can be evaluated analogously to the main text. Using the estimators \[\hat{P}(y) :=\frac{1}{N_{t}+N_{a}}\sum_{i=1}^{N_{t}+N_{a}}w(x_{i})I\big{(}Y(x_{i})\geq y\big{)} \tag{10}\] \[\hat{P}_{\text{conv}}(y) :=\frac{1}{N_{t}}\sum_{i=1}^{N_{t}}I\big{(}Y(x_{i}^{(t)})\geq y\big{)}\] (11) \[\mathbb{E}[\hat{P}(y)] =\mathbb{E}[\hat{P}_{\text{conv}}(y)]=P(y) \tag{12}\] we obtain a variance reduction of \[\gamma =\frac{\text{Var}[\hat{P}(y)]}{\text{Var}[\hat{P}_{\text{conv}}(y)]} \tag{13}\] \[\leq\frac{\frac{1}{N_{t}}P(y)-\frac{1}{N_{t}+N_{a}}P(y)^{2}}{\frac{1}{N_{t}}P(y)-\frac{1}{N_{t}}P(y)^{2}}\] (14) \[\leq\frac{1}{1-P(y)} \tag{15}\] where \[x_{i} \sim\frac{N_{t}p_{t}(x)+N_{a}p_{a}(x)}{N_{t}+N_{a}} \tag{16}\] \[x_{i}^{(t)} \sim p_{t}(x). \tag{17}\]

###### Acknowledgements.

We would like to thank Christophe Bronner and Louis Lyons for useful discussions and connecting us to Kyle Cranmer, whom we would like to thank for introducing Refs. [11; 12]. This research was supported by JSPS KAKENHI Grant Number 19J22440.
2306.14536
Boundary Strichartz estimates and pointwise convergence for orthonormal systems
We consider maximal estimates associated with fermionic systems. First we establish maximal estimates with respect to the spatial variable. These estimates are certain boundary cases of the many-body Strichartz estimates pioneered by Frank, Lewin, Lieb and Seiringer. We also prove new maximal-in-time estimates, thereby significantly extending work of Lee, Nakamura and the first author on Carleson's pointwise convergence problem for fermionic systems.
Neal Bez, Shinya Kinoshita, Shobu Shiraki
2023-06-26T09:19:40Z
http://arxiv.org/abs/2306.14536v1
# Boundary Strichartz estimates and pointwise convergence for orthonormal systems

###### Abstract.

We consider maximal estimates associated with fermionic systems. First we establish maximal estimates with respect to the spatial variable. These estimates are certain boundary cases of the many-body Strichartz estimates pioneered by Frank, Lewin, Lieb and Seiringer. We also prove new maximal-in-time estimates, thereby significantly extending work of Lee, Nakamura and the first author on Carleson's pointwise convergence problem for fermionic systems.

This work was supported by JSPS Kakenhi grant numbers 19H00644, 19H01796, 22H00098 and 23H01080 (Bez), 21J00514, 22KJ0446 (Kinoshita), and 19H01796, 22H00098 (Shiraki). The third author is also supported by Centro de Analise Matematica, Geometria e Sistemas Dinamicos (CAMGSD).

The pointwise convergence problem in particular provided an especially important source of motivation for the present paper and our focus here is on _maximal estimates_, either with respect to the spatial variable or the temporal variable. Maximal-in-space estimates correspond to Strichartz estimates (1.1) with \(r=\infty\) and this delicate case has, to a large extent, been left open. Maximal-in-time estimates correspond to variants of (1.1) with an \(L_{x}^{\frac{q}{2}}L_{t}^{\infty}\) mixed-norm on the left-hand side (possibly in localised form) and yield pointwise convergence of \(\gamma\) to \(\gamma_{0}\) at the level of the density functions. As mentioned above, such estimates were first considered in [5] and here we significantly develop this line of investigation by understanding the effect of adding smoothness to the initial data \(\gamma_{0}\) and also, in the spirit of work of Sjogren-Sjolin [56] and Barcelo-Bennett-Carbery-Rogers [1] (in the classical single-particle case), to provide information on the size of the so-called divergence sets where pointwise convergence to the initial data fails to hold.
Our new results will appear in Section 2; in advance of that, it will be helpful to include a more detailed discussion of the known results regarding the Strichartz estimates (1.1) and the maximal-in-time estimates.

### Prior work on (1.1)

Firstly, we observe that the case \(\beta=1\) is equivalent to the classical Strichartz estimate \[\|Uf\|_{L_{t}^{q}L_{x}^{r}}\lesssim\|f\|_{\dot{H}^{s}}. \tag{1.3}\] Indeed, given (1.1) and taking all but one of the sequence \((\lambda_{j})_{j}\) to be zero (and rescaling), we obtain (1.3). Conversely, from the triangle inequality and an application of (1.3) for each \(f_{j}\), one quickly obtains (1.1) with \(\beta=1\). Notice that the latter argument makes no use of the orthogonality of the \(f_{j}\) and thus one would like to understand how much gain (if any) can be sought from the orthogonality by raising \(\beta\geq 1\) as far as possible. It turns out that the optimal value of \(\beta\) depends on \(q\) and \(r\) in an interesting way and we now describe the known results in this direction. In order to do so, we first consider the case \(r<\infty\) and divide the discussion into the so-called _sharp admissible_ and _non-sharp admissible_ cases, and following that focus on the case \(r=\infty\) (with a brief discussion of the other boundary cases \(q=2,\infty\)).

#### 1.2.1. The sharp admissible case with \(r<\infty\)

Following standard terminology, when \(\frac{1}{q}=\frac{d}{2}(\frac{1}{2}-\frac{1}{r})\) holds, we shall refer to this case as the _sharp admissible_ case; here the Sobolev exponent \(s\) coincides with zero and the initial data belong to \(L^{2}(\mathbb{R}^{d})\). First, let us consider the case \(d\geq 3\). The Keel-Tao endpoint corresponds to \((q,r)=(2,\frac{2d}{d-2})\) and it was shown in [38] that (1.3) holds. This answered a long-standing question about the validity of the endpoint case and interpolation with the easy estimate at \((q,r)=(\infty,2)\) (i.e.
conservation of energy) yields all possible Strichartz estimates (1.3) in the sharp admissible case. We refer the reader to [38] for discussion and references for earlier work on the non-endpoint cases. Somewhat curiously, at the Keel-Tao endpoint \((q,r)=(2,\frac{2d}{d-2})\), the optimal value of \(\beta\) for (1.1) is \(1\) and there is in fact no room to extract any gain from the orthogonality of the \(f_{j}\). This phenomenon was observed by Frank and Sabin [28] in which, more generally, they established that (1.1) fails if \(\beta>\frac{q}{2}\). Earlier, Frank _et al._[26] showed that (1.1) also fails if \(\beta>\frac{2r}{r+2}\). These thresholds coincide when \(r=\frac{2(d+1)}{d-1}\) and this turns out to be the endpoint case in the sense that the estimate (1.1) is known to be true with \[\beta\leq\frac{2r}{r+2}\text{ when }r\in\bigg{[}2,\frac{2(d+1)}{d-1}\bigg{)} \tag{1.4}\] and \[\beta<\frac{q}{2}\text{ when }r\in\bigg{[}\frac{2(d+1)}{d-1},\frac{2d}{d-2}\bigg{)}. \tag{1.5}\] The estimates in (1.4) for \(r\in[2,\frac{2(d+2)}{d}]\) were first established in [26] and later extended to \(r\in[2,\frac{2(d+1)}{d-1})\) in [27]. As observed in [28], the estimates in (1.5) follow from those in (1.4) by an interpolation argument between (1.1) with \(\beta=1\) at the Keel-Tao endpoint \(r=\frac{2d}{d-2}\) and estimates from (1.4) with \(r\) less than, but arbitrarily close to1, the exponent \(\frac{2(d+1)}{d-1}\). Footnote 1: This argument leaves open the case \(\beta=\frac{q}{2}\) in (1.5) and, as far as we are aware, this remains a very interesting open problem. This would follow if we could extend (1.4) to \(r=\frac{2(d+1)}{d-1}\) but unfortunately, as shown in [26], such an estimate is false. When \(d=2\), the known results are of a somewhat similar nature; for instance, (1.4) holds without modification (due to [26, 27]).
However, at the Keel-Tao endpoint we have \(r=\infty\) and it is known that the classical Strichartz estimate \[\|Uf\|_{L^{2}_{t}L^{\infty}_{x}(\mathbb{R}\times\mathbb{R}^{2})}\lesssim\|f\|_{L^{2}(\mathbb{R}^{2})} \tag{1.6}\] is false; see, for example, Montgomery-Smith [49]. Nevertheless, one may still obtain (1.5) as it stands for \(d=2\) (in particular, not including \(r=\infty\)) by using (1.3) with finite and sufficiently large values of \(r\). Finally, when \(d=1\), the estimates in (1.4) again hold as stated (thanks to [26, 27]) and since the situation in (1.5) does not arise, this completes the picture for the sharp admissible cases where \(r<\infty\).

#### 1.2.2. The non-sharp admissible case with \(r<\infty\)

We refer to the case \(\frac{1}{q}<\frac{d}{2}(\frac{1}{2}-\frac{1}{r})\) as the _non-sharp admissible_ case. As long as \(q,r<\infty\), the results in [26, 27, 28] were extended to the non-sharp admissible case in [3] (see also [6]). To state the result, we introduce the notation \(\beta(q,r)\) for the exponent satisfying the relation \[\frac{d}{2\beta(q,r)}=\frac{1}{q}+\frac{d}{r},\] and observe that \(\beta(q,r)=\frac{2r}{r+2}\) in the sharp admissible case. **Theorem 1.1** ([3]).: _Let \(d\geq 1\), \(q,r\in[2,\infty)\), and \(\frac{1}{q}<\frac{d}{2}(\frac{1}{2}-\frac{1}{r})\). Then the estimate (1.1) holds in each of the following cases._ (i)_\(\beta\leq\beta(q,r)\) when \(\frac{d}{r}>\frac{d-1}{q}\)._ (ii)_\(\beta<\frac{q}{2}\) when \(\frac{d}{r}\leq\frac{d-1}{q}\)._ It was also shown in [3] that \[\beta\leq\min\left\{\beta(q,r),\frac{q}{2}\right\}\] is a necessary condition for (1.1) to hold, and thus Theorem 1.1 is close to optimal.

### 1.3. The boundary cases

We shall refer to the cases in the admissible region \(\frac{1}{q}\leq\frac{d}{2}(\frac{1}{2}-\frac{1}{r})\), \(q,r\in[2,\infty]\), when \(q=2\), \(q=\infty\) and \(r=\infty\) as boundary cases.
As we have already noted, when \(q=2\) the only available estimates of the form (1.1) are when \(\beta=1\). The case \(q=\infty\) is interesting and discussing this case will also naturally lead us to the wider context of extending classical estimates to the setting of orthonormal systems. To obtain (1.1) with \(q=\infty\), one may invoke Lieb's version of the Sobolev inequality Footnote 2: In fact, Lieb [44] proved a somewhat stronger estimate with \(\|\lambda\|_{\infty}^{1-\frac{2}{r}}\|\lambda\|_{1}^{\frac{2}{r}}\) on the right-hand side of (1.7). \[\left\|\sum_{j}\lambda_{j}|f_{j}|^{2}\right\|_{L^{\frac{r}{2}}(\mathbb{R}^{d})}\lesssim\|\lambda\|_{\beta} \tag{1.7}\] for orthonormal systems \((f_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R}^{d})\), where \(r\in(2,\infty)\), \(s=\frac{d}{2}-\frac{d}{r}\) and \(\beta<\frac{r}{2}\). Using (1.7), and the obvious fact that orthonormality in \(\dot{H}^{s}(\mathbb{R}^{d})\) is preserved under the Schrodinger flow \(e^{it\Delta}\) for each fixed \(t\in\mathbb{R}\), we obtain (1.1) when \(q=\infty\), \(r\in(2,\infty)\) and \(\beta<\frac{r}{2}\). Moreover, as shown in [3, Proposition 7.1], this result cannot be extended to \(\beta=\frac{r}{2}\) (even with a weak-type norm \(L_{x}^{\frac{r}{2},\infty}\) on the left-hand side). Since \(\beta(\infty,r)=\frac{r}{2}\), this gives a complete picture for the case \(q=\infty\). At this point, we digress very slightly, and note that, a few years prior to the appearance of (1.7), Lieb and Thirring [46] established an inequality of the same spirit associated with the Gagliardo-Nirenberg-Sobolev inequality. This inequality, referred to as the _Lieb-Thirring inequality_, was key to their proof of stability of matter; in addition to [46], we also refer the reader to [45] for further details. For wider discussion on the pursuit of obtaining versions of classical inequalities for orthonormal systems, and further examples, we direct the interested reader to [23, 24, 29, 50, 51].
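To spell out the deduction of (1.1) with \(q=\infty\) from (1.7) (a routine argument): since \(e^{it\Delta}\) is unitary on \(L^{2}\) and commutes with \(|D|^{s}\), for each fixed \(t\) the system \((e^{it\Delta}f_{j})_{j}\) remains orthonormal in \(\dot{H}^{s}(\mathbb{R}^{d})\), so (1.7) applies at each time with a bound uniform in \(t\):

```latex
% Orthonormality is preserved: e^{it\Delta} is unitary on L^2 and commutes with |D|^s
\langle e^{it\Delta}f_{j},e^{it\Delta}f_{k}\rangle_{\dot{H}^{s}}
  =\langle e^{it\Delta}|D|^{s}f_{j},e^{it\Delta}|D|^{s}f_{k}\rangle_{L^{2}}
  =\langle|D|^{s}f_{j},|D|^{s}f_{k}\rangle_{L^{2}}=\delta_{jk}.
% Applying (1.7) at each fixed time:
\sup_{t\in\mathbb{R}}\bigg\|\sum_{j}\lambda_{j}|e^{it\Delta}f_{j}|^{2}\bigg\|_{L^{\frac{r}{2}}(\mathbb{R}^{d})}
  \lesssim\|\lambda\|_{\beta}.
```

Taking the supremum in \(t\) is exactly the \(L^{\infty}_{t}\) norm, which gives (1.1) with \(q=\infty\).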
Returning to (1.1) in the boundary case \(r=\infty\), let us first point out that a resolution of the boundary cases of the classical Strichartz estimates (1.3) has only very recently been completed thanks to the work of Guo-Li-Nakanishi-Yan [34]. In particular, it is shown in [34] that the estimate (1.6) is also false in higher dimensions; that is \[\|Uf\|_{L_{t}^{2}L_{x}^{\infty}(\mathbb{R}\times\mathbb{R}^{d})}\lesssim\|f\|_{H^{\frac{d-2}{2}}(\mathbb{R}^{d})} \tag{1.8}\] fails for all \(d\geq 3\). We also remark that, for any \(d\geq 1\), \[\|Uf\|_{L_{t}^{q}L_{x}^{\infty}(\mathbb{R}\times\mathbb{R}^{d})}\lesssim\|f\|_{H^{\frac{d}{2}-\frac{2}{q}}(\mathbb{R}^{d})} \tag{1.9}\] is also false when \(q=\infty\) (this follows from the well-known failure of the corresponding Sobolev embedding estimate), but holds for \(q\in(2,\infty)\) when \(d\geq 2\), and holds for \(q\in[4,\infty)\) when \(d=1\) (see, for example, [34, Section 2]). For orthonormal systems, very few results regarding (1.1) are currently available in the case \(r=\infty\). Here \(d=1\) is somewhat special in the sense that there is a sharp admissible case (i.e. \((q,r)=(4,\infty)\)) and whether one can go beyond \(\beta=1\) was raised by Frank-Sabin [28]. In this specific case, it seems natural to conjecture that \[\bigg{\|}\sum_{j}\lambda_{j}|Uf_{j}|^{2}\bigg{\|}_{L_{t}^{2}L_{x}^{\infty}(\mathbb{R}\times\mathbb{R})}\lesssim\|\lambda\|_{\ell^{\beta}} \tag{1.10}\] holds for \(\beta<2\). However, as far as we are aware, whether this holds or not is a challenging open problem. The weak-type version \[\bigg{\|}\sum_{j}\lambda_{j}|Uf_{j}|^{2}\bigg{\|}_{L_{t}^{2,\infty}L_{x}^{\infty}(\mathbb{R}\times\mathbb{R})}\lesssim\|\lambda\|_{\ell^{\beta}} \tag{1.11}\] for \(\beta<2\) was established very recently in [5], but it appears to be non-trivial to upgrade this to a strong-type estimate.
We also remark that one cannot hope for (1.11) with \(\beta\geq 2\); as we have already pointed out, the failure of the strong-type estimate (1.10) (i.e. the missing endpoint in (1.4) when \(d=1\)) was demonstrated in [26]. In fact, by using the fact that Kakeya sets in \(\mathbb{R}^{2}\) with zero Lebesgue measure exist, one can show that even the weak-type estimate (1.11) fails when \(\beta=2\) (see [3]). Footnote 3: In [5], the strong-type estimate (1.10) was observed to hold in the smaller range \(\beta\leq\frac{4}{3}\). As far as we are aware, there are no further results available in the literature regarding the case \(r=\infty\) and we present our new results in this direction below in Section 2.

### 1.4. The pointwise convergence problem

Carleson's pointwise convergence problem for the Schrodinger flow \(e^{it\Delta}f\) is concerned with identifying as large a class as possible of initial data \(f\) for which \[\lim_{t\to 0}e^{it\Delta}f(x)=f(x) \tag{1.12}\] holds for almost all \(x\in\mathbb{R}^{d}\) (with respect to Lebesgue measure). Typically this is formulated in terms of data in the inhomogeneous Sobolev space \(H^{s}(\mathbb{R}^{d})\) and then one wishes to identify the minimal regularity \(s\in\mathbb{R}\) which guarantees (1.12) holds for all \(f\in H^{s}(\mathbb{R}^{d})\). The sharpness of \(s=\frac{1}{4}\) as the regularity threshold when \(d=1\) goes back to [10, 19], and remarkable recent breakthroughs in [8, 20, 21] have identified the threshold regularity to be \(\frac{d}{2(d+1)}\) for all \(d\geq 2\). Footnote 4: More precisely, when \(d\geq 2\), for almost everywhere pointwise convergence to hold for all \(f\in H^{s}(\mathbb{R}^{d})\) it is known that \(s>\frac{d}{2(d+1)}\) is sufficient and \(s\geq\frac{d}{2(d+1)}\) is necessary, for which the reader is also referred to a survey paper [53] by Pierce, but the critical case remains open. When \(d=1\), it is a classical result that \(s\geq\frac{1}{4}\) is necessary and sufficient.
The problem described above has a natural analogue in the context of solutions to the reduced Hartree-Fock equation (1.2). The interaction-free version of (1.2), which we shall refer to as the von Neumann-Schrodinger equation (also known as the quantum Liouville equation), takes the form \[i\partial_{t}\gamma=[-\Delta,\gamma],\quad\gamma(0)=\gamma_{0} \tag{1.13}\] and, we recall, describes the time-evolution of a density operator. One may reconcile this with the Schrodinger flow \(e^{it\Delta}f\) by identifying the initial data \(f\) with the operator \(\gamma_{0}=\Pi_{f}\). Here we are assuming that \(\|f\|_{L^{2}}=1\) and \(\Pi_{f}\) is the orthogonal projection operator onto the span of \(f\) given by \[\Pi_{f}g=\langle g,f\rangle_{L^{2}}f.\] Indeed, in this case, one can easily verify that the solution of the von Neumann-Schrodinger equation is \(\Pi_{e^{it\Delta}f}\). The above formulation in terms of the density operator provides a natural framework for modeling infinitely many (fermionic) particles. In general, the solution to (1.13) is given by \[\gamma(t)=e^{it\Delta}\gamma_{0}e^{-it\Delta}.\] Naturally associated with the flow \(\gamma(t)\) is the density \(\rho_{\gamma(t)}\) which, formally, is given by evaluating the integral kernel of the operator (a function on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\)) on the diagonal.
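As a quick consistency check (these are standard computations), differentiating the solution formula recovers (1.13), and in the rank-one case the density reduces to the single-function flow:

```latex
% Differentiating \gamma(t)=e^{it\Delta}\gamma_0 e^{-it\Delta}:
\partial_{t}\gamma(t)
  = i\Delta\,e^{it\Delta}\gamma_{0}e^{-it\Delta}-e^{it\Delta}\gamma_{0}e^{-it\Delta}\,i\Delta
  = i[\Delta,\gamma(t)],
\qquad\text{so}\qquad i\partial_{t}\gamma=[-\Delta,\gamma].
% For \gamma_0=\Pi_f one finds \gamma(t)=\Pi_{e^{it\Delta}f}, with kernel
e^{it\Delta}f(x)\,\overline{e^{it\Delta}f(y)}
\qquad\Longrightarrow\qquad
\rho_{\gamma(t)}(x)=|e^{it\Delta}f(x)|^{2}.
```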
For example, we have \(\rho_{\gamma_{0}}(x)=\sum_{j=1}^{N}\lambda_{j}|f_{j}(x)|^{2}\) and \(\rho_{\gamma(t)}(x)=\sum_{j=1}^{N}\lambda_{j}|e^{it\Delta}f_{j}(x)|^{2}\) if \(\gamma_{0}\) is the finite-rank operator associated with orthonormal functions \(f_{1},\dots,f_{N}\in L^{2}(\mathbb{R}^{d})\) and scalars \(\lambda_{1},\dots,\lambda_{N}\) given by \[\gamma_{0}f(x)=\sum_{j=1}^{N}\lambda_{j}\Pi_{f_{j}}f(x)=\int_{\mathbb{R}^{d}}f(y)\sum_{j=1}^{N}\lambda_{j}f_{j}(x)\overline{f_{j}(y)}\,\mathrm{d}y.\] With some care, one may extend the meaning of these densities to the infinite-rank case and beyond the trace class of operators; see the beginning of Section 6. Associated with such densities, a natural analogue of Carleson's problem for the von Neumann-Schrodinger equation was raised in [5] and is concerned with finding as large a class as possible of initial data \(\gamma_{0}\) such that \[\lim_{t\to 0}\rho_{\gamma(t)}(x)=\rho_{\gamma_{0}}(x) \tag{1.14}\] holds almost everywhere with respect to Lebesgue measure. At the critical regularity \(s=\frac{1}{4}\) in one spatial dimension, the following was proved in [5]. **Theorem 1.2** ([5]).: _The weak-type maximal-in-time estimate_ \[\bigg{\|}\sum_{j}\lambda_{j}|Uf_{j}|^{2}\bigg{\|}_{L^{2,\infty}_{x}L^{\infty}_{t}(\mathbb{R}\times\mathbb{R})}\lesssim\|\lambda\|_{\ell^{\beta}} \tag{1.15}\] _holds for orthonormal functions \((f_{j})_{j}\) in \(\dot{H}^{\frac{1}{4}}(\mathbb{R})\) and \(\beta<2\). Consequently, if \(\gamma_{0}\in\mathcal{C}^{\beta,\frac{1}{4}}\) and \(\beta<2\), then the density function \(\rho_{\gamma(t)}\) satisfies (1.14) for almost every \(x\in\mathbb{R}\)._ In the above statement, \(\mathcal{C}^{\beta,\frac{1}{4}}\) denotes a Sobolev-type Schatten space.
More generally, \(\mathcal{C}^{\beta,s}\) is given by the norm Footnote 5: Although Carleson’s problem is typically considered with initial data in the inhomogeneous Sobolev spaces \(H^{s}(\mathbb{R}^{d})\), as in Theorem 1.2, we shall work in the setting of homogeneous Sobolev spaces \(\dot{H}^{s}(\mathbb{R}^{d})\) and their associated Schatten spaces \(\mathcal{C}^{\beta,s}\). Our arguments may easily be modified to the inhomogeneous setting. \[\|\gamma\|_{\mathcal{C}^{\beta,s}}=\||D|^{s}\gamma|D|^{s}\|_{\mathcal{C}^{\beta}},\] where \(|D|^{s}\) is the Fourier multiplier operator with multiplier \(|\xi|^{s}\) and \(\mathcal{C}^{\beta}\) is the Schatten space of order \(\beta\) built over \(L^{2}(\mathbb{R}^{d})\); we refer the reader forward to Sections 3 and 6 for more precise definitions. For now, we remark that the Schatten spaces \(\mathcal{C}^{\beta}\) are nested (they increase in size as \(\beta\) increases) and thus, for a fixed level of regularity \(s\), it is natural to try and identify the largest possible \(\beta\) for which (1.14) holds. Although we are unaware of a proof, for \(s=\frac{1}{4}\) it seems reasonable to believe that \(\beta<2\) is the optimal range. Indeed, the maximal estimate (1.15) was shown to fail for \(\beta\geq 2\) in [5]. There are many directions to develop Theorem 1.2, some of which have already been raised in [5] and these open problems provided an important source of inspiration for the present paper. In particular, one may ask about: (a) _the effect of imposing higher regularity on the initial data_, (b) _extending to higher dimensions_, (c) _generalizations to other equations_, (d) _how large is the set of points at which convergence fails._ Regarding (a), one may expect a gain in the range of allowable \(\beta\) if the initial data is assumed to be sufficiently smooth.
When \(d=1\) and \(\gamma_{0}\in\mathcal{C}^{\beta,s}\) with \(s\in(\frac{1}{4},\frac{1}{2})\), an application of our boundary Strichartz estimates will reveal that \(\beta<\frac{1}{1-2s}\) is allowable; this argument is in the spirit of [5] and relies heavily on the assumption that the spatial dimension is one (we elaborate on this in Section 2.1). Making significant progress on (a)-(d) beyond this seems to require a different, more direct, approach. Even if we remain in the case \(d=1\), we can illustrate this by considering Carleson's problem \[\lim_{t\to 0}e^{it(-\Delta)^{m/2}}f(x)=f(x)\quad\text{a.e.} \tag{1.16}\] for the fractional Schrodinger propagators \(e^{it(-\Delta)^{m/2}}\) with \(m\in(0,\infty)\). For this, it is known that the threshold is remarkably different when \(m\in(0,1)\) and \(m\in(1,\infty)\). Indeed, when \(m\in(0,1)\), Walther [61] proved \(s>\frac{m}{4}\) is sufficient for (1.16) and \(s\geq\frac{m}{4}\) is necessary. On the other hand, for \(m\in(1,\infty)\), Sjolin [57] has shown that (1.16) holds if and only if \(s\geq\frac{1}{4}\). Since Strichartz estimates are ultimately built on the standard dispersive estimates for \(e^{it(-\Delta)^{m/2}}\), and since these dispersive estimates do not depend on \(m\), it seems necessary to adopt a more direct approach to obtain (1.16), or more generally, to satisfactorily address problem (c) above. Footnote 6: The case \(m=1\) also has a different nature; see the remark at the end of Section 2.1 for further details. Let us also mention that (1.16) has also been (partially) addressed in higher dimensions and in terms of the divergence sets. For example, it is known that (1.16) holds if \(s>\frac{m}{2}\) for \(m\in(0,1)\) and \(d\geq 1\) (see [17, 61]). More recently, by building on [20, 21], Cho-Ko [13] showed that \(s>\frac{d}{2(d+1)}\) is sufficient for (1.16) when \(m\in(1,\infty)\) and \(d\geq 2\).
Regarding divergence sets of the form \[\mathfrak{D}(f):=\{x\in\mathbb{R}^{d}:\lim_{t\to 0}e^{it(-\Delta)^{m/2}}f(x)\neq f(x)\},\] the idea of estimating their size seems to stem from work of Sjogren-Sjolin [56], with fresh impetus coming from the more recent paper by Barcelo _et al._ [1]. Amongst other results, Barcelo _et al._ proved that Footnote 7: Here, \(\dim_{H}\) denotes Hausdorff dimension. \[\sup_{f\in\dot{H}^{s}}\dim_{H}\mathfrak{D}(f)=d-2s\] when \(s\in[\frac{d}{4},\frac{d}{2})\) if either \(d=1\) and \(m>1\), or \(d\geq 2\) and \(m=2\). The argument in [1, Proposition 3.1] for \(d\geq 2\) makes special use of the assumption \(m=2\) in order to reduce matters to one-dimensional considerations. We note that a consequence of our approach in this paper is that we can extend the \(d\geq 2\) result to general \(m\in(1,\infty)\). We also remark that whilst the result in [1] is definitive when \(d=1\), the higher dimensional problem appears to be very challenging and remains open for \(s\in(\frac{d}{2(d+1)},\frac{d}{4})\) (see for example [21, 22, 47, 48] for further details). In the case of \(m\in(0,1)\) and \(d=1\), Cho and the third author [16] very recently proved \[\sup_{f\in\dot{H}^{s}}\dim_{H}\mathfrak{D}(f)\leq\max\left\{1-2s,\frac{1}{2}+\frac{1-4s}{2(1-m)}\right\},\] and it seems reasonable to believe that equality holds. Including [16], ideas from the literature on the classical (single function) version of Carleson's problem have also been a source of inspiration for the present work. In particular, we deviate significantly from [5] and approach (a)-(d) in the context of density functions (1.14) using direct arguments for proving maximal-in-time estimates. Our new results in this direction appear in Section 2.2.

## 2. Main new results

### 2.1. Maximal-in-space estimates

Our main result concerning boundary Strichartz estimates is the following.
**Theorem 2.1**.: _The estimate_ \[\bigg{\|}\sum_{j}\lambda_{j}|Uf_{j}|^{2}\bigg{\|}_{L_{t}^{\frac{q}{2}}L_{x}^{ \infty}(\mathbb{R}\times\mathbb{R}^{d})}\lesssim\|\lambda\|_{\ell^{\beta}} \tag{2.1}\] _holds for systems of orthonormal functions \((f_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R}^{d})\), \(s=\frac{d}{2}-\frac{2}{q}\), and \(\beta<\frac{q}{2}\) in each of the following cases._ (i)_\(d=1\) and \(q\in(4,\infty)\)._ (ii)_\(d\geq 2\) and \(q\in(2,\infty)\)._ _Furthermore, the estimate (2.1) fails when \(\beta>\frac{q}{2}\)._ For \(d\geq 2\), thanks to the failure of (1.9) in both cases \(q=2,\infty\), the range of \(q\) in (ii) cannot be extended. It remains an interesting open problem to determine whether one can extend the range of \(q\) in (i) to \([4,\infty)\). Regarding the summability exponent, the failure of (2.1) when \(\beta>\frac{q}{2}\) can be seen by following the argument in [3, Section 4.8] and so we omit the details. What happens at the critical summability exponent \(\beta=\frac{q}{2}\) seems to be a delicate matter. In Section 3.1 we shall sketch an argument yielding restricted weak-type estimates when \(\beta=\frac{q}{2}\) and \(d\geq 2\). Interestingly, such estimates are not valid when \(d=1\). 
Indeed, if we take \(q\in[4,\infty)\) and assume that \[\bigg{\|}\sum_{j}\lambda_{j}|Uf_{j}|^{2}\bigg{\|}_{L_{t}^{\frac{q}{2},\infty}L_{x}^{\infty}(\mathbb{R}\times\mathbb{R})}\lesssim\|\lambda\|_{\ell^{\frac{q}{2},1}} \tag{2.2}\] were to hold for all orthonormal systems \((f_{j})_{j}\) in \(\dot{H}^{\frac{1}{2}-\frac{2}{q}}(\mathbb{R})\), then by a semi-classical limiting argument (see [54, 3]), we get the ("velocity average") estimate \[\bigg{\|}\int_{\mathbb{R}}f(x-tv,v)\frac{\mathrm{d}v}{|v|^{1-\frac{4}{q}}}\bigg{\|}_{L_{t}^{\frac{q}{2},\infty}L_{x}^{\infty}(\mathbb{R}\times\mathbb{R})}\lesssim\|f\|_{L^{\frac{q}{2},1}}.\] However, by taking \(f\) as the characteristic function of a sufficiently small neighbourhood of a measure-zero Kakeya set in \(\mathbb{R}^{2}\), we can show that such an estimate cannot be true (see the proof of [3, Theorem 5.3]). We leave open the question of whether one can extend Theorem 2.1 to the critical summability exponent \(\beta=\frac{q}{2}\) when \(d\geq 2\); conceivably, in the scale of Lorentz spaces, restricted weak-type estimates are the best that one can hope for. Our proof of Theorem 2.1 draws on ideas in [3] and relies on widening the framework to incorporate Lorentz spaces. Indeed, restricted weak-type versions of (1.1) can be readily obtained by summing up certain frequency-localized estimates (which in turn are based on dispersive estimates) using the summation trick of Bourgain in Lemma 3.1 below. Also, in certain special cases, improved versions of (1.1) in the scale of Lorentz spaces are possible and offsetting this loss/gain, via interpolation, led to the desired strong-type estimates in [3]. Broadly speaking we adopt this strategy to prove Theorem 2.1 and we give a more informative outline of the proof in Section 3.2.
Theorem 2.1 readily generalises to fractional Schrodinger propagators \(U_{m}\), with \(m\in(0,\infty)\setminus\{1\}\), where \[U_{m}f(t,x)=e^{it(-\Delta)^{m/2}}f(x)=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})}\widehat{f}(\xi)\,\mathrm{d}\xi.\] The propagators \(U_{m}\) have drawn much attention from both physical and mathematical viewpoints in recent years (see for example [35, 36, 37, 39, 40, 41]). As described above, the proof of Theorem 2.1 rests on a dispersive estimate which is equally valid for general \(U_{m}\) and it is easily checked that our proof in Sections 4 and 5 goes through for \(U_{m}\), as long as we consider orthonormal systems of data \((f_{j})_{j}\) belonging to \(\dot{H}^{s}(\mathbb{R}^{d})\), \(s=\frac{d}{2}-\frac{m}{q}\). In particular, the range of \(d,q\) and \(\beta\) is unchanged from the case \(m=2\) in Theorem 2.1 above. One advantage of broadening the framework to general \(U_{m}\) can be illustrated as follows. In the case of \(d=1\), a simple change of variables shows Footnote 8: Here, \(A\simeq B\) means \(A=cB\) for some positive constant \(c\) whose exact value is not important. \[U_{m}f(t,x)\simeq U_{1/m}f_{+}(x,t)+U_{1/m}f_{-}(-x,t),\qquad\widehat{f_{\pm}}(\xi)=\chi_{(0,\infty)}(\xi)\widehat{f}(\pm\xi^{\frac{1}{m}})\xi^{\frac{1}{m}-1}\] which means that, essentially, one can swap the roles of space and time at the expense of switching from \(U_{m}\) to \(U_{1/m}\) (this trick has been used many times and goes back to Kenig-Ponce-Vega [39]).
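To make the change of variables explicit (a sketch; constants such as the factor \(\frac{1}{m}\) arising from \(\mathrm{d}\xi=\frac{1}{m}\eta^{\frac{1}{m}-1}\mathrm{d}\eta\) are absorbed into \(\simeq\)): splitting the frequency integral at the origin and substituting \(\xi=\pm\eta^{1/m}\) with \(\eta>0\) gives

```latex
U_{m}f(t,x)
  = \frac{1}{2\pi}\int_{\mathbb{R}}e^{i(x\xi+t|\xi|^{m})}\widehat{f}(\xi)\,\mathrm{d}\xi
  \simeq \sum_{\pm}\int_{0}^{\infty}e^{i(t\eta\pm x\eta^{1/m})}\,
      \widehat{f}(\pm\eta^{\frac{1}{m}})\,\eta^{\frac{1}{m}-1}\,\mathrm{d}\eta
  = U_{1/m}f_{+}(x,t)+U_{1/m}f_{-}(-x,t),
```

where the last equality is simply the definition of \(U_{1/m}\) applied to \(f_{\pm}\), with the roles of the space and time variables exchanged.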
Modulo an additional argument which takes care of the orthogonality condition (see [5, Lemma 4.2]), this line of reasoning allows us to deduce the maximal-in-time estimate \[\left\|\sum_{j}\lambda_{j}|U_{m}f_{j}|^{2}\right\|_{L^{\frac{1}{1-2s}}_{x}L^{\infty}_{t}(\mathbb{R}\times\mathbb{R})}\lesssim\|\lambda\|_{\ell^{\beta}} \tag{2.3}\] for orthonormal functions \((f_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R})\), whenever \(s\in(\frac{1}{4},\frac{1}{2})\) and \(\beta<\frac{1}{1-2s}\). Footnote 9: This range of \(\beta\) is best possible for the estimate (2.3) to hold. In fact, for \(s\in(\frac{1}{4},\frac{1}{2})\), the failure of the following maximal-in-time estimate \[\left\|\sum_{j}\lambda_{j}|Uf_{j}|^{2}\right\|_{L^{\frac{1}{1-2s}}_{x}L^{\infty}_{t}(\mathbb{R}\times\mathbb{R})}\lesssim\|\lambda\|_{\ell^{\frac{1}{1-2s},1}}\] for orthonormal functions \((f_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R})\) can be shown by adapting the argument discussed earlier for (2.2). We refer the reader to [5] for the details, where such an argument was given in the case \(s=\frac{1}{4}\). Here we can take any \(m\in(0,\infty)\setminus\{1\}\) but we note that the \((s,\beta)\) range is independent of \(m\). As a result of the maximal estimate (2.3), it follows that if \(\gamma_{0}\in\mathcal{C}^{\beta,s}\), then the density functions \(\rho_{\gamma(t)}\) and \(\rho_{\gamma_{0}}\) satisfy (1.14) for almost every \(x\in\mathbb{R}\). Here \[\gamma(t)=e^{it(-\Delta)^{m/2}}\gamma_{0}e^{-it(-\Delta)^{m/2}}\] is the solution to the fractional von Neumann-Schrodinger equation \[i\partial_{t}\gamma=-[(-\Delta)^{\frac{m}{2}},\gamma],\quad\gamma(0)=\gamma_{0}. \tag{2.4}\] As mentioned earlier, some of our original motivation to prove boundary Strichartz estimates arose from the problem of understanding the pointwise convergence (1.14) in the context of imposing higher regularity on the data, thereby addressing a question raised in [5].
However, if one is solely focused on developing a better understanding of the pointwise convergence problem (1.14), then it seems strongly desirable to have a more direct approach for proving maximal-in-time estimates. For a start, one can then make the problem more tractable by targeting local estimates (with respect to both time and space) rather than global estimates like those in (2.3). As we have already mentioned, obtaining a new viewpoint on (1.14) was another major source of motivation for the present paper and next we shall present our new results in this direction. Let us remark that by succeeding in this manner, we will profit in two different ways. Firstly, we are able to obtain, for the first time, estimates on the size of the set of points at which the convergence in (1.14) fails. Secondly, we are able to obtain what we believe to be sharp results for \(U_{m}\) when \(m\in(0,1)\); for such \(m\), the \((s,\beta)\) range depends on \(m\) and obtaining such an outcome by relying on Strichartz estimates (and the space-time switching trick) seems rather implausible. _Remark._ For the one-sided wave propagator \(U_{1}\), the classical Strichartz estimates and Carleson's problem have also been addressed. For Strichartz estimates, roughly speaking, the admissible exponents for \(U_{1}\) on \(\mathbb{R}\times\mathbb{R}^{d}\) correspond to those for \(U\) on \(\mathbb{R}\times\mathbb{R}^{d-1}\), and to a large extent a unified perspective is possible. In the same spirit, an analogue of Theorem 2.1 holds for \(U_{1}\); using ideas from the present paper, a sketch of the proof of this result can be found in [4]. On the other hand, Carleson's problem for \(U_{1}\) is significantly different to the problem for \(U\).
Whereas \(\frac{d}{2(d+1)}\) has been identified as the sharp regularity threshold for (1.12), it is known (and can be proved using significantly easier arguments) that \(\frac{1}{2}\) is the sharp regularity threshold for \(U_{1}\) in all dimensions (see [17, 61]). In the present paper, we do not attempt to consider the analogue of (1.14) for \(U_{1}\).

### 2.2. Maximal-in-time estimates and pointwise convergence

Consider the divergence set \[\mathfrak{D}(\gamma_{0}):=\{x\in\mathbb{R}^{d}:\lim_{t\to 0}\rho_{\gamma(t)}(x)\neq\rho_{\gamma_{0}}(x)\}\] associated with the initial data \(\gamma_{0}\), where \(\gamma(t)\) is the solution of (2.4). For ease of notation, we are suppressing the dependence on \(d\) and \(m\). Any upper bound on the Hausdorff dimension of \(\mathfrak{D}(\gamma_{0})\) which is strictly less than \(d\) would certainly be a stronger statement than the almost everywhere pointwise convergence of \(\rho_{\gamma(t)}\) to \(\rho_{\gamma_{0}}\) and, in Corollary 2.3 below, we provide upper bounds of this nature. By Frostman's lemma from geometric measure theory, it will suffice to establish local maximal-in-time estimates where integration in the spatial variable is taken with respect to an arbitrary \(\alpha\)-dimensional measure. Here, for a given \(\alpha\in(0,d]\), the Borel measure \(\mu\) on \(\mathbb{R}^{d}\) is said to be \(\alpha\)-dimensional if \[\sup_{x\in\mathbb{R}^{d},r>0}\frac{\mu(B(x,r))}{r^{\alpha}}<\infty,\] and we shall use \(\mathcal{M}^{\alpha}(\mathbb{B}^{d})\) to denote the collection of all \(\alpha\)-dimensional probability measures supported on the unit ball \(\mathbb{B}^{d}\). Our maximal-in-time estimates appear in the forthcoming Theorem 2.2 and take the form \[\bigg{\|}\sum_{j}\lambda_{j}|U_{m}f_{j}|^{2}\bigg{\|}_{L^{1}_{x}(\mathbb{B}^{d},\mathrm{d}\mu)L^{\infty}_{t}(0,1)}\lesssim\|\lambda\|_{\ell^{\beta}}.
\tag{2.5}\] Together with Corollary 2.3, these results provide a significant extension and refinement of Theorem 1.2. **Theorem 2.2**.: _Let \(d\geq 1\), \(s\in[0,\frac{d}{2})\), \(\alpha\in(0,d]\), \(\beta\geq 1\), and \(\mu\in\mathcal{M}^{\alpha}(\mathbb{B}^{d})\)._ 1. _Let \(m\in(1,\infty)\) and \(s\geq\frac{d}{4}\). The maximal estimate (2.5) holds for all orthonormal systems \((f_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R}^{d})\) whenever_ \[s>\frac{1}{2}\left(d-\frac{\alpha}{\beta}\right).\] _This is sharp in the sense that if \(s<\frac{1}{2}(d-\frac{\alpha}{\beta})\), then there exist \(\mu\in\mathcal{M}^{\alpha}(\mathbb{B}^{d})\), an orthonormal system \((f_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R}^{d})\), and a sequence \((\lambda_{j})_{j}\in\ell^{\beta}\) such that (2.5) fails._ 2. _Let \(m\in(0,1)\). The maximal estimate (2.5) holds for all orthonormal systems \((f_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R}^{d})\) whenever_ \[s>\max\left\{\frac{1}{2}\left(d-\frac{\alpha}{\beta}\right),\frac{d}{4}-\frac{1}{2}(1-m)\left(\frac{\alpha}{\beta}-\frac{d}{2}\right)\right\}.\] _When \(d=1\), this is sharp in the sense that if \(\max\left\{\frac{1}{2}(1-\frac{\alpha}{\beta}),\frac{1}{4}-\frac{1}{2}(1-m)(\frac{\alpha}{\beta}-\frac{1}{2})\right\}>s\), then there exist \(\mu\in\mathcal{M}^{\alpha}(\mathbb{B})\), an orthonormal system \((f_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R})\), and a sequence \((\lambda_{j})_{j}\in\ell^{\beta}\) such that (2.5) fails._ Taking \(\mu\) as standard Lebesgue measure (and \(\alpha=d\)), one may deduce the pointwise convergence \[\lim_{t\to 0}\rho_{\gamma(t)}(x)=\rho_{\gamma_{0}}(x)\quad\text{a.e.}\] for self-adjoint initial data \(\gamma_{0}\in\mathcal{C}^{\beta,s}\), with the appropriate \((\beta,s)\) given in the above theorem. In particular, we note that the range of \(s\) may drop below \(\frac{d}{4}\) in the case \(m\in(0,1)\).
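For orientation, when \(\mu\) is Lebesgue measure (so \(\alpha=d\)) the regularity condition in part (i) unwinds precisely to the summability restriction \(\beta<\frac{d}{d-2s}\) appearing in Corollary 2.3:

```latex
s>\frac{1}{2}\Big(d-\frac{d}{\beta}\Big)
\iff \frac{d}{\beta}>d-2s
\iff \beta<\frac{d}{d-2s},
```

the final equivalence using that \(d-2s>0\) since \(s<\frac{d}{2}\).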
We also note that the full strength of Theorem 2.2 yields geometric size information on the corresponding divergence set as follows. We illustrate this in Figure 1 in the case \(d=1\). **Corollary 2.3**.: _Let \(d\geq 1\), \(s\in[0,\frac{d}{2})\). Suppose \(\gamma_{0}\in\mathcal{C}^{\beta,s}\) is self-adjoint with \(1\leq\beta<\frac{d}{d-2s}\)._ 1. _For \(m\in(1,\infty)\) and \(s\geq\frac{d}{4}\), the pointwise convergence (1.14) holds and furthermore we have_ \[\dim_{H}\mathfrak{D}(\gamma_{0})\leq(d-2s)\beta.\] 2. _For \(m\in(0,1)\) and \(s>\frac{d}{4}-\frac{d}{2}(1-m)(\frac{1}{\beta}-\frac{1}{2})\), the pointwise convergence (1.14) holds and furthermore we have_ \[\dim_{H}\mathfrak{D}(\gamma_{0})\leq\max\left\{(d-2s)\beta,\frac{2(d-2s)-md}{2(1-m)}\beta\right\}.\] We reiterate that, even when \(\beta=1\), Corollary 2.3(i) appears to be new for \(m\neq 2\) when \(d\geq 2\). Also, the case \(\beta=1\) in Corollary 2.3(ii) corresponds to [16, Theorem 1]. In light of the necessary condition in Theorem 2.2(i), it seems reasonable to believe that the upper bound in Corollary 2.3(i) is best possible for any \(d\geq 1\). Similarly, we expect the upper bound in Corollary 2.3(ii) to be sharp when \(d=1\), but the sharp threshold on the size of the divergence sets when \(m\in(0,1)\) and \(d\geq 2\) is less clear.

## 3. The critical case and outline of the proof of Theorem 2.1

The proof of Theorem 2.1 in full appears in Sections 4 and 5, but later in this section we include an overview of the main steps. Prior to that, it will be informative to briefly sketch an argument which yields restricted weak-type estimates at the critical exponent \(\beta=\frac{q}{2}\). The corresponding estimates for the one-sided wave propagator \(U_{1}\) were carefully proved in [4] and so we refer the reader there for further details.
### 3.1. Restricted weak-type estimates at \(\beta=\frac{q}{2}\)

For \(d\geq 2\), we claim \[\bigg{\|}\sum_{j}\lambda_{j}|Uf_{j}|^{2}\bigg{\|}_{L^{\frac{q}{2},\infty}_{t}L^{\infty}_{x}}\lesssim\|\lambda\|_{\ell^{\frac{q}{2},1}} \tag{3.1}\] holds for all orthonormal systems \((f_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R}^{d})\), \(s=\frac{d}{2}-\frac{2}{q}\), and all sequences \((\lambda_{j})_{j}\) in \(\ell^{\frac{q}{2},1}\), where the range of \(q\) is the same as in Theorem 2.1. First consider \(d\geq 3\). Our argument is based on the fact that if \(q\) is either \(2\) or \(\infty\), then the frequency-localized estimate \[\bigg{\|}\sum_{j}\lambda_{j}|UP_{k}f_{j}|^{2}\bigg{\|}_{L^{\frac{q}{2}}_{t}L^{\infty}_{x}}\lesssim 2^{(d-\frac{4}{q})k}\|\lambda\|_{\ell^{\frac{q}{2}}} \tag{3.2}\] holds for all orthonormal systems \((f_{j})_{j}\) in \(L^{2}(\mathbb{R}^{d})\). Here, \(k\) is an integer and the operator \(P_{k}\) denotes the Littlewood-Paley type frequency localization given by \(\widehat{P_{k}f}(\xi)=\varphi_{k}(|\xi|)\widehat{f}(\xi)\), where \(\varphi_{k}(r)=\varphi(2^{-k}r)\) with \(\varphi\in C_{0}^{\infty}\) supported in \(\{r\in\mathbb{R}:2^{-1}<r<2\}\). The estimate (3.2) with \(q=2\) follows from the frequency-localized Strichartz estimate \[\|UP_{k}f\|_{L^{2}_{t}L^{\infty}_{x}(\mathbb{R}\times\mathbb{R}^{d})}\lesssim 2^{\frac{d-2}{2}k}\|f\|_{L^{2}(\mathbb{R}^{d})} \tag{3.3}\] and the triangle inequality, and (3.2) with \(q=\infty\) follows from Bessel's inequality. The latter claim is based on the following simple observation from [3]: \[\sum_{j}|UP_{k}f_{j}(t,x)|^{2}=\sum_{j}|\langle\widehat{f_{j}}(\xi),e^{-ix\cdot\xi}m_{t}(\xi)\rangle_{L^{2}_{\xi}}|^{2}\leq\|m_{t}\|_{2}^{2}\lesssim 2^{kd}.\] Here, \(m_{t}\) is the Fourier multiplier associated with \(UP_{k}f(t,\cdot)\) and we have used the fact that, thanks to Parseval's identity, \((\widehat{f_{j}})_{j}\) is orthonormal in \(L^{2}(\mathbb{R}^{d})\).
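For completeness, the multiplier bound at the end of the display holds uniformly in \(t\): since \(|m_{t}(\xi)|\simeq\varphi_{k}(|\xi|)\) (the time dependence enters only through a unimodular phase), rescaling \(\xi=2^{k}\eta\) gives

```latex
\|m_{t}\|_{2}^{2}
  \simeq \int_{\mathbb{R}^{d}}\varphi(2^{-k}|\xi|)^{2}\,\mathrm{d}\xi
  = 2^{kd}\int_{\mathbb{R}^{d}}\varphi(|\eta|)^{2}\,\mathrm{d}\eta
  \lesssim 2^{kd}.
```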
Finally, in order to sum up the frequency-localized estimates and pass to (3.1), we use the following. **Lemma 3.1** ([3, 4]).: _Let \(q_{0},q_{1},r\in[2,\infty]\), \(\beta_{0},\beta_{1}\in[1,\infty]\) and \((g_{j})_{j}\) be a uniformly bounded sequence in \(L^{q_{0}}_{t}L^{r}_{x}\cap L^{q_{1}}_{t}L^{r}_{x}\). Suppose there exist \(\varepsilon_{0}\), \(\varepsilon_{1}>0\) such that_ \[\bigg{\|}\sum_{j}\lambda_{j}|P_{k}g_{j}|^{2}\bigg{\|}_{L^{\frac{q_{i}}{2},\infty}_{t}L^{\frac{r}{2}}_{x}}\lesssim 2^{(-1)^{i+1}\varepsilon_{i}k}\|\lambda\|_{\ell^{\beta_{i}}}\] _for all \(k\in\mathbb{Z}\) and \(i=0,1\), then_ \[\bigg{\|}\sum_{j}\lambda_{j}|g_{j}|^{2}\bigg{\|}_{L^{\frac{q}{2},\infty}_{t}L^{\frac{r}{2}}_{x}}\lesssim\|\lambda\|_{\ell^{\beta,1}},\] _where \(\frac{1}{q}=\frac{1-\theta}{q_{0}}+\frac{\theta}{q_{1}}\), \(\frac{1}{\beta}=\frac{1-\theta}{\beta_{0}}+\frac{\theta}{\beta_{1}}\) and \(\theta=\frac{\varepsilon_{0}}{\varepsilon_{0}+\varepsilon_{1}}\)._ We refer the reader to [3, 4] for a proof of the above lemma, where the statement appears exactly as written here; the key underlying idea goes back to Bourgain [7]. The case \(d=2\) may be proved in a similar manner, but the failure of (3.3) (and thus also of (3.2) with \(q=2\)) adds difficulty. Via Lemma 3.1 again, it actually suffices to establish (3.2) with \(q\) strictly larger but arbitrarily close to \(2\). One may reach such estimates by using duality (the forthcoming Lemma 3.2) and bounds on certain Schatten norms obtained via the Brascamp-Lieb inequality. We omit the details and refer the interested reader to the analogous argument for the half-wave propagator in [4]. ### Outline of the proof of Theorem 2.1 It is unclear to us whether it is possible to upgrade (3.1) to a strong-type estimate at the critical exponent \(\beta=\frac{q}{2}\).
In Theorem 2.1 we obtain strong-type estimates for \(\beta<\frac{q}{2}\) by dividing the range of \(q\) into \(2<q\leq 4\) and \(4<q<\infty\) (of course, when \(d=1\) this division is superfluous). In this overview of the proof of Theorem 2.1, for simplicity let us suppose \(d\geq 2\). As suggested by the above division, key to the argument is \(q\) close to \(4\). More precisely, the case \(\beta=2\) is important for our proof and we shall see that \[\bigg{\|}\sum_{j}\lambda_{j}|U|D|^{-s}f_{j}|^{2}\bigg{\|}_{L^{\frac{q}{2},2}_{t}L^{\infty}_{x}}\lesssim\|\lambda\|_{\ell^{2}} \tag{3.4}\] for \(4<q<\infty\) (which is of most use to us when \(q\) is close to \(4\)) and \(s=\frac{d}{2}-\frac{2}{q}\). Here, \(L^{\frac{q}{2},2}_{t}\) is a Lorentz space. Since \(q>4\) these estimates are stronger than the strong-type counterpart with \(L^{\frac{q}{2}}_{t}\), and this gain will be important to obtain Theorem 2.1 when \(4<q<\infty\). We first consider \(2<q\leq 4\). Here we shall see that \[\bigg{\|}\sum_{j}\lambda_{j}|U|D|^{-s}f_{j}|^{2}\bigg{\|}_{L^{\frac{q}{2},1}_{t}L^{\infty}_{x}}\lesssim\|\lambda\|_{\ell^{1}} \tag{3.5}\] holds for any \(q>2\) and \(s=\frac{d}{2}-\frac{2}{q}\), and this is of most use to us when \(q\) is close to \(2\). We would like to use complex interpolation to deduce Theorem 2.1 for \(2<q\leq 4\) from (3.4) and (3.5) but the fact that \(s\) varies with \(q\) causes some technical difficulty. In order to carry out an analytic interpolation argument to take care of this, we found that it was more convenient to reformulate and work with dual estimates. Here, duality is in the sense of the forthcoming Lemma 3.2 and we describe the argument on the dual side in more detail in a moment. Footnote 11: In fact, we shall prove a slightly stronger estimate involving certain Lorentz spaces. Before that, let us explain the idea behind the proof for \(4<q<\infty\).
We shall first obtain the weak-type estimate \[\bigg{\|}\sum_{j}\lambda_{j}|U|D|^{-s}f_{j}|^{2}\bigg{\|}_{L^{\frac{q}{2},\infty}_{t}L^{\infty}_{x}}\lesssim\|\lambda\|_{\ell^{\beta}}, \tag{3.6}\] for \(4<q<\infty\), \(\beta<\frac{q}{2}\) (which is of most use to us when \(q\) is extremely large) and \(s=\frac{d}{2}-\frac{2}{q}\). Again one would like to interpolate this with (3.4) to obtain the desired estimates in Theorem 2.1 for \(4<q<\infty\); in particular, it is key that the slight gain in the sense of the Lorentz space in (3.4) can be used to upgrade the weak-type estimate (3.6) to a strong-type estimate. In order to argue along the above lines, as we mentioned above, we found it more convenient to work with dual estimates using the following. **Lemma 3.2** (Duality principle [27], [3]).: _Suppose \(T\) is a bounded linear operator from \(L^{2}(\mathbb{R}^{d})\) to \(L^{q,a}_{t}L^{r,b}_{x}\) for some \(q,r,a,b\geq 2\) and under the condition that \(a=2\) when \(q=2\). Also, let \(\beta\geq 1\). Then,_ \[\bigg{\|}\sum_{j}\lambda_{j}|Tf_{j}|^{2}\bigg{\|}_{L^{\frac{q}{2},\frac{a}{2}}_{t}L^{\frac{r}{2},\frac{b}{2}}_{x}}\lesssim\|\lambda\|_{\ell^{\beta}}\] _holds for all orthonormal systems \((f_{j})_{j}\) in \(L^{2}(\mathbb{R}^{d})\) and all sequences \((\lambda_{j})_{j}\in\ell^{\beta}(\mathbb{C})\) if and only if_ \[\|WTT^{*}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim\|W\|^{2}_{L^{\widetilde{q},\widetilde{a}}_{t}L^{\widetilde{r},\widetilde{b}}_{x}}\] _for all \(W\in L^{\widetilde{q},\widetilde{a}}_{t}L^{\widetilde{r},\widetilde{b}}_{x}\).
Here, \(\widetilde{\cdot}\) denotes the "half conjugate" given by_ \[\frac{1}{p}+\frac{1}{\widetilde{p}}=\frac{1}{2}\] _and \(\cdot^{\prime}\) denotes the (usual Holder) conjugate given by_ \[\frac{1}{\beta}+\frac{1}{\beta^{\prime}}=1.\] In the above lemma, for \(p\geq 1\), the notation \(\mathcal{C}^{p}\) denotes the Schatten space of all compact operators on \(L^{2}(\mathbb{R}^{d+1})\) such that \(\|T\|_{\mathcal{C}^{p}}:=\|\lambda\|_{\ell^{p}}<\infty\), where \((\lambda_{j})_{j}\) are the singular values of \(T\). We extend this to \(p=\infty\) with the convention that \(\|\cdot\|_{\mathcal{C}^{\infty}}\) is the usual operator norm. In our proofs, we often recall that \(\|\cdot\|_{\mathcal{C}^{2}}\) is the Hilbert-Schmidt norm and coincides with the norm \(\|\cdot\|_{L^{2}(\mathbb{R}^{d+1}\times\mathbb{R}^{d+1})}\) of the kernel of the corresponding operator. Moreover, we make use several times of a useful characterization of Schatten norms. **Lemma 3.3** ([55, Proposition 2.6]).: _Let \(n\in\mathbb{N}\) and \(\mathcal{B}\) denote the collection of all orthonormal families in \(L^{2}(\mathbb{R}^{n})\). For \(1\leq p\leq\infty\) and \(T\in\mathcal{C}^{p}(L^{2}(\mathbb{R}^{n}))\) we have_ \[\|T\|_{\mathcal{C}^{p}}=\sup_{\phi,\psi\in\mathcal{B}}\|\langle T\phi_{j},\psi_{j}\rangle_{L^{2}}\|_{\ell^{p}}. \tag{3.7}\] _Conversely, if \(T\) is compact in \(L^{2}(\mathbb{R}^{n})\) and the right-hand side of (3.7) is finite, then \(T\in\mathcal{C}^{p}(L^{2}(\mathbb{R}^{n}))\).
When \(1\leq p<\infty\), "\(T\) is compact" may be replaced by "\(T\) is bounded" in the last statement._ Thanks to Lemma 3.2, our desired estimates follow from \[\|WU|D|^{-2s}U^{*}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim\|W\|^{2}_{L^{\widetilde{q}}_{t}L^{2}_{x}} \tag{3.8}\] for \(\beta^{\prime}>\frac{\widetilde{q}}{2}\) and \(2<\widetilde{q}<\infty\), where \[s=\frac{d}{2}-\frac{2}{q}=\frac{d-2}{2}+\frac{2}{\widetilde{q}}.\] To obtain such estimates via an analytic interpolation argument, we prove the following slightly strengthened versions of (3.4), (3.5) and (3.6): \[\|W_{1}U|D|^{-2s+i\kappa}U^{*}W_{2}\|_{\mathcal{C}^{2}}\leq C(\kappa)\|W_{1}\|_{L^{\widetilde{q},4}_{t}L^{2}_{x}}\|W_{2}\|_{L^{\widetilde{q},4}_{t}L^{2}_{x}} \tag{3.9}\] for \(2<\widetilde{q}<4\), \[\|W_{1}U|D|^{-2s+i\kappa}U^{*}W_{2}\|_{\mathcal{C}^{\infty}}\leq C(\kappa)\|W_{1}\|_{L^{\widetilde{q},\infty}_{t}L^{2}_{x}}\|W_{2}\|_{L^{\widetilde{q},\infty}_{x}L^{2}_{x}} \tag{3.10}\] for \(2<\widetilde{q}<\infty\), and \[\|W_{1}U|D|^{-2s+i\kappa}U^{*}W_{2}\|_{\mathcal{C}^{\beta^{\prime}}}\leq C(\kappa)\|W_{1}\|_{L_{t}^{\widetilde{q},2}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q},2}L_{x}^{2}} \tag{3.11}\] for \(\beta^{\prime}>\frac{\widetilde{q}}{2}\), \(2<\widetilde{q}<4\). In each case, \(\kappa\in\mathbb{R}\) and \(C(\kappa)\) is a constant which grows subexponentially with \(\kappa\). As we shall see, the estimates (3.9) and (3.10) follow from a certain dispersive estimate and O'Neil's refined version of Hardy-Littlewood-Sobolev inequality, whilst (3.11) follows from appropriate frequency-local estimates and use of bilinear interpolation (in the spirit of the Keel-Tao argument [38]) to sum up the localized estimates. ## 4. Proof of Theorem 2.1 for \(2<q\leq 4\) First we claim that it suffices to prove (2.1) with \(U\) replaced by \(U_{\leq 1}:=U\chi(|D|)\), where \(\chi\in C_{0}^{\infty}(-4,4)\).
Indeed, from the global nature of the estimate, a simple rescaling argument would then give (2.1) with \(U\chi(\varepsilon|D|)\) for any \(\varepsilon>0\) and the desired estimate follows by taking \(\varepsilon\) to zero. When \(2<q\leq 4\), even though the claimed estimates are only valid for \(d\geq 2\), some of the preparatory results are valid for \(d=1\) too so we include these cases where possible. Also, it will be handy to introduce the notation \(A\lessapprox_{\kappa}B\) to mean \(A\leq C(\kappa)B\), where \(C(\kappa)\lesssim e^{\varepsilon|\kappa|}\) for any \(\varepsilon>0\). ### Preparation for the interpolation **Proposition 4.1**.: _Let \(d\geq 1\). If \(4<q_{0}<\infty\), then_ \[\|W_{1}U_{\leq 1}|D|^{-2s_{0}+i\kappa}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{2}} \lessapprox_{\kappa}\|W_{1}\|_{L_{t}^{\widetilde{q}_{0},4}L_{x}^{2}}\|W_{2} \|_{L_{t}^{\widetilde{q}_{0},4}L_{x}^{2}} \tag{4.1}\] _where \(s_{0}=\frac{d}{2}-\frac{2}{q_{0}}\). Also, if either \(d\geq 2\) and \(2<q_{1}<\infty\), or \(d=1\) and \(4\leq q_{1}<\infty\), then_ \[\|W_{1}U_{\leq 1}|D|^{-2s_{1}+i\kappa}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{ \infty}}\lessapprox_{\kappa}\|W_{1}\|_{L_{t}^{\widetilde{q}_{1},\infty}L_{x}^ {2}}\|W_{2}\|_{L_{t}^{\widetilde{q}_{1},\infty}L_{x}^{2}}, \tag{4.2}\] _where \(s_{1}=\frac{d}{2}-\frac{2}{q_{1}}\)._ The proof of Proposition 4.1 rests on the following key lemma. To emphasise that Theorem 2.1 readily extends to fractional Schrodinger propagators, we state the result at such a level of generality. **Lemma 4.2** (Dispersive estimate).: _Let \(m\in(0,\infty)\backslash\{1\}\), \(d\geq 1\), \(0<q<\infty\) and_ \[s=\frac{d}{2}-\frac{m}{q},\qquad\frac{2}{q}\leq\frac{d}{2}.\] _Then, we have_ \[\bigg{|}\int e^{i(x\cdot\xi+t|\xi|^{m})}|\xi|^{-2s+i\kappa}\chi(|\xi|)\, \mathrm{d}\xi\bigg{|}\lessapprox_{\kappa}|t|^{-\frac{2}{q}} \tag{4.3}\] _and_ \[\bigg{|}\int e^{i(x\cdot\xi+t|\xi|^{m})}\varphi(|\xi|)\,\mathrm{d}\xi\bigg{|} \lesssim(1+|t|)^{-\frac{d}{2}}. 
\tag{4.4}\] The estimate (4.3) with \(m\geq 2\) can be found in Kenig-Ponce-Vega [39, Lemma 3.4]. These estimates have also featured more recently in [34] and [6], where a proof in the general case \(m\in(0,\infty)\setminus\{1\}\) is given. The frequency-local estimate (4.4) is more standard (an explanation can be found, for example, in the forthcoming proof of Lemma 6.1). The following Lorentz space refinement of the Hardy-Littlewood-Sobolev inequality by O'Neil [52] will also play an important role. **Lemma 4.3** ([52]).: _Let \(d\geq 1\), \(0<\lambda<d\), \(1<p_{1},p_{2}<\infty\), and \(1\leq a\leq\infty\) satisfy_ \[\frac{1}{p_{1}}+\frac{1}{p_{2}}+\frac{\lambda}{d}=2.\] _Then_ \[\left|\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}f(x)g(y)|x-y|^{-\lambda}\,\mathrm{d}x\mathrm{d}y\right|\lesssim\left\|f\right\|_{L^{p_{1},a}}\left\|g\right\|_{L^{p_{2},a^{\prime}}}\] _for all \(f\in L^{p_{1},a}(\mathbb{R}^{d})\) and \(g\in L^{p_{2},a^{\prime}}(\mathbb{R}^{d})\)._ Proof of Proposition 4.1.: We first show (4.1), which means we assume \(d\geq 1\) and \(4<q_{0}<\infty\). Note that the operator \(W_{1}U_{\leq 1}|D|^{-2s_{0}+i\kappa}U_{\leq 1}^{*}W_{2}\) is given by \[W_{1}U_{\leq 1}|D|^{-2s_{0}+i\kappa}U_{\leq 1}^{*}W_{2}\phi(t,x)=\int_{\mathbb{R}^{1+d}}\phi(t^{\prime},x^{\prime})W_{1}(t,x)W_{2}(t^{\prime},x^{\prime})K(t-t^{\prime},x-x^{\prime})\,\mathrm{d}t^{\prime}\mathrm{d}x^{\prime},\] where \(K(t,x)=\int e^{i(x\cdot\xi+t|\xi|^{2})}|\xi|^{-2s_{0}+i\kappa}\chi(|\xi|)^{2}\,\mathrm{d}\xi\).
Thus, by Lemmas 4.2 and 4.3, we have \[\|W_{1}U_{\leq 1}|D|^{-2s_{0}+i\kappa}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{2}}^{2}=\int\bigl{|}W_{1}(t,x)W_{2}(t^{\prime},x^{\prime})K(t-t^{\prime},x-x^{\prime})\bigr{|}^{2}\,\mathrm{d}t^{\prime}\mathrm{d}x^{\prime}\mathrm{d}t\mathrm{d}x\] \[\lessapprox_{\kappa}\int_{\mathbb{R}}\int_{\mathbb{R}}\|W_{1}(t,\cdot)\|_{L^{2}_{x}}^{2}\|W_{2}(t^{\prime},\cdot)\|_{L^{2}_{x}}^{2}|t-t^{\prime}|^{-4/q_{0}}\,\mathrm{d}t\mathrm{d}t^{\prime}\] \[\lesssim\|W_{1}\|_{L^{\widetilde{q}_{0},4}_{t}L^{2}_{x}}^{2}\|W_{2}\|_{L^{\widetilde{q}_{0},4}_{t}L^{2}_{x}}^{2}.\] Next we show (4.2), so we assume either \(d\geq 2\) and \(2<q_{1}<\infty\), or \(d=1\) and \(4\leq q_{1}<\infty\). First note that use of Lemmas 4.2 and 4.3 again yields \[\|U_{\leq 1}|D|^{-2s_{1}+i\kappa}U_{\leq 1}^{*}\phi\|_{L^{q_{1},2}_{t}L^{\infty}_{x}}=\left\|\int K(t-t^{\prime},x-x^{\prime})\phi(t^{\prime},x^{\prime})\mathrm{d}t^{\prime}\mathrm{d}x^{\prime}\right\|_{L^{q_{1},2}_{t}L^{\infty}_{x}}\] \[\lessapprox_{\kappa}\left\|\int_{\mathbb{R}}|t-t^{\prime}|^{-2/q_{1}}\|\phi(t^{\prime},\cdot)\|_{L^{1}_{x}}\mathrm{d}t^{\prime}\right\|_{L^{q_{1},2}_{t}}\] \[\lesssim\|\phi\|_{L^{q_{1}^{\prime},2}_{t}L^{1}_{x}}.\] Thus, by the Holder inequality twice, \[\|W_{1}U_{\leq 1}|D|^{-2s_{1}+i\kappa}U_{\leq 1}^{*}W_{2}\phi\|_{L^{2}}\lesssim\|W_{1}\|_{L^{\widetilde{q}_{1},\infty}_{t}L^{2}_{x}}\|U_{\leq 1}|D|^{-2s_{1}+i\kappa}U_{\leq 1}^{*}W_{2}\phi\|_{L^{q_{1},2}_{t}L^{\infty}_{x}}\] \[\lessapprox_{\kappa}\|W_{1}\|_{L^{\widetilde{q}_{1},\infty}_{t}L^{2}_{x}}\|W_{2}\|_{L^{\widetilde{q}_{1},\infty}_{t}L^{2}_{x}}\|\phi\|_{L^{2}}\] and this gives (4.2). ### Proof of (3.8) for \(2<q\leq 4\) Proposition 4.1 and an analytic interpolation imply the following estimates, which are slightly better than the desired estimates (3.8) thanks to the embedding of Lorentz spaces. **Proposition 4.4**.: _Let \(d\geq 2\), \(2<q\leq 4\) and \(s=\frac{d-2}{2}+\frac{2}{\widetilde{q}}\).
For \(\beta^{\prime}>\frac{\widetilde{q}}{2}\) we have_ \[\|W_{1}U_{\leq 1}|D|^{-2s}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim\|W_{1}\|_{L^{\widetilde{q},2\beta^{\prime}}_{t}L^{2}_{x}}\|W_{2}\|_{L^{\widetilde{q},2\beta^{\prime}}_{t}L^{2}_{x}}. \tag{4.5}\] Proof.: We fix \(\phi,\psi\in\mathcal{B}\) and note that, according to Lemma 3.3, it suffices to show \[\|\langle W_{1}U_{\leq 1}|D|^{-2s}U_{\leq 1}^{*}W_{2}\phi_{j},\psi_{j}\rangle_{L^{2}(\mathbb{R}^{d+1})}\|_{\ell^{\beta^{\prime}}}\lesssim\|W_{1}\|_{L^{\widetilde{q},2\beta^{\prime}}_{t}L^{2}_{x}}\|W_{2}\|_{L^{\widetilde{q},2\beta^{\prime}}_{t}L^{2}_{x}}. \tag{4.6}\] If \(q_{0},q_{1}\) satisfy \(2<\widetilde{q_{0}}<4\) and \(2<\widetilde{q_{1}}<\infty\), then Proposition 4.1 and Lemma 3.3 imply that \[\|\langle W_{1}U_{\leq 1}|D|^{-2s_{0}+i\kappa}U_{\leq 1}^{*}W_{2}\phi_{j},\psi_{j}\rangle_{L^{2}(\mathbb{R}^{d+1})}\|_{\ell^{2}}\lessapprox_{\kappa}\|W_{1}\|_{L^{\widetilde{q}_{0},4}_{t}L^{2}_{x}}\|W_{2}\|_{L^{\widetilde{q}_{0},4}_{t}L^{2}_{x}} \tag{4.7}\] and \[\|\langle W_{1}U_{\leq 1}|D|^{-2s_{1}+i\kappa}U_{\leq 1}^{*}W_{2}\phi_{j},\psi_{j}\rangle_{L^{2}(\mathbb{R}^{d+1})}\|_{\ell^{\infty}}\lessapprox_{\kappa}\|W_{1}\|_{L_{t}^{\widetilde{q}_{1},\infty}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q}_{1},\infty}L_{x}^{2}}. \tag{4.8}\] Here, \(s_{i}=\frac{d}{2}-\frac{2}{q_{i}}\) for \(i=0,1\). We would like to perform an analytic interpolation on these estimates with \(\frac{1}{q_{1}}=\frac{1}{2}-\delta\) and \(\frac{1}{q_{0}}=\frac{1}{4}-\delta\), where \(\delta=\frac{1}{2}(\frac{1}{\beta}-\frac{2}{q})>0\).
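Before carrying this out, we record the elementary bookkeeping behind these choices. Since \(\frac{1}{\widetilde{q}_{0}}=\frac{1}{4}+\delta\) and \(\frac{1}{\widetilde{q}_{1}}=\delta\), interpolating (4.7) and (4.8) with parameter \(\theta:=1-\frac{2}{\beta^{\prime}}\) (so that \(\ell^{2}\) and \(\ell^{\infty}\) combine to give \(\ell^{\beta^{\prime}}\)) yields \[\frac{1-\theta}{\widetilde{q}_{0}}+\frac{\theta}{\widetilde{q}_{1}}=\frac{1-\theta}{4}+\delta=\frac{1}{2\beta^{\prime}}+\frac{1}{2\beta}-\frac{1}{q}=\frac{1}{2}-\frac{1}{q}=\frac{1}{\widetilde{q}},\] using \(\frac{1}{\beta}+\frac{1}{\beta^{\prime}}=1\) in the penultimate step; likewise the secondary Lorentz exponents combine as \(\frac{1-\theta}{4}=\frac{1}{2\beta^{\prime}}\), which accounts for the secondary exponent \(2\beta^{\prime}\) in (4.6).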
Denote \(\mathfrak{S}^{\circ}:=\{z\in\mathbb{C}\,:\,-d<\operatorname{Re}z<0\}\) and consider the function on the open strip \(\mathfrak{S}^{\circ}\) given by \[F(z)=\langle W_{1}U_{\leq 1}|D|^{z}U_{\leq 1}^{*}W_{2}\phi,\psi\rangle_{L_{t,x}^{2}},\] where \(W_{1},W_{2}\in L^{1}(\mathbb{R}^{d+1})\) are simple and \(\phi\), \(\psi\in L^{2}(\mathbb{R}^{d+1})\) are normalized (i.e. \(\|\phi\|_{L^{2}}=\|\psi\|_{L^{2}}=1\)). Since \(-2s_{i}\in(-d,0)\) for \(i=0,1\), it follows that if we can show that \(F\) is analytic on \(\mathfrak{S}^{\circ}\), then the family of bilinear operators \(\{T_{z}\}:(L_{t}^{\widetilde{q}_{0},4}L_{x}^{2}\cap L_{t}^{\widetilde{q}_{1},\infty}L_{x}^{2})^{2}\to\ell^{2}\) defined by \[T_{z}(\phi,\psi)=\{\langle W_{1}U_{\leq 1}|D|^{z}U_{\leq 1}^{*}W_{2}\phi_{j},\psi_{j}\rangle_{L^{2}(\mathbb{R}^{d+1})}\}_{j}\] is analytic on the closed substrip \(\mathfrak{S}^{\prime}=\{z\in\mathbb{C}:-2s_{0}\leq\operatorname{Re}z\leq-2s_{1}\}\). Consequently, a bilinear analytic interpolation argument (see, for example, [18] or more recent work in [32], [33]) with (4.7) and (4.8) implies (4.6). To see that \(F\) is analytic on \(\mathfrak{S}^{\circ}\), we first use Parseval's identity to write \[F(z)=\langle|D|^{z}U_{\leq 1}^{*}W_{2}\phi,U_{\leq 1}^{*}\overline{W}_{1}\psi\rangle_{L_{x}^{2}}\simeq\big{\langle}|\xi|^{z}\mathcal{F}_{x}\big{(}U_{\leq 1}^{*}W_{2}\phi\big{)},\mathcal{F}_{x}\big{(}U_{\leq 1}^{*}\overline{W}_{1}\psi\big{)}\big{\rangle}_{L_{t,\xi}^{2}},\] where \(\mathcal{F}_{x}\) denotes the (spatial) Fourier transform. Since \(\mathfrak{S}^{\circ}\) is open, for any \(z_{0}\in\mathfrak{S}^{\circ}\), there exists \(0<c\ll 1\) such that \[\operatorname{Re}z_{0}+d>3c,\quad\operatorname{Re}z_{0}<-3c.
\tag{4.9}\] Since \(|\xi|^{z}\) is analytic as a map \(\mathfrak{S}^{\circ}\to\mathbb{C}\) for any \(\xi\in\mathbb{R}^{d}\setminus\{0\}\), by the dominated convergence theorem, it suffices to see that there exists a non-negative function \(g\) on \(\mathbb{R}^{d}\) such that \[\sup_{|h|<c}\Big{|}\frac{|\xi|^{z_{0}+h}-|\xi|^{z_{0}}}{h}\Big{|}\leq g(\xi)\qquad\text{for any $\xi\in\mathbb{R}^{d}\setminus\{0\}$},\] and \[\big{\langle}g\big{|}\mathcal{F}_{x}\big{(}U_{\leq 1}^{*}W_{2}\phi\big{)}\big{|},\big{|}\mathcal{F}_{x}\big{(}U_{\leq 1}^{*}\overline{W}_{1}\psi\big{)}\big{|}\big{\rangle}_{L_{\xi}^{2}}<\infty. \tag{4.10}\] In fact, whenever \(|h|<c\) we have \[\big{|}|\xi|^{z_{0}+h}-|\xi|^{z_{0}}\big{|}\leq|h||\log|\xi||\sup_{|z-z_{0}|<c}|\xi|^{\operatorname{Re}z}\lesssim_{c}|h|(|\xi|^{-c}+|\xi|^{c})\sup_{|z-z_{0}|<c}|\xi|^{\operatorname{Re}z}\] so, thanks to (4.9), it suffices to show (4.10) with \(g(\xi)=|\xi|^{-2s}\) and \(2s=c\) or \(2s=d-c\). By the Cauchy-Schwarz inequality and Plancherel's identity, it suffices to show \[\|U_{\leq 1}^{*}|D|^{-s}W\phi\|_{L_{x}^{2}}<\infty\] for simple functions \(W\) and \(\phi\in L^{2}\). However, by (1.3) we have \[\|U_{\leq 1}^{*}|D|^{-s}W\phi\|_{L_{x}^{2}}\lesssim\|W\phi\|_{L_{t}^{1}L_{x}^{r^{\prime}}}\leq\|W\|_{L_{t}^{2}L_{x}^{\frac{d}{s}}}\|\phi\|_{L_{t}^{2}L_{x}^{2}}<\infty\] where \(r\in(2,\infty)\) is given by \(s=d(\frac{1}{2}-\frac{1}{r})\) (note \(\frac{c}{2}\) and \(\frac{d-c}{2}\) both belong to \((0,\frac{d}{2})\)). ## 5. Proof of Theorem 2.1 for \(4<q<\infty\) To complete the proof of Theorem 2.1, we consider the remaining cases where \(d\geq 1\) and \(4<q<\infty\). By duality, our goal is to show (3.8) for \(\beta^{\prime}>\frac{\widetilde{q}}{2}\) and \(2<\widetilde{q}<4\). As explained in Section 3, the basic strategy is to interpolate between (3.9) and (3.11). The former estimates were proved in Proposition 4.1 and next we prove the latter.
### Proof of (3.11) **Proposition 5.1**.: _Let \(d\geq 1\), \(2<\widetilde{q}<4\) and \(s=\frac{d-2}{2}+\frac{2}{\widetilde{q}}\). For \(\beta^{\prime}>\frac{\widetilde{q}}{2}\) we have_ \[\|W_{1}U_{\leq 1}|D|^{-2s+i\kappa}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{\beta^{\prime}}}\lessapprox_{\kappa}\|W_{1}\|_{L_{t}^{\widetilde{q},2}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q},2}L_{x}^{2}}.\] In order to prove Proposition 5.1, we show the following. **Lemma 5.2**.: _Let \(s\in\mathbb{R}\), \(k\in\mathbb{Z}\). Then_ \[\|W_{1}U_{\leq 1}|D|^{-2s+i\kappa}P_{k}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{1}}\lesssim 2^{k(d-2s)}\|W_{1}\|_{L_{t}^{2}L_{x}^{2}}\|W_{2}\|_{L_{t}^{2}L_{x}^{2}} \tag{5.1}\] _and if \(\widetilde{q_{1}},\widetilde{q_{2}}\in(2,\infty)\) are such that \(\frac{1}{\widetilde{q_{1}}}+\frac{1}{\widetilde{q_{2}}}>\frac{1}{2}\), then_ \[\|W_{1}U_{\leq 1}|D|^{-2s+i\kappa}P_{k}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{2}}\lessapprox_{\kappa}2^{k(d-2+\frac{2}{\widetilde{q_{1}}}+\frac{2}{\widetilde{q_{2}}}-2s)}\|W_{1}\|_{L_{t}^{\widetilde{q_{1}}}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q_{2}}}L_{x}^{2}}. \tag{5.2}\] Proof of Lemma 5.2.: Starting with (5.1), thanks to Lemma 3.3, it is sufficient to check \[\|\langle W_{1}U_{\leq 1}|D|^{-2s+i\kappa}P_{k}U_{\leq 1}^{*}W_{2}\phi_{j},\psi_{j}\rangle_{L_{t,x}^{2}}\|_{\ell^{1}}\lesssim 2^{k(d-2s)}\|W_{1}\|_{L_{t}^{2}L_{x}^{2}}\|W_{2}\|_{L_{t}^{2}L_{x}^{2}}\] uniformly in \(\phi,\psi\in\mathcal{B}\). In fact, we can reduce to the case \(k=0\) since \[\|\langle W_{1}U_{\leq 1}|D|^{-2s+i\kappa}P_{k}U_{\leq 1}^{*}W_{2}\phi_{j},\psi_{j}\rangle_{L_{t,x}^{2}}\|_{\ell^{1}}\] \[=2^{k(d-2s)}\|\langle W_{1}^{(k)}U_{\leq 1}|D|^{-2s+i\kappa}P_{0}U_{\leq 1}^{*}W_{2}^{(k)}\phi_{j}^{(k)},\psi_{j}^{(k)}\rangle_{L_{t,x}^{2}}\|_{\ell^{1}},\] where \(F^{(k)}\) denotes the \(L^{2}\)-normalized function given by \(F^{(k)}(t,x)=2^{-k\frac{d+2}{2}}F(\frac{t}{2^{2k}},\frac{x}{2^{k}})\).
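For the record, the rescaling above indeed preserves \(L^{2}\) norms: \[\|F^{(k)}\|_{L_{t,x}^{2}}^{2}=2^{-k(d+2)}\int_{\mathbb{R}^{1+d}}\Big{|}F\Big{(}\frac{t}{2^{2k}},\frac{x}{2^{k}}\Big{)}\Big{|}^{2}\,\mathrm{d}t\mathrm{d}x=2^{-k(d+2)}\cdot 2^{k(d+2)}\|F\|_{L_{t,x}^{2}}^{2}=\|F\|_{L_{t,x}^{2}}^{2},\] and the prefactor \(2^{k(d-2s)}\) arises from the symbol \(|\xi|^{-2s+i\kappa}\) together with the measure \(\mathrm{d}\xi\) on the annulus \(|\xi|\simeq 2^{k}\).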
So now we fix \(k=0\) and \(\phi,\psi\in\mathcal{B}\), and note \[\|\langle W_{1}U_{\leq 1}|D|^{-2s+i\kappa}P_{0}U_{\leq 1}^{*}W_{2}\phi_{j},\psi_{j}\rangle_{L_{t,x}^{2}}\|_{\ell^{1}}=\|\langle|D|^{-2s+i\kappa}P_{0}U_{\leq 1}^{*}W_{2}\phi_{j},\widetilde{P}_{0}U_{\leq 1}^{*}\overline{W}_{1}\psi_{j}\rangle_{L_{x}^{2}}\|_{\ell^{1}},\] where \(\widetilde{P}_{0}\) is given by \(\mathcal{F}(\widetilde{P}_{0}f)(\xi)=\vartheta(\xi)\widehat{f}(\xi)\) and the bump function \(\vartheta\) is chosen so that \(P_{0}=\widetilde{P}_{0}P_{0}\). Furthermore \[|D|^{-2s+i\kappa}P_{0}U_{\leq 1}^{*}W_{2}\phi_{j}(x)\simeq\langle\phi_{j},\overline{W}_{2}\tau_{x}\Phi\rangle_{L_{t,x}^{2}},\] where \(\tau_{x}\Phi(t^{\prime},x^{\prime}):=\Phi(t^{\prime},x-x^{\prime})\) and \[\Phi(t,x):=\int|\xi|^{-2s+i\kappa}\varphi(\xi)e^{it|\xi|^{2}}e^{-ix\cdot\xi}\,\mathrm{d}\xi.\] Similarly, \[\widetilde{P}_{0}U_{\leq 1}^{*}\overline{W}_{1}\psi_{j}(x)\simeq\langle\psi_{j},W_{1}\tau_{x}\Theta\rangle_{L_{t,x}^{2}},\] where \(\Theta(t,x):=\int\vartheta(\xi)e^{it|\xi|^{2}}e^{-ix\cdot\xi}\,\mathrm{d}\xi.\) In particular, observe that \(\|\Phi(t,\cdot)\|_{L^{2}}\simeq\|\Theta(t,\cdot)\|_{L^{2}}\simeq 1\). Thus, by the Cauchy-Schwarz inequality (twice) sandwiched by use of Bessel's inequality we obtain \[\|\langle|D|^{-2s+i\kappa}P_{0}U_{\leq 1}^{*}W_{2}\phi_{j},\widetilde{P}_{0}U_{\leq 1}^{*}\overline{W}_{1}\psi_{j}\rangle_{L_{x}^{2}}\|_{\ell^{1}}\leq\int_{\mathbb{R}^{d}}\|W_{2}\tau_{x}\Phi\|_{L_{t^{\prime},x^{\prime}}^{2}}\|W_{1}\tau_{x}\Theta\|_{L_{t^{\prime},x^{\prime}}^{2}}\,\mathrm{d}x\] \[\leq\bigg{(}\int_{\mathbb{R}^{d}}\|W_{2}\tau_{x}\Phi\|_{L_{t^{\prime},x^{\prime}}^{2}}^{2}\,\mathrm{d}x\bigg{)}^{\frac{1}{2}}\bigg{(}\int_{\mathbb{R}^{d}}\|W_{1}\tau_{x}\Theta\|_{L_{t^{\prime},x^{\prime}}^{2}}^{2}\,\mathrm{d}x\bigg{)}^{\frac{1}{2}}\] \[\simeq\|W_{1}\|_{L_{t^{\prime},x^{\prime}}^{2}}\|W_{2}\|_{L_{t^{\prime},x^{\prime}}^{2}}\] as desired.
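The normalization \(\|\Phi(t,\cdot)\|_{L^{2}}\simeq 1\) used above is immediate from Plancherel's theorem: \[\|\Phi(t,\cdot)\|_{L_{x}^{2}}^{2}\simeq\int_{\mathbb{R}^{d}}\big{|}|\xi|^{-2s+i\kappa}\varphi(\xi)e^{it|\xi|^{2}}\big{|}^{2}\,\mathrm{d}\xi=\int_{\mathbb{R}^{d}}|\xi|^{-4s}\varphi(\xi)^{2}\,\mathrm{d}\xi\simeq 1,\] uniformly in \(t\) and \(\kappa\), since \(\varphi\) is supported where \(|\xi|\simeq 1\); the same computation applies to \(\Theta\).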
For (5.2), one may again reduce to the case \(k=0\) by a rescaling argument. When \(k=0\) we argue more or less as in the proof of (4.1). In particular, applying the dispersive estimate (4.4) (instead of (4.3)) and the Young convolution inequality, we have \[\|W_{1}U_{\leq 1}|D|^{-2s+i\kappa}P_{0}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{2}}^{2}\lessapprox_{\kappa}\int\int\|W_{1}(t,\cdot)\|_{L^{2}}^{2}\|W_{2}(t^{\prime},\cdot)\|_{L^{2}}^{2}(1+|t-t^{\prime}|)^{-d}\,\mathrm{d}t\mathrm{d}t^{\prime}\] \[\lesssim\|W_{1}\|_{L_{t}^{\widetilde{q}_{1}}L_{x}^{2}}^{2}\|W_{2}\|_{L_{t}^{\widetilde{q}_{2}}L_{x}^{2}}^{2}\] whenever \(\widetilde{q}_{1},\widetilde{q}_{2}\in(2,\infty)\) and \(\frac{1}{\widetilde{q}_{1}}+\frac{1}{\widetilde{q}_{2}}>\frac{1}{2}\). Footnote 12: A slightly larger range of exponents is allowable for \(d\geq 2\) but this seems to be of no advantage to us. Proof of Proposition 5.1.: Now fix \(\widetilde{q}_{*}\in(2,4)\) and \(\beta_{*}^{\prime}\in(1,2)\) satisfying \(\beta_{*}^{\prime}>\frac{\widetilde{q}_{*}}{2}\), and set \(s_{*}:=\frac{d-2}{2}+\frac{2}{\widetilde{q}_{*}}\). Our goal is \[\|W_{1}U_{\leq 1}|D|^{-2s_{*}+i\kappa}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{\beta_{*}^{\prime}}}\lessapprox_{\kappa}\|W_{1}\|_{L_{t}^{\widetilde{q}_{*},2}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q}_{*},2}L_{x}^{2}}.
\tag{5.3}\] By complex interpolation between (5.1) and (5.2), it follows that \[\|W_{1}U_{\leq 1}|D|^{-2s_{*}+i\kappa}P_{k}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{\beta_{*}^{\prime}}}\lessapprox_{\kappa}2^{-\nu k}\|W_{1}\|_{L_{t}^{\widetilde{q}_{1}}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q}_{2}}L_{x}^{2}} \tag{5.4}\] for \((\frac{1}{\widetilde{q}_{1}},\frac{1}{\widetilde{q}_{2}})\in\Delta_{*}\), where \[\nu=\nu(\tfrac{1}{\widetilde{q}_{1}},\tfrac{1}{\widetilde{q}_{2}}):=2\bigg{(}1-\frac{1}{\widetilde{q}_{1}}-\frac{1}{\widetilde{q}_{2}}\bigg{)}-d+2s_{*}\quad\text{and}\quad\Delta_{*}:=\{(c_{1},c_{2})\in(0,\tfrac{1}{2})^{2}:c_{1}+c_{2}>\tfrac{1}{\beta_{*}^{\prime}}\}.\] Note that a sufficiently small neighbourhood of \((\frac{1}{\widetilde{q}_{*}},\frac{1}{\widetilde{q}_{*}})\) is contained in \(\Delta_{*}\) (see Figure 2). To sum up the estimates in (5.4) to obtain (5.3), we perform a bilinear interpolation argument in \(\Delta_{*}\) (in the spirit of Keel-Tao [38]). For this, we note that (5.4) may be reinterpreted as \[\|(W_{1}U_{\leq 1}|D|^{-2s_{*}+i\kappa}P_{k}U_{\leq 1}^{*}W_{2})_{k}\|_{\ell_{\nu}^{\infty}(\mathcal{C}^{\beta^{\prime}_{*}})}\lessapprox_{\kappa}\|W_{1}\|_{L_{t}^{\widetilde{q}_{1}}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q}_{2}}L_{x}^{2}},\qquad\nu=\nu(\tfrac{1}{\widetilde{q}_{1}},\tfrac{1}{\widetilde{q}_{2}})\] where \(\ell_{\nu}^{\infty}(\mathcal{C}^{\beta^{\prime}_{*}})\) is the weighted sequence space (of operators) with norm \[\|(T_{k})_{k}\|_{\ell_{\nu}^{\infty}(\mathcal{C}^{\beta^{\prime}_{*}})}:=\sup_{k}2^{k\nu}\|T_{k}\|_{\mathcal{C}^{\beta^{\prime}_{*}}}.\] Setting \(T:=(W_{1}U_{\leq 1}|D|^{-2s_{*}+i\kappa}P_{k}U_{\leq 1}^{*}W_{2})_{k}\), we see that, in particular, \(T\) is a bilinear operator which is bounded as follows: \[\begin{cases}T:X_{0}\times X_{0}\to Y_{0}\\ T:X_{0}\times X_{1}\to Y_{1}\\ T:X_{1}\times X_{0}\to Y_{1}.\end{cases}\] Here, \(X_{i}:=L_{t}^{a_{i}}L_{x}^{2}\) and \(Y_{i}:=\ell_{\nu_{i}}^{\infty}(\mathcal{C}^{\beta^{\prime}_{*}})\) for \(i=0,1\), and \(\frac{1}{a_{0}}=\frac{1}{\widetilde{q}_{*}}+\delta\), \(\frac{1}{a_{1}}=\frac{1}{\widetilde{q}_{*}}-2\delta\), \(\nu_{i}:=\nu(\frac{1}{a_{0}},\frac{1}{a_{i}})\). Also, \(\delta>0\) is chosen to be sufficiently small so that \((\frac{1}{a_{j}},\frac{1}{a_{i}})\in\Delta_{*}\) for \((i,j)=(0,0),(1,0),(0,1)\) (see Figure 3). By a bilinear interpolation argument (see [2, Exercise 5 (b)]) it follows that \(T\) is bounded as mapping \[T:(L_{t}^{a_{0}}L_{x}^{2},L_{t}^{a_{1}}L_{x}^{2})_{\frac{1}{3},2}\times(L_{t}^{a_{0}}L_{x}^{2},L_{t}^{a_{1}}L_{x}^{2})_{\frac{1}{3},2}\rightarrow(\ell_{\nu_{0}}^{\infty}(\mathcal{C}^{\beta^{\prime}_{*}}),\ell_{\nu_{1}}^{\infty}(\mathcal{C}^{\beta^{\prime}_{*}}))_{\frac{2}{3},1},\] which is equivalent to \[T:L_{t}^{\widetilde{q}_{*},2}L_{x}^{2}\times L_{t}^{\widetilde{q}_{*},2}L_{x}^{2}\rightarrow\ell_{0}^{1}(\mathcal{C}^{\beta^{\prime}_{*}}).\] Hence, \[\|W_{1}U_{\leq 1}|D|^{-2s_{*}+i\kappa}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{\beta^{\prime}_{*}}}\leq\sum_{k}\|W_{1}U_{\leq 1}|D|^{-2s_{*}+i\kappa}P_{k}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{\beta^{\prime}_{*}}}\lessapprox_{\kappa}\|W_{1}\|_{L_{t}^{\widetilde{q}_{*},2}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q}_{*},2}L_{x}^{2}}\] which is (5.3). ### Proof of (3.8) for \(4<q<\infty\) **Proposition 5.3**.: _Let \(d\geq 1\), \(2<\widetilde{q}<4\) and \(s=\frac{d-2}{2}+\frac{2}{\widetilde{q}}\). For \(\beta^{\prime}>\frac{\widetilde{q}}{2}\), we have_ \[\|W_{1}U_{\leq 1}|D|^{-2s}U_{\leq 1}^{*}W_{2}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim\|W_{1}\|_{L_{t}^{\widetilde{q}}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q}}L_{x}^{2}}. \tag{5.5}\] Proof.: Suppose \(\widetilde{q}\in(2,4)\), \(s=\frac{d-2}{2}+\frac{2}{\widetilde{q}}\), and \(\beta^{\prime}>\frac{\widetilde{q}}{2}\). We set \(\varepsilon:=\frac{2}{\widetilde{q}}-\frac{1}{\beta^{\prime}}\) and observe that it suffices to prove (5.5) with \(\beta^{\prime}\) sufficiently close to \(\frac{\widetilde{q}}{2}\) (i.e. \(\varepsilon>0\) sufficiently small).
Also, we fix \(\phi,\psi\in\mathcal{B}\) and note that, from Lemma 3.3, it is enough to prove \[\|\langle W_{1}U_{\leq 1}|D|^{-2s}U_{\leq 1}^{*}\overline{W}_{2}\phi_{j},\psi_{j}\rangle_{L^{2}(\mathbb{R}^{d+1})}\|_{\ell^{\beta^{\prime}}}\lesssim\|W_{1}\|_{L_{t}^{\widetilde{q}}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q}}L_{x}^{2}}. \tag{5.6}\] For this, we shall see that one can upgrade the estimates from Proposition 5.1 by making use of the Lorentz space improvement in (4.7) and analytic interpolation. Let \(\widetilde{q}_{0},\widetilde{q}_{1}\) be given by \[\frac{1}{\widetilde{q}_{0}}=\frac{1}{4}+\lambda_{0}\varepsilon,\quad\frac{1}{\widetilde{q}_{1}}=\frac{1}{2}-\lambda_{1}\varepsilon\] where \(\lambda_{0}=(\frac{q}{4}-1)\lambda_{1}\) and \(\lambda_{1}=(4-\frac{16}{q})^{-1}\). Choosing \(\varepsilon\) sufficiently small guarantees that \(\widetilde{q}_{0},\widetilde{q}_{1}\in(2,4)\). Also, let \(\beta_{1}=\varepsilon^{-1}(1-\frac{4}{q})\) and note that the choice of \(\lambda_{1}\) ensures that \(\beta^{\prime}_{1}>\frac{\widetilde{q}_{1}}{2}\). Thanks to these choices, (4.7) implies \[\|\langle W_{1}U_{\leq 1}|D|^{-2s_{0}+i\kappa}U_{\leq 1}^{*}W_{2}\phi_{j},\psi_{j}\rangle_{L^{2}(\mathbb{R}^{d+1})}\|_{\ell^{2}}\lessapprox_{\kappa}\|W_{1}\|_{L_{t}^{\widetilde{q}_{0},4}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q}_{0},4}L_{x}^{2}}\] and Proposition 5.1 and Lemma 3.3 imply \[\|\langle W_{1}U_{\leq 1}|D|^{-2s_{1}+i\kappa}U_{\leq 1}^{*}W_{2}\phi_{j},\psi_{j}\rangle_{L^{2}(\mathbb{R}^{d+1})}\|_{\ell^{\beta^{\prime}_{1}}}\lessapprox_{\kappa}\|W_{1}\|_{L_{t}^{\widetilde{q}_{1},2}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q}_{1},2}L_{x}^{2}}. \tag{5.7}\] Finally, we let \(\theta\in(0,1)\) be given by \(\theta=1-\frac{4}{q}\). By analytic interpolation (as in the proof of Proposition 4.4) and the choices of \(q_{0}\) and \(q_{1}\) above, one can check that (5.6) now follows. ## 6.
On the pointwise convergence problem In this section we consider (local) maximal-in-time estimates (Theorem 2.2) and applications to the associated pointwise convergence problem (Corollary 2.3). First of all, we give a precise definition of the density function of \(\gamma\in\mathcal{C}^{\beta,s}\). Here, for \(d\geq 1\), \(s\geq 0\), \[\mathcal{C}^{\beta,s}=\{\gamma\in\operatorname{Com}(\dot{H}^{-s}(\mathbb{R}^{d }),\dot{H}^{s}(\mathbb{R}^{d})):\||D|^{s}\gamma|D|^{s}\|_{\mathcal{C}^{\beta}( L^{2}(\mathbb{R}^{d}))}<\infty\},\] where \(\operatorname{Com}(\dot{H}^{-s}(\mathbb{R}^{d}),\dot{H}^{s}(\mathbb{R}^{d}))\) denotes the set of compact operators from \(\dot{H}^{-s}(\mathbb{R}^{d})\) to \(\dot{H}^{s}(\mathbb{R}^{d})\). Using a finite-rank approximation, if \(\gamma\in\mathcal{C}^{\beta,s}\) is self-adjoint then there exist orthonormal functions \((f_{j})_{j}\subset L^{2}(\mathbb{R}^{d})\) and \((\lambda_{j})_{j}\in\ell^{\beta}\) such that \[\gamma^{N}:=\sum_{j=1}^{N}\lambda_{j}|D|^{-s}\Pi_{f_{j}}|D|^{-s},\quad\lim_{N \to\infty}\||D|^{s}(\gamma^{N}-\gamma)|D|^{s}\|_{\mathcal{C}^{\beta}}=0.\] The density function of \(\gamma^{N}\) is defined by \[\rho_{\gamma^{N}}(x)=\sum_{j=1}^{N}\lambda_{j}||D|^{-s}f_{j}(x)|^{2}.\] We define the density function of \(\gamma\in\mathcal{C}^{\beta,s}\) as a limit of \(\rho_{\gamma^{N}}\) in the following manner. We claim that if \(s>\frac{d}{2}-\frac{\alpha}{2\beta}\), the density function \(\rho_{\gamma}\) is well-defined in \(L^{1}(\mathrm{d}\mu)\) for each \(\alpha\)-dimensional measure \(\mu\). 
For this, it suffices to see \[\bigg{\|}\sum_{j}\lambda_{j}||D|^{-s}f_{j}|^{2}\bigg{\|}_{L^{1}(\mathrm{d}\mu)}\lesssim\|\lambda\|_{\ell^{\beta}} \tag{6.1}\] for all orthonormal functions \((f_{j})_{j}\) in \(L^{2}(\mathbb{R}^{d})\) and \((\lambda_{j})_{j}\in\ell^{\beta}\) since the estimate shows that \((\rho_{\gamma^{N}})_{N}\) is a Cauchy sequence in \(L^{1}(\mathrm{d}\mu)\), and we may take \(\rho_{\gamma}\in L^{1}(\mathrm{d}\mu)\) as the limit of this sequence. In order to verify (6.1), we employ the following two estimates: \[\bigg{\|}\sum_{j}\lambda_{j}|P_{k}f_{j}|^{2}\bigg{\|}_{L^{\infty}(\mathrm{d}\mu)}\lesssim 2^{dk}\|\lambda\|_{\ell^{\infty}}, \tag{6.2}\] and for \(s>\frac{d}{2}-\frac{\alpha}{2}\), \[\bigg{\|}\sum_{j}\lambda_{j}|P_{k}f_{j}|^{2}\bigg{\|}_{L^{1}(\mathrm{d}\mu)}\lesssim 2^{2sk}\|\lambda\|_{\ell^{1}}. \tag{6.3}\] For the proof of (6.2), we refer the reader forward to the proof of (6.13). For the second estimate, we note that Barcelo _et al._ [1, Appendix A] obtained \[\big{\|}\sup_{k}|P_{k}g|\big{\|}_{L^{2}(\mathrm{d}\mu)}\lesssim\|g\|_{H^{s}(\mathbb{R}^{d})}\] if \(s>\frac{d}{2}-\frac{\alpha}{2}\), and this clearly implies (6.3) in the case \(k\geq 0\). For \(k\leq 0\), the inequality \[\sup_{x}|P_{k}g(x)|=\sup_{x}|((\mathcal{F}_{\xi}^{-1}\varphi_{k})*g)(x)|\lesssim\|\varphi_{k}\|_{L^{2}}\|g\|_{L^{2}}\sim 2^{\frac{d}{2}k}\|g\|_{L^{2}}\] implies \(\|P_{k}g\|_{L^{2}(\mathrm{d}\mu)}\lesssim 2^{\frac{d}{2}k}\|g\|_{L^{2}}\). Here, we use the notation \(A\sim B\) when both \(A\lesssim B\) and \(B\lesssim A\) hold. From the above, we see that (6.3) holds in the case \(k<0\). Finally, by using Lemma 3.1 together with (6.2) and (6.3), we conclude (6.1) for \(s>\frac{d}{2}-\frac{\alpha}{2\beta}\). ### Proof of Corollary 2.3 Corollary 2.3 may be deduced from the maximal-in-time estimates in Theorem 2.2 using well-established arguments (for example, [1, 14]).
For the sake of completeness, we include a sketch for the case \(m\in(1,\infty)\) (the case \(m\in(0,1)\) can be handled in a similar manner). First we note that an argument based on Frostman's lemma from geometric measure theory (see for example [1, 14]) means that the divergence set bound \(\dim_{H}\mathfrak{D}(\gamma_{0})\leq(d-2s)\beta\) follows if we can show that \[\lim_{t\to 0}\rho_{\gamma(t)}(x)=\rho_{\gamma_{0}}(x)\quad\text{$\mu$-a.e. $x\in\mathbb{B}^{d}$} \tag{6.4}\] holds whenever \(\mu\in\mathcal{M}^{\alpha}(\mathbb{B}^{d})\) and \(\alpha>(d-2s)\beta\). Take \(\gamma_{0}\in\mathcal{C}^{\beta,s}\) to be self-adjoint and \(\mu\in\mathcal{M}^{\alpha}(\mathbb{B}^{d})\), where \(s\in[\frac{d}{4},\frac{d}{2})\) and \(\beta\in[1,\frac{\alpha}{d-2s})\). Let \(\gamma_{0}^{N}\) be defined as above, so that \[\lim_{N\to\infty}\|\rho_{\gamma_{0}}-\rho_{\gamma_{0}^{N}}\|_{L^{1}(\mathrm{d} \mu)}=0. \tag{6.5}\] For each \(t\in\mathbb{R}\), if we set \(\gamma(t)=e^{it(-\Delta)^{m/2}}\gamma_{0}e^{-it(-\Delta)^{m/2}}\) and \(\gamma^{N}(t)=e^{it(-\Delta)^{m/2}}\gamma_{0}^{N}e^{-it(-\Delta)^{m/2}}\), then the fact that \(e^{it(-\Delta)^{m/2}}\) is unitary means that \(\rho_{\gamma(t)}\) is well-defined in \(L^{1}(\mathrm{d}\mu)\) in the same manner. In order to prove (6.4), we first note that this holds in the finite-rank case; that is, for each fixed \(N\in\mathbb{N}\), we have \(\lim_{t\to 0}\rho_{\gamma^{N}(t)}(x)=\rho_{\gamma_{0}^{N}}(x)\) (\(\mu\)-a.e. \(x\in\mathbb{B}^{d}\)). Indeed, since we may write \[\rho_{\gamma^{N}(t)}(x)=\sum_{j=1}^{N}\lambda_{j}|e^{-it(-\Delta)^{m/2}}g_{j}(x )|^{2}\] for a certain orthonormal family \((g_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R}^{d})\), the claim holds if \(\lim_{t\to 0}e^{it(-\Delta)^{m/2}}f(x)=f(x)\) (\(\mu\)-a.e. \(x\in\mathbb{B}^{d}\)) whenever \(f\in\dot{H}^{s}(\mathbb{R}^{d})\). 
By standard arguments, this follows from the maximal estimate \[\|U_{m}f\|_{L^{2}_{x}(\mathbb{B}^{d},\mathrm{d}\mu)L^{\infty}_{t}(0,1)}\lesssim\|f\|_{\dot{H}^{s}(\mathbb{R}^{d})}. \tag{6.6}\] (Although this estimate may be known in certain cases, we note that it follows from (2.5); in fact, we do not need the full power of (2.5), and the special case \(\beta=1\) suffices.) In order to extend to the infinite-rank case, we use Theorem 2.2. It suffices to prove \[\mu(\{x\in\mathbb{B}^{d}:\limsup_{t\to 0}|\rho_{\gamma(t)}(x)-\rho_{\gamma_{0}}(x)|>k^{-1}\})=0 \tag{6.7}\] for each integer \(k\geq 1\), and to see this we fix \(\varepsilon>0\) and note \[\mu(\{x\in\mathbb{B}^{d}:\limsup_{t\to 0}|\rho_{\gamma(t)}(x)-\rho_{\gamma_{0}}(x)|>k^{-1}\})\] \[\leq\mu(\{x\in\mathbb{B}^{d}:\sup_{t\in(-1,1)}|\rho_{\gamma(t)}(x)-\rho_{\gamma^{N}(t)}(x)|>(3k)^{-1}\})\] \[\quad+\mu(\{x\in\mathbb{B}^{d}:\limsup_{t\to 0}|\rho_{\gamma^{N}(t)}(x)-\rho_{\gamma_{0}^{N}}(x)|>(3k)^{-1}\})\] \[\quad+\mu(\{x\in\mathbb{B}^{d}:|\rho_{\gamma_{0}^{N}}(x)-\rho_{\gamma_{0}}(x)|>(3k)^{-1}\})=:M_{1}+M_{2}+M_{3},\] where \(N\) is to be chosen momentarily. For \(M_{3}\), by Chebyshev's inequality and (6.5), we have \[M_{3}\leq 3k\|\rho_{\gamma_{0}^{N}}-\rho_{\gamma_{0}}\|_{L^{1}_{x}(\mathrm{d}\mu)}<\varepsilon\] if we take \(N=N(\varepsilon,k)\) sufficiently large. For \(M_{1}\), we use Chebyshev's inequality and Theorem 2.2 to estimate \[M_{1}\leq 3k\|\rho_{\gamma(t)}-\rho_{\gamma^{N}(t)}\|_{L^{1}_{x}(\mathrm{d}\mu)L^{\infty}_{t}}\lesssim 3k\bigg{(}\sum_{j>N}|\lambda_{j}|^{\beta}\bigg{)}^{\frac{1}{\beta}}<\varepsilon,\] for \(N=N(\varepsilon,k)\) sufficiently large. For \(M_{2}\), by the above observation in the finite-rank case, it follows that \(M_{2}=0\) for any choice of \(N\). Hence we obtain (6.7).

### The maximal-in-time estimates

Here we prove the estimate (2.5) in Theorem 2.2 (the claims regarding sharpness in Theorem 2.2 are justified later in Section 6.3).
We recall the notation \(U_{m}=e^{it(-\Delta)^{m/2}}\). Also, throughout the following proof, we write \(L^{q}_{x}(\mathrm{d}\mu)L^{r}_{t}=L^{q}_{x}(\mathbb{R}^{d},\mathrm{d}\mu)L^{r} _{t}(0,1)\) and \(\chi_{E}\) for the characteristic function of \(E\). We consider the following cases and treat them slightly differently. * \(m\in(1,\infty)\) and \(s\in(\frac{d}{4},\frac{d}{2})\); * \(m\in(0,1)\); * \(m\in(1,\infty)\) and \(s=\frac{d}{4}\). The key oscillatory integral estimates to handle the first two cases are as follows. **Lemma 6.1**.: _Let \(d\in\mathbb{N}\) and \(\varphi\in C_{0}^{\infty}\) be supported in \(\{r\in\mathbb{R}:2^{-1}<r<2\}\). For \(m\in(0,\infty)\backslash\{1\}\), we have_ \[\sup_{t\in\mathbb{R}}\bigg{|}\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})} \varphi(2^{-k}|\xi|)\,\mathrm{d}\xi\bigg{|}\lesssim\frac{2^{dk}}{(1+2^{k}|x|) ^{\frac{d}{2}}} \tag{6.8}\] _for each \(k\in\mathbb{Z}\). For \(m\in(0,1)\) and \(|x|<1\), we further have that_ \[\sup_{t\in(-1,1)}\left|\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})}\varphi(2 ^{-k}|\xi|)\,\mathrm{d}\xi\right|\lesssim\frac{2^{\frac{d}{2}k}|x|^{-\frac{d}{2} }}{(1+2^{k}|x|^{\frac{1}{1-m}})^{\frac{d}{2}}} \tag{6.9}\] _for each \(k\in\mathbb{Z}\)._ Proof.: We change the variables to write \[\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})}\varphi(2^{-k}|\xi|)\,\mathrm{ d}\xi=2^{dk}\int_{\mathbb{R}^{d}}e^{i\theta_{k}(\xi)}\varphi(|\xi|)\,\mathrm{d} \xi=:\mathcal{I}_{k},\] where \(\theta_{k}(\xi)=2^{k}x\cdot\xi+2^{mk}t|\xi|^{m}\). Thus, (6.8) follows if \[\sup_{t\in\mathbb{R}}\bigg{|}\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})} \varphi(|\xi|)\,\mathrm{d}\xi\bigg{|}\lesssim\frac{1}{(1+|x|)^{\frac{d}{2}}}\] for any \(x\in\mathbb{R}^{d}\). 
In fact, since \(\{(\xi,|\xi|^{m}):|\xi|\in[\frac{1}{2},2]\}\) has non-vanishing Gaussian curvature, we have (see, for example, [60, Section VIII]) \[\bigg{|}\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})}\varphi(|\xi|)\,\mathrm{d}\xi\bigg{|}\lesssim\frac{1}{(1+|(x,t)|)^{\frac{d}{2}}}\] and the desired estimate follows immediately. For (6.9) we fix \(m\in(0,1)\) and \(|x|,|t|<1\), and choose constants \(a\) and \(b\) satisfying \[0<a<\frac{1}{m}2^{-(1-m)},\qquad\frac{1}{m}2^{1-m}<b.\] First we consider the case \(2^{k}|x|^{\frac{1}{1-m}}\geq a^{-\frac{1}{1-m}}\). Then we have \(2^{mk}\leq a2^{k}|x|\), and therefore \[|\nabla\theta_{k}(\xi)|\geq 2^{k}|x|-m2^{mk}|t||\xi|^{m-1}\geq(1-am2^{1-m})2^{k}|x|\simeq 2^{k}|x|\] for \(|\xi|\) in the support of \(\varphi\). This means \[|\mathcal{I}_{k}|\leq C_{N}\frac{2^{dk}}{(2^{k}|x|)^{N}}\lesssim\frac{1}{|x|^{\frac{d(2-m)}{2(1-m)}}}\] as desired. The first inequality holds for any \(N\in\mathbb{N}\) by integration by parts, and the second estimate can be checked since \(2^{-k}\lesssim|x|^{\frac{1}{1-m}}\) and by taking \(N\) sufficiently large (depending on \(d\) and \(m\)). In the remaining case \(2^{k}|x|^{\frac{1}{1-m}}<a^{-\frac{1}{1-m}}\), the goal is \[|\mathcal{I}_{k}|\lesssim\frac{2^{dk}}{(2^{k}|x|)^{\frac{d}{2}}}. \tag{6.10}\] We split into subcases:

* Case (I): \(2^{mk}|t|<a2^{k}|x|\) or \(2^{mk}|t|>b2^{k}|x|\);
* Case (II): \(a2^{k}|x|\leq 2^{mk}|t|\leq b2^{k}|x|\).

In Case (I) we have \(|\nabla\theta_{k}(\xi)|\geq C2^{k}|x|\), where \(C=1-am2^{1-m}\), or \(C=bm2^{-(1-m)}-1\), and therefore \[|\mathcal{I}_{k}|\leq C_{N}\frac{2^{dk}}{(1+2^{k}|x|)^{N}}\] for any \(N\in\mathbb{N}\). This follows either from the trivial estimate \(|\mathcal{I}_{k}|\lesssim 2^{dk}\) or by integration by parts. Taking \(N>\frac{d}{2}\) we obtain (6.10).
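Before turning to Case (II), here is a numerical illustration (not part of the proof) of the square-root decay underlying (6.8) and (6.10): we take \(d=1\), \(m=2\), and tune \(t\) so that the stationary point of the phase sits at the peak of the bump, which is the worst case. The bump and the sampled values of \(x\) are illustrative choices.

```python
import numpy as np

def bump(r):
    """Smooth bump supported in (1/2, 2), peaking at r = 1.25."""
    u = (r - 1.25) / 0.75
    out = np.zeros_like(r)
    inside = np.abs(u) < 1
    out[inside] = np.exp(1.0 - 1.0 / (1.0 - u[inside] ** 2))
    return out

xi = np.linspace(0.5, 2.0, 400_001)   # fine grid: the integrand oscillates fast
dxi = xi[1] - xi[0]
phi = bump(xi)

def osc_integral(x, t, m=2.0):
    """I(x,t) = int exp(i(x*xi + t*xi^m)) phi(xi) dxi, by Riemann sum."""
    return np.sum(np.exp(1j * (x * xi + t * xi ** m)) * phi) * dxi

vals = []
for x in [50.0, 200.0, 800.0]:
    t = -x / (2.0 * 1.25)   # puts the stationary point at xi = 1.25
    vals.append(abs(osc_integral(x, t)) * np.sqrt(1.0 + x))
# Stationary phase predicts |I| ~ sqrt(2*pi/(0.8*x)) * phi(1.25) here, so the
# rescaled values in `vals` should hover around 2.8, uniformly in x.
```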
In Case (II), we have \[|\det\operatorname{Hess}\theta_{k}(\xi)|\sim(2^{mk}|t|)^{d}\sim(2^{k}|x|)^{d}\] and therefore we again obtain (6.10) by using standard results from the theory of oscillatory integrals (see, for example, [60, Section VIII]). The following elementary lemma will also be useful to us. **Lemma 6.2** ([15]).: _Let \(0<\alpha\leq d\) and \(\mu\in\mathcal{M}^{\alpha}(\mathbb{B}^{d})\). Then, for each \(l\in\mathbb{Z}\) we have_ \[\iint|g(x)||h(x^{\prime})|\chi_{(0,2^{-l})}(|x-x^{\prime}|)\,\mathrm{d}\mu(x)\mathrm{d}\mu(x^{\prime})\lesssim 2^{-\alpha l}\|g\|_{L^{2}(\mathbb{B}^{d},\mathrm{d}\mu)}\|h\|_{L^{2}(\mathbb{B}^{d},\mathrm{d}\mu)}.\] Proof.: This is a simple consequence of the Cauchy-Schwarz inequality. Indeed, \[\iint|g(x)||h(x^{\prime})|\chi_{(0,2^{-l})}(|x-x^{\prime}|)\,\mathrm{d}\mu(x)\mathrm{d}\mu(x^{\prime})\] \[\leq\left(\int|g(x)|^{2}\mu(B(x,2^{-l}))\,\mathrm{d}\mu(x)\right)^{\frac{1}{2}}\left(\int|h(x)|^{2}\mu(B(x,2^{-l}))\,\mathrm{d}\mu(x)\right)^{\frac{1}{2}}\] \[\lesssim 2^{-\alpha l}\|g\|_{L^{2}(\mathbb{B}^{d},\mathrm{d}\mu)}\|h\|_{L^{2}(\mathbb{B}^{d},\mathrm{d}\mu)}.\] The defining property of the \(\alpha\)-dimensional measures was used at the last step.

### The case of \(m\in(1,\infty)\) and \(\frac{d}{4}<s<\frac{d}{2}\)

In this case, by duality, our goal is to prove \[\|WT\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2} \tag{6.11}\] where \(T=D^{-2s}U_{m}U_{m}^{*}\). It suffices to prove \[\sum_{k\in\mathbb{Z}}\|WT_{k}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2} \tag{6.12}\] for \(\beta\in[1,\frac{\alpha}{d-2s})\). Here, \(T_{k}:=P_{k}^{2}T\) and \(\widehat{P_{k}f}(\xi)=\varphi(2^{-k}|\xi|)\widehat{f}(\xi)\) with \(\sum_{k\in\mathbb{Z}}\varphi(2^{-k}|\xi|)^{2}=1\). In order to prove (6.12), we make a further decomposition in the spatial variable.
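Stepping back briefly, Lemma 6.2 above is simple enough to test numerically. The sketch below takes \(d=\alpha=1\) with \(\mu\) the restriction of Lebesgue measure to \([0,1]\) (so that \(\mu(B(x,r))\leq 2r\)), discretizes the double integral on a grid, and checks the bound with an explicit constant; the grid size and the random choice of \(g,h\) are illustrative.

```python
import numpy as np

n = 400
xs = (np.arange(n) + 0.5) / n          # quadrature points in [0, 1], weight 1/n
rng = np.random.default_rng(0)
g = np.abs(rng.standard_normal(n))
h = np.abs(rng.standard_normal(n))
g_l2 = np.sqrt(np.mean(g ** 2))        # ||g||_{L^2(dmu)} for dmu = dx on [0,1]
h_l2 = np.sqrt(np.mean(h ** 2))

dist = np.abs(xs[:, None] - xs[None, :])
worst = 0.0
for l in range(1, 7):
    close = dist < 2.0 ** (-l)          # indicator of |x - x'| < 2^{-l}
    lhs = (g[:, None] * h[None, :] * close).sum() / n ** 2
    worst = max(worst, lhs / (2.0 ** (-l) * g_l2 * h_l2))
# Cauchy-Schwarz as in the proof gives lhs <= sup_x mu(B(x, 2^{-l})) ||g|| ||h||
# <= 2^{1-l} ||g|| ||h||, so `worst` should be at most about 2.
```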
In particular, we write \[T_{k}F(t,x) =\int F(t^{\prime},x^{\prime})K_{k}(t-t^{\prime},x-x^{\prime})\,\mathrm{d}t^{\prime}\mathrm{d}\mu(x^{\prime})\] \[=\sum_{l\geq 0}\int F(t^{\prime},x^{\prime})K_{k,l}(t-t^{\prime},x-x^{\prime})\,\mathrm{d}t^{\prime}\mathrm{d}\mu(x^{\prime})=:\sum_{l\geq 0}T_{k,l}F(t,x),\] where \(K_{k,l}(t,x)=\chi_{l}(|x|)K_{k}(t,x)\), \(\chi_{l}=\chi_{(2^{-l-1},2^{-l})}\), and \[K_{k}(t,x)=\int e^{i(x\cdot\xi+t|\xi|^{m})}\frac{\varphi(2^{-k}|\xi|)^{2}}{|\xi|^{2s}}\,\mathrm{d}\xi.\] The key estimates are: \[\|WT_{k}\overline{W}\|_{\mathcal{C}^{1}}\lesssim 2^{(d-2s)k}\|W\|_{L^{2}(\mathrm{d}\mu)L^{2}}^{2}; \tag{6.13}\] \[\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{2}}\lesssim\frac{2^{(d-2s)k}2^{-\frac{\alpha}{2}l}}{(1+2^{k-l})^{\frac{d}{2}}}\|W\|_{L^{4}(\mathrm{d}\mu)L^{2}}^{2}; \tag{6.14}\] \[\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{\infty}}\lesssim\frac{2^{(d-2s)k}2^{-\alpha l}}{(1+2^{k-l})^{\frac{d}{2}}}\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2}. \tag{6.15}\] Before proving these, let us first see how one obtains (6.12). As one application of (6.13) we see that \[\sum_{k<0}\|WT_{k}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim\sum_{k<0}\|WT_{k}\overline{W}\|_{\mathcal{C}^{1}}\lesssim\|W\|_{L^{2}(\mathrm{d}\mu)L^{2}}^{2}\lesssim\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2}.\] To handle \(k\geq 0\), we shall first consider the case when \(\frac{\alpha}{d-2s}>2\), in which case it suffices to consider \(\beta\in(2,\frac{\alpha}{d-2s})\).
Note that (6.14) implies \[\|WT_{k}\overline{W}\|_{\mathcal{C}^{2}} \lesssim\sum_{l\leq k}\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{2}}+\sum_{l>k}\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{2}}\] \[\lesssim 2^{(d-2s)k}\|W\|_{L^{4}(\mathrm{d}\mu)L^{2}}^{2}\left(\sum_{l\leq k}\frac{2^{-\frac{\alpha}{2}l}}{2^{\frac{d}{2}(k-l)}}+\sum_{l>k}2^{-\frac{\alpha}{2}l}\right)\] \[\lesssim k2^{(d-2s-\frac{\alpha}{2})k}\|W\|_{L^{4}(\mathrm{d}\mu)L^{2}}^{2}.\] Interpolating this with (6.13) by Hölder's inequality, one obtains \[\sum_{k\geq 0}\|WT_{k}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}} \lesssim\sum_{k\geq 0}\|WT_{k}\overline{W}\|_{\mathcal{C}^{1}}^{\frac{2}{\beta^{\prime}}-1}\|WT_{k}\overline{W}\|_{\mathcal{C}^{2}}^{2-\frac{2}{\beta^{\prime}}}\] \[\lesssim\sum_{k\geq 0}k^{\frac{2}{\beta}}2^{(d-2s-\frac{\alpha}{\beta})k}\|W\|_{L^{4}(\mathrm{d}\mu)L^{2}}^{2}\] which gives (6.12) since \(\beta<\frac{\alpha}{d-2s}\). When \(\frac{\alpha}{d-2s}\leq 2\), we interpolate between (6.14) and (6.15) before summing up in \(l\). In particular we obtain \[\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim\frac{2^{(d-2s)k}2^{-\frac{\alpha}{\beta}l}}{(1+2^{k-l})^{\frac{d}{2}}}\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2}.\] Summing up both in \(k,l\geq 0\), and using that \(s\in(\frac{d}{4},\frac{d}{2})\) and \(\beta<\frac{\alpha}{d-2s}\), we obtain (6.12) in this case too. It remains to verify (6.13)-(6.15). For (6.13), since \(T_{k}=(D^{-s}P_{k}U_{m})(D^{-s}P_{k}U_{m})^{*}\), this is equivalent to \[\bigg{\|}\sum_{j}\lambda_{j}|D^{-s}P_{k}U_{m}f_{j}|^{2}\bigg{\|}_{L^{\infty}(\mathrm{d}\mu)L^{\infty}}\lesssim 2^{k(d-2s)}\|\lambda\|_{\ell^{\infty}}\] for orthonormal systems \((f_{j})_{j}\) in \(L^{2}(\mathbb{R}^{d})\). This latter estimate holds thanks to Bessel's inequality and since \(\int|\xi|^{-2s}\varphi(2^{-k}|\xi|)^{2}\,\mathrm{d}\xi\sim 2^{k(d-2s)}\).
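For completeness, the scaling used at the end of the last step is just the change of variables \(\xi=2^{k}\eta\) (the computation is the same with or without the square on \(\varphi\)):

```latex
\int_{\mathbb{R}^{d}}|\xi|^{-2s}\varphi(2^{-k}|\xi|)^{2}\,\mathrm{d}\xi
  = 2^{k(d-2s)}\int_{\mathbb{R}^{d}}|\eta|^{-2s}\varphi(|\eta|)^{2}\,\mathrm{d}\eta
  \sim 2^{k(d-2s)},
```

since the last integral is a finite nonzero constant, \(\varphi\) being supported in \(\{\frac{1}{2}<|\eta|<2\}\).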
For (6.14) and (6.15), we use (6.8) of Lemma 6.1 to obtain \[|K_{k,l}(t,x)|\lesssim\chi_{l}(|x|)\frac{2^{(d-2s)k}}{(1+2^{k-l})^{\frac{d}{2}}}.\] Lemma 6.2 now immediately yields (6.14). For (6.15), note that the above kernel estimate gives \[\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{\infty}} =\sup_{\|f_{1}\|_{2}=\|f_{2}\|_{2}=1}\left|\int f_{1}W(t,x)T_{k,l}\overline{W}f_{2}(t,x)\,\mathrm{d}t\mathrm{d}\mu(x)\right|\] \[\lesssim\frac{2^{(d-2s)k}}{(1+2^{k-l})^{\frac{d}{2}}}\sup_{\|f_{1}\|_{2}=\|f_{2}\|_{2}=1}\iint\|f_{1}W(\cdot,x)\|_{L^{1}}\|f_{2}\overline{W}(\cdot,x^{\prime})\|_{L^{1}}\chi_{l}(|x-x^{\prime}|)\,\mathrm{d}\mu(x)\mathrm{d}\mu(x^{\prime}),\] and then (6.15) follows from another application of Lemma 6.2.

### The case of \(m\in(0,1)\)

In the case \(2\alpha\leq d\beta\), the goal is to prove (6.12) for \(\frac{1}{2}(d-\frac{\alpha}{\beta})<s<\frac{d}{2}\). Since \(\frac{1}{2}(d-\frac{\alpha}{\beta})\geq\frac{d}{4}\) in this case, and since (6.8) holds for \(m\in(0,1)\) too, the above argument for \(m\in(1,\infty)\) may be used to obtain (6.12). Now suppose \(2\alpha>d\beta\), in which case we want to prove (6.11) for \(\frac{d}{4}-\frac{1}{2}(1-m)(\frac{\alpha}{\beta}-\frac{d}{2})<s<\frac{d}{2}\). Our goal is again to show (6.12) and the argument is similar to the above, except that the use of (6.8) is replaced by (6.9).
By doing so, one may obtain \[\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{2}}\lesssim\frac{2^{(\frac{d}{2}-2s)k}2^{(\frac{d}{2}-\frac{\alpha}{2})l}}{(1+2^{k-\frac{l}{1-m}})^{\frac{d}{2}}}\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2}\] and \[\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{\infty}}\lesssim\frac{2^{(\frac{d}{2}-2s)k}2^{(\frac{d}{2}-\alpha)l}}{(1+2^{k-\frac{l}{1-m}})^{\frac{d}{2}}}\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2},\] which yield \[\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim\frac{2^{(\frac{d}{2}-2s)k}2^{(\frac{d}{2}-\frac{\alpha}{\beta})l}}{(1+2^{k-\frac{l}{1-m}})^{\frac{d}{2}}}\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2}.\] Note that the restriction \(2\alpha>d\beta\) implies, in particular, that \(\beta<2\). Therefore, \[\sum_{l\geq 0}\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}} =\sum_{l\leq(1-m)k}\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}}+\sum_{l>(1-m)k}\|WT_{k,l}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}}\] \[\lesssim 2^{(\frac{d}{2}-2s)k}\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2}\left(\sum_{l\leq(1-m)k}\frac{2^{(\frac{d}{2}-\frac{\alpha}{\beta})l}}{2^{\frac{d}{2}(k-\frac{l}{1-m})}}+\sum_{l>(1-m)k}2^{(\frac{d}{2}-\frac{\alpha}{\beta})l}\right)\] \[\lesssim k2^{(\frac{d}{2}-2s+(1-m)(\frac{d}{2}-\frac{\alpha}{\beta}))k}\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2}.\] Summing in \(k\), this gives (6.12) since \(\frac{d}{2}-(1-m)(\frac{\alpha}{\beta}-\frac{d}{2})<2s\).

### The case of \(m\in(1,\infty)\) and \(s=\frac{d}{4}\)

Instead of Lemma 6.1, we shall make use of the following estimate, which for \(d=1\) appears in [58] and is applied to the single-particle case by Barcelo _et al._ [1]. To state it, we write \(\varphi_{\leq N}(|\xi|)=\sum_{k\leq N}\varphi(2^{-k}|\xi|)\) for each \(N\in\mathbb{Z}\). **Lemma 6.3**.: _Let \(d\in\mathbb{N}\)._
_For \(m\in(1,\infty)\), we have_ \[\sup_{t\in\mathbb{R}}\left|\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})}\frac{\varphi_{\leq N}(|\xi|)}{|\xi|^{\frac{d}{2}}}\,\mathrm{d}\xi\right|\lesssim|x|^{-\frac{d}{2}}\] _uniformly in \(N\in\mathbb{Z}\)._ Proof.: First notice that \[\sum_{2^{k}<|x|^{-1}}\left|\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})}\frac{\varphi(2^{-k}|\xi|)}{|\xi|^{\frac{d}{2}}}\,\mathrm{d}\xi\right|\lesssim\int_{0}^{|x|^{-1}}r^{\frac{d}{2}-1}\,\mathrm{d}r\sim|x|^{-\frac{d}{2}}.\] We set \(V=\{k\in\mathbb{Z}:|x|^{-1}\leq 2^{k}\leq 2^{N}\}\) and for \(k\in V\) we change variables to write \[\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})}\frac{\varphi(2^{-k}|\xi|)}{|\xi|^{\frac{d}{2}}}\,\mathrm{d}\xi=2^{\frac{d}{2}k}\int_{\mathbb{R}^{d}}e^{i\theta_{k}(\xi)}\widetilde{\varphi}(|\xi|)\,\mathrm{d}\xi=:\mathcal{I}_{k}\] where \(\widetilde{\varphi}(\xi):=|\xi|^{-\frac{d}{2}}\varphi(|\xi|)\), and \(\theta_{k}(\xi)=2^{k}x\cdot\xi+2^{mk}t|\xi|^{m}\). Also, we split \(V\) into \[V_{1}=\{k\in V:2^{mk}|t|<a2^{k}|x|\text{ or }2^{mk}|t|>b2^{k}|x|\}\] and \[V_{2}=\{k\in V:a2^{k}|x|\leq 2^{mk}|t|\leq b2^{k}|x|\}\] where the constants \(a\) and \(b\) are chosen as in the proof of Lemma 6.1. Since we have \(|\nabla\theta_{k}(\xi)|\gtrsim 2^{k}|x|\) whenever \(k\in V_{1}\) and \(|\xi|\) belongs to the support of \(\widetilde{\varphi}\), integration by parts yields \[|\mathcal{I}_{k}|\leq C_{M}2^{\frac{d}{2}k}(2^{k}|x|)^{-M}\] for any natural number \(M\). Thus, if we choose \(M\) sufficiently large then \[\sum_{k\in V_{1}}|\mathcal{I}_{k}|\lesssim|x|^{-M}\sum_{2^{k}\geq|x|^{-1}}2^{-k(M-\frac{d}{2})}\lesssim|x|^{-\frac{d}{2}}.\] Next consider \(k\in V_{2}\), and note that \[|\det\operatorname{Hess}\theta_{k}(\xi)|\sim(2^{km}|t|)^{d}\sim(2^{k}|x|)^{d}\] if \(\xi\) belongs to the support of \(\widetilde{\varphi}\).
Again using standard results in the theory of oscillatory integrals, we obtain \[|\mathcal{I}_{k}|\lesssim 2^{\frac{d}{2}k}(2^{k}|x|)^{-\frac{d}{2}}\sim|x|^{-\frac{d}{2}}.\] Since the cardinality of \(V_{2}\) is \(O(1)\), this completes the proof. **Remarks**.: (i) More generally, for \(m\in(1,\infty)\) and any \(\frac{d}{2}\leq\gamma<d\) we have \[\sup_{t\in\mathbb{R}}\left|\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})}\frac{\varphi_{\leq N}(|\xi|)}{|\xi|^{\gamma}}\,\mathrm{d}\xi\right|\lesssim|x|^{-(d-\gamma)} \tag{6.16}\] uniformly in \(N\in\mathbb{Z}\). The non-endpoint cases \(\frac{d}{2}<\gamma<d\) follow quickly from (6.8) and a dyadic decomposition. Moreover, we remark that (6.16) also holds for \(m\in(0,1)\). (ii) Although it is not directly of use to us here, we note that, similar to Lemma 6.1, we have for \(m\in(0,1)\), \(\gamma<\frac{d}{2}\), and \(|x|<1\) \[\sup_{t\in(-1,1)}\left|\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})}\frac{\varphi_{\leq N}(|\xi|)}{|\xi|^{\gamma}}\,\mathrm{d}\xi\right|\lesssim|x|^{-(\frac{d}{2}+\frac{1}{1-m}(\frac{d}{2}-\gamma))}\] uniformly in \(N\in\mathbb{Z}\). Now we show (6.11) for \(m>1\) and \(s=\frac{d}{4}\). The main difference from the previous arguments, needed to handle the delicate endpoint case, is that we avoid the use of a decomposition in frequency.
In fact, our goal (6.11) follows if we prove \[\|W\mathcal{T}_{l}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim 2^{(\frac{d}{2}-\frac{\alpha}{\beta})l}\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2}\] for \(\beta\in[1,\frac{2\alpha}{d})\) and uniformly in \(N\), where \[\mathcal{T}_{l}F(t,x)=\iint F(t^{\prime},x^{\prime})\mathcal{K}_{l}(t-t^{\prime},x-x^{\prime})\,\mathrm{d}t^{\prime}\mathrm{d}\mu(x^{\prime})\] and \[\mathcal{K}_{l}(t,x)=\chi_{l}(|x|)\int_{\mathbb{R}^{d}}e^{i(x\cdot\xi+t|\xi|^{m})}\frac{\varphi_{\leq N}(|\xi|)}{|\xi|^{\frac{d}{2}}}\,\mathrm{d}\xi.\] Since \(\frac{2\alpha}{d}\leq 2\), it is enough to consider the \(\mathcal{C}^{2}\) norm and the \(\mathcal{C}^{\infty}\) norm. Lemma 6.3 yields the kernel estimate \(|\mathcal{K}_{l}(t,x)|\lesssim\chi_{l}(|x|)2^{(d-2s)l}\). Hence, following a similar argument as before, we obtain \[\|W\mathcal{T}_{l}\overline{W}\|_{\mathcal{C}^{2}}\lesssim 2^{(\frac{d}{2}-\frac{\alpha}{2})l}\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2}\] and \[\|W\mathcal{T}_{l}\overline{W}\|_{\mathcal{C}^{\infty}}\lesssim 2^{(\frac{d}{2}-\alpha)l}\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2},\] and therefore \[\|W\mathcal{T}_{l}\overline{W}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim 2^{(\frac{d}{2}-\frac{\alpha}{\beta})l}\|W\|_{L^{\infty}(\mathrm{d}\mu)L^{2}}^{2}.\] This implies (6.11) since \(\frac{d}{2}<\frac{\alpha}{\beta}\).

### Necessary conditions

Here we establish two necessary conditions for the maximal-in-time estimate (2.5) to hold, and thus complete the proof of Theorem 2.2. We begin with the more delicate case of \(0<m<1\) (in which case we consider \(d=1\)). **Condition 1**.: Let \(d=1\), \(0<m<1\), and \(0<\alpha\leq 1\). We show \(1-2s\leq\frac{m}{2}+(1-m)\frac{\alpha}{\beta}\) is necessary for (2.5) to hold for all orthonormal systems \((f_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R})\). Let \(N\geq 1\), which we take to be sufficiently large later in the proof.
Define \[I_{j}=[2N^{-\alpha(1-m)}j,2N^{-\alpha(1-m)}j+N^{-(1-m)}],\] and \(\mathcal{J}=\{j\in\mathbb{Z}:I_{j}\cap[-1,1]\neq\emptyset\}\). Then, we define \(\mu\) by \[\mathrm{d}\mu(x)=cN^{(1-\alpha)(1-m)}\sum_{j\in\mathcal{J}}\chi_{I_{j}}(x)\,\mathrm{d}x,\] and claim that \(\mu(B(x,r))\lesssim r^{\alpha}\) for any \(x\in[-1,1],r>0\). To see this, first note that it clearly suffices to check the case \(r\leq 2\). Now write \(M=N^{1-m}\). In the case \(2r<M^{-\alpha}\), a ball of radius \(r\) intersects at most one interval so that \[M^{1-\alpha}\sum_{j\in\mathcal{J}}|B(x,r)\cap I_{j}|\lesssim M^{1-\alpha}\min\{r,M^{-1}\}\lesssim r^{\alpha}.\] On the other hand, if \(\frac{1}{2}M^{-\alpha}\leq r\leq 2\), and denoting \(r=\ell M^{-\alpha}\) with \(\frac{1}{2}\leq\ell\leq 2M^{\alpha}\), then the number of \(j\in\mathcal{J}\) such that \(B(x,r)\cap I_{j}\neq\emptyset\) is bounded by \(3\ell\). Therefore, we have \[M^{1-\alpha}\sum_{j\in\mathcal{J}}|B(x,r)\cap I_{j}|\lesssim\ell M^{-\alpha}\lesssim r^{\alpha}.\] It follows that \(\mu\in\mathcal{M}^{\alpha}([-1,1])\) if the constant \(c>0\) is chosen appropriately. We choose the initial data as \[f_{j}(x)=c^{\prime}N^{\frac{1}{2}-\frac{m}{4}}\varrho(N^{1-\frac{m}{2}}(x-2N^{-\alpha(1-m)}j))e^{i(x-2jN^{-\alpha(1-m)})N}.\] Here, we choose \(\varrho=\widehat{\psi}*\widehat{\psi}\), where \(\widehat{\psi}\) is a standard bump function supported on \([-\frac{1}{2},\frac{1}{2}]\). Then \(\varrho\in C_{0}^{\infty}(\mathbb{R})\), \(\operatorname{supp}\varrho\subset[-1,1]\), \(\widehat{\varrho}\) is non-negative, and \(\int_{[-1,1]^{c}}\widehat{\varrho}(\xi)\,\mathrm{d}\xi\leq(1-\varepsilon)\int_{[-1,1]}\widehat{\varrho}(\xi)\,\mathrm{d}\xi\) for some \(\varepsilon\simeq 1\) sufficiently small. From the disjoint supports, it is easy to check that \((f_{j})_{j}\) is orthonormal in \(L^{2}\) upon an appropriate choice of the constant \(c^{\prime}>0\).
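The \(\alpha\)-dimensional regularity of the measure constructed above can be confirmed numerically. Writing \(M=N^{1-m}\) as in the text, the construction places intervals of length \(M^{-1}\) with starts spaced \(2M^{-\alpha}\) apart and density \(M^{1-\alpha}\). The following sketch (taking \(c=1\) and illustrative parameters \(\alpha=0.6\), \(M=32\)) computes \(\mu(B(x,r))\) exactly for the resulting union of intervals and checks that \(\mu(B(x,r))/r^{\alpha}\) stays bounded across scales:

```python
import numpy as np

alpha, M = 0.6, 32.0                      # illustrative; M = N^{1-m} in the text
length = 1.0 / M                          # interval length
gap = 2.0 * M ** (-alpha)                 # spacing of interval start points
density = M ** (1.0 - alpha)              # height of mu on each interval (c = 1)

jmax = int(np.ceil(1.0 / gap)) + 1
starts = gap * np.arange(-jmax, jmax + 1)  # starts of the intervals I_j near [-1, 1]

def mu_ball(x, r):
    """mu(B(x, r)) computed exactly for mu = density * sum_j 1_{I_j} dx."""
    overlap = np.minimum(x + r, starts + length) - np.maximum(x - r, starts)
    return density * np.clip(overlap, 0.0, None).sum()

rng = np.random.default_rng(1)
ratios = []
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0)
    r = 2.0 ** rng.uniform(-12.0, 1.0)    # radii spread over many dyadic scales
    ratios.append(mu_ball(x, r) / r ** alpha)
# Boundedness of these ratios is exactly the alpha-dimensionality of mu.
```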
For each \(j\in\mathcal{J}\) we claim that \[|U_{m}|D|^{-s}f_{j}(t_{j}(x),x)|\gtrsim N^{\frac{1}{2}-\frac{m}{4}-s} \tag{6.17}\] whenever \(|x-2N^{-\alpha(1-m)}j|\leq\nu N^{-(1-m)}\) and \(t_{j}(x)=-m^{-1}N^{1-m}(x-2N^{-\alpha(1-m)}j)\) (for this particular choice of \(t_{j}(x)\), see also Figure 4). Here, \(\nu\simeq 1\) will be chosen sufficiently small later in the argument. We note, in particular, that \(|t_{j}(x)|<1\) is guaranteed as long as we take \(\nu<m\). It suffices to check (6.17) when \(j=0\) since \[\widehat{f}_{j}(\xi)=c^{\prime}N^{-(\frac{1}{2}-\frac{m}{4})}e^{-2ij\xi N^{- \alpha(1-m)}}\widehat{\varrho}(N^{-(1-\frac{m}{2})}(\xi-N)).\] Now \[|U_{m}|D|^{-s}f_{0}(t,x)| \simeq N^{-(\frac{1}{2}-\frac{m}{4})}\bigg{|}\int e^{i(x\xi+t|\xi |^{m})}|\xi|^{-s}\widehat{\varrho}(N^{-(1-\frac{m}{2})}(\xi-N))\,\mathrm{d}\xi \bigg{|}\] \[=N^{\frac{1}{2}-\frac{m}{4}-s}\bigg{|}\int e^{i\theta(\xi)}|1+N^{ -m/2}\xi|^{-s}\widehat{\varrho}(\xi)\,\mathrm{d}\xi\bigg{|},\] where \(\theta(\xi)=xN^{1-\frac{m}{2}}\xi+N^{m}t|1+N^{-\frac{m}{2}}\xi|^{m}-N^{m}t\). If \(|\xi|\leq 1\) then a Taylor expansion gives \[\theta(\xi)=(xN^{1-\frac{m}{2}}+mN^{\frac{m}{2}}t)\xi+tO(|\xi|^{2})\] and we observe that by choosing \(t=t_{0}(x)\) the coefficient of \(\xi\) vanishes. A careful check of the various constants reveals that \(|\theta(\xi)|\leq 4m^{-1}\nu\) whenever \(|x|\leq\nu N^{-(1-m)}\), \(t=t_{0}(x)\) and \(|\xi|\leq 1\). For such \(x\) and \(t\) we therefore have \[\bigg{|}\int_{-1}^{1}e^{i\theta(\xi)}|1+N^{-m/2}\xi|^{-s}\widehat{\varrho}(\xi) \,\mathrm{d}\xi\bigg{|}\geq(1-\tfrac{\varepsilon}{4})\int_{-1}^{1}\widehat{ \varrho}(\xi)\,\mathrm{d}\xi\] by taking \(N\) sufficiently large, and taking \(\nu\) sufficiently small. 
For the contribution for \(|\xi|>1\), by considering the cases \(|\xi|\in[1,\delta N^{\frac{m}{2}}]\) and \(|\xi|>\delta N^{\frac{m}{2}}\), and using the fast decay of \(\widehat{\varrho}\) we have \[\bigg{|}\int_{|\xi|>1}|1+N^{-m/2}\xi|^{-s}\widehat{\varrho}(\xi)\,\mathrm{d}\xi\bigg{|} \leq\frac{1}{(1-\delta)^{s}}\int_{|\xi|>1}\widehat{\varrho}(\xi)\,\mathrm{d}\xi+\frac{C}{\delta^{2}N^{\frac{m}{2}}}\] \[\leq\frac{1-\varepsilon}{(1-\delta)^{s}}\int_{-1}^{1}\widehat{\varrho}(\xi)\,\mathrm{d}\xi+\frac{C}{\delta^{2}N^{\frac{m}{2}}}\] for some \(C\simeq 1\). Taking \(\delta\) sufficiently small, and then taking \(N\) sufficiently large, we have \[\bigg{|}\int e^{i(x\xi+t|\xi|^{m})}|\xi|^{-s}\widehat{\varrho}(N^{-(1-\frac{m}{2})}(\xi-N))\,\mathrm{d}\xi\bigg{|}\geq\frac{\varepsilon}{4}\int_{-1}^{1}\widehat{\varrho}(\xi)\,\mathrm{d}\xi-\frac{C}{\delta^{2}N^{\frac{m}{2}}}\geq\frac{\varepsilon}{8}\int_{-1}^{1}\widehat{\varrho}(\xi)\,\mathrm{d}\xi\] which gives (6.17) when \(j=0\).

Figure 4. The construction of initial data for Condition 1. The grey columns have the intervals \(I_{j}\) as their bases, with height indicating \(t\); our choice of \(t=t(x)\) belongs to the orange region.

From (6.17) we obtain \[\bigg{\|}\sum_{j\in\mathcal{J}}|U_{m}|D|^{-s}f_{j}|^{2}\bigg{\|}_{L^{1}(\mathrm{d}\mu)L^{\infty}} \geq\sum_{j\in\mathcal{J}}\int_{I_{j}}\sup_{t\in[-1,1]}|U_{m}|D|^{-s}f_{j}(t,x)|^{2}\,\mathrm{d}\mu(x)\] \[\gtrsim N^{1-\frac{m}{2}-2s}.\] If we assume that (2.5) is true, then the above yields \(N^{1-\frac{m}{2}-2s}\lesssim(\#\mathcal{J})^{\frac{1}{\beta}}\lesssim N^{(1-m)\frac{\alpha}{\beta}}\), and letting \(N\to\infty\) we deduce that \(1-2s\leq\frac{m}{2}+(1-m)\frac{\alpha}{\beta}\) as desired.

**Condition 2**.: Let \(d\in\mathbb{N}\), \(m\in(0,\infty)\backslash\{1\}\) and \(0<\alpha\leq d\). We show \(d-2s\leq\frac{\alpha}{\beta}\) is necessary for (2.5) to hold for all orthonormal systems \((f_{j})_{j}\) in \(\dot{H}^{s}(\mathbb{R}^{d})\).
Let \(N\geq 1\), and define \[I_{j}=[2N^{-\frac{\alpha}{d}}j,2N^{-\frac{\alpha}{d}}j+100^{-1}N^{-1}]\subset\mathbb{R}\] for \(j\in\mathbb{Z}\) and \(\mathcal{J}=\{(j_{1},\dots,j_{d})\in\mathbb{Z}^{d}:I_{j_{1}}\times\dots\times I_{j_{d}}\cap\mathbb{B}^{d}\neq\emptyset\}\). Then, we define \(\mu\) by \[\mathrm{d}\mu(x)=cN^{d-\alpha}\sum_{\mathcal{J}}\chi_{I_{j_{1}}\times\dots\times I_{j_{d}}}(x)\,\mathrm{d}x.\] By a similar argument to the one used for Condition 1, one can easily verify that \(\mu(B(x,r))\lesssim r^{\alpha}\) for any \(x\in\mathbb{B}^{d},r>0\). Hence, with an appropriate constant \(c>0\), we have \(\mu\in\mathcal{M}^{\alpha}(\mathbb{B}^{d})\). For \(\mathbf{j}\in\mathbb{Z}^{d}\), we define the initial data \[f_{\mathbf{j}}(x)=c^{\prime}N^{\frac{d}{2}}\varrho(N(x-2N^{-\frac{\alpha}{d}}\mathbf{j})).\] Here, similar to Condition 1, we choose \(\varrho\in C_{0}^{\infty}(\mathbb{R}^{d})\) such that \(\operatorname{supp}\varrho\subset\mathbb{B}^{d}\), \(\widehat{\varrho}\) is non-negative, and \(\int_{|\xi|\geq 1}\widehat{\varrho}(\xi)\,\mathrm{d}\xi\leq(1-\varepsilon)\int_{\mathbb{B}^{d}}\widehat{\varrho}(\xi)\,\mathrm{d}\xi\) for some \(\varepsilon\simeq 1\). It is simple to check that \((f_{\mathbf{j}})_{\mathbf{j}}\) is orthonormal in \(L^{2}\) if we make an appropriate choice of the constant \(c^{\prime}>0\). Also, we claim that, for each \(\mathbf{j}\in\mathcal{J}\), we have \[|U_{m}|D|^{-s}f_{\mathbf{j}}(t,x)|\gtrsim N^{\frac{d}{2}-s}\] whenever \(|x-2N^{-\frac{\alpha}{d}}\mathbf{j}|\leq\nu N^{-1}\) and \(|t|\leq\nu N^{-m}\) (\(\nu\simeq 1\) to be chosen sufficiently small). To see this, since \[\widehat{f}_{\mathbf{j}}(\xi)=c^{\prime}N^{-\frac{d}{2}}e^{-2iN^{-\frac{\alpha}{d}}\mathbf{j}\cdot\xi}\widehat{\varrho}(N^{-1}\xi),\] it suffices to check the case \(\mathbf{j}=0\).
Now \[|U_{m}|D|^{-s}f_{0}(t,x)|\simeq N^{\frac{d}{2}-s}\bigg{|}\int e^{i\theta(\xi)}|\xi|^{-s}\widehat{\varrho}(\xi)\,\mathrm{d}\xi\bigg{|},\] where the phase is denoted by \(\theta(\xi)=Nx\cdot\xi+N^{m}t|\xi|^{m}\). If \(|x|\leq\nu N^{-1}\) and \(|t|\leq\nu N^{-m}\), then the phase is sufficiently small so that \[\bigg{|}\int_{\mathbb{B}^{d}}e^{i\theta(\xi)}|\xi|^{-s}\widehat{\varrho}(\xi)\,\mathrm{d}\xi\bigg{|}\geq(1-\tfrac{\varepsilon}{2})\int_{\mathbb{B}^{d}}\widehat{\varrho}(\xi)\,\mathrm{d}\xi.\] The contribution for \(|\xi|\geq 1\) can be easily estimated from above using the properties of \(\widehat{\varrho}\), and the claim follows. From the above we conclude that \[\bigg{\|}\sum_{\mathbf{j}\in\mathcal{J}}|U_{m}|D|^{-s}f_{\mathbf{j}}|^{2}\bigg{\|}_{L^{1}(\mathrm{d}\mu)L^{\infty}} \geq\int_{\mathbb{B}^{d}}\sup_{t\in[-1,1]}\sum_{\mathbf{j}\in\mathcal{J}}|U_{m}|D|^{-s}f_{\mathbf{j}}(t,x)|^{2}\,\mathrm{d}\mu(x)\] \[\gtrsim N^{d-2s}.\] This means that if (2.5) is true, then we obtain \(N^{d-2s}\lesssim(\#\mathcal{J})^{\frac{1}{\beta}}\lesssim N^{\frac{\alpha}{\beta}}\), and letting \(N\to\infty\) we deduce that \(d-2s\leq\frac{\alpha}{\beta}\) as desired.

_Acknowledgements._ The first author would like to express his thanks to Shohei Nakamura and Sanghyuk Lee for many inspiring conversations related to the content of this paper. Part of this work was carried out whilst the authors were participating in the MATRIX-RIMS Tandem Workshop on Geometric Analysis in Harmonic Analysis and PDE at RIMS during 27-31 March 2023, and the authors are grateful for the stimulating working environment.
2304.05242
Crediting football players for creating dangerous actions in an unbiased way: the generation of threat (GoT) indices
We introduce an innovative methodology to identify football players at the origin of threatening actions in a team. In our framework, a threat is defined as entering the opposing team's danger area. We investigate the timing of threat events and ball touches of players, and capture their correlation using Hawkes processes. Our model-based approach allows us to evaluate a player's ability to create danger both directly and through interactions with teammates. We define a new index, called Generation of Threat (GoT), that measures in an unbiased way the contribution of a player to threat generation. For illustration, we present a detailed analysis of Chelsea's 2016-2017 season, with a standout performance from Eden Hazard. We are able to credit each player for his involvement in danger creation and determine the main circuits leading to threat. In the same spirit, we investigate the danger generation process of Stade Rennais in the 2021-2022 season. Furthermore, we establish a comprehensive ranking of Ligue 1 players based on their generated threat in the 2021-2022 season. Our analysis reveals surprising results, with players such as Jason Berthomier, Moses Simon and Frederic Guilbert among the top performers in the GoT rankings. We also present a ranking of Ligue 1 central defenders in terms of generation of threat and confirm the great performance of some center-back pairs, such as Nayef Aguerd and Warmed Omari.
Ali Baouan, Sébastien Coustou, Mathieu Lacome, Sergio Pulido, Mathieu Rosenbaum
2023-04-11T14:27:36Z
http://arxiv.org/abs/2304.05242v1
Crediting football players for creating dangerous actions in an unbiased way: the generation of threat (GoT) indices.

###### Abstract

We introduce an innovative methodology to identify football players at the origin of threatening actions in a team. In our framework, a threat is defined as entering the opposing team's _danger area_. We investigate the timing of threat events and ball touches of players, and capture their correlation using Hawkes processes. Our model-based approach allows us to evaluate a player's ability to create danger both directly and through interactions with teammates. We define a new index, called _Generation of Threat_ (GoT), that measures in an unbiased way the contribution of a player to threat generation. For illustration, we present a detailed analysis of Chelsea's 2016-2017 season, with a standout performance from Eden Hazard. We are able to credit each player for his involvement in danger creation and determine the main circuits leading to threat. In the same spirit, we investigate the danger generation process of Stade Rennais in the 2021-2022 season. Furthermore, we establish a comprehensive ranking of Ligue 1 players based on their generated threat in the 2021-2022 season. Our analysis reveals surprising results, with players such as Jason Berthomier, Moses Simon and Frederic Guilbert among the top performers in the GoT rankings. We also present a ranking of Ligue 1 central defenders in terms of generation of threat and confirm the great performance of some center-back pairs, such as Nayef Aguerd and Warmed Omari.

## 1 Introduction

Which player should be credited for a successful action or sequence in a football match? In the case of a goal, the striker obviously plays an important role. However, we all have in mind goals where the striker just needs to push the ball after a great assist. In that case, the passer is certainly the most important player involved.
Some argue that the second-to-last pass is actually the most crucial component as it is often this pass that creates disequilibrium. Sometimes, we even see a clearance by a goalkeeper being at the origin of a dangerous situation. In this work, our goal is to build a quantitative and unbiased methodology enabling us to assess the importance of a player in the generation of dangerous actions. By a threat, we simply mean a situation where a player of the team of interest gets the ball in the danger area of the opposing team. The danger area is defined as a rectangular region around the opponent's goal where the likelihood of scoring from a shot is high. To achieve our objective, we need to model interactions between players, taking into account past events in the game accurately. This is because we want, for example, to be able to credit a defender for a great pass that leads to a dangerous situation after several ball touches following the initial pass. Therefore, at the timestamp where the action is considered dangerous (in our case when the ball reaches the danger area), we must "remember" the original pass of the defender. Thus, at a given time \(t\), we want to draw links between past events in the game and its future. With this objective in mind, simply relying on the current state of the game (players and ball's positions) as the information set is not enough for modeling the game accurately. It is important to consider the dynamics that occurred prior to time \(t\). This is in contrast to the so-called Markovian approach where one summarizes information obtained from the beginning of the game until time \(t\) by the state of the game at time \(t\). The Markovian setting is in fact underlying some very relevant and successful metrics introduced recently such as the expected goals (Green, 2012) and expected assists (Whitmore, 2021). 
For example, the expected goals metric estimates the probability that a shot results in a goal based on factors such as the distance to the goal and the angle of the shot, both attributes of the game state at time \(t\). The Markov assumption is in that case natural as these features give a reasonable estimate of the quality of the chance. Similarly, the expected assists aim at measuring the probability that a pass leads to a goal, by looking at a different subset of game state features, such as the type of the pass and the coordinates of the target. What these two approaches have in common is that given time \(t\) they define a value for an action (pass or shot), that is determined by the game state at time \(t\) only and does not look at the past patterns of play. In the same spirit, the expected threat introduced in (Singh, 2018) assigns a value to each game state depending only on the position of the ball. This value combines the possibilities of a direct shot or a pass to another position in quantifying the expected number of goals.

To account for the effect of past events in the future dynamics of a game, we introduce Hawkes processes (Hawkes, 1971a,b) to reproduce interactions between players. Hawkes processes are stochastic models used to model sequences of random events. They are widely used in various fields such as earthquake modeling (Adamopoulos, 1976; Ogata, 1988), neuroscience (Lambert et al., 2018; Bonnet et al., 2022a) and finance (Jaisson and Rosenbaum, 2015). In our case, the events are the times when players touch the ball. Specifically, we implement a Hawkes process with 11 components (number of players in the team), with component \(i\) corresponding to player \(i\) of the team of interest. The value of this component at time \(t\) is simply the number of times player \(i\) has touched the ball from the beginning of the game to time \(t\). At each time the player touches the ball, his corresponding component increases by one.
The innovation here is that we collect information from these timestamps and their correlations from one player to the other teammates. The specificity of Hawkes processes is that at time \(t\), the probability that player \(i\) gets the ball shortly after \(t\) depends on which players had possession of the ball before \(t\) and how long ago they had it. The impact on this probability of a player touching the ball a long time before \(t\) is negligible compared to a player who had possession right before \(t\). The ability to reproduce the decaying impact of events with time is a particularly useful property of Hawkes processes in our context. For instance, let us consider a central defender. At time \(t\), the probability that he gets the ball in the near future should be high if, in the recent past (last few seconds), he already touched the ball and/or another central defender did. On the contrary, if the forward players have held the ball for the past minute, this probability should be low. Then, we add a twelfth component to our Hawkes process that we call threat. The value of the threat component at time \(t\) is simply the number of times the ball has reached the danger area of the opposing team between the beginning of the game and time \(t\). Treating this component as part of our Hawkes process, we are able to model the influence of each player in the generation of threat. Calibrating our model allows us to assess the contribution of each player of a team to the creation of dangerous situations. We are therefore able to investigate carefully the subtle dynamics and connections leading to ominous situations. In particular, we can emphasize the crucial role of certain players that are not spotted by other statistics. Note that our calibration requires the analysis of a data set of at least ten games. So we are not evaluating each action occurring in a game but rather the global performance of players in terms of threat generation over a sequence of games. 
More precisely, the structure of Hawkes processes allows us to define the Generation of Threat (GoT) indices to objectively evaluate a player's involvement in the creation of threats over a considered series of games. These metrics quantify the expected number of dangerous situations for which a player can be credited. The direct generation of threat indices \(\text{GoT}^{d}\) and \(\text{GoT}^{d}_{90}\) measure the number of threats the player is directly responsible for generating per touch of the ball and per 90 minutes, respectively. Directly generating a threat can be viewed as being the last link in the chain of events leading to it. On the other hand, the indirect generation of threat indices \(\text{GoT}^{i}\) and \(\text{GoT}^{i}_{90}\) measure the indirect contribution per touch and per 90 minutes, respectively, adding the danger created via the interactions with other players too. In this case, we count all the instances where the player participates in the chain of events leading to the dangerous situation. As an application, we use the GoT indices to rank the Ligue 1 players in the 2021-2022 season. Not surprisingly, the top positions are dominated by established offensive players. However, we also identify some surprising picks, including Jason Berthomier, Moses Simon and Frederic Guilbert, who rank among the top twenty players. We also compare the performance of the Ligue 1 central defenders in terms of \(\text{GoT}^{i}_{90}\). Naturally, defenders from Paris Saint-Germain stand out and benefit from the offensive performance of their forwards. However, we also identify other excellent center-back pairs such as Nayef Aguerd and Warmed Omari from Stade Rennais, and Facundo Medina and Jonathan Gradit from Lens. Moreover, our approach allows us to rate these players based on their performance in specific positions in a formation, providing a tool to identify the optimal position for each player. 
Our approach has the property of being easily interpretable using the immigration-birth representation of linear Hawkes processes, see (Hawkes and Oakes, 1974). This representation induces a notion of causality between events and allows us to visualize the interactions between different event types in a graph. All player touches can be viewed as individuals in a population, and each individual independently generates offspring, that are threat events or ball touches of the same player or other players. In particular, this enables us to effectively interpret the estimated GoT metrics as a measure of the causal relationship between the player's touch and subsequent threat events. Furthermore, we can construct interaction networks of football teams and graphically analyze a team's in-game dynamics and danger creation circuits. We apply this approach to investigate games from Chelsea in the 2016-2017 season and Stade Rennais in the 2021-2022 season. We are able to effectively capture the main threat creation circuits that the opponent should try to control. Identifying specific patterns and evaluating the ability of players to create threat with our methodology paves the way to more informed decisions about tactics.

The article is organized as follows. In Section 2, we provide an overview of Hawkes processes and recall the results that are useful for our football application. Section 3 describes the event-based data we have in hand and how it is processed. Furthermore, we present the interpretation of the estimated parameters in the context of football and define the Generation of Threat (GoT) metrics. In Section 4, we briefly describe the maximum likelihood estimation methodology. We also conduct a study on simulated data to measure estimation accuracy that can be expected on real datasets depending on the amount of available data. We find that reliable estimation can be obtained from 600 minutes of football data.
Section 5 presents the results of our analysis on a collection of Chelsea games in the 2016-2017 season. In Section 6, we establish a ranking of Ligue 1 players in the 2021-2022 season based on their GoT indices. Finally, in the appendix, we present the analysis of the Stade Rennais games in the 2021-2022 Ligue 1 season.

## 2 Hawkes processes

This section provides a short overview of Hawkes processes. It includes necessary definitions and theoretical results for a better understanding of the subsequent analysis of football dynamics. As mentioned in the introduction, Hawkes processes are a class of multivariate point processes introduced in (Hawkes, 1971a). If we consider a vector \(N(t)=(N_{i}(t))_{i\in\{1,\ldots,d\}}\), where \(N_{i}(t)\) denotes the number of events for the \(i\)-th component between \(0\) and \(t\), the associated intensity process can essentially be defined as: \[\lambda_{i}\left(t\right):=\lim_{h\to 0^{+}}\frac{\mathbb{P}(N_{i}(t+h)-N_{i}(t)=1|\mathcal{F}_{t})}{h}.\] Here, \(\mathcal{F}_{t}\) is the filtration generated by \(\{N_{s},\ s<t\}\), that is the information set available at time \(t\). The intensity of a counting process determines the rate at which new jumps occur based on past events, see (Bremaud, 1981) for a more rigorous definition. In the case of Hawkes processes, the intensity is a linear combination of past jump times.

**Definition 2.1** (Hawkes process).: _A d-variate Hawkes process is a counting process \(N(t)\in\mathbb{R}^{d}\) whose \(i\)-th component is determined by its intensity of the form:_ \[\lambda_{i}(t)=\mu_{i}+\sum_{j=1}^{d}\sum_{t_{k}^{(j)}<t}\phi_{i,j}(t-t_{k}^{(j)}),\] _where the \(\left(t_{k}^{(j)}\right)_{k\geq 1}\) are the times of events for dimension \(j\) for \(j=1,\ldots,d\). \(\mu_{i}\in\mathbb{R}^{+}\) is a constant baseline intensity and \(\phi_{i,j}:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}\) is a non-negative kernel.
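To make Definition 2.1 concrete, the intensity can be evaluated directly from past event times. Below is a minimal sketch specialized to the exponential kernels used later in the paper (Definition 2.3), \(\phi_{i,j}(s)=\alpha_{i,j}e^{-\beta_{i,j}s}\); the helper name `intensity` is ours, not from the paper:

```python
import numpy as np

def intensity(t, mu, alpha, beta, event_times):
    """Intensity lambda_i(t) of a d-variate Hawkes process with exponential
    kernels phi_ij(s) = alpha_ij * exp(-beta_ij * s).
    event_times[j] holds the past event times of component j."""
    d = len(mu)
    lam = np.array(mu, dtype=float)
    for j in range(d):
        past = np.asarray(event_times[j], dtype=float)
        past = past[past < t]  # only events strictly before t contribute
        for i in range(d):
            lam[i] += np.sum(alpha[i, j] * np.exp(-beta[i, j] * (t - past)))
    return lam
```

Each past event adds a contribution that decays exponentially with the time elapsed since it occurred, which is the decaying-impact property discussed in the introduction.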
We can write the expression for the intensity in the vectorial form:_ \[\lambda(t)=\mu+\int_{0}^{t}\phi(t-s)dN(s),\] _with \(\mu\in\mathbb{R}^{+d}\) and \(\phi=\left\{\phi_{i,j}\right\}_{1\leq i,j\leq d}:\mathbb{R}^{+}\rightarrow\mathbb{R}^{d\times d}\) a non-negative matrix-valued kernel._

The underlying idea behind Hawkes processes is that a constant intensity \(\mu\) generates the initial batch of jumps across all dimensions. These jumps are random but the rate of their occurrence remains constant over time. Then, each jump increases the intensity in the near future; therefore, exciting new jumps, that in turn trigger other jumps. This leads to a chain reaction called the self-excitation property of Hawkes processes. We need to impose conditions for this system to be stable. These conditions can be stated in terms of the branching matrix defined below:

**Definition 2.2** (Branching matrix, stability).: _The branching matrix of a Hawkes process is defined as,_ \[K=\int_{0}^{\infty}\phi(t)dt=\left\{\int_{0}^{\infty}\phi_{i,j}(t)dt\right\}_{1\leq i,j\leq d}.\] _Moreover, a Hawkes process is said to be stable if \(\int_{0}^{\infty}\phi_{i,j}(t)dt<\infty\) for all \(i,j\) and if the spectral radius \(\rho(K)\) of the branching matrix satisfies:_ \[\rho(K)<1.\] _See (Jaisson and Rosenbaum, 2015) for more details._

Immigration-birth representation: Introduced in (Hawkes and Oakes, 1974), the immigration-birth representation provides an intuitive way to understand linear Hawkes processes. Let us consider a stable \(d\)-dimensional Hawkes process \(N(t)\) with a baseline intensity \(\mu\) and a kernel \(\phi\). The law of such a point process can be described through a population approach. Essentially, we consider a population where immigrants of \(d\) types arrive at random times. Each of them gives birth to children of all types. Then the children, grandchildren, great-grandchildren, etc. also give birth to children of all types.
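Numerically, the stability condition of Definition 2.2 reduces to an eigenvalue computation on the branching matrix; a minimal sketch (the helper name `is_stable` is ours):

```python
import numpy as np

def is_stable(K):
    """Stability check for a Hawkes process: the spectral radius of the
    branching matrix K must be strictly smaller than 1."""
    return float(max(abs(np.linalg.eigvals(K)))) < 1.0
```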
More precisely, the dynamic is constructed as follows:

* For \(j=1,2,\ldots,d\), we consider an instance of a Poisson process with rate \(\mu_{j}\), with its elements called immigrants of type \(j\). Generation \(0\) consists of the immigrants;
* Recursively, given generations \(0,1,\ldots,n\), each individual born at time \(s\) of type \(j\) in generation \(n\) generates its offspring of type \(i\) as an independent instance of a non-homogeneous Poisson process with rate \(\lambda_{t}^{s,n}:=\phi_{i,j}(t-s)\) for \(t\geq s\). The union of these offspring of all types constitutes generation \(n+1\).
* The point process is then defined as the union of all generations.

The resulting process has the law of a Hawkes process. In this representation, stability means each individual has less than one child on average in the case \(d=1\), which ensures some good mathematical properties for the process. From now on, we assume that all considered Hawkes processes are stable. Additionally, under this construction, \(K_{i,j}=\int_{0}^{\infty}\phi_{i,j}(t)dt\) can be interpreted as the expected number of direct children of type \(i\) of an individual of type \(j\). The following proposition provides a closed-form formula for the expected number of descendants of a single individual. It includes both immediate descendants and those from later generations. This result is derived similarly to the one-dimensional case in (Jaisson and Rosenbaum, 2015), and allows us to quantify the average number of events originating from each jump from each dimension.

**Proposition 2.1**.: _The entry \(i,j\) of the matrix \(K(I-K)^{-1}\) gives the expected number of descendants of type \(i\) generated by an individual of type \(j\)._

In this work, we estimate a branching matrix from football event-based data. We use the parametric class of exponential kernels in our estimation methodology.
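Proposition 2.1 translates directly into code, and the matrix \(K(I-K)^{-1}\) can be cross-checked against the geometric series \(K+K^{2}+K^{3}+\cdots\), which counts descendants generation by generation; a minimal sketch:

```python
import numpy as np

def expected_descendants(K):
    """Entry (i, j) of K (I - K)^{-1}: the expected number of descendants
    of type i, over all generations, of one individual of type j."""
    d = K.shape[0]
    return K @ np.linalg.inv(np.eye(d) - K)
```

For instance, a stable one-type process with \(K=0.5\) yields one expected descendant in total per individual (0.5 children, 0.25 grandchildren, and so on).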
**Definition 2.3** (Exponential kernels).: _The exponential kernel is defined as_ \[\phi_{i,j}(t)=\alpha_{i,j}e^{-\beta_{i,j}t}1_{t\geq 0},\] _where \(\alpha_{i,j}\), \(\beta_{i,j}\) are nonnegative real numbers._

Exponential kernels are particularly nice from a computational viewpoint in estimation. Additionally, their parameters are easy to interpret. In fact, the branching matrix in this case is simply given by \(K=(\frac{\alpha_{i,j}}{\beta_{i,j}})_{i,j}\) and the decay parameter \(\beta_{i,j}\) indicates the speed at which cross excitation decreases.

## 3 Event-based football data

### 3.1 Description of the data

We use the F24 files provided by Stats-perform1. Each file gives comprehensive information about a football match. Information includes the formation of each team and the position of each player on the pitch. Additionally, it lists all events occurring with the ball within the game specifying the player involved, the event type, the coordinates on the pitch, and the timestamp for each action.

Footnote 1: [https://www.statsperform.com/](https://www.statsperform.com/)

In the Stats-perform classification system, each position on the pitch is assigned a number \(p\) in \(\{1,\ldots,11\}\) for each formation. The distribution of these positions for various formations is shown in Figure 1. Our study aims at understanding the impact of ball touches in each position \(p\) in \(\{1,\ldots,11\}\) on a team's offensive performance. To ensure homogeneity, the analysis is conducted only on games where each position has the same role. For this purpose, we group formations in clusters of similar shapes as those presented below and only use matches from the most commonly used cluster for each team:

* Cluster 1: 433, 4141, 4231, 4321.
* Cluster 2: 442, 41212, 451, 4411, 4222.
* Cluster 3: 532, 352, 31312, 3511, 3412.
* Cluster 4: 343, 541, 3421.

### 3.2 Processing of the data for Hawkes inference

We study our event-based data using Hawkes processes.
Doing so, we can gain insights from timestamps of events and information about the spatial coordinates of the ball. For a given team and a list of its games in the same formation cluster, we build a 12-dimensional point process for each game. Each dimension \(p\in\{1,\ldots,11\}\) records the timestamps of ball touches by the player occupying position \(p\), regardless of his identity. The twelfth dimension represents the threat state and is triggered every time there is a ball touch by a player from the considered team in the danger area of the opponent. The danger area is defined as a box around the opposing goal covering 50% of the width of the pitch and 25% of its length, as illustrated in Figure 2. When a player has possession of the ball in this region, the probability of a shot occurring is high, see (Singh, 2018) for an estimate of the shot probability at each location on the pitch. Compared to the penalty area, the danger area is slightly closer to the midfielders and defenders, enabling us to capture more threat events generated by these positions.

The following rules are applied when constructing the process:

1. Every time a player in the considered team touches the ball, there is a jump in the dimension \(p\in\{1,\ldots,11\}\) associated with his position.
2. Every time a player in the considered team touches the ball inside the opposing threat area, there is a jump in the twelfth dimension at the corresponding timestamp. In this case, no jump is recorded in the component associated with the player.
3. Once a threat state is triggered, no jumps or time are recorded until the ball exits the danger area. We resume counting the jumps when the ball is outside the danger area by at least two meters.
4. When the ball is lost (when there is an event where the opposing team has the ball), the time and events are not recorded until the ball is won again.
Upon regaining possession, we resume recording the events in our point process by adding a random duration, with an average of twelve seconds, generated from the sum of two exponential distributions of parameter six.

5. We exclude crossing events coming from a free kick or a corner.

Figure 1: The number associated with each position for each group of formations.

Figure 2: Representation of the danger area.

Rule 3 is considered to avoid consecutive threat states. We are not interested in the auto-exciting property of the threat events. Therefore, we stop recording once a threat state is achieved and only resume when the team is outside the opposing danger area by at least two meters. In Rule 4, we want to avoid having large durations where no event occurs. This is the case every time the considered team loses the ball to the opposition. Thus, the possession times of the opponent are compressed into an average of twelve seconds. The choice of the twelve seconds threshold is based on the average duration between events, to which we add another exponential random variable as a penalization for losing the ball. The constructed point process considers possession stretches of the team to be uninterrupted. Rule 5 is implemented because the crossing events are highly correlated with threat events. In particular, the designated set piece taker of each team is naturally responsible for more threats. Therefore, we choose to discard these events to remove bias from our measure of danger creation and ensure fair player comparisons.

Given a collection of games of a team, the point processes built from each game are assembled into one process. An example of the resulting point process is shown in Figure 3. We use information on the timestamps and spatial coordinates on the field to define the threat state. The aim is to extract the causal relationship between player touches.
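The rules above can be sketched as a single pass over a chronological event stream. This is a toy illustration of Rules 1-4 only, not the authors' implementation: the tuple format `(t, position, in_danger, our_team)` is a hypothetical simplification of the F24 feed, and the two-meter hysteresis of Rule 3 as well as the crossing filter of Rule 5 are omitted.

```python
import random

def build_point_process(events, seed=0):
    """Sketch of Rules 1-4. `events` is a chronological list of
    (t, position, in_danger, our_team) tuples (hypothetical format).
    Returns jump times for the 12 components (positions 1-11 plus the
    threat component 12) on the compressed game clock."""
    rng = random.Random(seed)
    jumps = {p: [] for p in range(1, 13)}
    clock = 0.0        # compressed game clock
    prev_t = None      # real timestamp of the last event in our possession
    in_threat = False
    for t, pos, in_danger, ours in events:
        if not ours:
            if prev_t is not None:
                # Rule 4: opponent possession compressed to the sum of two
                # mean-six exponentials (twelve seconds on average)
                clock += rng.expovariate(1 / 6.0) + rng.expovariate(1 / 6.0)
                prev_t = None
            continue
        if prev_t is not None and not in_threat:
            clock += t - prev_t          # real time elapsed in our possession
        prev_t = t
        if in_danger:
            if not in_threat:            # Rule 2: threat jump, no player jump
                jumps[12].append(clock)
                in_threat = True
        else:
            in_threat = False            # Rule 3: no jumps/time while in threat
            jumps[pos].append(clock)     # Rule 1: jump for the touching position
    return jumps
```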
We are interested in identifying the positions where a ball touch is directly correlated to a future jump in the twelfth dimension, which represents a threat. We also want to measure the indirect contribution of a player to the generation of threat through his interaction with other players.

Figure 3: Example of constructed point process.

**Remark 3.1**.: _In the following, we aim at evaluating a player's performance when he plays in a specific position. To achieve this, we only consider sequences of games where the player in question is playing in that position. We record ball touches in the other positions regardless of the identity of the player occupying them. In Section 5 and Appendix A where we analyze the interactions between the starting eleven players in given teams, we only record sequences of games where the same eleven players play in their respective positions. The way we deal with substitutions is detailed for each case in Sections 5 and 6._

**Remark 3.2** (A different twelfth dimension).: _In this work, we have incorporated a twelfth dimension that tracks the instances of entering the opposing danger area. This is done because we want to identify the players who are responsible for creating the threat events. Our approach can be extended for various analyses by selecting an alternative twelfth state. For example, we can choose to record the timestamps of ball losses in the twelfth dimension instead of threats. This would enable us to identify the players who are most accountable for losing possession and measure the correlation between their touches and subsequent turnovers._

### 3.3 Generation of Threat (GoT) indices

The immigration-birth representation of Hawkes processes explained in Section 2 allows us to establish connections between the events in a football match.
Essentially, each ball touch or threat event can be seen as an individual in a population, that generates first-generation children of various types - ball touches from other players and threat situations. These offspring, in turn, generate additional ball touches or threat events etc. When we say that an event generates a ball touch or a dangerous situation, we mean that it is responsible for its occurrence. This is a subtle definition because being responsible for an action does not necessarily mean providing the pass that leads to it. In some instances, the second-to-last pass is the most crucial step in creating the dangerous situation. There may even be several events between the generating ball touch and the dangerous action. Our approach eliminates these "noisy" in-between events and associates events through parent-child connections. Hawkes processes impute the responsibility of generating a threat to the most likely parent event, even if it occurred prior to other ball touches. In particular, they allow us to quantify the average number of dangerous actions that can be attributed to a given player.

Using this population representation, we define the following GoT indices to assess the ability of a player to generate threat when he plays in a given position. The first two indices evaluate the impact of one touch of the player whereas the latter two measure the impact of the player's touches over 90 minutes.

Direct GoT per touch (GoT\({}^{d}\)): A ball touch from the player in position \(p\) generates first-generation children of type threat. We refer to these instances as the _direct_ threat events generated by the player touch. We define GoT\({}^{d}\) as the average number of these threat events that occur because of one touch from player \(p\). This metric describes the intrinsic ability of the player to create dangerous situations.
It can be calculated through the estimated branching matrix: \[\text{GoT}^{d}(p)=K_{12,p}.\]

Indirect GoT per touch (GoT\({}^{i}\)): A ball touch from a given player can be directly responsible for a threat event, but can also generate other ball touches that then generate danger. To quantify the total impact of a single player touch on the danger creation process, we use Proposition 2.1 and consider the matrix \[M=K(I-K)^{-1}.\] The coefficient \(M_{12,p}\) represents the expected number of threat events where the ball touch from the player \(p\) originates the chain of events leading to it. This includes the threat directly generated but also the one resulting from a sequence of other player touches. The difference with the GoT\({}^{d}\) index is that we credit the player touch for being at the root of the generation process and not for the crucial creative step. \[\text{GoT}^{i}(p)=M_{12,p}.\]

Direct GoT per 90 minutes (\(\text{GoT}^{d}_{90}\)): We may want to account for the involvement in the game of a given player by normalizing \(\text{GoT}^{d}\) by his expected number of touches. We define the direct GoT per 90 minutes as the expected number of dangerous actions over 90 minutes2 for which we credit the player: \[\text{GoT}^{d}_{90}=\text{\bf E}\left(N_{p}(T)\right)\times\text{GoT}^{d}(p),\] where \(T=90\) minutes. The expected number of touches vector can be approximated thanks to the law of large numbers: \[\text{\bf E}\left(N(T)\right)\approx(I-K)^{-1}\mu T.\]

Footnote 2: Note that here 90 minutes corresponds to 90 minutes of data after processing which does not translate to 90 minutes in a football match. This is notably because of the concatenation of sequences of possession explained in Section 3.2.

Indirect GoT per 90 minutes (\(\text{GoT}^{i}_{90}\)): This index measures the expected number of threats over 90 minutes where a given player is involved in the building circuit.
We define the indirect GoT per 90 minutes as the average number of threat events minus the average number of threat events if the considered player is removed from the pitch. The \(\text{GoT}^{i}_{90}\) index is therefore calculated as follows: \[\text{GoT}^{i}_{90}=\text{\bf E}\left(N_{12}(T,K,\mu)-N_{12}(T,K^{(-p)},\mu^{(-p)})\right),\] where \(K^{(-p)}\) is defined as the matrix \(K\) where the \(p^{th}\) row and \(p^{th}\) column are set to zero. Likewise, \(\mu^{(-p)}\) is defined as the vector \(\mu\) where the \(p^{th}\) coordinate is set to 0. The expected number of threats can be approximated using the branching matrix and the baseline intensity \(\mu\): \[\text{\bf E}\left(N_{12}(T,K,\mu)\right)\approx\left((I-K)^{-1}\mu T\right)_{12}.\]

**Remark 3.3**.: _Calculating the \(\text{GoT}^{i}_{90}\) by multiplying the \(\text{GoT}^{i}\) index by the average number of ball touches of the player would overestimate the player's involvement in danger creation. In fact, we would count multiple times the circuits leading to threat where the player touches the ball more than once._

Additionally, a ball touch from a player can also be responsible for generating ball touches from other players or himself. In this case as well, this is not necessarily achieved through a direct pass. Hawkes processes allow us to estimate the expected number of these generated ball touches. Similar to the \(\text{GoT}^{d}\) index definition, the branching coefficient \(K_{p_{1},p_{2}}\) indicates the expected number of touches of player \(p_{1}\) that happen because a given ball touch from player \(p_{2}\) occurred before. Representing these interaction indices in a graph helps us gain a better understanding of the danger creation process. In particular, it allows us to identify the patterns of play that end in a threat.
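Putting the four definitions together, all GoT indices can be computed from the estimated \(K\) and \(\mu\). Below is a minimal sketch assuming a stable 12-dimensional model with the threat as component 12; the function name `got_indices` is ours, not the paper's.

```python
import numpy as np

def got_indices(K, mu, p, T=90.0):
    """All four GoT indices for position p (1-based), from an estimated
    12x12 branching matrix K (threat = component 12) and baseline vector mu.
    Assumes the spectral radius of K is < 1."""
    d = K.shape[0]
    inv = np.linalg.inv(np.eye(d) - K)
    M = K @ inv                               # expected total descendants
    got_d = K[11, p - 1]                      # direct threats per touch
    got_i = M[11, p - 1]                      # threats rooted at a touch
    got_d90 = (inv @ mu * T)[p - 1] * got_d   # E[N_p(T)] x GoT^d
    # GoT^i_90: expected threats minus the expectation with position p removed
    K_m = K.copy()
    K_m[p - 1, :] = 0.0
    K_m[:, p - 1] = 0.0
    mu_m = mu.copy()
    mu_m[p - 1] = 0.0
    full = (inv @ mu * T)[11]
    removed = (np.linalg.inv(np.eye(d) - K_m) @ mu_m * T)[11]
    return got_d, got_i, got_d90, got_i90 if False else (full - removed)
```

A note on the last line: the tuple returned is `(got_d, got_i, got_d90, full - removed)`, i.e. the fourth entry is \(\text{GoT}^{i}_{90}\) computed by zeroing the \(p\)-th row and column of \(K\) and the \(p\)-th entry of \(\mu\), as in the formula above.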
## 4 Maximum Likelihood estimation

### 4.1 Likelihood of Hawkes process

This section briefly describes parameter estimation for multivariate Hawkes processes, see (Ogata et al., 1978; Bonnet et al., 2022b). Consider a \(d\)-dimensional point process \((N(t))\) on \([0,T]\) with intensity of the form \[\lambda_{i}(t,\theta^{*})=\mu_{i}^{*}+\sum\limits_{j=1}^{d}\sum\limits_{t_{k}^{(j)}<t}\alpha_{i,j}^{*}\exp\left(-\beta_{i,j}^{*}(t-t_{k}^{(j)})\right),\] where \(\theta^{*}=(\mu^{*},\alpha^{*},\beta^{*})\) are some unknown parameters. Given fixed parameters \(\theta=(\mu,\alpha,\beta)\) and a realization of the Hawkes process, the log-likelihood is calculated as follows: \[\ell(\theta)=\sum\limits_{i=1}^{d}\left(-\int_{0}^{T}\lambda_{i}(s,\theta)ds+\sum\limits_{t_{k}^{(i)}<T}\log\left(\lambda_{i}(t_{k}^{(i)},\theta)\right)\right). \tag{1}\] The maximum likelihood estimator is the parameter that maximizes the above function. It can be observed from Equation (1) that the likelihood can be separated into \(d\) distinct subfunctions, each dependent on the parameters \(\mu_{i}\) and \((\alpha_{i,j},\beta_{i,j})_{j=1,\ldots,d}\) for \(i\) in \(\{1,\ldots,d\}\). As a result, the optimization can be performed separately \(d\) different times to estimate each subset of parameters. It is shown in (Ogata et al., 1978) that this estimator is consistent. Additionally, the log-likelihood can be simplified in the case of exponential kernels and computed in time complexity of \(\mathcal{O}\left(d^{2}N(T)\right)\), see (Ogata, 1981).
For example, for \(d=1\) and \(T=t_{n}\), the likelihood is given by: \[\ell(\theta)=\sum\limits_{i=1}^{n}\log\left(\mu+\alpha R(i)\right)-\mu t_{n}+\frac{\alpha}{\beta}\sum\limits_{i=1}^{n}\left(e^{-\beta(t_{n}-t_{i})}-1\right),\] where \(R(i)=\sum\limits_{j=1}^{i-1}e^{-\beta(t_{i}-t_{j})}\) can be computed recursively for \(i\) in \(\{2,\ldots,n\}\): \[R(i)=e^{-\beta(t_{i}-t_{i-1})}\left(1+R(i-1)\right).\]

**Remark 4.1**.: _The likelihood function is not concave with respect to \((\beta_{k,l})_{k,l=1,\ldots,d}\) in the exponential case. This means that convergence to the global maximum is not guaranteed, especially in large dimensions. Fixing \(\beta_{k,l}=\beta_{k}\) for all \(l=1,\ldots,d\) as proposed by (Bonnet et al., 2022b) produces very good results for \(d=12\). In this case, each of the objective functions is not concave in only one parameter instead of \(d\)._

**Remark 4.2**.: _In the context of football, the effect of a ball touch on the intensity of the process should last no longer than a few seconds. When \(n\) realizations of football matches are concatenated and treated as one long game, the likelihood function should not be altered by much. In fact, the rapid decay of the exponential kernel compared to the duration of games makes the induced error negligible._

### 4.2 Simulation study

The goal of this section is to evaluate the maximum likelihood estimation using a simulated dataset that reproduces similar dynamics as those in a football game. We want to determine the amount of data required for an accurate estimation of the branching matrix. We also want to assess the model's ability to detect a null kernel between two dimensions. A null kernel \(\phi_{i,j}\) means a jump in dimension \(j\) has no exciting effect on dimension \(i\). In the context of football, it is particularly informative to detect such an absence of connection between players. We perform simulations over different horizons.
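The one-dimensional formula and its recursion can be evaluated in \(\mathcal{O}(n)\) time; a minimal sketch (the helper name `loglik_1d` is ours):

```python
import math

def loglik_1d(times, mu, alpha, beta):
    """Log-likelihood of a 1-d exponential Hawkes process observed on
    [0, t_n], using the O(n) recursion R(i) from the text (with R(1) = 0)."""
    tn = times[-1]
    R, logsum, comp = 0.0, 0.0, 0.0
    for i, t in enumerate(times):
        if i > 0:
            R = math.exp(-beta * (t - times[i - 1])) * (1.0 + R)
        logsum += math.log(mu + alpha * R)
        comp += math.exp(-beta * (tn - t)) - 1.0
    return logsum - mu * tn + (alpha / beta) * comp
```

Maximizing this function over \((\mu,\alpha,\beta)\) with a numerical optimizer yields the maximum likelihood estimate in the one-dimensional case.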
The parameters are sampled as follows: * \(\mu\) is chosen from a uniform random variable over \([0.006,0.01]\). * \(\beta\) is chosen to be constant for all \(i,j\) in \(\{1,\ldots,12\}\), sampled from a uniform random variable over \([0.5,1]\). * The \(\alpha_{i,j}\) are sampled independently for all \(i,j\) from a geometric distribution of parameter \(p=0.4\) scaled by \(40\), so that \(40\%\) of the values are equal to \(0\). Then we fit a 12-dimensional Hawkes process to this data using the algorithm from (Bonnet et al., 2022b). We analyze the resulting accuracy as a function of the simulation horizon. Table 1 presents the results through three different metrics: * False positive: Percentage of branching matrix coefficients \(\hat{\alpha}_{i,j}\) wrongly estimated as null when \(\alpha_{i,j}>0\). Our estimation correctly detects existing links even for small horizons. * Error on false negative: Our estimation accurately detects \(60\%\) of null links \(\alpha_{i,j}=0\). The estimated values on the remaining \(40\%\) are generally very low, as can be seen in Table 1. * Relative error: The weighted mean absolute percentage error when \(\alpha_{i,j}>0\). This metric is defined as the mean absolute error divided by the average value of \(\alpha_{i,j}\): \[\text{wMAPE}=\frac{\sum\limits_{i,j}|\hat{\alpha}_{i,j}-\alpha_{i,j}|\,\mathbb{1}_{\alpha_{i,j}>0}}{\sum\limits_{i,j}\alpha_{i,j}\mathbb{1}_{\alpha_{i,j}>0}}.\] The maximum likelihood estimate is good enough for our purposes given the high dimensionality. Figure 4 shows the estimated branching matrix from a simulation of horizon \(600\) minutes. We observe that the estimated branching matrix correctly approximates the true branching matrix shown in Figure 5. **Remark 4.3** (Confidence intervals).: _Given regularity assumptions on the kernel of the Hawkes process, we can retrieve the rate of convergence of the maximum likelihood estimator and build asymptotic confidence intervals.
We do not include confidence interval values here to ease reading, but our choice of the minimal number of games is dictated by them and the analysis in this section._ \begin{table} \begin{tabular}{c|c c c} \hline \hline **Horizon (minutes)** & **False positive** & **Error on false negative** & **Relative error** \\ \hline **300** & 1.1\% & 0.0054 & 25.7\% \\ \hline **600** & 0.0\% & 0.0045 & 19.3\% \\ \hline **1200** & 0.0\% & 0.0030 & 12.4\% \\ \hline **2400** & 0.0\% & 0.0020 & 11.4\% \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy results of the maximum likelihood estimation of Hawkes parameters on the simulated dataset.

## 5 Analysis of Chelsea FC in the 2016-2017 season

As a first example, we perform our analysis on a selection of Chelsea FC matches from the 2016-2017 season. The team had a stable formation and a constant starting eleven over thirteen games in the Premier League. This is quite convenient because we retrieve a large amount of data where each position \(p\) in \(\{1,\ldots,11\}\) is associated with one player. A similar analysis for Stade Rennais in the 2021-2022 season is provided in Appendix A.

### Selected games

In Table 2, we give the list of selected games for Chelsea FC. In each of these games, the flat 3-4-3 formation is used for at least sixty minutes and the starting eleven remains the same: * Thibaut Courtois. * Gary Cahill - David Luiz - Cesar Azpilicueta. * Marcos Alonso - Nemanja Matic - N'Golo Kante - Victor Moses. * Eden Hazard - Diego Costa - Pedro Rodriguez. Therefore, we use the data before the first substitution from Chelsea FC in each game to build the counting process.

### Results and discussion

In Table 3, we display the different GoT indices for the Chelsea players. Figure 6 graphically represents the direct interactions between players as well as their GoT\({}^{i}\) indices and Figure 7 shows the estimated branching matrix. We can identify two buildup schemes along the wings with two triangles: Cahill-Alonso-Matic and Kante-Azpilicueta-Moses.
The main channel of communication between both sides is based on the Matic-Kante link. Below is a list of observations on players: Eden Hazard: Unsurprisingly, the offensive player, ranked second in the PFA Players' Player of the Year 2017 award, leads all GoT metrics. In particular, there is no significant difference between his GoT\({}^{d}_{90}\) and GoT\({}^{i}_{90}\) indices, indicating that his primary way of creating danger is through direct threat. Hazard was well known for his aggressive and direct play as well as for his dribbling. N'Golo Kante: Ranking fourth in GoT\({}_{90}^{i}\) is evidence of Kante's important role in Chelsea's success in the 2016-2017 season. The winner of the PFA Players' Player of the Year 2017 award is definitely not limited to defense, as the numbers show that he is largely involved in danger creation. This is explained by the fact that Kante is a box-to-box midfielder and that he is at the center of multiple circuits that end in a threat: * Kante \(\rightarrow\) Pedro \(\rightarrow\) Threat.
\begin{table} \begin{tabular}{l l l l} \hline \hline **Date** & **Opponent** & **Home or Away** & **Competition** \\ \hline Oct 15, 2016 & Leicester City & Home & English Premier League \\ Oct 23, 2016 & Manchester United & Home & English Premier League \\ Oct 30, 2016 & Southampton & Away & English Premier League \\ \hline Nov 5, 2016 & Everton & Home & English Premier League \\ Nov 20, 2016 & Middlesbrough & Away & English Premier League \\ Nov 26, 2016 & Tottenham Hotspur & Home & English Premier League \\ \hline Dec 11, 2016 & West Bromwich Albion & Home & English Premier League \\ Jan 4, 2017 & Tottenham Hotspur & Away & English Premier League \\ Jan 22, 2017 & Hull City & Home & English Premier League \\ \hline Feb 4, 2017 & Arsenal & Home & English Premier League \\ Feb 12, 2017 & Burnley & Away & English Premier League \\ Apr 8, 2017 & Bournemouth & Away & English Premier League \\ \hline Apr 30, 2017 & Everton & Away & English Premier League \\ \hline \hline \end{tabular} \end{table} Table 2: List of selected games with the same starting eleven for Chelsea FC. Figure 6: Graph summarizing the interactions between Chelsea players. The width of an arrow from player \(p_{1}\) to player \(p_{2}\) is proportional to the expected number of touches of player \(p_{2}\) generated by one touch from player \(p_{1}\). The size of the circle of player \(p\) is proportional to the sum of the arrow sizes received, indicating the involvement of the player in the considered games. The color of the circle represents the GoT\({}^{i}\) index for each player. * Kante \(\rightarrow\) Moses \(\rightarrow\) Pedro \(\rightarrow\) Threat. * Kante \(\rightarrow\) Matic \(\rightarrow\) Hazard \(\rightarrow\) Threat. David Luiz: The contribution of the central defender David Luiz in the generation of threat is minimal. This is not surprising as the flat 3-4-3 system relies heavily on the wings.
David Luiz naturally passes the ball to either Gary Cahill or Azpilicueta in the build-up to spread the play. Diego Costa: Costa generates a small number of threats despite being a striker. This is expected as he is responsible for transforming the goalscoring chances rather than being at the origin of the danger. Moreover, his \(\text{GoT}^{i}_{90}\) statistic is particularly low since he has a low number of touches per time unit and many of his touches in the danger zone are not recorded in the constructed counting process. We can clearly see that considering the indirect contribution to threat generation is important for defenders and midfielders. These positions are generally at the base of the danger creation process and have small \(\text{GoT}^{d}\) indices. However, indirect generated threat combined with the consideration of the number of touches allows us to effectively compare players playing in deeper positions. From the graphical representation in Figure 6, we can identify some patterns that lead to a dangerous situation. When facing a team like Chelsea in the 2016-2017 season, some strategies can be derived from this analysis: * As illustrated in Figure 6, the right side of Chelsea combines a lot for threat generation and should be disrupted at the root. Azpilicueta should be stopped from feeding the ball to the midfielders or directly to Pedro. * The left side relies much more on the direct offensive output of Eden Hazard. In fact, all of Gary Cahill, Matic and Marcos Alonso mostly aim at delivering the ball to the left winger. To neutralize the threat of the left side, it is essential to prevent the ball from reaching Hazard. This can be achieved by marking him closely or by constantly closing the passing lanes to him.
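The arrow widths in Figure 6 are expected offspring counts derived from the estimated branching matrix. As a sketch of the standard cluster-representation computation behind such quantities (our own illustrative code; the paper's exact GoT definitions may differ), direct and total — direct plus indirect — expected counts can be obtained as follows:

```python
import numpy as np

def offspring_matrices(alpha, beta):
    """For exponential kernels, A[i, j] = alpha[i, j] / beta[i, j] is the
    expected number of events in dimension i directly triggered by one event
    in dimension j.  In the subcritical case (spectral radius of A below 1),
    summing over all generations gives the total progeny matrix
    (I - A)^{-1} - I, which also counts indirect descendants."""
    A = alpha / beta
    assert np.max(np.abs(np.linalg.eigvals(A))) < 1, "process must be subcritical"
    I = np.eye(A.shape[0])
    return A, np.linalg.inv(I - A) - I
```

For instance, in a chain where position 0 excites position 1 and position 1 excites position 2, the total matrix credits position 0 with the indirectly triggered events in position 2, which is exactly the kind of indirect contribution the GoT\({}^{i}\)-style indices reward.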
\begin{table} \begin{tabular}{l c c c c} \hline \hline **Player name** & \(\text{GoT}^{d}\) & \(\text{GoT}^{i}\) & \(\text{GoT}^{d}_{90}\) & \(\text{GoT}^{i}_{90}\) \\ \hline **Eden Hazard** & 0.16 & 0.21 & 14.2 & 15.0 \\ **Victor Moses** & 0.07 & 0.11 & 5.7 & 7.5 \\ **Pedro Rodriguez** & 0.08 & 0.12 & 5.5 & 6.7 \\ \hline **N’Golo Kante** & 0.02 & 0.07 & 2.7 & 6.2 \\ **Nemanja Matic** & 0.01 & 0.06 & 1.5 & 5.2 \\ **Marcos Alonso** & 0.02 & 0.06 & 1.9 & 5.1 \\ \hline **Diego Costa** & 0.07 & 0.10 & 3.6 & 4.8 \\ **Cesar Azpilicueta** & 0.00 & 0.04 & 0.0 & 4.1 \\ **Gary Cahill** & 0.00 & 0.04 & 0.0 & 3.0 \\ \hline **David Luiz** & 0.00 & 0.01 & 0.0 & 1.0 \\ **Thibaut Courtois** & 0.00 & 0.01 & 0.0 & 0.6 \\ \hline \hline \end{tabular} \end{table} Table 3: Generated threat metrics for the players of Chelsea FC. The table is sorted by \(\text{GoT}^{i}_{90}\). * Goalkeeper Courtois is successful in targeting Marcos Alonso directly. This passing pattern should be considered when pressing Chelsea.

## 6 Ligue 1 2021-2022 season analysis

In this section, we provide a ranking of players and teams from Ligue 1 in the 2021-2022 season based on their generation of threat. To maintain homogeneity, we only consider for each team the games where they use their main formation cluster, see Table 12 in Appendix B for the list of formation clusters of each team.

### Generated threat to rank players in a position

Each position on the pitch imposes a different role on the player who occupies it. In particular, we cannot expect the same player to produce the same GoT metrics at two different positions. Therefore, we choose to evaluate players when they play in a particular position. This approach will also allow us to determine the optimal position for a player to maximize a GoT metric of interest.
Additionally, we apply a filter to consider only players who play at least 600 minutes at a given position, with playing time calculated based on games in which the player features for at least 45 minutes. Given a player and a position, we record the games in which the player occupies the position. The remaining positions may feature different players at each game. Whenever a player from his team is substituted, we do not consider the rest of the game in the construction of the counting process. We fit a Hawkes process and assign to the player the generated threat indices of his position. Tables 4 and 5 present the top twenty players in Ligue 1 in terms of \(\text{GoT}^{d}\) and \(\text{GoT}^{i}_{90}\), respectively (see Tables 10 and 11 in Appendix B for the Top 100). We display these two indices because they quantify the two extremes of the danger generation process. \(\text{GoT}^{d}\) isolates the direct impact of players while \(\text{GoT}^{i}_{90}\) measures their participation in the chain of events leading to threats. The less a player plays in a position, the less accurate the estimate of his generated threat is. Moreover, our estimation relies on selected games only. When a player has a limited number of minutes in a position, a good GoT metric should be interpreted as a measure of performance across the considered games only. For example, Moses Simon ranking third in \(\text{GoT}^{d}\) should not be surprising as he provided seven assists in those 1200 minutes but only gave one more assist in the remaining games, when the team played in a different formation or when he played in a different position. Below are some observations based on the results: \(\text{GoT}^{d}\) vs \(\text{GoT}^{i}_{90}\): GoT\({}^{d}\) captures the intrinsic ability of a player to advance the ball to the opponent's danger area while \(\text{GoT}^{i}_{90}\) incorporates possible combinations with teammates.
Therefore, the style of play and the ability of teammates can have an impact on the value of \(\text{GoT}^{i}_{90}\). These two indices describe different ways to contribute to threat generation and allow us to select different profiles of players. For example, the Paris Saint-Germain midfielder Verratti produces high values of \(\text{GoT}^{i}_{90}\) while Moses Simon from FC Nantes features in the top positions in terms of \(\text{GoT}^{d}\). Jason Berthomier as a surprising pick: In his only season in Ligue 1, Jason Berthomier delivered excellent values of \(\text{GoT}^{i}_{90}\). The Clermont Foot midfielder ranks \(43^{rd}\) in terms of \(\text{GoT}^{d}\) and climbs up to the tenth position in the \(\text{GoT}^{i}_{90}\) ranking. This proves that he is consistently involved in the generation of dangerous situations for his team and is successful in feeding the forward players. Teji Savanier excels in midfield: Teji Savanier stands out as an interior midfielder in the 433 formation of Montpellier. With eight goals and seven assists, it is no surprise that he is central to the process of threat generation of his team.
He ranks eighth in \(\text{GoT}^{i}_{90}\) and outperforms many offensive players in the league. This confirms the quality of Teji Savanier and his good performance during the 2021-2022 season.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline **Rank** & **Name** & **Position** & **Team** & **Minutes** & \(\text{GoT}^{d}\) \\ \hline 1 & Lionel Messi & 10 & Paris Saint-Germain & 630 & 0.130 \\ 2 & Angel Di Maria & 10 & Paris Saint-Germain & 1171 & 0.128 \\ 3 & Moses Simon & 11 & Nantes & 1222 & 0.120 \\ \hline 4 & Kylian Mbappé & 9 & Paris Saint-Germain & 1338 & 0.110 \\ 5 & Lionel Messi & 9 & Paris Saint-Germain & 675 & 0.109 \\ 6 & Martin Terrier & 11 & Rennes & 1386 & 0.108 \\ \hline 7 & Kylian Mbappé & 11 & Paris Saint-Germain & 1066 & 0.107 \\ 8 & Romain Faivre & 7 & Brest & 630 & 0.106 \\ 8 & Houssem Aouar & 7 & Lyon & 810 & 0.106 \\ \hline 10 & Sofiane Boufal & 9 & Angers & 771 & 0.100 \\ 11 & Jonathan Ikone & 7 & Lille & 767 & 0.096 \\ 12 & Wissam Ben Yedder & 9 & Monaco & 1625 & 0.094 \\ \hline 13 & Franck Honorat & 11 & Brest & 838 & 0.093 \\ 13 & Karl Toko-Ekambi & 11 & Lyon & 1855 & 0.093 \\ 15 & Benjamin Bourigeaud & 10 & Rennes & 1719 & 0.092 \\ \hline 16 & Sofiane Boufal & 11 & Angers & 665 & 0.091 \\ 17 & Justin Kluivert & 11 & Nice & 1207 & 0.090 \\ 19 & Dimitri Payet & 9 & Marseille & 617 & 0.088 \\ \hline 19 & Kevin Gameiro & 10 & Strasbourg & 673 & 0.088 \\ 19 & Neymar & 11 & Paris Saint-Germain & 1258 & 0.088 \\ \hline \hline \end{tabular} \end{table} Table 4: Ranking of Ligue 1 players in terms of \(\text{GoT}^{d}\).

A defender in the Top 20: Frederic Guilbert of Strasbourg is a defender who excels at creating threats, ranking \(18^{th}\) in GoT\({}^{i}_{90}\). In fact, his team deploys a 532 formation that provides enough cover for the fullbacks to play offensively. The same holds for Jonathan Clauss, who acts almost as a right midfielder in the Lens formation and ranks \(33^{rd}\) in GoT\({}^{i}_{90}\).
This is also not surprising as Clauss ranks third in the league in the number of passes that lead to a shot, another proof of his creative play. A good season from Messi in generated threat: Despite underperforming in terms of scoring goals, Lionel Messi delivers outstanding values of generated threat, both directly given his dribbling and passing quality, and indirectly given his involvement in ball possession. Additionally, we observe that his performance increases slightly when playing in his natural position as a 10 in the 433 formation. The right wing is Messi's best position as he poses more of a threat cutting inside from the right. Optimal position for some players: Romain Faivre stands out in both GoT\({}^{d}\) and GoT\({}^{i}_{90}\), ranking among the top twenty players. This is in fact expected because, when playing as a right midfielder in the 442 formation of Brest, the player performed well and was involved in six goals in just 660 minutes. Similarly, Houssem Aouar was successful as an interior midfielder in the 433 formation. He scored three goals and assisted three more in the considered period, earning him a top spot on our list. A metric that does not value center forwards: Very few strikers make the Top 20 in the two metrics. This is because the role of some center forwards is to receive the ball in the danger area and not necessarily to be at the origin of the threat. This is even more pronounced when looking at GoT\({}^{i}_{90}\). For example, Mbappé, the top scorer in the league, barely makes it to the Top 20. Mbappé is not known for participating in possession and touching the ball a lot, but as an aggressive transition player. In contrast, midfielders such as Verratti and Guimaraes, who are involved in the build-up of a lot of dangerous situations, feature in the top positions in terms of GoT\({}^{i}_{90}\).
### Ranking the central defenders' involvement in terms of GoT

To quantify the involvement of central defenders in danger creation, we use the indirect generation of threat per 90 minutes. This is because the direct generation of threat (GoT\({}^{d}\)) values are particularly low for defenders and therefore cannot be used to compare players. While GoT\({}^{i}_{90}\) is influenced by the quality of the offensive players and the team style of play, it also provides valuable information on the role of defenders in the team's build-up scheme. For instance, a center-back who is technically proficient but avoids taking risks and does not contribute much to ball progression will have a low value of GoT\({}^{i}_{90}\). This metric strikes a balance in measuring a player's intrinsic ability as well as their involvement within the team. Table 6 displays the central defenders with the highest values of GoT\({}^{i}_{90}\). It is no surprise that Marquinhos and Kimpembe take the first two spots, given that they are part of Paris Saint-Germain, the most dominant team in Ligue 1. This is of course due to their technical ability, but there is also a factor due to the high possession values and danger creation ability of their team. The same holds for Nayef Aguerd and Warmed Omari, who contribute significantly to ball progression, primarily through accurate long balls. The third-placed player is Facundo Medina. The Lens defender is well known for his range of passing and for his ability to switch play from one side to the other. In particular, he ranks tenth in the league in terms of accurate passes per 90 minutes. William Saliba naturally completes the Top 5. The Marseille player excels with the ball at his feet and ranks third in accurate passing in Figure 1. The player has now moved to Arsenal, a team that likes to play from the back, and continues to deliver in that aspect of the game.
### GoT\({}^{d}\) to rank teams To verify the consistency of our metrics, we rank Ligue 1 teams based on their aggregate values of GoT\({}^{d}\). This metric can be considered as an indicator of squad quality. For each club, we fit a 12-dimensional Hawkes process to all matches in which they use their primary formation cluster, regardless of the players occupying each position. We then sum the estimated direct threat per touch GoT\({}^{d}\) for all the positions. Table 7 shows the resulting Top 10 based on generated threat. Our metric describes an important part of the offensive performance but obviously does not cover all aspects of the game. Nevertheless, it remains a very good measure of the quality of the team. Our ranking shows a significant 62% Kendall correlation with the realized ranking of Ligue 1. This is achieved while only looking at ball touch and threat event timestamps to infer player abilities. Below are some observations from the ranking: * Rennes climbs to the second position in our ranking. This is because the team was very attack-minded in the 2021-2022 season and managed to score 82 goals, one of the highest totals in Europe. Their expected threat is proof of their offensive output. 
* Olympique Lyonnais, ranked eighth in Ligue 1, still had a very prolific season offensively. They have the third-highest total of goals and the second-highest total of expected goals. It is therefore natural that they are fourth with respect to our offensive metric.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline **Rank** & **Name** & **Position** & **Team** & **Minutes** & \(\text{GoT}^{i}_{90}\) \\ \hline 1 & Lionel Messi & 10 & Paris Saint-Germain & 630 & 14.911 \\ 2 & Angel Di Maria & 10 & Paris Saint-Germain & 1171 & 13.218 \\ 3 & Neymar & 11 & Paris Saint-Germain & 1258 & 12.724 \\ \hline 4 & Marco Verratti & 4 & Paris Saint-Germain & 602 & 12.581 \\ 5 & Lionel Messi & 9 & Paris Saint-Germain & 675 & 12.353 \\ 6 & Romain Faivre & 7 & Brest & 630 & 10.402 \\ \hline 7 & Houssem Aouar & 7 & Lyon & 810 & 10.077 \\ 8 & Teji Savanier & 7 & Montpellier & 2209 & 9.608 \\ 9 & Marco Verratti & 8 & Paris Saint-Germain & 1069 & 9.446 \\ \hline 10 & Jason Berthomier & 7 & Clermont & 1244 & 9.340 \\ 11 & Benjamin Bourigeaud & 10 & Rennes & 1719 & 9.211 \\ 12 & Sofiane Boufal & 9 & Angers & 771 & 9.100 \\ \hline 13 & Bruno Guimaraes & 4 & Lyon & 900 & 8.817 \\ 14 & Dimitri Payet & 9 & Marseille & 617 & 8.815 \\ 15 & Moses Simon & 11 & Nantes & 1222 & 8.790 \\ \hline 16 & Martin Terrier & 11 & Rennes & 1386 & 8.639 \\ 17 & Kylian Mbappé & 11 & Paris Saint-Germain & 1066 & 8.577 \\ 18 & Frederic Guilbert & 2 & Strasbourg & 2428 & 8.421 \\ \hline 19 & Ruben Aguilar & 2 & Monaco & 1205 & 8.019 \\ 20 & Lovro Majer & 7 & Rennes & 1302 & 7.927 \\ \hline \hline \end{tabular} \end{table} Table 5: Ranking of Ligue 1 players in terms of GoT\({}^{i}_{90}\).

## 7 Conclusion and future work

In order to measure a player's ability to create threat in football, we develop model-based metrics that rely on Hawkes processes. These processes provide an easy-to-interpret way to capture causation between event times.
\begin{table} \begin{tabular}{l l l l l l} \hline \hline **Rank** & **Name** & **Position** & **Team** & **Minutes** & \(\text{GoT}^{i}_{90}\) \\ \hline 1 & Marquinhos & 5 & Paris Saint-Germain & 2340 & 5.625 \\ 2 & Presnel Kimpembe & 6 & Paris Saint-Germain & 1840 & 5.230 \\ 3 & Facundo Medina & 4 & Lens & 1329 & 4.953 \\ \hline 4 & Nayef Aguerd & 6 & Rennes & 1698 & 4.908 \\ 5 & William Saliba & 5 & Marseille & 1800 & 4.652 \\ 6 & Jason Denayer & 6 & Lyon & 630 & 4.591 \\ \hline 7 & Jonathan Gradit & 6 & Lens & 1710 & 4.535 \\ 8 & Warmed Omari & 5 & Rennes & 1710 & 4.407 \\ 9 & Damien Da Silva & 5 & Lyon & 612 & 4.182 \\ \hline 10 & Dante & 6 & Nice & 2880 & 3.707 \\ 11 & Duje Caleta-Car & 6 & Marseille & 1397 & 3.462 \\ 12 & Lucas Perrin & 6 & Strasbourg & 2329 & 3.185 \\ \hline 13 & Kevin Danso & 5 & Lens & 1620 & 3.105 \\ 14 & Benoit Badiashile & 6 & Monaco & 975 & 3.075 \\ 15 & Castello Lukeba & 6 & Lyon & 1375 & 3.074 \\ \hline 16 & Mamadou Sakho & 6 & Montpellier & 1962 & 2.998 \\ 17 & Florent Ogier & 6 & Clermont & 2329 & 2.949 \\ 18 & Guillermo Maripán & 6 & Monaco & 810 & 2.916 \\ \hline 19 & Guillermo Maripán & 5 & Monaco & 605 & 2.915 \\ 20 & Jean-Clair Todibo & 5 & Nice & 3123 & 2.864 \\ \hline \hline \end{tabular} \end{table} Table 6: Ranking of Ligue 1 central defenders in terms of \(\text{GoT}^{i}_{90}\).

\begin{table} \begin{tabular}{l l l l} \hline \hline **Team** & \(\text{GoT}^{d}\) & **Ligue 1 ranking** & **Goals scored** \\ \hline **Paris Saint-Germain** & 0.42 & 1 & 90 \\ **Rennes** & 0.41 & 4 & 82 \\ **Monaco** & 0.41 & 3 & 65 \\ \hline **Lyon** & 0.36 & 8 & 66 \\ **Marseille** & 0.36 & 2 & 63 \\ **Lens** & 0.30 & 7 & 62 \\ \hline **Nice** & 0.28 & 5 & 52 \\ **Strasbourg** & 0.26 & 6 & 60 \\ **Lille** & 0.26 & 10 & 48 \\ \hline **Reims** & 0.26 & 12 & 43 \\ \hline \hline \end{tabular} \end{table} Table 7: Top 10 Ligue 1 teams with respect to aggregated \(\text{GoT}^{d}\) of starting eleven.

Thanks to this modeling, we are able to identify the players whose touches are most consistently correlated with subsequent threats. We derive four different metrics, each describing a different way to create danger. On the one hand, the direct generation of threat metrics \(\mathrm{GoT}^{d}\) and \(\mathrm{GoT}^{d}_{90}\) allow us to isolate the intrinsic ability of players. On the other hand, \(\mathrm{GoT}^{i}\) and \(\mathrm{GoT}^{i}_{90}\) indicate the indirect contribution to the generation of threat through interactions with other positions. Beyond crediting players for danger generation, our approach can also be used to quantify and visualize the synergies between players on the pitch and identify the patterns that lead to dangerous situations. We demonstrate that our methodology can successfully detect and rank the key players in the 2021-2022 Ligue 1 season who contribute to their team's offensive output. The results we find are consistent with the observed performances of the retrieved players, but also reveal some surprising choices. Through the example of Chelsea in the 2016-2017 season, we show that our model-based approach can help teams make data-driven decisions about their tactics. By primarily looking at timestamps of ball touches, we gain a deeper understanding of the threat generation process of a team. Future work will include exploring the application of our model-based metrics for optimal team selection. In fact, if we are capable of inferring the branching matrix parameters linking players from different teams, we can measure the impact of a potential transfer on the danger creation process.
In addition, we can use this framework to capture interactions of players with game states other than threats. In particular, by replacing the threat events with ball losses, we can effectively analyze the defensive aspect of the game and determine the players whose touches are most correlated with a turnover.

Acknowledgment: The authors thank Anna Bonnet for her help with the estimation of Hawkes processes in large dimensions. They are also grateful to Charlotte Dion and Celine Duval. The authors gratefully acknowledge financial support from the chairs "Machine Learning & Systematic Methods in Finance" and "Deep Finance and Statistics".
2303.08919
Short-lived Radionuclides in the Milky Way Galaxy
The short-lived radionuclides (SLRs) have a half-life $\leq$ 100 Myr. The $\gamma$-ray observations and excess abundance of their daughter nuclides in various meteoritic phases confirm the existence of SLRs in the Galaxy and early solar system (ESS), respectively. In this work, we have developed Galactic Chemical Evolution (GCE) models for SLRs, $^{26}$Al, and $^{60}$Fe along with $^{36}$Cl, $^{41}$Ca, and $^{53}$Mn. These models predict the temporal and spatial evolution of SLR abundance trends in the Galaxy from 2-18 kpc. The abundance of two SLRs, $^{26}$Al, and $^{60}$Fe, are investigated further, as their $\gamma$-ray observations are available for comparison with the model predictions. The predictions for the abundance per unit area for each ring decrease from the inner to outer regions of the Galaxy.The GCE predictions for the total mass of alive $^{26}$Al, and $^{60}$Fe in 2-18 kpc of the Galaxy at present time are 0.2 M$_\odot$ and 0.08 M$_\odot$, respectively.
Tejpreet Kaur
2023-03-15T20:26:45Z
http://arxiv.org/abs/2303.08919v1
# Short-lived Radionuclides in the Milky Way Galaxy

###### Abstract

The short-lived radionuclides (SLRs) have a half-life \(\leq\) 100 Myr. The \(\gamma\)-ray observations and excess abundance of their daughter nuclides in various meteoritic phases confirm the existence of SLRs in the Galaxy and early solar system (ESS), respectively. In this work, we have developed Galactic Chemical Evolution (GCE) models for the SLRs \({}^{26}\)Al and \({}^{60}\)Fe, along with \({}^{36}\)Cl, \({}^{41}\)Ca, and \({}^{53}\)Mn. These models predict the temporal and spatial evolution of SLR abundance trends in the Galaxy from 2-18 kpc. The abundances of two SLRs, \({}^{26}\)Al and \({}^{60}\)Fe, are investigated further, as their \(\gamma\)-ray observations are available for comparison with the model predictions. The predictions for the abundance per unit area for each ring decrease from the inner to outer regions of the Galaxy. The GCE predictions for the total mass of alive \({}^{26}\)Al and \({}^{60}\)Fe in 2-18 kpc of the Galaxy at the present time are 0.2 M\({}_{\odot}\) and 0.08 M\({}_{\odot}\), respectively.

## 1 Introduction

Short-lived radionuclides (SLRs) are nuclides with a half-life of the order of a few million years. These nuclides can be formed in various stellar environments in the Galaxy. The presence of the SLRs is vital evidence that star formation is an ongoing process in the Galaxy. The main production sites of \({}^{26}\)Al are Asymptotic Giant Branch (AGB) stars, Core collapse Supernovae (CCSNe), novae and Wolf-Rayet (WR) stars. For \({}^{60}\)Fe, the primary production happens through neutron capture reactions inside CCSNe. These are the only two short-lived radionuclides for which the \(\gamma\)-ray observations can provide evidence of their presence in the star-forming regions. For the other SLRs, \({}^{36}\)Cl, \({}^{41}\)Ca, and \({}^{53}\)Mn, CCSNe are the major production sites, except for \({}^{53}\)Mn, which can also be produced in Supernovae Ia (SNe Ia).
The details of the characteristics of these nuclides are given in Table 1. The COMPTEL and INTEGRAL \(\gamma\)-ray observations detected the presence of \({}^{26}\)Al in the Galaxy. The results show a map of \({}^{26}\)Al emission concentrated in the galactic plane [1][2]. The \({}^{26}\)Al-rich regions in the \(\gamma\)-ray map coincide with the star-forming regions of the disc. \({}^{26}\)Al emits \(\gamma\)-rays when it decays to \({}^{26}\)Mg at 1809 keV. The \({}^{26}\)Al emission regions also show the presence of \({}^{60}\)Fe, which emits \(\gamma\)-rays while decaying into \({}^{60}\)Co and further into \({}^{60}\)Ni at 1173 keV and 1332 keV, respectively. With the ability of INTEGRAL to resolve star-forming regions, \({}^{26}\)Al can be observed in the stellar ejecta, and nucleosynthesis studies can be performed inside these regions. The observations of \({}^{26}\)Al and \({}^{60}\)Fe from the same star-forming regions are being used to constrain the stellar nucleosynthesis models; identifying more such regions will also help identify the source of these SLRs in the ESS. Also, observations from other galaxies will provide more stringent constraints for the stellar and galactic chemical evolution modelling community. We have studied the abundance of SLRs using the Galactic Chemical Evolution (GCE) models discussed in [3].

## 2 Galactic Chemical Evolution Models

The study of the abundance distribution of SLRs plays a vital role in understanding the Galaxy's active star-forming regions. The SLR prediction for the Solar System also gives insights into its origin and the pre-solar molecular cloud. Despite precise observations of SLR abundances in the Solar System from meteorites, their stellar sources are still debated. To explain the Galactic scale steady-state distribution of SLRs, we divided the Galaxy into rings of 2 kpc width from 2-18 kpc, based on Monte Carlo simulations described in [5] and [6].
The homogeneous GCE model presented here is based on three-infall accretion for the formation of the Galaxy. In the case of the three-infall model, the halo, thick disc and thin disc all form in separate accretion episodes. The first two episodes occur on a shorter time scale, followed by the third episode in which the thin disc forms over an extended time scale in an inside-out scenario [7][8]. In the inside-out scenario, the inner regions of the thin disc form first and then the outer regions. The normalisation constants involved in the accretion are determined by reproducing the observed value of total surface mass density in the solar neighbourhood and other parts of the Galaxy. Various generations of stars are formed according to the prevailing star formation rate (SFR), a function of the gas and prevalent total surface mass densities. Also, the temporal and radial star formation efficiencies, \(\nu(t)\) and \(\eta(r)\), respectively, account for the temporal and spatial variation in the SFR. The stellar mass in each ring at every time step is distributed among stars according to the initial mass function (IMF) in the mass range 0.8-100 M\({}_{\odot}\) in various stellar generations. These stars evolve according to their mass and metallicity. Then, at every time step T, an assessment is made for all stars formed before T (t\(<\)T), to estimate their nucleosynthetic yields and remnants. A binary stellar population is synthesised at the time of star formation. With a binary fraction \(f\), a subset of the stars formed at each time step evolves into definite progenitors of SNe Ia. The progenitor stars have mass in the range of 3-8 M\({}_{\odot}\) and 11-16 M\({}_{\odot}\). The radioactive nuclides produced are subject to decay at each time step. The stellar yields of AGB and massive stars contribute the radioactive nuclides along with the stable nuclides to the interstellar medium (ISM).
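The per-time-step decay bookkeeping described above can be sketched as follows (illustrative code with made-up production numbers, not the actual GCE implementation); with a constant ejection rate, the alive mass settles at a steady state close to rate \(\times\) mean-life:

```python
import math

def step_slr_mass(N, ejected, dt, tau):
    """One GCE time step for a short-lived radionuclide: add the mass
    ejected by stars during the step, then decay everything by exp(-dt/tau).
    (A sketch of the bookkeeping described in the text, not the paper's code.)"""
    return (N + ejected) * math.exp(-dt / tau)

# Illustrative numbers: constant ejection rate (M_sun/Myr), small step (Myr),
# and the 60Fe mean-life from Table 1 (Myr).
rate, dt, tau = 0.02, 0.01, 3.78
N = 0.0
for _ in range(20000):        # integrate over 200 Myr >> tau
    N = step_slr_mass(N, rate * dt, dt, tau)
# N is now close to the steady-state value rate * tau
```

This makes explicit why the present-day alive mass of an SLR is a steady-state quantity: after a few mean-lives, the decaying inventory balances the ongoing stellar ejection.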
The contributions to the ISM are taken from [9] for CCSNe, and from [10] and [11] for AGB stars. \begin{table} \begin{tabular}{||c|c|c|c|c|c||} \hline Short-lived radionuclide (SLR) & \({}^{26}\)Al & \({}^{36}\)Cl & \({}^{41}\)Ca & \({}^{53}\)Mn & \({}^{60}\)Fe \\ \hline \hline Half-life (t\({}_{1/2}\) (Myr)) & 0.717 & 0.301 & 0.0994 & 3.74 & 2.62 \\ \hline Mean-life (\(\tau\) (Myr)) & 1.035 & 0.434 & 0.1434 & 5.40 & 3.78 \\ \hline Decay Process & \(\beta_{+}\) & \(\beta_{-}\) & \(\beta_{+}\) & \(\beta_{+}\) & \(\beta_{-}\) \\ \hline Daughter Product & \({}^{26}\)Mg & \({}^{36}\)S, \({}^{36}\)Ar & \({}^{41}\)K & \({}^{53}\)Cr & \({}^{60}\)Ni \\ \hline Stable Isotope & \({}^{27}\)Al & \({}^{35}\)Cl & \({}^{40}\)Ca & \({}^{55}\)Mn & \({}^{56}\)Fe \\ \hline \end{tabular} \end{table} Table 1: Details of the short-lived radionuclides considered in this work. The half-life and decay-mode data are taken from [4]. ## 3 Results and Discussion Results shown in figures 1-3 are based on model I of the homogeneous GCE models presented in [3]. Figure 1 shows the absolute abundance and the abundance per unit area of \({}^{26}\)Al in the Galaxy. Similar results for \({}^{60}\)Fe are presented in Figure 2. The ratio of the absolute abundances of \({}^{60}\)Fe and \({}^{26}\)Al in the Galaxy is presented in Figure 3. The GCE model shown here evolves each ring of the Galaxy of width 2 kpc from 2-18 kpc over the galactic time scale. The model also predicts the metallicity and [Fe/H] for the Galaxy over the galactic time scale. Each ring experiences the accretion of gas and forms various generations of stars as explained in section 2 above and in [3]. The SLR trends mainly depend upon the star formation rate, which is higher in the inner regions and decreases as we move towards the outer regions of the Galaxy. This trend is mainly a consequence of the higher accretion and the higher star formation efficiency in the inner regions. 
However, \({}^{26}\)Al does not follow the decreasing trend as closely as \({}^{60}\)Fe because of the contribution to \({}^{26}\)Al from AGB stars. In contrast, \({}^{60}\)Fe mainly comes from massive stars. The total masses of \({}^{26}\)Al and \({}^{60}\)Fe over the 2-18 kpc extent of the Galaxy are 0.2 M\({}_{\odot}\) and 0.08 M\({}_{\odot}\), respectively. The abundances of the other SLRs, \({}^{36}\)Cl, \({}^{41}\)Ca and \({}^{53}\)Mn, follow a similar decrease in abundance per unit area of the ring towards the Galaxy's outer regions, as shown in figure 1 of [3].

Figure 1: (Left) Absolute abundance of \({}^{26}\)Al, (Right) \({}^{26}\)Al per unit area in the Galaxy for eight annular rings from 2-18 kpc.

Figure 2: (Left) Absolute abundance of \({}^{60}\)Fe, (Right) \({}^{60}\)Fe per unit area in the Galaxy for eight annular rings from 2-18 kpc.

The areas of the 2 kpc wide rings from 2-18 kpc, from the inner first to the outer eighth ring, are 37.68, 62.8, 87.92, 113.04, 138.16, 163.28, 188.4, and 213.52 kpc\({}^{2}\), respectively. In this GCE model, the inventories of every generation of stars are mixed with the ISM gas and homogenised over the entire ring area. Hence the abundance per unit area gives better insight into the abundance distribution of the SLRs in the Galaxy. \({}^{26}\)Al and \({}^{60}\)Fe can be synthesised in two different evolutionary phases of massive stars [12], so it has been suggested that the ratio of these two SLRs can be an important indicator of the stellar population in a star-forming region. The \(\gamma\)-ray observations show that the flux ratio of \({}^{60}\)Fe/\({}^{26}\)Al is 0.4, which limits the steady-state mass ratio of \({}^{60}\)Fe/\({}^{26}\)Al to be \(<0.9\) [12]. Figure 3 shows the trend of this ratio with distance from the galactic center. 
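The annular ring areas quoted above follow from simple annulus geometry, \(A=\pi(r_{\rm out}^{2}-r_{\rm in}^{2})\); the quoted values are rounded with \(\pi\approx 3.14\), which the sketch below reproduces.

```python
PI = 3.14  # the quoted areas are rounded with this value of pi

def ring_area(r_inner, r_outer):
    """Area (kpc^2) of an annulus between galactocentric radii r_inner and r_outer (kpc)."""
    return PI * (r_outer**2 - r_inner**2)

edges = list(range(2, 19, 2))                         # ring edges at 2, 4, ..., 18 kpc
areas = [ring_area(r, r + 2) for r in edges[:-1]]     # eight annular rings

expected = [37.68, 62.8, 87.92, 113.04, 138.16, 163.28, 188.4, 213.52]
assert all(abs(a - e) < 1e-9 for a, e in zip(areas, expected))
```

Dividing the absolute abundances by these areas yields the per-unit-area profiles shown in the right panels of Figures 1 and 2.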
There is a significant decrease in the ratio with increasing distance from the galactic center. The ratio is \(<0.9\) throughout the Galaxy except for the first two galactic rings from 2-6 kpc. In the solar neighborhood the ratio is \(\approx 0.8\) and decreases further in the outer regions, which could arise from the AGB contribution to the \({}^{26}\)Al abundance in the Galaxy. ## 4 Conclusion The SLR abundance distributions presented here are based on the homogeneous GCE models. The GCE models represent a good approximation for understanding the overall trend in the Galaxy [13]. The homogeneous models show that the abundance per unit area of \({}^{26}\)Al and \({}^{60}\)Fe in the Galaxy is higher in the inner regions and lower in the outer ones. This trend is in agreement with the \(\gamma\)-ray observations. The estimates for the mass of \({}^{26}\)Al and \({}^{60}\)Fe in the entire Galaxy from the \(\gamma\)-ray observations are 1.8-2.4 M\({}_{\odot}\) and 1-6 M\({}_{\odot}\), respectively, which are higher than the values predicted by the GCE model explained above [14]. The low mass of SLRs could be an outcome of the approach used to mix the stellar ejecta with the ISM. The stellar ejecta of every star is mixed with the entire annular ring. This approximation is valid for the stable nuclides and has little effect on their abundance predictions. However, in the case of the SLRs, because of their short half-lives, this mixing of the ejecta with the ISM can significantly reduce the predicted abundances. The second reason for the low predictions from the GCE model is that the central 2 kpc is excluded from the model calculation. Finally, the ratio of \({}^{60}\)Fe/\({}^{26}\)Al from the GCE can provide clues to the contribution of different stellar sources to these two SLRs.

Figure 3: The ratio of the absolute abundances of \({}^{60}\)Fe/\({}^{26}\)Al in the Galaxy for eight annular rings from 2-18 kpc. 
## 5 Acknowledgement TK is thankful to Dr Thomas Siegert from the University of Würzburg, Germany, for valuable discussions and suggestions related to the \(\gamma\)-ray observations of SLRs in the Milky Way Galaxy. This manuscript was written during a visit to the Konkoly Observatory in Budapest, Hungary, hosted by Dr Maria Lugaro, for which TK is also grateful.
2306.04725
Nonlinear Evolution of Quadratic Gravity in 3+1 Dimensions
We present a numerically stable system of (3+1) evolution equations for the nonlinear gravitational dynamics of quadratic-curvature corrections to General Relativity (Quadratic Gravity). We also report on the numerical implementation of these evolution equations. We recover a well-known linear instability and gather evidence that -- aside from said instability -- Quadratic Gravity exhibits a physically stable Ricci-flat subsector. In particular, we demonstrate that Teukolsky-wave perturbations of a Schwarzschild black hole as well as a full binary inspiral (evolved up to merger) remain Ricci flat throughout evolution. This suggests that, at least in vacuum, classical Quadratic Gravity can mimic General Relativity, even in the fully nonlinear strong-gravity regime.
Aaron Held, Hyun Lim
2023-06-07T18:41:47Z
http://arxiv.org/abs/2306.04725v1
# Nonlinear Evolution of Quadratic Gravity in 3+1 Dimensions ###### Abstract We present a numerically stable system of (3+1) evolution equations for the nonlinear gravitational dynamics of quadratic-curvature corrections to General Relativity (Quadratic Gravity). We also report on the numerical implementation of these evolution equations. We recover a well-known linear instability and gather evidence that - aside from said instability - Quadratic Gravity exhibits a physically stable Ricci-flat subsector. In particular, we demonstrate that Teukolsky-wave perturbations of a Schwarzschild black hole as well as a full binary inspiral (evolved up to merger) remain Ricci flat throughout evolution. This suggests that, at least in vacuum, classical Quadratic Gravity can mimic General Relativity, even in the fully nonlinear strong-gravity regime. ## I Motivation The dynamics of General Relativity (GR) is governed by terms at linear order in (Riemann) curvature. As we gain access to the strong gravity regime [1; 2], we probe potential new physics which becomes relevant at higher order in curvature. Such new physics is suggested by the cosmological riddles of dark matter [3; 4] and dark energy [5]. Moreover, GR predicts its own breakdown, as singularity theorems [6; 7; 8; 9] imply geodesic incompleteness in the interior of black holes. In the context of new physics at strong curvature, such a breakdown is not surprising: Close to the formation of a singularity, curvature scales grow (arbitrarily) large, hence, potential higher-order curvature corrections are no longer negligible, and the dynamics of GR needs to be modified to account for these corrections. If curvature corrections are present, the respective new-physics scale may occur anywhere between the largest currently accessible curvature scales and the Planck scale. Taking a step beyond GR, we focus on dynamics at quadratic order in curvature. 
Such quadratic-curvature corrections are widely expected to arise from quantum fluctuations, see [10; 11; 12; 13; 14; 15] for perturbative quantum gravity, [16] for lattice approaches to quantum gravity, [17] for loop-quantum gravity, [18; 19] for string theory, and [20; 21; 22; 23] for asymptotically safe gravity. Quadratic curvature corrections occur in the form of gravitational self-interactions [11; 24] and in the form of non-minimal couplings of curvature to other fields [25; 26]. Both sectors can be unified in the context of an effective field theory of gravity and matter, see, e.g., [27]. Field redefinitions can mix between the pure-gravity and the non-minimal sector and, moreover, between different orders in curvature, see, e.g., [28; 29]. Several different terms may thus be physically equivalent if the field redefinitions do not impact physical conclusions. In the following, we will focus on gravitational self-interactions, we will not perform field redefinitions, and we will neglect any potential non-minimal couplings of curvature to other fields. We abbreviate the respective theory as Quadratic Gravity (QG) - sometimes also called Stelle-gravity [11; 24]. General Relativity tends to hide singularities, and thus regions of diverging curvature, behind horizons [30; 31], see [32; 33] for the potential exception of critical collapse. Experimental probes of horizon-scale physics [1; 2] thus provide the most promising way to constrain potential new physics at large curvature. Here, we are motivated, in particular, by the rapidly growing catalog of gravitational-wave events [34; 35; 36]. Utilizing said data to constrain new physics [37] will eventually require predictions for gravitational wave forms in theories beyond GR. 
The key tool to predict the respective nonlinear dynamics close to merger is well-posed numerical evolution, see [38; 39; 40; 41] for pioneering work in numerical relativity and [42; 43] for reviews of the well-posed initial value problem in GR. Beyond GR, numerical evolution in the presence of non-minimally coupled scalar degrees of freedom has received much attention [44; 45; 46; 47; 48; 49; 50; 51; 52; 53] and (for a specified set of theories) well-posedness has been established at weak non-minimal coupling [54; 55]. See also [56; 57] for evolution including pure-gravity operators at quartic order [29] and by means of damped high-frequency modes [58]. In previous work [59], we verified stable numerical evolution in the spherically-symmetric sector of Quadratic Gravity. Here, we report on an extension of the evolution equations to (3+1) dimensions, following, in particular, the pioneering work of Noakes [60], see also [61]. In Sec. (II), we start by reviewing QG, its equations of motion, and the propagating degrees of freedom. In Sec. (III), we perform a \((3+1)\) decomposition and derive our key analytical result: a set of \(1^{\rm st}\)-order evolution equations. In Sec. (IV), we describe our specific numerical implementation and verify numerical stability. In Sec. (V), we present first physical results which suggest that QG exhibits a nonlinearly stable Ricci-flat subsector which is fully equivalent to GR. In Sec. (VI), we conclude with a discussion and an outlook on future work. Several technical details are relegated to appendices. We use the \((-,+,+,+)\) signature and use Latin letters as spacetime indices. Moreover, we work in Planck units, i.e., setting the speed of light \(c=1\). For clarity, we keep Newton's constant \(G\) explicit. Round (square) brackets denote full (anti-)symmetrization of the enclosed indices. 
## II Setup: Quadratic Gravity The action of Quadratic Gravity (QG) is given by \[S_{\rm QG}=\int_{x}\left[{\cal L}_{\rm mat}[\Phi]+\frac{1}{16\pi G}R+\alpha R_{ab}R^{ab}-\beta R^{2}\right], \tag{1}\] where \(\int_{x}\) is shorthand notation for \(\int d^{4}x\sqrt{\det(-g)}\). In the following, the first term is taken to be independent of the curvature and depends solely on minimally coupled matter fields (and on the cosmological constant). The matter fields are collectively denoted by \(\Phi\). The second term is linear in the curvature and corresponds to GR, parameterized by Newton's constant \(G=1/(8\pi M_{\rm Pl}^{2})\) (or, equivalently, by the Planck mass \(M_{\rm Pl}\)). The third and fourth term are quadratic in the curvature and are parameterized by couplings \(\alpha\) and \(\beta\). In four dimensions, \(\alpha\) and \(\beta\) are dimensionless and all other (vacuum) terms at quadratic order in the curvature can be rewritten into linear combinations of the included ones by means of the Gauss-Bonnet identity. We neglect boundary terms and non-minimal couplings between matter and curvature. The theory of QG, as defined in Eq. (1), propagates (i) the usual graviton, i.e., a massless spin-2 mode; (ii) a massive spin-0 mode; and (iii) a massive spin-2 mode. The massive spin-2 mode has an opposite-sign kinetic term (in comparison to the other two modes) and is thus an Ostrogradski ghost. The massive spin-0 and spin-2 modes have respective masses \[m_{0}^{2}=-\frac{1}{32\pi G(3\beta-\alpha)}\;,\hskip 28.452756ptm_{2}^{2}=-\frac{1}{16\pi G\alpha}\;. \tag{2}\] In the following, we express the dimensionless couplings \(\alpha\) and \(\beta\) in terms of the masses \(m_{0}\) and \(m_{2}\). Due to the inclusion of quadratic-curvature terms, the dynamics of QG is governed by fourth-order equations of motion. Nevertheless, the full theory can be described in terms of the same degrees of freedom [60; 62; 63] as the linearized theory. 
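As a quick numerical consistency check, Eq. (2) can be inverted to recover \(\alpha\) and \(\beta\) from the two squared masses. The value of \(G\) and the sample couplings below are purely illustrative (chosen so that both squared masses come out positive).

```python
import math

G = 1.0  # Newton's constant, set to 1 for this check

def masses_from_couplings(alpha, beta):
    """Eq. (2): squared masses of the massive spin-0 and spin-2 modes."""
    m0_sq = -1.0 / (32.0 * math.pi * G * (3.0 * beta - alpha))
    m2_sq = -1.0 / (16.0 * math.pi * G * alpha)
    return m0_sq, m2_sq

def couplings_from_masses(m0_sq, m2_sq):
    """Inverse of Eq. (2): alpha from m2^2, then beta from m0^2."""
    alpha = -1.0 / (16.0 * math.pi * G * m2_sq)
    beta = (alpha - 1.0 / (32.0 * math.pi * G * m0_sq)) / 3.0
    return alpha, beta

alpha, beta = -0.02, -0.01          # illustrative couplings (alpha < 0, 3*beta - alpha < 0)
m0_sq, m2_sq = masses_from_couplings(alpha, beta)
assert m0_sq > 0 and m2_sq > 0      # both modes non-tachyonic for this sign choice

a2, b2 = couplings_from_masses(m0_sq, m2_sq)
assert abs(a2 - alpha) < 1e-12 and abs(b2 - beta) < 1e-12
```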
To make this explicit, the Ricci scalar \({\cal R}\) and the traceless Ricci tensor \(\widetilde{\cal R}_{ab}=R_{ab}-1/4g_{ab}R\) can be promoted to independent evolution variables, as indicated by the calligraphic notation. This allows us to write the equations of motion, obtained by varying the action in Eq. (1), as follows1 [60; 62; 64; 65; 24]: Footnote 1: While [64; 65; 24] use different definitions of the couplings (related by the Gauss-Bonnet identity), the respective equations of motion are all equivalent. Some signs in [60] differ which, however, does not affect conclusions about well-posedness. massless spin-2: \[G_{ab}(\Box g)= \widetilde{\cal R}_{ab}-\frac{1}{4}g_{ab}{\cal R}\equiv\frac{1}{M_{\rm Pl}^{2}}\widetilde{T}_{ab}\;,\] (3) massive spin-0: \[\Box\,{\cal R}= m_{0}^{2}\,{\cal R}+\frac{m_{0}^{2}}{M_{\rm Pl}^{2}}\,T_{c}^{c}\;,\] (4) massive spin-2: \[\Box\,\widetilde{\cal R}_{ab}= m_{2}^{2}\,\widetilde{\cal R}_{ab}-\frac{m_{2}^{2}}{M_{\rm Pl}^{2}}\,T_{ab}^{(\rm TL)}+2\,\widetilde{\cal R}_{a}{}^{c}\widetilde{\cal R}_{bc}-\frac{1}{2}g_{ab}\widetilde{\cal R}^{cd}\widetilde{\cal R}_{cd}+\frac{1}{3}\left(\frac{m_{2}^{2}}{m_{0}^{2}}+1\right){\cal R}\,\widetilde{\cal R}_{ab}\] \[-\frac{1}{3}\left(\frac{m_{2}^{2}}{m_{0}^{2}}-1\right)\left[\nabla_{a}\nabla_{b}{\cal R}-\frac{1}{4}g_{ab}\left(m_{0}^{2}{\cal R}+\frac{m_{0}^{2}}{M_{\rm Pl}^{2}}\,T_{c}^{c}\right)\right]-2\,\widetilde{\cal R}^{cd}C_{acbd}\;.\] (5) For reasons detailed below, we will refer to these equations as the metric equation, the trace equation, and the traceless equation, respectively. The metric equation, i.e., Eq. (3), is nothing but the definition of the Einstein tensor: in terms of the metric on the left-hand side (LHS); and in terms of the fiducial variables on the right-hand side (RHS). It provides a second-order evolution equation for the metric. 
The fiducial variables \({\cal R}\) and \(\widetilde{\cal R}_{ab}\), appearing on the RHS, are effectively equivalent to matter source terms, for which we have defined a fiducial stress-energy tensor \(\widetilde{T}_{ab}\equiv M_{\rm Pl}^{2}(\widetilde{\cal R}_{ab}-\frac{1}{4}g_{ab}{ \cal R})\). Hence, the metric equation can be treated as in GR. For instance, one can make use of harmonic gauge to diagonalize the metric equation [60]. Alternatively, one may use the BSSN formalism [38; 39], as we do in Sec. (IV). The trace equation, i.e., Eq. (4), provides a \(2^{\rm nd}\)-order evolution equation for \({\cal R}\). The traceless equation, i.e., Eq. (5), provides a \(2^{\rm nd}\)-order evolution equation for \(\widetilde{\cal R}_{ab}\). Herein, we split the actual matter sources into a trace (\(T^{c}_{\ c}\)) and a traceless (\(T^{(\rm TL)}_{ab}\)) part which, in turn, source the respective fiducial variables. To keep the equations as concise as possible, we have also introduced the Weyl-tensor \(C_{acbd}\). The latter can be expressed in terms of \(R_{abcd}\), \(\widetilde{\cal R}_{ab}\), and \({\cal R}\) as \[C_{acbd}=R_{acbd}+g_{b[c}\widetilde{\cal R}_{a]d}+g_{d[a}\widetilde{\cal R}_{ c]b}+\frac{1}{6}g_{b[a}g_{c]d}{\cal R}\;. \tag{6}\] In the evolution equations of \({\cal R}\) (Eq. (4)) and \(\widetilde{\cal R}_{ab}\) (Eq. (5)), derivatives of the metric only enter via double covariant derivatives as well as in \(R_{abcd}\). ## III Derivation: (3+1)-decomposition of the evolution equations The evolution system, as given in Eqs. (3) to (5), is a good starting point to perform the (3+1)-decomposition. Herein, we decompose the metric, i.e., \[g_{ab}=\gamma_{ab}-n_{a}n_{b} \tag{7}\] into the spatial metric \(\gamma_{ab}\) and the normal vector \(n^{a}\) orthogonal to the spatial hypersurface. (The normal vector is chosen such that \(n^{a}n_{a}=-1\).) 
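A minimal numerical sanity check of the decomposition in Eq. (7), using flat Minkowski space with a boosted normal vector (choices made purely for illustration):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, signature (-,+,+,+)
eta = 0.3                               # boost parameter, makes the normal non-trivial
n_up = np.array([np.cosh(eta), np.sinh(eta), 0.0, 0.0])   # n^a with n^a n_a = -1
n_dn = g @ n_up                                           # n_a

assert np.isclose(n_up @ n_dn, -1.0)

# Eq. (7): g_ab = gamma_ab - n_a n_b, i.e. gamma_ab = g_ab + n_a n_b.
gamma = g + np.outer(n_dn, n_dn)

# The spatial metric annihilates the normal ...
assert np.allclose(gamma @ n_up, 0.0)

# ... and the mixed projector gamma^a_b is idempotent, as a projector must be.
g_inv = np.linalg.inv(g)
proj = g_inv @ gamma
assert np.allclose(proj @ proj, proj)
```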
Covariant derivatives \(\nabla_{a}\) are projected onto spatial and normal parts via \[\nabla_{a}=(\gamma^{b}_{\ a}-n_{a}n^{b})\nabla_{b}\equiv D_{a}-n_{a}n^{b}\nabla_{b}\;, \tag{8}\] where we have defined the usual spatial covariant derivative \(D_{a}\equiv\gamma_{a}^{\ b}\nabla_{b}\). Moreover, we introduce the usual geometric definition2 of the acceleration \(a_{i}\) and the extrinsic curvature \(K_{ij}\), respectively, as the mixed and the spatial projection of the gradient of the normal vector, i.e., Footnote 2: The extrinsic curvature can be defined as the symmetric part of the spatial projection of the gradient of the normal vector, i.e., as \(K_{ij}\equiv-\gamma_{i}^{\ a}\gamma_{j}^{\ b}\nabla_{(a}n_{b)}\), but if the normal vector is rotation free, the antisymmetric part vanishes and the strict definition reduces to the one in Eq. (10) \[a_{i} \equiv\gamma_{i}^{\ b}n^{a}\nabla_{a}n_{b}\;, \tag{9}\] \[K_{ij} \equiv-\gamma_{i}^{\ a}\gamma_{j}^{\ b}\nabla_{a}n_{b}\;. \tag{10}\] The purely temporal projection of \(\nabla_{a}n_{b}\) vanishes such that one may abuse notation and also write \(a_{b}=n^{a}\nabla_{a}n_{b}\). In this case, \(n^{b}a_{b}=0\). In complete equivalence to the above geometric definition, one can give a dynamical definition of the extrinsic curvature as a 1st-order variable for the metric, i.e., as \(K_{ij}\equiv-\frac{1}{2}{\cal L}_{n}\gamma_{ij}\), where \({\cal L}_{n}\) denotes the Lie derivative along \(n^{a}\). Both definitions are fully equivalent and imply each other. In the following, we reduce the remaining \(2^{\rm nd}\)-order derivatives in the time-direction, i.e., along \(n^{a}\), to \(1^{\rm st}\)-order derivatives. In anticipation of that, we define additional \(1^{\rm st}\)-order variables \[\widetilde{V}_{ab} \equiv -n^{c}\nabla_{c}\widetilde{\cal R}_{ab}\;, \tag{11}\] \[\hat{\cal R} \equiv -n^{c}\nabla_{c}{\cal R}\;, \tag{12}\] for the fiducial Ricci variables. 
Furthermore, we decompose the fiducial traceless-Ricci tensor \(\widetilde{\cal R}_{ab}\) and its \(1^{\rm st}\)-order variable \(\widetilde{V}_{ab}\) such that \[{\cal A} \equiv\gamma^{cd}\widetilde{\cal R}_{cd}\;, {\cal B} \equiv\gamma^{cd}\widetilde{V}_{cd}\;,\] \[{\cal A}_{ab} \equiv\gamma_{a}^{c}\gamma_{b}^{d}\widetilde{\cal R}_{cd}-\frac{1 }{3}\gamma_{ab}{\cal A}\;, {\cal B}_{ab} \equiv\gamma_{a}^{c}\gamma_{b}^{d}\widetilde{V}_{cd}-\frac{1}{3} \gamma_{ab}{\cal B}\;,\] \[{\cal C}_{a} \equiv n^{c}\gamma_{a}^{d}\widetilde{\cal R}_{cd}\;, {\cal E}_{a} \equiv n^{c}\gamma_{a}^{d}\widetilde{V}_{cd}\;,\] \[\Rightarrow {\cal A} =n^{a}n^{b}\widetilde{\cal R}_{ab}\;, {\cal B} =n^{a}n^{b}\widetilde{V}_{ab}\;, \tag{13}\] where the last two relations are enforced by the tracelessness of \(\widetilde{\cal R}_{ab}\) and \(\widetilde{V}_{ab}\). Equivalently, one may write this (3+1) split as \[\widetilde{\cal R}_{ab} ={\cal A}_{ab}+\frac{1}{3}\,\gamma_{ab}\,{\cal A}-2\,n_{(a}{\cal C }_{b)}+n_{a}n_{b}\,{\cal A}\;,\] \[\widetilde{V}_{ab} ={\cal B}_{ab}+\frac{1}{3}\,\gamma_{ab}\,{\cal B}-2\,n_{(a}{\cal E }_{b)}+n_{a}n_{b}\,{\cal B}\;. \tag{14}\] The remaining metric-dependent quantities can be decomposed using the conventional Gauss-Codazzi and Ricci equations, as collected in App. (A). We decompose the actual matter sources following the usual convention, i.e., \[\rho =n_{a}n_{b}\,T^{ab}\;,\] \[S_{i} =-\gamma_{ia}n_{b}\,T^{ab}\;,\] \[S_{ij} =\gamma_{ia}\gamma_{jb}\,T^{ab}\;. \tag{15}\] Similarly, we decompose the fiducial matter sources, i.e., \[\widetilde{\rho} =n_{a}n_{b}\,\widetilde{T}^{ab}=M_{\rm Pl}^{2}\left({\cal A}+ \frac{1}{4}{\cal R}\right)\;,\] \[\widetilde{S}_{i} =-\gamma_{ia}n_{b}\,\widetilde{T}^{ab}=-M_{\rm Pl}^{2}\,{\cal C}_{ i}\;,\] \[\widetilde{S}_{ij} =\gamma_{ia}\gamma_{jb}\,\widetilde{T}^{ab}=M_{\rm Pl}^{2}\left({ \cal A}_{ij}+\frac{1}{3}\gamma_{ij}{\cal A}-\frac{1}{4}\gamma_{ij}{\cal R} \right)\;. 
\tag{16}\] For the actual matter sources, we note that \(T_{ab}^{\rm(TL)}\), appearing in Eq. (5), is traceless in 4D but the 3D projections do not vanish, i.e., \[n^{a}n^{b}\,T_{ab}^{\rm(TL)} =\frac{1}{4}(S+3\rho)\;, \tag{17}\] \[\gamma^{ab}\,T_{ab}^{\rm(TL)} =\frac{1}{4}(S+3\rho)\;,\] (18) \[\gamma_{i}^{a}\gamma_{j}^{b}\,T_{ab}^{\rm(TL)} =S_{ij}-\frac{1}{4}\gamma_{ij}(S-\rho)\;. \tag{19}\] With these definitions at hand, the decomposition of the three evolution equations (metric equation, trace equation, and traceless equation, cf. Eqs. (3) to (5)) is tedious but essentially straightforward. After the decomposition, we also identify which of the decomposed equations correspond to constraints, constraint evolution, or physical evolution equations. The busy reader may skip to Subsec. (III.7) where we summarize the result. ### (3+1) decomposition of the metric equation Eq. (3) determines the evolution of the metric \(g_{ab}\). The fiducial variables \({\cal R}\) and \(\widetilde{\cal R}_{ab}\) can be treated as fiducial matter sources. The actual matter sources \(T^{ab}\) do not appear in the metric evolution. They will only affect the other evolution equations. As for most numerical efforts in GR, our starting point for the metric sector is the York-variant of the ADM equations [66], i.e., \[(n^{c}\nabla_{c}\gamma_{ij})= -2\,D_{(i}n_{j)}-2\,K_{ij}\;, \tag{20}\] \[(n^{c}\nabla_{c}K_{ij})= -a_{i}a_{j}-2\,D_{(i}a_{j)}-2\,K_{m(i}D_{j)}n^{m}-2K_{im}K_{j}^{m}+K\,K_{ij}+{}^{(3)}\!R_{ij}-\frac{1}{M_{\rm Pl}^{2}}\left(\widetilde{S}_{ij}-\frac{1}{2}\gamma_{ij}(\widetilde{S}-\widetilde{\rho})\right)\;,\] (21) \[0 =D_{j}K_{i}^{j}-D_{i}K-\frac{1}{M_{\rm Pl}^{2}}\widetilde{S}_{i}\;,\] (22) \[0 ={}^{(3)}\!R-K_{ij}K^{ij}+K^{2}-\frac{2}{M_{\rm Pl}^{2}}\widetilde{\rho}\;. \tag{23}\] The first equation (evolution of the spatial metric) is a definition, used to reduce the equations from \(2^{\rm nd}\)-order to \(1^{\rm st}\)-order in time. 
It is the metric equivalent of our definitions in Eqs. (11) and (12). However, the fiducial variable that one has introduced in the metric sector, i.e., \(K_{ij}\), also carries direct geometric meaning - it is the extrinsic curvature of the spatial hypersurface. For the second equation (evolution of the extrinsic curvature), one has used the lapse constraint in Eq. (23) to simplify the evolution equation. Hence the appearance of \(\widetilde{\rho}\) in Eq. (21). The \(3^{\rm rd}\) and \(4^{\rm th}\) equation correspond to the momentum and Hamiltonian constraint, respectively. In summary, including fiducial matter sources, the metric equation decomposes in complete equivalence to GR. The spatial projections result in evolution equations for \(\gamma_{ij}\) and \(K_{ij}\), i.e., for 12 pieces of initial data3. The mixed and the temporal projections of the metric equation result in 4 constraints - the Hamiltonian and the momentum constraint. Moreover, there remains coordinate freedom within the spatial hypersurface: We are free to choose the spatial coordinates as well as the initial time, hence removing 4 further pieces of initial data. Overall, as in GR, one finds \(12-4-4=4\) independent pieces of initial data, i.e., 2 degrees of freedom, in the metric sector. We will come back to the overall counting of degrees of freedom in Subsec. (III.7). Footnote 3: Here, we already assume that hypersurfaces are chosen such as to fix \(g_{00}\) and \(g_{0i}\) by an appropriate choice of lapse and shift as well as \(n^{c}\nabla_{c}g_{00}\) and \(n^{c}\nabla_{c}g_{0i}\) such as to obey a specified gauge choice, e.g., harmonic gauge. This choice of gauge/coordinates already fixes 8 out of 20 pieces of initial data in the second-order evolution of \(g_{\mu\nu}\). ### (3+1) decomposition of the trace equation Eq. (4) determines the evolution of the fiducial Ricci scalar \(\mathcal{R}\). Since the only derivatives appear in \(\square\mathcal{R}\), it is of quasi-linear form. 
We (3+1)-decompose the covariant derivatives on the left-hand side (LHS) as \[\square\mathcal{R}=n^{a}\nabla_{a}\hat{\mathcal{R}}+(D_{i}+a_{i})D^{i} \mathcal{R}-K\hat{\mathcal{R}}\;. \tag{24}\] Combining the above result with the RHS of Eq. (4) provides two \(1^{\rm st}\)-order (in time) equations for \(\mathcal{R}\) and \(\hat{\mathcal{R}}\), i.e., \[n^{a}\nabla_{a}\mathcal{R} =-\hat{\mathcal{R}}\;, \tag{25}\] \[n^{a}\nabla_{a}\hat{\mathcal{R}} =-(D_{i}+a_{i})D^{i}\mathcal{R}\] \[\quad+K\hat{\mathcal{R}}+m_{0}^{2}\mathcal{R}+\frac{m_{0}^{2}}{M_ {\rm Pl}^{2}}(S-\rho)\;, \tag{26}\] where \(\rho\) and \(S=\gamma^{ab}T_{ab}=\gamma^{ab}S_{ab}\) correspond to the trace of the actual matter source terms, decomposed analogously to Eq. (16). In summary, we find two evolution equations and no constraints in the trace sector. ### (3+1) decomposition of the traceless equation Eq. (5) evolves the traceless fiducial Ricci tensor \(\widetilde{\mathcal{R}}_{ab}\). Without recasting, this equation is not of quasi-linear form. Therefore, while performing the (3+1) decomposition, we can expect to have to use the previous evolution equations to remove all second order time derivatives on the RHS. This procedure is reminiscent of the order reduction in [60]. Before doing so, we consider the (3+1) decomposition of the LHS, i.e., \[\square\widetilde{\mathcal{R}}_{ab}=n^{c}\nabla_{c}\widetilde{V}_{ab}+(D_{c}+a_{ c})D^{c}\widetilde{\mathcal{R}}_{ab}-K\widetilde{V}_{ab}\;, \tag{27}\] where, as for the fiducial Ricci scalar, we have introduced the first-order fiducial variable \(\widetilde{V}_{ab}=-n^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab}\), cf. Eq. (11). 
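Returning briefly to the trace sector: in the flat, homogeneous vacuum limit (\(D_{i}\mathcal{R}=a_{i}=K=0\), no matter sources), Eqs. (25)-(26) reduce to two coupled ODEs, \(d\mathcal{R}/dt=-\hat{\mathcal{R}}\) and \(d\hat{\mathcal{R}}/dt=m_{0}^{2}\,\mathcal{R}\), i.e., an oscillation at frequency \(m_{0}\). The sketch below integrates this first-order system (parameter values are illustrative):

```python
import math

m0 = 2.0   # spin-0 mass, illustrative value

def rhs(R, Rhat):
    # Eqs. (25)-(26) in the flat, homogeneous vacuum limit.
    return -Rhat, m0**2 * R

def rk4_step(R, Rhat, dt):
    """One classical Runge-Kutta step for the first-order system (R, Rhat)."""
    k1 = rhs(R, Rhat)
    k2 = rhs(R + 0.5 * dt * k1[0], Rhat + 0.5 * dt * k1[1])
    k3 = rhs(R + 0.5 * dt * k2[0], Rhat + 0.5 * dt * k2[1])
    k4 = rhs(R + dt * k3[0], Rhat + dt * k3[1])
    R += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    Rhat += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return R, Rhat

R, Rhat = 1.0, 0.0
dt, steps = 1e-3, 5000          # evolve to t = 5
for _ in range(steps):
    R, Rhat = rk4_step(R, Rhat, dt)

# d^2 R/dt^2 = -m0^2 R with R(0)=1, dR/dt(0)=0 gives R(t) = cos(m0 t).
assert abs(R - math.cos(m0 * 5.0)) < 1e-8
```

This makes the first-order reduction explicit: the second-order trace equation becomes two coupled first-order evolutions, exactly the pattern used for the metric and traceless sectors.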
Herein, the spatial covariant derivatives should strictly be understood as a shorthand notation, i.e., \(D_{c}D^{c}\widetilde{\mathcal{R}}_{ab}\equiv\gamma^{d}_{\phantom{d}c}\nabla_{ d}(\gamma^{ce}\nabla_{e}\widetilde{\mathcal{R}}_{ab})\) and \(a^{c}D_{c}\widetilde{\mathcal{R}}_{ab}\equiv a^{e}\gamma^{c}_{\phantom{e}c} \nabla_{c}\widetilde{\mathcal{R}}_{ab}\). This subtlety is important since \(\widetilde{\mathcal{R}}_{ab}\) is not yet projected and thus contains temporal components. The derivation is made explicit in App. (C). Overall, this renders the LHS manifestly 1st-order in time. On the RHS of Eq. (5), the only derivative terms are contained in \(\nabla_{a}\nabla_{b}\mathcal{R}\) and in the Riemann tensor \(R_{acbd}\). The Riemann tensor can be decomposed in the usual way, cf. App. (A). Regarding \(\nabla_{a}\nabla_{b}\mathcal{R}\), we find \[\nabla_{a}\nabla_{b}\mathcal{R} =D_{a}D_{b}\mathcal{R}+2\,n_{(a}D_{b)}\hat{\mathcal{R}}-2\,K_{ab} \hat{\mathcal{R}}\] \[\quad-n_{a}n_{b}\left(n^{c}\nabla_{c}\hat{\mathcal{R}}\right)+n _{a}n_{b}\,a_{c}D^{c}\mathcal{R}\;, \tag{28}\] which is, as expected, symmetric in \((a,b)\). Here, we have used (i) the projection of the covariant derivative, cf. Eq. (8); (ii) the geometric definitions of acceleration and extrinsic curvature, i.e., Eqs. (9) and (10); and (iii) the identity \(0=\nabla_{d}g^{c}_{a}=\nabla_{d}(\gamma^{c}_{a}-n_{a}n^{c})=\nabla_{d}\gamma^{ c}_{a}-n_{a}\nabla_{d}n^{c}-n^{c}\nabla_{d}n_{a}\). The calculation is made fully explicit in App. (B). 
Collecting everything, we find 1st-order evolution equations for the fiducial variables \(\widetilde{\mathcal{R}}_{ab}\) and \(\widetilde{V}_{ab}\), i.e., \[n^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab}= -\widetilde{V}_{ab}\;, \tag{29}\] \[n^{c}\nabla_{c}\widetilde{V}_{ab}= -(D_{c}+a_{c})D^{c}\widetilde{\mathcal{R}}_{ab}+K\widetilde{V}_{ ab}+m_{2}^{2}\widetilde{\mathcal{R}}_{ab}-\frac{m_{2}^{2}}{M_{\rm Pl}^{2}}\,T_{ab}^{( \rm TL)}+2\,\widetilde{\mathcal{R}}_{a}^{\phantom{a}c}\widetilde{\mathcal{R}}_ {bc}-\frac{1}{2}g_{ab}\widetilde{\mathcal{R}}^{cd}\widetilde{\mathcal{R}}_{ cd}+\frac{1}{3}\left(\frac{m_{2}^{2}}{m_{0}^{2}}+1\right)\mathcal{R}\, \widetilde{\mathcal{R}}_{ab}\] \[\quad\quad\quad+2\left(n_{(a}D_{b)}-K_{ab}\right)\hat{\mathcal{R} }-n_{a}n_{b}\left(n^{c}\nabla_{c}\hat{\mathcal{R}}\right)\Bigg{]}\] \[\quad\quad\quad-2\,\widetilde{\mathcal{R}}^{cd}\Bigg{[}g_{b[c} \widetilde{\mathcal{R}}_{a]d}+g_{d[a}\widetilde{\mathcal{R}}_{c]b}+\frac{1}{6} g_{b[a}g_{c]d}\mathcal{R}+{}^{(3)}\!R_{acbd}+2\,K_{a[b}K_{d]c}+4\,a_{[a}n_{c]}a_{[b}n_{d]} -4\,n_{[b}\left(D_{d]}a_{[a}\right)n_{c]}\] \[\quad\quad\quad\quad+4\left(D_{[a}K_{c][b}\right)n_{d]}+4\left(D_ {[b}K_{d][a}\right)n_{c]}+4\,n_{[a}\,K_{c]}^{e}\,K_{e[b}n_{d]}+4\,\left(\gamma^ {f}_{[a}n_{c]}\gamma^{g}_{[b}n_{d]}\right)\left(n^{e}\nabla_{e}K_{fg}\right) \Bigg{]}\;. \tag{30}\] The terms involving \((n^{c}\nabla_{c}\hat{\mathcal{R}})\) and \((n^{e}\nabla_{e}K_{fg})\) can be written in terms of the other evolution equations such that no time derivatives remain on the RHS. It remains to project all the non-derivative terms onto spatial and temporal parts and thereby decompose the above two 1st-order traceless equations into spatial and temporal parts. ### Projection of the traceless equations We can explicitly project the traceless equations in order to separate constraint data from initial data. 
In all cases, we will obtain four different projections, i.e., we can obtain (i) the spatial trace with \(\gamma^{ab}\); (ii) the spatial projection with \(\gamma^{a}_{c}\gamma^{b}_{d}\); (iii) the temporal projection with \(n^{a}n^{b}\); and (iv) the mixed projection with \(n^{a}\gamma^{b}_{c}\) (or equivalently \(n^{b}\gamma^{a}_{c}\)). To obtain these, we have to commute the projection operators through the covariant derivative on the left-hand side of each respective equation which generates further terms. We present the explicit derivation in App. (D) and find \[\gamma^{ab}\left(n^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab}\right)=(n^{c} \nabla_{c}\mathcal{A})-2\,a^{c}\mathcal{C}_{c}\;, \tag{31}\] \[\gamma_{i}^{a}\gamma_{j}^{b}\left(n^{c}\nabla_{c}\widetilde{\cal R}_{ ab}\right) = (n^{c}\nabla_{c}{\cal A}_{ij})+\frac{1}{3}\gamma_{ij}\left(n^{c} \nabla_{c}{\cal A}\right) \tag{32}\] \[-\frac{2}{3}{\cal A}\left(D_{(i}n_{j)}+K_{ij}\right)-2a_{(i}{\cal C }_{j)}\] \[-2\,a^{c}\left({\cal A}_{c(i}n_{j)}+\frac{1}{3}\gamma_{c(i}n_{j)}{ \cal A}\right)\;,\] \[n^{a}n^{b}\left(n^{c}\nabla_{c}\widetilde{\cal R}_{ab}\right) = (n^{c}\nabla_{c}{\cal A})-2\,a^{c}{\cal C}_{c}\;,\] (33) \[n^{a}\gamma_{d}^{b}\left(n^{c}\nabla_{c}\widetilde{\cal R}_{ab}\right) = (n^{c}\nabla_{c}{\cal C}_{d})-n_{d}a^{c}{\cal C}_{c}\] (34) \[-a^{a}\left({\cal A}_{ad}+\frac{2}{3}\gamma_{ad}{\cal A}\right)\;.\] The analogous projections for the fiducial first-order variables are obtained by the replacements \(\widetilde{\cal R}_{ab}\to\widetilde{V}_{ab}\), \({\cal A}_{ij}\to{\cal B}_{ij}\), \({\cal A}\to{\cal B}\), and \({\cal C}_{a}\to{\cal E}_{a}\). These left-hand-side projections separate the covariant equations into evolution equations (Eqs. (31) and (32)) and constraint evolution (Eq. (34)). In line with the \((3+1)\) conventions chosen in Eq. (13), the temporal projection in Eq. (33) is redundant with the spatial trace projection in Eq. (31). 
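The (3+1) split of Eqs. (13)-(14), and the trace redundancy just noted, can be sanity-checked numerically. The sketch below builds a random symmetric tensor that is traceless with respect to the full metric, extracts the pieces \(\mathcal{A}\), \(\mathcal{A}_{ab}\), \(\mathcal{C}_{a}\) for a static normal in flat space (an illustrative choice), and reassembles the original tensor.

```python
import numpy as np

rng = np.random.default_rng(42)
g = np.diag([-1.0, 1.0, 1.0, 1.0])          # flat metric, signature (-,+,+,+)
g_inv = np.linalg.inv(g)
n_up = np.array([1.0, 0.0, 0.0, 0.0])       # static normal, n^a n_a = -1
n_dn = g @ n_up
gamma = g + np.outer(n_dn, n_dn)            # spatial metric gamma_ab
proj = g_inv @ gamma                        # mixed projector gamma^a_b (first index up)

# Random symmetric tensor, made traceless with respect to the full metric.
S = rng.normal(size=(4, 4))
S = 0.5 * (S + S.T)
Rt = S - 0.25 * g * np.einsum('ab,ab->', g_inv, S)

# Pieces as in Eq. (13).
A = n_up @ Rt @ n_up                                   # A = n^a n^b Rt_ab
C = np.einsum('c,da,cd->a', n_up, proj, Rt)            # C_a = n^c gamma_a^d Rt_cd
A_sp = np.einsum('ca,db,cd->ab', proj, proj, Rt) - gamma * A / 3.0

# Tracelessness enforces gamma^cd Rt_cd = n^a n^b Rt_ab, the redundancy noted above.
gamma_upup = g_inv @ gamma @ g_inv
assert np.isclose(np.einsum('ab,ab->', gamma_upup, Rt), A)

# Eq. (14): reassemble Rt_ab from its (3+1) pieces.
sym = 0.5 * (np.outer(n_dn, C) + np.outer(C, n_dn))    # n_(a C_b)
Rt_rebuilt = A_sp + gamma * A / 3.0 - 2.0 * sym + np.outer(n_dn, n_dn) * A
assert np.allclose(Rt_rebuilt, Rt)
```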
The RHS projections are tedious, and we check them in the ancillary files4. Crucially, the RHS terms do not impact the character of the respective projections since they only involve spatial derivatives. Footnote 4: See the GitHub repository ([https://github.com/aaron-hd/QG-sphSymm-ancillary](https://github.com/aaron-hd/QG-sphSymm-ancillary)). Parts of the derivation make use of the xAct package [67] ([http://www.xact.es/](http://www.xact.es/)). The trace and spatial projection result in the following set of evolution equations for the spatial variables \({\cal A}\), \({\cal A}_{ij}\), \({\cal B}\), and \({\cal B}_{ij}\): \[n^{c}\nabla_{c}{\cal A} =2\,a^{k}{\cal C}_{k}-{\cal B}\;, \tag{35}\] \[n^{c}\nabla_{c}{\cal A}_{ij} =\frac{2}{3}\,{\cal A}\left(D_{(i}n_{j)}+K_{ij}\right)+2\,a^{c}\left({\cal A}_{c(i}n_{j)}+\frac{1}{3}\gamma_{c(i}n_{j)}{\cal A}+\gamma_{c(i}{\cal C}_{j)}\right)-{\cal B}_{ij}-\frac{2}{3}\gamma_{ij}a^{k}{\cal C}_{k}\;, \tag{36}\] \[n^{c}\nabla_{c}{\cal B} =2\,a^{k}{\cal E}_{k}-\frac{1}{4}\frac{m_{2}^{2}}{M_{\rm Pl}^{2}}(S+3\rho)-\left(D_{i}D^{i}+a_{i}D^{i}-m_{2}^{2}+\frac{1}{6}\,{\cal R}\right){\cal A}+K\,{\cal B}\] \[\quad+\frac{1}{3}\left(\frac{m_{2}^{2}}{m_{0}^{2}}+1\right){\cal R}\,{\cal A}-\frac{1}{3}\left(\frac{m_{2}^{2}}{m_{0}^{2}}-1\right)\left[\left(D_{i}D^{i}-\frac{3}{4}m_{0}^{2}\right){\cal R}-\frac{3}{4}\frac{m_{0}^{2}}{M_{\rm Pl}^{2}}(S-\rho)-2\,K\,\hat{\cal R}\right]\] \[\quad+\frac{3}{2}\left({\cal A}_{ij}{\cal A}^{ij}+\frac{4}{3}\,{\cal A}^{2}-2\,{\cal C}^{i}{\cal C}_{i}\right)-2\,{\cal C}^{i}\left(D^{j}K_{ij}+a^{j}K_{ij}\right)-4\,K^{ij}D_{i}{\cal C}_{j}\] \[\quad-2\left({\cal A}^{ij}+\frac{1}{3}\gamma^{ij}{\cal A}\right)\left({}^{(3)}\!R_{ij}+2\,K_{i[j}K_{k]}^{k}\right)+4\,{\cal C}^{j}\left(D_{j}K-D^{i}K_{ij}\right)\] \[\quad-2\,{\cal A}\left(a_{i}a^{i}+D_{i}a^{i}-K^{ij}K_{ij}+\gamma^{ij}\left(n^{c}\nabla_{c}K_{ij}\right)\right)\;, \tag{37}\] \[n^{c}\nabla_{c}{\cal B}_{ij} =\frac{2}{3}\,{\cal B}\left(D_{(i}n_{j)}+K_{ij}\right)+2\,a^{c}
\left({\cal B}_{c(i}n_{j)}+\frac{1}{3}\gamma_{c(i}n_{j)}{\cal B}+\gamma_{c(i}{\cal E}_{j)}\right)-\frac{m_{2}^{2}}{M_{\rm Pl}^{2}}\left(S_{ij}-\frac{1}{4}\gamma_{ij}(S-\rho)\right)-\frac{1}{3}\gamma_{ij}\left(n^{c}\nabla_{c}{\cal B}\right)\] \[\quad-\left(D_{k}D^{k}+a_{k}D^{k}-m_{2}^{2}+\frac{1}{6}{\cal R}\right)\left({\cal A}_{ij}+\frac{1}{3}\gamma_{ij}{\cal A}\right)+K\left({\cal B}_{ij}+\frac{1}{3}\gamma_{ij}{\cal B}\right)\] \[\quad+\frac{1}{3}\left(\frac{m_{2}^{2}}{m_{0}^{2}}+1\right){\cal R}\left({\cal A}_{ij}+\frac{1}{3}\gamma_{ij}{\cal A}\right)-\frac{1}{3}\left(\frac{m_{2}^{2}}{m_{0}^{2}}-1\right)\left[\left(D_{i}D_{j}-\frac{1}{4}\gamma_{ij}m_{0}^{2}\right){\cal R}-\frac{1}{4}\frac{m_{0}^{2}}{M_{\rm Pl}^{2}}\gamma_{ij}(S-\rho)-2\,K_{ij}\hat{\cal R}\right]\] \[\quad+\frac{1}{2}\,\gamma_{ij}\left({\cal A}^{kl}{\cal A}_{kl}+\frac{4}{3}{\cal A}^{2}-2\,{\cal C}^{k}{\cal C}_{k}\right)-2\,{\cal C}_{(i}\left(D^{k}K_{j)k}+a^{k}K_{j)k}\right)-4\,K_{k(i}D^{k}{\cal C}_{j)}\] \[\quad-2\left({\cal A}^{kl}+\frac{1}{3}\gamma^{kl}{\cal A}\right)\left({}^{(3)}\!R_{ikjl}+K_{i[j}K_{l]k}\right)+4\,{\cal C}^{k}\left(D_{k}K_{ij}-D_{(i}K_{j)k}\right)\] \[\quad-2\,{\cal A}\left(a_{i}a_{j}+D_{(i}a_{j)}-K_{i}^{k}K_{kj}+\gamma_{i}^{k}\,\gamma_{j}^{l}\left(n^{c}\nabla_{c}K_{kl}\right)\right)\;. \tag{38}\] Here, we have used the evolution equation for \((n^{c}\nabla_{c}{\cal A})\) (cf. Eq. (35)) on the RHS of the evolution equation for \((n^{c}\nabla_{c}\mathcal{A}_{ij})\) (cf. Eq. (36)). It can be verified explicitly that the latter equation is (spatially) traceless. Analogously, the last term in the first line of Eq. (38) ensures that the evolution equation for \(\mathcal{B}_{ij}\) is (spatially) traceless. We refrain from plugging in Eq. (37) (as well as the evolution equation for \(K_{ij}\)) explicitly to keep the expressions as concise as possible.

### Bianchi constraints

The fiducial variables \(\mathcal{R}\) and \(\widetilde{\mathcal{R}}_{ab}\) are not physical.
Their only purpose is to reduce the order of the system. Naturally, in order for the reduced evolution to capture the physics of the original evolution, and the correct degrees of freedom in particular, we have to ensure that the fiducial variables evaluate to the proper metric quantities [60], i.e., that \[0=\Delta_{ab}\equiv G_{ab}(g)-\widetilde{\mathcal{R}}_{ab}+\frac{1}{4}g_{ab}\mathcal{R}\;. \tag{39}\] However, this equation is nothing but the metric equation itself which we already added to the system of evolution equations. Hence, projection will only reproduce the Hamiltonian constraint (temporal), the momentum constraint (mixed), and the metric evolution equation (spatial). It seems like there are no novel constraints. Crucially, since we have been replacing \(2^{\text{nd}}\)-order variables, we also have to ensure that their \(1^{\text{st}}\) and \(2^{\text{nd}}\) derivatives match the original metric quantities. The simplest such constraint is nothing but the Bianchi identity expressed in terms of the fiducial variables, i.e., \[0=\nabla_{b}\Delta_{a}^{b}= \underbrace{\left[-\mathcal{E}_{a}-K_{a}^{b}\mathcal{C}_{b}-K\mathcal{C}_{a}-D^{b}\mathcal{A}_{ab}-\frac{1}{3}D_{a}\mathcal{A}+\frac{1}{4}D_{a}\mathcal{R}\right]}_{\text{spatial}}+ n_{a}\underbrace{\left[\mathcal{B}+D^{b}\mathcal{C}_{b}+\frac{1}{4}\hat{\mathcal{R}}+K_{bc}\mathcal{A}^{bc}+\frac{4}{3}K\mathcal{A}\right]}_{\text{temporal}}\;. \tag{40}\] Following Noakes [60], we refer to these 4 constraints as "Bianchi constraints". Similarly, the normal derivative of the Bianchi constraint \[0=n_{c}\nabla^{c}\left(\nabla_{b}\Delta_{a}^{b}\right) \tag{41}\] generates 4 further constraints which we refer to as "Bianchi-dot constraints" but which need not be written explicitly for our purposes.

### Constraint evolution

We recall that, in the ADM formalism, the purely temporal and the mixed projection of the Einstein equations result in the Hamiltonian and the momentum constraint.
Similarly, the temporal and mixed projections of the higher-derivative equations do not propagate physical degrees of freedom. As mentioned before, the temporal projections (cf. Eq. (33)) are fully redundant and merely reproduce the spatial trace projections. The mixed projections correspond to evolution equations for \(\mathcal{C}_{i}\) and \(\mathcal{E}_{i}\). For instance, the mixed projection of \((n^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab})=-\widetilde{V}_{ab}\) (cf. Eq. (29)) with \(n^{a}\gamma_{i}^{b}\) results in \[n^{c}\nabla_{c}\mathcal{C}_{i}=a^{k}\left[\mathcal{A}_{ki}+\frac{2}{3}\gamma_{ki}\mathcal{A}\right]+n_{i}a^{k}\mathcal{C}_{k}-\mathcal{E}_{i}\;, \tag{42}\] which corresponds to an evolution equation for \(\mathcal{C}_{i}\). Similarly, the mixed projection of Eq. (30) corresponds to evolution of \(\mathcal{E}_{i}\). We refrain from showing the full expansion of the latter projection since (see Subsec. (III.7)) we can remove \(\mathcal{C}_{i}\) and \(\mathcal{E}_{i}\) by use of the momentum constraint and the spatial projection of the Bianchi constraint, respectively. Hence, there is no need to explicitly evolve the variables \(\mathcal{C}_{i}\) and \(\mathcal{E}_{i}\). Instead, \(\mathcal{C}_{i}\) and \(\mathcal{E}_{i}\) can be understood as constraint variables and the mixed projections can be interpreted as constraint evolution.

### Summary of evolution equations and constraints

Overall, the (3+1) decomposition is now phrased in terms of 32 free functions of initial data5, i.e., the spatial metric \(\gamma_{ij}\) as well as its \(1^{\text{st}}\)-order variable \(K_{ij}\); the (fiducial) Ricci scalar \(\mathcal{R}\) as well as its \(1^{\text{st}}\)-order variable \(\hat{\mathcal{R}}\); and the (3+1) components of the (fiducial) traceless Ricci tensor \(\mathcal{A}\), \(\mathcal{A}_{ij}\), and \(\mathcal{C}_{i}\) as well as their \(1^{\text{st}}\)-order variables \(\mathcal{B}\), \(\mathcal{B}_{ij}\), and \(\mathcal{E}_{i}\).
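As a bookkeeping aid, this tally of free functions can be spelled out explicitly (a symmetric 3-tensor carries 6 independent components, a traceless symmetric 3-tensor 5, a 3-vector 3, a scalar 1); a minimal sketch:

```python
# independent components per object on a 3-dimensional slice
SYM3, SYM3_TRACELESS, VEC3, SCALAR = 6, 5, 3, 1

metric_sector = 2 * SYM3                                 # gamma_ij and K_ij
trace_sector = 2 * SCALAR                                # R and R-hat
traceless_sector = 2 * (SCALAR + SYM3_TRACELESS + VEC3)  # (A, A_ij, C_i) and (B, B_ij, E_i)

assert metric_sector + trace_sector + traceless_sector == 32
```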
For these 32 free functions of initial data, only 16 correspond to physical initial data: In the metric sector, the Hamiltonian and momentum constraints in Eqs. (23) and (22) as well as 4 coordinate choices in the initial-data surface reduce from 12 to 4 pieces of initial data. Hence, the metric sector still propagates the expected 2 degrees of freedom of a massless spin-2 mode. While the constraints are modified, the constraint structure remains as in GR. In the fiducial sector, the 4 Bianchi (cf. Eq. (40)) and the 4 Bianchi-dot (cf. Eq. (41)) constraints reduce from 20 to 12 pieces of initial data. Hence, the fiducial sector contains 6 propagating degrees of freedom, corresponding to one massive spin-0 and one massive spin-2 mode. While not all of the constraints are algebraic, it is convenient that there are sufficiently many algebraic constraints in order to fully determine and thus remove the initial data for \(\mathcal{C}_{i}\) (by use of the momentum constraint in Eq. (22)) and \(\mathcal{E}_{i}\) (by use of the spatial projection of the Bianchi constraint in Eq. (40)). In practice, we thus only need to evolve \(\gamma_{ij}\) and \(K_{ij}\) (see Eqs. (20) and (21)), \(\mathcal{R}\) and \(\hat{\mathcal{R}}\) (see Eqs. (25) and (26)), as well as \(\mathcal{A}\), \(\mathcal{A}_{ij}\), \(\mathcal{B}\), and \(\mathcal{B}_{ij}\) (see Eqs. (35) to (38)), i.e., only 26 variables.

## IV Method: Numerical evolution

The (3+1) evolution equations derived in the previous section are fully general: we expect them to be compatible with all the state-of-the-art evolution schemes [38; 39; 68; 69] and numerical code frameworks [70; 71; 72; 73]. In the following, we will focus on the vacuum case, specify to the BSSN formulation [38; 39], and numerically evolve the system using the Dendro-GR[73] code. The purpose of our numerical efforts is twofold.
The first purpose is of technical nature: We demonstrate that the evolution system is numerically stable, even in the nonlinear regime. The second purpose is physical: Given numerical stability, we then use the numerical evolution to investigate stability of the Ricci-flat subsector of QG. From here on, we specify to the usual (3+1) coordinate conventions, in which \[\beta^{a} =(0,\,\beta^{i})\;,\] \[n^{a} =(1/\alpha,\,-\beta^{i}/\alpha)\;,\] \[ds^{2} =-\alpha^{2}\,dt^{2}+\gamma_{ij}\left(dx^{i}+\beta^{i}\,dt\right)\left(dx^{j}+\beta^{j}\,dt\right)\;, \tag{43}\] with the lapse function \(\alpha\) and the shift vector \(\beta^{i}\). For the evolution of \(\alpha\) and \(\beta^{i}\) we choose a standard \((1+\log)\) slicing and a \(\Gamma\)-driver, respectively [74]. Regarding the metric dynamics, the BSSN formulation [38; 39] (see also [75] for a subsequent proof of its strong hyperbolicity) proceeds exactly as in GR. For completeness, we summarize the BSSN evolution equations in App. (E). Together with the evolution equations for \(\mathcal{R}\), \(\hat{\mathcal{R}}\), \(\mathcal{A}\), \(\mathcal{A}_{ij}\), \(\mathcal{B}\), and \(\mathcal{B}_{ij}\) (see Subsec. (III.7)), these form the system of partial differential equations (PDEs) that we implement numerically.

### Numerical setup

We implement the evolution equations in the Dendro-GR[73] framework. Dendro-GR combines a parallel octree-refined adaptive mesh with a wavelet adaptive multiresolution. An additional Quadratic-Gravity module is built on top of this framework6. We use a fourth-order finite-difference scheme to evaluate spatial derivatives and a fourth-order Runge-Kutta method to evolve in time. The Courant-Friedrichs-Lewy factor [76], which relates the temporal and spatial discretization, is set to \(0.25\). Therefore, as we increase \(N_{x,y,z}\) (or decrease \(\Delta(x,y,z)\)), the time discretization \(\Delta t\) decreases. Other conditions are varied with respect to the test problems.
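As a quick consistency check of the conventions in Eq. (43), the 4-metric can be assembled from \((\alpha,\beta^{i},\gamma_{ij})\) and the normal vector verified to be unit timelike. A minimal sketch, with arbitrary illustrative values for lapse, shift, and spatial metric:

```python
import numpy as np

def four_metric(alpha, beta, gamma):
    """Assemble g_ab of Eq. (43) from lapse, shift vector, and spatial metric."""
    beta_lo = gamma @ beta                   # beta_i = gamma_ij beta^j
    g = np.empty((4, 4))
    g[0, 0] = -alpha**2 + beta @ beta_lo     # g_tt
    g[0, 1:] = g[1:, 0] = beta_lo            # g_ti
    g[1:, 1:] = gamma                        # g_ij
    return g

alpha, beta = 1.3, np.array([0.2, -0.1, 0.05])
gamma = np.diag([1.1, 0.9, 1.4])
g = four_metric(alpha, beta, gamma)

# the unit normal n^a = (1/alpha, -beta^i/alpha) of Eq. (43) is normalized
n_up = np.concatenate(([1.0 / alpha], -beta / alpha))
assert np.isclose(n_up @ g @ n_up, -1.0)
```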
Footnote 6: See the GitHub repository ([https://github.com/lanl/Dendro-GRCA](https://github.com/lanl/Dendro-GRCA)).

### Numerical stability

To confirm numerical stability, we evolve a single Kerr black hole, perturbed only by numerical noise. We test with puncture initial data [77] for a Kerr black hole.

Figure 1: Constraint plot (\(\ell_{2}\)-norm of the Hamiltonian constraint in Eq. (23)) for Kerr initial data with mass \(M=1\) and spin \(a=0.01\), performing a noise test with different noise amplitudes, ranging from \(10^{-5}\) to \(10^{-10}\), top to bottom. Each curve represents an increase by a factor of ten in the initial amplitude over the curve below.

Note that the additional QG variables are vanishing since Kerr is a Ricci-flat vacuum solution. The Kerr black hole is expressed in Kerr-Schild coordinates such that \[ds^{2}=(\eta_{ab}+2Hk_{a}k_{b})dx^{a}dx^{b} \tag{44}\] where \(\eta_{ab}\) is usual Minkowski spacetime and \[H =\frac{G\,M\,r}{r^{2}+a^{2}(z/r)^{2}}\;, \tag{45}\] \[k_{a}dx^{a} =-dt-\frac{r(xdx+ydy)-a(xdy-ydx)}{r^{2}+a^{2}}-\frac{zdz}{r}\;. \tag{46}\] Here, \(M\) is the black-hole mass and \(a\) is the spin parameter. In 3+1 form, we also have \[\alpha =1/\sqrt{1+2Hk_{0}k_{0}}\;, \tag{47}\] \[\beta_{i} =2Hk_{0}k_{i}\;, \tag{48}\] \[\gamma_{ij} =\delta_{ij}+2Hk_{i}k_{j}\;, \tag{49}\] and the extrinsic curvature can be obtained as \[K_{ij}=\frac{D_{i}\beta_{j}+D_{j}\beta_{i}}{2\alpha}\;. \tag{50}\] The Kerr-Schild form is a horizon-penetrating coordinate system such that there are no coordinate singularities in \(\gamma_{ij}\) and \(K_{ij}\) at the horizon. Kerr-Schild coordinates cover both the outside and the inside of the black hole. Since Kerr spacetime is Ricci flat, \(\mathcal{R}\), \(\hat{\mathcal{R}}\), \(\mathcal{A}\), \(\mathcal{A}_{ij}\), \(\mathcal{B}\), and \(\mathcal{B}_{ij}\) are initialized as zero. We aim to test for numerically (un)stable behavior of the time evolution.
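For reference, the Kerr-Schild quantities of Eqs. (45)-(49) can be assembled pointwise. The following is a minimal sketch (not the initial-data routine of the actual code); the Kerr-Schild radius \(r\) is assumed to solve the standard quartic \(r^{4}-(x^{2}+y^{2}+z^{2}-a^{2})r^{2}-a^{2}z^{2}=0\):

```python
import numpy as np

def kerr_schild_3p1(x, y, z, M=1.0, a=0.0, G=1.0):
    """Lapse, shift (index down), and spatial metric of Eqs. (45)-(49) at a point."""
    rho2 = x*x + y*y + z*z
    # Kerr-Schild radius from r^4 - (rho^2 - a^2) r^2 - a^2 z^2 = 0
    r2 = 0.5 * (rho2 - a*a + np.sqrt((rho2 - a*a)**2 + 4.0*a*a*z*z))
    r = np.sqrt(r2)
    H = G * M * r / (r2 + a*a * (z/r)**2)                    # Eq. (45)
    k = np.array([-1.0,
                  -(r*x + a*y) / (r2 + a*a),
                  -(r*y - a*x) / (r2 + a*a),
                  -z / r])                                   # Eq. (46), k_a
    alpha = 1.0 / np.sqrt(1.0 + 2.0 * H * k[0] * k[0])       # Eq. (47)
    beta_lo = 2.0 * H * k[0] * k[1:]                         # Eq. (48)
    gamma = np.eye(3) + 2.0 * H * np.outer(k[1:], k[1:])     # Eq. (49)
    return alpha, beta_lo, gamma
```

In the Schwarzschild limit \(a=0\) this reduces to \(H=GM/r\) and \(\alpha=1/\sqrt{1+2GM/r}\).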
In anticipation of the presence of a linear instability in part of the parameter space, cf. Subsec. (V.1), we choose mass values for which the instability is not relevant. To perform a numerical stability test, we add random noise to all components of the initial data such that \[\mathbf{u}(t=0)=\mathbf{u}_{0}+A_{\text{noise}}\text{RAND}(x) \tag{51}\] where \(\mathbf{u}=(\gamma_{ij},K_{ij},\mathcal{R},\hat{\mathcal{R}},\mathcal{A},\mathcal{A}_{ij},\mathcal{B},\mathcal{B}_{ij})\) is the state vector for all the evolution variables, \(A_{\text{noise}}\) is a noise amplitude which we vary from \(10^{-10}\) to \(10^{-5}\), and \(\text{RAND}(x)\) is a random function that generates random values between \(-1\) and \(1\). The result is summarized in Fig. (1). We find no indication for numerical instability in our evolution scheme. The same holds for all subsequent simulations. The respective constraint plots are presented in App. (F).

## V Results: Stability of the Ricci-flat Subsector of Quadratic Gravity

In this section, we present our results on the Ricci-flat subsector of Quadratic Gravity. The physical upshot is twofold: first, we recover a well-known linear instability associated to massive spin-2 excitations; second, we demonstrate that - aside from this linear instability - even fully dynamical, Ricci-flat solutions like a binary merger seem to be nonlinearly stable.

### Recovering the linear instability in nonlinear evolution

It is known from the linearized dynamics that a single Schwarzschild black hole can be subject to a linear instability [65; 78], akin to (i.e., linearly equivalent with) the long-wavelength Gregory-Laflamme instability of higher-dimensional black strings [79]. In QG, the onset and the timescale of this instability are determined by the mass \(m_{2}\) of the massive spin-2 degree of freedom and by the gravitational radius \(r_{g}=2\,GM\) of the Schwarzschild black hole [65; 78].
In particular, the instability occurs whenever \[2\,GM\,m_{2}\equiv p<p_{\text{crit}}\approx 0.87\;. \tag{52}\] If this inequality is fulfilled, then there exists a linear mode which grows like \(\sim e^{\text{Im}(\omega)t}\). The exponential growth rate is set by \[\text{Im}(\omega)=\frac{q(p)}{2\,GM}=m_{2}\,\frac{q(p)}{p}\;, \tag{53}\] where \(q(p)\) is a concave function which has been determined numerically (see, e.g., [65, Fig. 2]) and is bounded by \[q(p)<q_{\rm max}=q(p_{\rm max})\approx 0.1\;, \tag{54}\] with \(p_{\rm max}\approx 0.4\). Moreover, \(\lim_{p\to p_{\rm crit}}q(p)=0\) and the numerical results indicate that also \(\lim_{p\to 0}q(p)=0\). Equivalently, the instability timescale in units of the black-hole mass is given by \[\frac{t_{\rm GL}}{GM}\sim\frac{1}{GM\,{\rm Im}(\omega)}=\frac{2}{q(p)}\gtrsim 20\;. \tag{55}\]

Figure 2: To verify the presence of the linear instability, we show the evolution of the spatially averaged Ricci scalar \(\log_{10}[\langle\mathcal{R}\rangle_{\zeta}]\) as a function of evolution time. The spatial average is taken within a cube of edge length \(2\zeta\).

This means that, with regards to the linear instability, there are three different regimes:

* If \(2\,GM\,m_{2}>p_{\rm crit}\), no linear instability is present.
* If \(2\,GM\,m_{2}\ll p_{\rm max}\), the single Schwarzschild black hole exhibits a linear instability but the exponential growth rate is comparatively slow.
* At \(2\,GM\,m_{2}\approx p_{\rm max}\), the exponential growth rate of the linear instability is maximized, growing \(e\)-fold roughly every \(t_{\rm GL}\approx 20\,GM\).

We probe and recover this instability within our numerical evolution. As in Subsec. (IV.2), we initialize a single Schwarzschild black hole.
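These three regimes can be encoded in a small helper for selecting test masses. A sketch only: the cut separating "slow" growth below is illustrative, since only \(p_{\rm crit}\) and \(q_{\rm max}\) are quoted above and the full \(q(p)\) curve is known only numerically:

```python
P_CRIT, P_MAX, Q_MAX = 0.87, 0.4, 0.1      # values quoted in Eqs. (52)-(55)

def regime(G, M, m2):
    """Classify the linear (in)stability regimes via p = 2 G M m_2, Eq. (52)."""
    p = 2.0 * G * M * m2
    if p > P_CRIT:
        return "stable"
    if p < 0.1 * P_MAX:                    # illustrative cut for "slow growth"
        return "unstable (slow)"
    return "unstable"

def min_efold_time(G, M):
    """Shortest e-folding time of the instability, Eq. (55): t_GL >~ 2 GM / q_max."""
    return 2.0 * G * M / Q_MAX

assert regime(1.0, 1.0, 1.0) == "stable"        # p = 2 > p_crit
assert regime(1.0, 1.0, 0.2) == "unstable"      # p = p_max: fastest growth
assert abs(min_efold_time(1.0, 1.0) - 20.0) < 1e-12
```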
We detect the instability by calculating the spatially averaged Ricci scalar \(\langle\mathcal{R}\rangle_{\zeta}\), where the spatial average is taken over a cube with \(x\), \(y\), \(z\in[-\zeta,+\zeta]\) and \(\zeta=200\,GM\) extends across the full computational domain. If the instability is present, a non-vanishing \(\langle\mathcal{R}\rangle_{\zeta}\) is excited by the numerical noise floor in the initial data. We probe the three different regimes identified above, cf. Fig. (2), and find agreement with the expectation from the linear analysis. In particular, at \(2\,GM\,m_{2}\approx p_{\rm max}\), we recover the expected timescale of the linear instability. This also means that we can exclude the presence of further growth modes with a faster timescale. We thus conclude that the unstable monopole mode identified in the linear analysis is indeed the dominant unstable mode. As we demonstrate in Fig. (2), the linear instability breaks Ricci flatness. Nevertheless, we find that the evolution remains numerically stable, cf. Fig. (6) in App. (F). In particular, the constraint violations remain small, even in the presence of a substantial breaking of Ricci flatness. We thus find no indication that well-posed evolution is restricted to the Ricci-flat sector. We also note that exponential growth - as expected from the linear analysis - corresponds to straight lines, given the log-scale in Fig. (2). Hence, our numerical simulations are in agreement with the linear analysis. Prolonged nonlinear evolution will allow us to clarify the nonlinear fate of the instability. We plan to report on this in future work. Moreover, the numerical evolution can straightforwardly be extended to rotating Kerr initial data. This allows us to explore numerically a potential onset of physical instability for spinning black holes, where only partial results are known in the linearized regime [78; 80].
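The diagnostic \(\langle\mathcal{R}\rangle_{\zeta}\) used above is a plain mean over a centered cube. A minimal sketch on a uniform grid with a toy field (grid spacing, cube size, and field values are hypothetical):

```python
import numpy as np

def cube_average(field, coords, zeta):
    """Average `field` over the cube |x|,|y|,|z| <= zeta on a uniform grid."""
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    mask = (np.abs(x) <= zeta) & (np.abs(y) <= zeta) & (np.abs(z) <= zeta)
    return field[mask].mean()

coords = np.linspace(-4.0, 4.0, 33)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
# toy Ricci-scalar field: R = 2 inside the unit cube, zero outside
ricci = np.where(np.maximum(np.abs(x), np.maximum(np.abs(y), np.abs(z))) <= 1.0,
                 2.0, 0.0)
assert np.isclose(cube_average(ricci, coords, 1.0), 2.0)
```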
Having recovered the Gregory-Laflamme-type instability, from here on, we work in the regime in which this linear instability does not occur. In this regime, we expect that a single (Schwarzschild) black hole is stable. In the following two sections, we investigate physical Ricci-flat perturbations. First, in Subsec. (V.2), we perturb a single black hole by a gravitational (Teukolsky) wave. Then, in Subsec. (V.3), we investigate a full binary merger.

### Physical perturbations: Teukolsky waves

In the previous section, we have recovered the well-known linear instability of Schwarzschild black holes in QG. In particular, we have demonstrated how the instability - if present and with sufficiently fast growth rate - is excited by the numerical noise floor. In the present section, we now separate Ricci-flat physical perturbations from the noise floor. We emphasize that while we consider small perturbations, we nevertheless solve the nonlinear evolution. Constructing initial data which corresponds to physical excitations of modes that break Ricci flatness (as, e.g., the mode that excites the linear instability in the previous section) is thus nontrivial since it requires solving the modified nonlinear constraints. In contrast to the previous section, we, therefore, focus on Ricci-flat perturbations only. For the latter, we can construct initial data just like in GR, once more, making use of the fact that every Ricci-flat solution to GR is also a solution to QG.

Figure 3: We verify that Schwarzschild initial data subject to an incident Teukolsky wave remains Ricci flat. Different lines show different magnitudes \(A_{\rm tw}\) of the incident Teukolsky wave (cf. legend). The black hole is placed at \(x=y=z=0\) and without initial velocity. The Teukolsky wave is initialized at \(x=50M\), \(y=z=0\) such that it interacts with the black hole at roughly \(t/M=50\). We plot the spatially averaged Ricci scalar \(\log_{10}[\langle\mathcal{R}\rangle_{\zeta}]\).
There are various ways to construct gravitational-wave initial data, see [81] for Teukolsky waves which correspond to purely quadrupolar gravitational-wave excitations and [82] for the nonlinear construction of Brill waves which correspond to a tower of multipole modes. We specify to Teukolsky waves and adopt Cartesian coordinates in the following. By construction, Teukolsky waves satisfy the nonlinear momentum constraint. We follow the standard procedure [81; 83; 84; 85] to ensure that initial data also satisfies the nonlinear Hamiltonian constraint, i.e., we employ the spatial part of the metric as a conformally related metric in the Hamiltonian constraint and then solve this equation for the conformal factor, i.e., for \(\phi\) in our case (cf. App. (E)). More details of the Teukolsky wave initial data can be found in [84; 85]. We initialize the black hole at the origin of the computational domain and without initial velocity. The Teukolsky wave perturbation is initialized at \(50\,GM\) distance to the black hole, from where it propagates radially in all directions. We evolve the resulting simulation up to \(t=250\,GM\) such that the evolution time encompasses how the Teukolsky wave interacts with the black hole. In order to confirm physical stability of the Ricci-flat subsector, we show the spatially averaged Ricci scalar \(\langle\mathcal{R}\rangle_{\zeta}\) in Fig. (3). Clearly, the Ricci scalar remains vanishing up to numerical noise fluctuations. In particular, the latter noise floor is well separated from the amplitude of physical Teukolsky-wave perturbations \(A_{\rm tw}\) which are up to \(10^{7}\) times larger, cf. the legend in Fig. (3). We conclude that, even with significant Ricci-flat perturbations, QG exhibits a stable subsector which mimics vacuum GR. To probe this conclusion further, we now proceed to the fully nonlinear regime of a binary merger. 
### Stability during nonlinear binary evolution

From the astrophysical perspective, one of the most interesting questions is to study the evolution of binary systems and the resulting gravitational-wave emission. A continuously growing catalog [34; 35; 36] of gravitational-wave events is being detected by the LIGO/Virgo collaboration. At the same time, when binary systems come close to merger, they probe the fully nonlinear regime of the theory and may thus reveal otherwise hidden deviations from GR. Among the possible deviations are the quadratic-curvature corrections investigated in this work, see also [87; 44; 57; 86] for the evolution of binary systems with the inclusion of other (related) deviations from GR. Eventually, one would like to compare the theoretical predictions for the extracted gravitational-wave form in GR and in QG (or beyond-GR more generally). However, the previous section suggests that the vacuum sector of QG is fully equivalent to the vacuum sector of GR. If this holds true in the fully nonlinear regime, QG can mimic any binary black-hole (BBH) system and, in particular, the respective gravitational-wave forms obtained in GR. Indeed, this is what we find (see below). Hence, the relevant constraints on QG will likely come from non-vacuum systems and we plan to address this in future work. As a specific binary example, we use Bowen-York initial data [88; 89], approximating a binary system which has been matched to the GW150914 LIGO/Virgo event [90]. The respective binary parameters are taken from the Einstein-Toolkit library [70; 91]. Since the physical initial data is Ricci-flat, we initialize all the additional QG variables with vanishing values. We then track the lapse function to extract the motion of the respective black holes. The trajectory comparison in Fig. (4) confirms our expectation that the two evolutions are fully equivalent.
Once more, we find evidence that QG exhibits a physically stable Ricci-flat subsector which is fully equivalent to GR. As mentioned above, the obvious next physical question concerns an extension to non-vacuum (and hence non-Ricci-flat) binary systems. In contrast to the present initial data, the fiducial Ricci variables \(\mathcal{R}\), \(\hat{\mathcal{R}}\), \(\mathcal{A}\), \(\mathcal{A}_{ij}\), \(\mathcal{B}\), and \(\mathcal{B}_{ij}\) (see Subsec. (III.7)), corresponding to the massive spin-0 and the massive spin-2 degrees of freedom, will then, presumably, be excited. We thus expect non-vacuum binary systems, e.g., neutron stars, to show appreciable differences to GR and, therefore, expect the respective waveforms to constrain the quadratic-curvature deviations from GR. All of this comes with the question whether new instabilities arise in the non-vacuum sector of QG. We will address the non-vacuum sector in a separate publication.

Figure 4: We show the trajectory comparison between GR and QG. The evolution captures the last 6 orbits and the merger of a binary-black-hole system for which the initial data matches the one inferred from GW150914.

## VI Discussion

We derive a (3+1) evolution system for the nonlinear gravitational dynamics of quadratic-curvature corrections to General Relativity (GR), i.e., for Quadratic Gravity (QG). After verifying numerical stability, we use the nonlinear evolution to verify the nonlinear stability of a Ricci-flat subsector of QG which can mimic GR.

### Key results

The key to well-posed nonlinear evolution is based on Noakes' insight [60] that the Ricci scalar and traceless Ricci tensor can be treated as fiducial variables representing the additional degrees of freedom. We find that it is possible to solve part of the constraint system algebraically such that we reduce the number of redundant evolution variables.
As for GR, in the metric sector, we evolve twelve \(1^{\text{st}}\)-order variables, i.e., the spatial metric \(\gamma_{ij}\) and the extrinsic curvature \(K_{ij}\), which represent the two degrees of freedom associated with the massless spin-2 graviton. In the trace sector, the Ricci scalar \(\mathcal{R}\) (and its \(1^{\text{st}}\)-order variable \(\hat{\mathcal{R}}\)) correspond directly to an additional massive spin-0 degree of freedom. In the traceless sector, the spatial part of the traceless Ricci tensor - which we decompose into a 3-trace and 3-traceless part \(\mathcal{A}\) and \(\mathcal{A}_{ij}\), respectively - and the respective \(1^{\text{st}}\)-order variables \(\mathcal{B}\) and \(\mathcal{B}_{ij}\) altogether propagate another twelve pieces of initial data. Two of these are redundant but we do not find an obvious way to remove this redundancy analytically. Overall, these variables correspond to the 5 degrees of freedom of the massive spin-2 mode. The respective evolution system, summarized in Subsec. (III.7), can be understood as the QG equivalent of the ADM equations for GR, cf. [66]. In fact, the evolution system contains the standard ADM equations, in which the higher-derivative variables appear as fiducial matter sources. Minimally coupled physical matter sources enter the evolution system via the higher-derivative sector. We then treat the metric sector as in the BSSN formalism [38; 39] and verify that the evolution of the resulting system of PDEs is numerically stable. After verifying numerical stability (which we also continue to check throughout all subsequent numerical evolutions, cf. App. (F)), we investigate the physical stability of the Ricci-flat (GR vacuum) subsector of the theory, and find:

* Our nonlinear results recover a well-known linear instability of Schwarzschild black holes [78; 65]. At the linear level, this instability is fully equivalent to the Gregory-Laflamme instability [79].
It occurs only if both the spin-2 mass \(m_{2}\) and the black-hole mass \(M\) are sufficiently small (in comparison to the Planck mass), i.e., if \(\frac{1}{4\pi}\frac{m_{2}}{M_{\text{Pl}}}\frac{M}{M_{\text{Pl}}}<0.87\).
* Aside from this linear instability, we find that both physical metric perturbations (e.g., Teukolsky waves as presented in Subsec. (V.2)) and the fully nonlinear Ricci-flat evolution (e.g., a binary merger as the one presented in Subsec. (V.3)) are physically stable.

The latter result is quite nontrivial and suggests that - at least in parameter ranges for which the Gregory-Laflamme-type instability is either not present or negligibly small - QG exhibits a physically stable Ricci-flat subsector. In particular, this suggests that QG can mimic all of the vacuum physics of GR.

### Outlook

The presence of a linear instability raises the question of its nonlinear endpoint and the relation to cosmic censorship. (See [92; 93; 94] for numerical investigation of the nonlinear fate of the Gregory-Laflamme instability for higher-dimensional black strings.) More generally, the global stability (i.e., the absence of runaway solutions) and the local stability (i.e., the identification of Lyapunov stable vacua) of Quadratic Gravity are yet to be determined, see also [95]. We note that stable motion and ghost-like degrees of freedom may not be mutually exclusive [96; 97]. With the nonlinear evolution system at hand, we are well-equipped to numerically investigate these questions in future work. The apparent nonlinear stability of the Ricci-flat sector raises the question how the theory behaves if minimally coupled matter is added to the system. Are there also stable regimes of the non-vacuum theory? If so, is there a stable sector of the theory which deviates appreciably from General Relativity? As our evolution system already includes matter terms, we plan to also address this question in future work.
The key difficulty will be to construct consistent (as in obeying all of the modified constraint equations) initial data for the non-Ricci-flat sector of Quadratic Gravity. Overall, the numerical stability of the presented evolution system gives access to the fully nonlinear sector of Quadratic Gravity. Moreover, the presented treatment of quadratic-curvature corrections may also inform how to achieve fully stable nonlinear evolution when curvature corrections of yet higher order are present. In particular, any gravitational theory constructed only from Riemann curvature scalars (i.e., scalars formed solely from contractions of the Riemann curvature tensor, in particular, not involving additional covariant derivatives) still maintains fourth-order equations of motion [95; 98]. This suggests that similar techniques to the ones presented here may also apply to a much wider class of gravitational theories, for instance, to the cubic and/or quartic theory [29; 57]. _Acknowledgements._ We thank Pau Figueras and Frans Pretorius for many helpful discussions. The work leading to this publication was supported by the PRIME programme of the German Academic Exchange Service (DAAD) with funds from the German Federal Ministry of Education and Research (BMBF). AH acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) under Grant No 406116891 within the Research Training Group RTG 2522/1. HL is supported by the LANL ASC Program and LDRD grant 20230555ER. This work used resources provided by the LANL Darwin testbed. Darwin is a research testbed/heterogeneous cluster funded by the Computational Systems and Software Environments subprogram of ASC program. LANL is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S.DOE (Contract No. 89233218CNA000001). 
This work is authorized for unlimited release under LA-UR-23-23440. ## Appendix A Gauss-Codazzi-Ricci equations The Gauss-, Codazzi-, and Ricci equations are of purely geometric nature. They determine the foliation and are therefore independent of the dynamics, i.e., valid both in GR and QG. They follow from the (3+1) decomposition of the Riemann tensor, i.e., from \[R_{acbd} ={}^{(3)}\!R_{acbd}+2\,K_{a[b}K_{d]c}+4\,n_{[a}\,K_{c]}^{e}\,K_{e[b}n_{d]}\] \[\quad+4\,D_{[a}K_{c][b}\,n_{d]}+4\,D_{[b}K_{d][a}\,n_{c]}\] \[\quad+4\,\left(\gamma_{[a}^{f}n_{c]}\gamma_{[b}^{g}n_{d]}\right)\,n^{e}\left(\nabla_{e}K_{fg}\right)\] \[\quad+4\,a_{[a}n_{c]}a_{[b}n_{d]}-4\,n_{[b}\left(D_{d]}a_{[a}\right)n_{c]}\,. \tag{24}\] Projecting the decomposition onto the respective temporal and spatial indices (and specifying to the \((3+1)\) coordinate conventions in Eq. (43)) results in the Gauss-, Codazzi-, and Ricci-equation, respectively, i.e., \[\gamma_{a}^{e}\gamma_{b}^{f}\gamma_{c}^{g}\gamma_{d}^{h}R_{efgh} ={}^{(3)}\!R_{acbd}+2\,K_{a[c}K_{d]b}\;, \tag{25}\] \[\gamma_{a}^{e}\gamma_{b}^{f}\gamma_{c}^{g}n^{d}R_{efgd} =-2\,D_{[a}K_{b]c}\;,\] (26) \[\gamma_{b}^{e}\gamma_{d}^{f}n^{a}n^{c}R_{aecf} =\mathcal{L}_{n}K_{bd}+K_{b}^{e}K_{de}+\frac{1}{\alpha}D_{b}D_{d}\alpha\;. \tag{27}\] Other contractions with two normal vectors are either equivalent (by the symmetries of the Riemann tensor) to the above or vanish. All contractions with more than two normal vectors also vanish. ## Appendix B Decomposition of \(\nabla_{a}\nabla_{b}\mathcal{R}\) Here, we detail the split of \(\nabla_{a}\nabla_{b}\mathcal{R}\) into spatial and temporal part.
We start from \[\nabla_{a}\nabla_{b}\mathcal{R} =g_{a}^{c}\nabla_{c}(g_{b}^{d}\nabla_{d}\mathcal{R})\] \[=(\gamma_{a}^{c}-n_{a}n^{c})\nabla_{c}[(\gamma_{b}^{d}-n_{b}n^{d} )\nabla_{d}\mathcal{R}]\] \[=+\underbrace{\gamma_{a}^{c}\nabla_{c}(\gamma_{b}^{d}\nabla_{d} \mathcal{R})}_{(\text{I})}+\underbrace{n_{a}n^{c}\nabla_{c}(n_{b}n^{d}\nabla _{d}\mathcal{R})}_{(\text{II})}\] \[\quad-\underbrace{\gamma_{a}^{c}\nabla_{c}(n_{b}n^{d}\nabla_{d} \mathcal{R})}_{(\text{III})}-\underbrace{n_{a}n^{c}\nabla_{c}(\gamma_{b}^{d} \nabla_{d}\mathcal{R})}_{(\text{IV})}\,,\] and look at each term individually, i.e., \[(\text{I}) =\gamma_{a}^{c}\nabla_{c}(\gamma_{b}^{d}\nabla_{d}\mathcal{R}) \equiv D_{a}D_{b}\mathcal{R}\;,\] \[(\text{II}) =n_{a}n^{c}\nabla_{c}(n_{b}n^{d}\nabla_{d}\mathcal{R})\] \[=-n_{a}n_{b}\left(n^{c}\nabla_{c}\hat{\mathcal{R}}\right)-n_{a} a_{b}\hat{\mathcal{R}}\;,\] \[(\text{III}) =\gamma_{a}^{c}\nabla_{c}(n_{b}n^{d}\nabla_{d}\mathcal{R})=-n_{b }D_{a}\hat{\mathcal{R}}-\gamma_{a}^{c}(\nabla_{c}n_{b})\hat{\mathcal{R}}\] \[=-n_{b}D_{a}\hat{\mathcal{R}}+K_{ab}\hat{\mathcal{R}}\;,\] where we have introduced the acceleration \(a_{b}\equiv n^{c}\nabla_{c}n_{b}\) and inserted the definition of \(\hat{\mathcal{R}}\equiv-n^{a}\nabla_{a}\mathcal{R}\). 
Finally, term (IV) can be rewritten by commuting covariant derivatives, i.e., \[(\text{IV}) =n_{a}n^{c}\nabla_{c}(\gamma_{b}^{d}\nabla_{d}\mathcal{R})\] \[=n_{a}(n^{c}\nabla_{c}\gamma_{b}^{d})(\nabla_{d}\mathcal{R})+n_ {a}\gamma_{b}^{d}\,n^{c}\nabla_{c}\nabla_{d}\mathcal{R}\] \[=n_{a}n^{c}(n^{d}\nabla_{c}n_{b}+n_{b}\nabla_{c}n^{d})(\nabla_{d} \mathcal{R})+n_{a}\gamma_{b}^{d}\,n^{c}\nabla_{d}\nabla_{c}\mathcal{R}\] \[=-n_{a}\,a_{b}\,\hat{\mathcal{R}}-n_{a}n_{b}\,a_{c}D^{c}\mathcal{R} +n_{a}\gamma_{b}^{d}\,\nabla_{d}(n^{c}\nabla_{c}\mathcal{R})\] \[\quad-\gamma_{b}^{d}\,(n_{a}\nabla_{d}n^{c})(\nabla_{c}\mathcal{R})\] \[=-n_{a}\,a_{b}\,\hat{\mathcal{R}}-n_{a}n_{b}\,a_{c}D^{c}\mathcal{R} -n_{a}D_{b}\hat{\mathcal{R}}\] \[\quad+\gamma_{b}^{d}\,(n^{c}\nabla_{d}n_{a})(\nabla_{c}\mathcal{R}) -\gamma_{b}^{d}\,(\nabla_{d}\gamma_{a}^{c})(\nabla_{c}\mathcal{R})\] \[=-n_{a}\,a_{b}\,\hat{\mathcal{R}}-n_{a}n_{b}\,a_{c}D^{c}\mathcal{R} -n_{a}D_{b}\hat{\mathcal{R}}+K_{ab}\hat{\mathcal{R}}\;,\] where we have twice used that \(0=\nabla_{d}g_{a}^{c}=\nabla_{d}(\gamma_{a}^{c}-n_{a}n^{c})=\nabla_{d}\gamma_{a} ^{c}-n_{a}\nabla_{d}n^{c}-n^{c}\nabla_{d}n_{a}\;.\) Note that there are no remaining temporal derivatives in any of these terms. Collecting results, we find \[\nabla_{a}\nabla_{b}\mathcal{R} =D_{a}D_{b}\mathcal{R}+2\,n_{(a}D_{b)}\mathcal{\hat{R}}-2\,K_{ab} \mathcal{\hat{R}}\] \[\quad-n_{a}n_{b}\left(n^{c}\nabla_{c}\mathcal{\hat{R}}\right)+n_{ a}n_{b}\,a_{c}D^{c}\mathcal{R}\;, \tag{11}\] which is also given in the main text. ## Appendix C Decomposition of \(\square\widetilde{\mathcal{R}}_{ab}\) Here, we detail the split of \(\square\widetilde{\mathcal{R}}_{ab}\) into spatial and temporal part. 
We start from \[\square\widetilde{\mathcal{R}}_{ab} =-\underbrace{\gamma^{d}_{\phantom{d}c}\nabla_{d}(n^{c}n^{e}\nabla_{e}\widetilde{\mathcal{R}}_{ab})}_{\text{(I)}}+\underbrace{n_{c}n^{d}\nabla_{d}(n^{c}n^{e}\nabla_{e}\widetilde{\mathcal{R}}_{ab})}_{\text{(II)}}\] \[\quad-\underbrace{n_{c}n^{d}\nabla_{d}(\gamma^{ce}\nabla_{e}\widetilde{\mathcal{R}}_{ab})}_{\text{(III)}}+\underbrace{\gamma^{d}_{\phantom{d}c}\nabla_{d}(\gamma^{ce}\nabla_{e}\widetilde{\mathcal{R}}_{ab})}_{\text{(IV)}}\;,\] project the covariant derivatives onto spatial and temporal part, and look at each term individually. In the first two terms, we can introduce the first-order fiducial variable \(\widetilde{V}_{ab}=-n^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab}\) to find \[\text{(I)} =\gamma^{d}_{\phantom{d}c}\nabla_{d}(n^{c}n^{e}\nabla_{e}\widetilde{\mathcal{R}}_{ab})=-\gamma^{d}_{\phantom{d}c}\nabla_{d}\left(n^{c}\widetilde{V}_{ab}\right)=K\widetilde{V}_{ab}\;,\] \[\text{(II)} =n_{c}n^{d}\nabla_{d}(n^{c}n^{e}\nabla_{e}\widetilde{\mathcal{R}}_{ab})=-n_{c}n^{d}\nabla_{d}(n^{c}\widetilde{V}_{ab})\] \[=n^{d}\nabla_{d}\widetilde{V}_{ab}\;.\] For the third term, we find \[\text{(III)} =n_{c}n^{d}\nabla_{d}(\gamma^{ce}\nabla_{e}\widetilde{\mathcal{R}}_{ab})=n_{c}n^{d}(\nabla_{d}\gamma^{ce})(\nabla_{e}\widetilde{\mathcal{R}}_{ab})\] \[=n_{c}n^{d}\left(\nabla_{d}(n^{c}n^{e})\right)(\nabla_{e}\widetilde{\mathcal{R}}_{ab})=-a^{e}\nabla_{e}\widetilde{\mathcal{R}}_{ab}\] \[=-a^{e}\gamma_{e}^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab}\equiv-a^{c}D_{c}\widetilde{\mathcal{R}}_{ab}\;.\] Here, as well as in the fourth term, \[\text{(IV)}=\gamma^{d}_{\phantom{d}c}\nabla_{d}(\gamma^{ce}\nabla_{e}\widetilde{\mathcal{R}}_{ab})\equiv D_{c}D^{c}\widetilde{\mathcal{R}}_{ab}\;,\] the spatial covariant derivatives should be understood as a shorthand notation and not yet as a purely spatial quantity. This is important since \(\widetilde{\mathcal{R}}_{ab}\) is not yet projected and thus contains temporal components.
With this subtlety in mind, we collect results and find \[\square\widetilde{\mathcal{R}}_{ab}=n^{c}\nabla_{c}\widetilde{V}_{ab}+(D_{c}+a_{c})D^{c}\widetilde{\mathcal{R}}_{ab}-K\widetilde{V}_{ab}\;, \tag{12}\] which is also given in the main text. ## Appendix D Projections of \((n^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab})\) and \((n^{c}\nabla_{c}\widetilde{V}_{ab})\) Here, we project the left-hand side of the covariant traceless evolution equations, i.e., \((n^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab})\) and \((n^{c}\nabla_{c}\widetilde{V}_{ab})\), onto spatial and temporal parts. In the following, we go through the \(\widetilde{\mathcal{R}}_{ab}\)-case, but the \(\widetilde{V}_{ab}\)-case proceeds analogously. For the spatial projection, we derive \[\gamma^{a}_{i}\gamma^{b}_{j}\left(n^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab}\right)=\left(n^{c}\nabla_{c}\gamma^{a}_{i}\gamma^{b}_{j}\widetilde{\mathcal{R}}_{ab}\right)-\left(n^{c}\nabla_{c}\gamma^{a}_{i}\gamma^{b}_{j}\right)\widetilde{\mathcal{R}}_{ab}\] \[=\left(n^{c}\nabla_{c}\mathcal{A}_{ij}\right)+\frac{1}{3}\left(n^{c}\nabla_{c}\gamma_{ij}\mathcal{A}\right)-2\left(n^{c}\nabla_{c}\gamma^{a}_{(i}\right)\gamma^{b}_{j)}\widetilde{\mathcal{R}}_{ab}\] \[=\left(n^{c}\nabla_{c}\mathcal{A}_{ij}\right)+\frac{1}{3}\gamma_{ij}\left(n^{c}\nabla_{c}\mathcal{A}\right)+\frac{1}{3}\mathcal{A}\left(n^{c}\nabla_{c}\gamma_{ij}\right)\] \[\quad-2\left(n^{c}\nabla_{c}n^{a}n_{(i}\right)\gamma^{b}_{j)}\widetilde{\mathcal{R}}_{ab}\] \[=\left(n^{c}\nabla_{c}\mathcal{A}_{ij}\right)+\frac{1}{3}\gamma_{ij}\left(n^{c}\nabla_{c}\mathcal{A}\right)-\frac{2}{3}\mathcal{A}\left(D_{(i}n_{j)}+K_{ij}\right)\] \[\quad-2\left(a^{a}n_{(i}+n^{a}a_{(i}\right)\gamma^{b}_{j)}\widetilde{\mathcal{R}}_{ab}\] \[=\left(n^{c}\nabla_{c}\mathcal{A}_{ij}\right)+\frac{1}{3}\gamma_{ij}\left(n^{c}\nabla_{c}\mathcal{A}\right)-\frac{2}{3}\mathcal{A}\left(D_{(i}n_{j)}+K_{ij}\right)\] \[\quad-2\,a^{c}\left(\mathcal{A}_{c(i}n_{j)}+\frac{1}{3}\gamma_{
c(i}n_{j)}\mathcal{A}\right)-2a_{(i}\mathcal{C}_{j)}\;, \tag{13}\] where we have used the decomposition of \(\widetilde{\mathcal{R}}_{ab}\) (cf. Eq. (14)) in the second and last equality; the evolution equation for the spatial metric (cf. Eq. (20)) in the fourth equality; and throughout, the decomposition of the metric itself (cf. Eq. (7)). We can independently derive the spatial trace as \[\gamma^{ab}\left(n^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab}\right) =\left(n^{c}\nabla_{c}\gamma^{ab}\widetilde{\mathcal{R}}_{ab}\right)-\left(n^{c}\nabla_{c}\gamma^{ab}\right)\widetilde{\mathcal{R}}_{ab}\] \[=\left(n^{c}\nabla_{c}\mathcal{A}\right)-\left(n^{c}\nabla_{c}n^{a}n^{b}\right)\widetilde{\mathcal{R}}_{ab}\] \[=\left(n^{c}\nabla_{c}\mathcal{A}\right)-2\,n^{(a}a^{b)}\widetilde{\mathcal{R}}_{ab}\] \[=\left(n^{c}\nabla_{c}\mathcal{A}\right)-2\,a^{c}\mathcal{C}_{c}\;, \tag{14}\] which serves as a crosscheck and agrees with the trace of Eq. (13). Analogously, we derive the mixed projection, i.e., \[n^{a}\gamma^{b}_{d}\left(n^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab}\right) =\left(n^{c}\nabla_{c}n^{a}\gamma^{b}_{d}\widetilde{\mathcal{R}}_{ab}\right)-\left(n^{c}\nabla_{c}n^{a}\right)\gamma^{b}_{d}\widetilde{\mathcal{R}}_{ab}\] \[\quad-n^{a}\left(n^{c}\nabla_{c}\gamma^{b}_{d}\right)\widetilde{\mathcal{R}}_{ab}\] \[=\left(n^{c}\nabla_{c}\mathcal{C}_{d}\right)-a^{a}\left(\mathcal{A}_{ad}-\frac{1}{3}\gamma_{ad}\mathcal{A}\right)\] \[\quad-n^{a}\left(n^{c}\nabla_{c}n^{b}n_{d}\right)\widetilde{\mathcal{R}}_{ab}\] \[=\left(n^{c}\nabla_{c}\mathcal{C}_{d}\right)-a^{a}\left(\mathcal{A}_{ad}-\frac{2}{3}\gamma_{ad}\mathcal{A}\right)\] \[\quad-n_{d}a^{c}\mathcal{C}_{c}\;, \tag{15}\] and the temporal projection (which - by construction - agrees with the spatial trace, cf. Eq.
(14), such that the 4D trace vanishes), i.e., \[n^{a}n^{b}\left(n^{c}\nabla_{c}\widetilde{\mathcal{R}}_{ab}\right) =\left(n^{c}\nabla_{c}n^{a}n^{b}\widetilde{\mathcal{R}}_{ab}\right) -\left(n^{c}\nabla_{c}n^{a}n^{b}\right)\widetilde{\mathcal{R}}_{ab}\] \[=\left(n^{c}\nabla_{c}\mathcal{A}\right)-2\,a^{c}\mathcal{C}_{c}\;. \tag{10}\] ## Appendix E BSSN equations For completeness, we provide the implemented BSSN equations that we use to evolve the metric sector of the theory. Our conventions agree with [83]. With a split of the conformal metric and the extrinsic curvature into trace and traceless part, i.e., \[\tilde{\gamma}_{ij} =e^{-4\phi}\gamma_{ij}\;,\quad\text{with}\quad\phi=\frac{\ln( \gamma)}{12}\;, \tag{11}\] \[\tilde{A}_{ij} =e^{-4\phi}\left(K_{ij}-\frac{1}{3}\gamma_{ij}K\right)\;, \tag{12}\] the York-variant of the ADM equations (cf. Eqs. (20) and (21)) can be recast into BSSN form, i.e., \[\partial_{t}\phi= -\frac{1}{6}\alpha K+\beta^{i}\partial_{i}\phi+\frac{1}{6} \partial_{i}\beta^{i} \tag{13}\] \[\partial_{t}K= -\gamma^{ij}D_{j}D_{i}\alpha+\alpha(\tilde{A}_{ij}\tilde{A}^{ij}+ \frac{1}{3}K^{2})\] \[\quad+\frac{1}{2M_{\rm Pl}^{2}}(\widetilde{\rho}+\widetilde{S})+ \beta^{i}\partial_{i}K\;,\] (14) \[\partial_{t}\tilde{\gamma}_{ij}= -2\alpha\tilde{A}_{ij}+\beta^{k}\partial_{k}\tilde{\gamma}_{ij}+ \tilde{\gamma}_{ik}\partial_{j}\beta^{k}+\tilde{\gamma}_{kj}\partial_{i}\beta^ {k}\] \[\quad-\frac{2}{3}\tilde{\gamma}_{ij}\partial_{k}\beta^{k}\;,\] (15) \[\partial_{t}\tilde{A}_{ij} =e^{-4\phi}\left[-(D_{i}D_{j}\alpha)^{\rm TF}+\alpha\left({}^{(3 )}\!R_{ij}^{\rm TF}-\frac{1}{M_{\rm Pl}^{2}}\widetilde{S}_{ij}^{\rm TF}\right)\right]\] \[+\beta^{k}\partial_{k}\tilde{A}_{ij}+\tilde{A}_{ik}\partial_{j} \beta^{k}+\tilde{A}_{kj}\partial_{i}\beta^{k}-\frac{2}{3}\tilde{A}_{ij} \partial_{k}\beta^{k}\] \[+\alpha(K\tilde{A}_{ij}-2\tilde{A}_{il}\tilde{A}^{l}{}_{j})\;,\] (16) \[\partial_{t}\tilde{\Gamma}^{i} =2\alpha\left(\tilde{\Gamma}^{i}_{jk}\tilde{A}^{kj}-\frac{2}{3} 
\tilde{\gamma}^{ij}\partial_{j}K-\frac{1}{M_{\rm Pl}^{2}}\tilde{\gamma}^{ij}\widetilde{S}_{j}+6\tilde{A}^{ij}\partial_{j}\phi\right)\] \[-2\tilde{A}^{ij}\partial_{j}\alpha+\beta^{j}\partial_{j}\tilde{\Gamma}^{i}-\tilde{\Gamma}^{j}\partial_{j}\beta^{i}\] \[+\frac{2}{3}\tilde{\Gamma}^{i}\partial_{j}\beta^{j}+\frac{1}{3}\tilde{\gamma}^{li}\partial_{l}\partial_{j}\beta^{j}+\tilde{\gamma}^{lj}\partial_{j}\partial_{l}\beta^{i}\;. \tag{17}\] Eqs. (13) and (14) evolve the trace parts of the metric and extrinsic curvature, while Eqs. (15) and (16) evolve the traceless parts. They are obtained from the York-ADM equations by tracing and subtracting the trace, respectively. Superscripts \({}^{\rm TF}\) denote trace-free parts. Eq. (17) is introduced to remove \(2^{\rm nd}\)-order mixed spatial derivatives in \(R_{ij}^{\rm TF}\) of Eq. (16) by extending the system. The explicit expression for \(R_{ij}^{\rm TF}\) in terms of the conformal connection functions \(\tilde{\Gamma}^{i}\) can be found, e.g., in [39]. Their definition \[\tilde{\Gamma}^{i}\equiv\tilde{\gamma}^{jk}\tilde{\Gamma}^{i}_{jk}=-\partial_{j}\tilde{\gamma}^{ij} \tag{18}\] serves as an additional constraint. Initial data is physical only if it also obeys Eq. (18). Finally - and crucially with regards to numerical stability - the momentum constraint has been used in Eq. (17) to remove spatial derivatives of \(\tilde{A}_{ij}\). ## Appendix F Convergence Tests All simulations were performed on the LANL supercomputer Darwin. Darwin is a very heterogeneous cluster with a wide variety of hardware available, including x86, Power PC and ARM CPU architectures, systems with terabytes of memory, and a variety of GPUs and other accelerators. In particular, we use the x86_64 Intel CPU partition, whose nodes each have dual-socket 2.1 GHz 18-core Intel Broadwell E5-2695v4 processors with 45 MB of cache and 128 GB of RAM. We perform standard convergence tests.
To be specific, the self-convergence ratio is given by \[\mathcal{C}_{\rm self}=\log_{2}\frac{||\mathbf{F}_{h_{i}}-\mathbf{F}_{h_{i+1}}||_{q}}{||\mathbf{F}_{h_{i+1}}-\mathbf{F}_{h_{i+2}}||_{q}}, \tag{19}\] where \(\mathbf{F}\) is the state vector for all evolution variables, and \(||\cdot||_{q}\) is a general expression for different norms. Convergence tests have to be performed with respect to a specific norm which is suitable for the given system of evolution equations. In the following, we denote with \(||\cdot||_{H_{1}}\) the \(H_{1}\) norm. This norm is computed in a discrete approximation that replaces the respective continuum norm [99]. Similarly, the exact convergence ratio, with \(\mathbf{F}_{\rm exact}=0\), can be computed as \[\mathcal{C}_{\rm exact}=\log_{2}\frac{||\mathbf{F}_{h_{i}}-\mathbf{F}_{\rm exact}||_{q}}{||\mathbf{F}_{h_{i+1}}-\mathbf{F}_{\rm exact}||_{q}}=\log_{2}\frac{||\mathbf{F}_{h_{i}}||_{q}}{||\mathbf{F}_{h_{i+1}}||_{q}}\;. \tag{20}\] Given the employed fourth-order scheme, the expected convergence rate is four, in both cases. A more detailed discussion of convergence tests is given in [59]. In Fig. (5), we show the self-convergence test for Schwarzschild spacetime (upper) and the exact convergence test (lower). In both cases, we find the expected fourth-order convergence ratio which matches the implemented fourth-order discretization scheme. For completeness, we show plots of the constraint (i.e., the \(l2\)-norm of the Hamiltonian constraint in Eq. (23)) for all of our numerical simulations: Fig. (6) refers to the linear instability in Subsec. (V.1), see also Fig. (2); Fig. (7) refers to the Teukolsky wave test in Subsec. (V.2), see also Fig. (3); Fig. (8) refers to the binary merger in Subsec. (V.3), see also Fig. (4). Clearly, in all cases, the constraint violations remain small and even decay.
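As a sanity check of Eq. (19), the expected fourth-order ratio can be reproduced in a few lines. The toy `solve` below is our own illustrative stand-in (not part of the actual code): it mimics a fourth-order scheme by adding an \(O(h^{4})\) error to a known solution.

```python
import numpy as np

def solve(h):
    # Stand-in for a 4th-order solver: exact solution plus an O(h^4) error.
    x = np.linspace(0.0, 1.0, 101)
    return np.sin(x) + 2.3 * h ** 4 * np.cos(x)

def self_convergence(h):
    # C_self = log2(||F_h - F_{h/2}|| / ||F_{h/2} - F_{h/4}||), cf. Eq. (19).
    f1, f2, f3 = solve(h), solve(h / 2.0), solve(h / 4.0)
    return np.log2(np.linalg.norm(f1 - f2) / np.linalg.norm(f2 - f3))

print(self_convergence(0.1))  # ~4, matching the order of the scheme
```

Halving the resolution shrinks the error by \(2^{4}\), so consecutive differences shrink by the same factor and the base-2 logarithm of their ratio recovers the order.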
2305.09779
A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree Spectral Bias of Neural Networks
Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards ``simpler'' functions. Various notions of simplicity have been introduced to characterize this behavior. Here, we focus on the case of neural networks with discrete (zero-one), high-dimensional, inputs through the lens of their Fourier (Walsh-Hadamard) transforms, where the notion of simplicity can be captured through the degree of the Fourier coefficients. We empirically show that neural networks have a tendency to learn lower-degree frequencies. We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets. To remedy this we propose a new scalable functional regularization scheme that aids the neural network to learn higher degree frequencies. Our regularizer also helps avoid erroneous identification of low-degree frequencies, which further improves generalization. We extensively evaluate our regularizer on synthetic datasets to gain insights into its behavior. Finally, we show significantly improved generalization on four different datasets compared to standard neural networks and other relevant baselines.
Ali Gorji, Andisheh Amrollahi, Andreas Krause
2023-05-16T20:06:01Z
http://arxiv.org/abs/2305.09779v2
# A Scalable Walsh-Hadamard Regularizer to Overcome the ###### Abstract Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions. Various notions of simplicity have been introduced to characterize this behavior. Here, we focus on the case of neural networks with discrete (zero-one), high-dimensional, inputs through the lens of their Fourier (Walsh-Hadamard) transforms, where the notion of simplicity can be captured through the _degree_ of the Fourier coefficients. We empirically show that neural networks have a tendency to learn lower-degree frequencies. We show how this spectral bias towards low-degree frequencies can in fact _hurt_ the neural network's generalization on real-world datasets. To remedy this we propose a new scalable functional regularization scheme that aids the neural network to learn higher degree frequencies. Our regularizer also helps avoid erroneous identification of low-degree frequencies, which further improves generalization. We extensively evaluate our regularizer on synthetic datasets to gain insights into its behavior. Finally, we show significantly improved generalization on four different datasets compared to standard neural networks and other relevant baselines. ## 1 Introduction Classical work on neural networks shows that deep fully connected neural networks have the capacity to approximate arbitrary functions (Hornik et al., 1989; Cybenko, 1989). However, in practice, neural networks trained through (stochastic) gradient descent have a "simplicity" bias. This notion of simplicity is not agreed upon and works such as (Arpit et al., 2017; Nakkiran et al., 2019; Valle-Perez et al., 2019; Kalimeris et al., 2019) each introduce a different notion of "simplicity". The simplicity bias can also be studied by considering the function the neural net represents (function space view) and modeling it as Gaussian processes (GP)(Rasmussen, 2004). 
Daniely et al. (2016); Lee et al. (2018) show that a wide, randomly initialized, neural network in function space is a sample from a GP with a kernel called the "Conjugate Kernel" (Daniely, 2017). Moreover, the evolution of gradient descent on a randomly initialized neural network can be described through the "Neural Tangent Kernel" Jacot et al. (2018); Lee et al. (2019). These works open up the road for analyzing the simplicity bias of neural nets in terms of a _spectral_ bias in Fourier space. Rahaman et al. (2019) show empirically that neural networks tend to learn sinusoids of lower frequencies earlier on in the training phase compared to those of higher frequencies. Through the GP perspective introduced by Jacot et al. (2018); Lee et al. (2019), among others, Ronen et al. (2019); Basri et al. (2020) were able to prove these empirical findings. These results focus on _continuous_ domains and mainly emphasize the case where the input and output are both _one-dimensional_. Here, we focus on _discrete_ domains where the input is a _high-dimensional_ zero-one vector and we analyze the function learned by the neural network in terms of the amount of interactions among its input features in a quantitative manner. Our work is complementary to the majority of the aforementioned work that has been done on the spectral bias of neural networks in the setting of _continuous_, _one-dimensional_ inputs (Ronen et al., 2019; Basri et al., 2020; Rahaman et al., 2019). Yang and Salman (2020), Valle-Perez et al. (2019) are the first to provide spectral bias results for the discrete, higher dimensional, setting (our setting). By viewing a fully connected neural network as a function that maps zero-one vectors to real values, one can expand this function in terms of the Fourier -a.k.a Walsh-Hadamard - basis functions. The Walsh-Hadamard basis functions have a natural ordering in terms of their complexity called their _degree_. 
The degree specifies how many features each basis function is dependent upon. For example, the zero-degree basis function is the constant function and the degree-one basis functions are functions that depend on exactly one feature. Through analysis of the NTK gram matrix on the Boolean cube, Yang and Salman (2020) theoretically show that, roughly speaking, neural networks learn the lower degree basis functions earlier in training. This tendency to prioritize simpler functions in neural networks has been suggested as a cardinal reason for their remarkable generalization ability despite their over-parameterized nature (Neyshabur et al., 2017; Arpit et al., 2017; Kalimeris et al., 2019; Poggio et al., 2018). However, much less attention has been given to the case where the simplicity bias can _hurt_ generalization (Tancik et al., 2020; Shah et al., 2020). Tancik et al. (2020) show how transforming the features with random Fourier features embedding helps the neural network overcome its spectral bias and achieve better performance in a variety of tasks. They were able to explain, in a unified way, many empirical findings in computer vision research such as sinusoidal positional embeddings through the lens of overcoming the spectral bias. In the same spirit as these works, we show that the spectral bias towards low-degree functions can hurt generalization and how to remedy this through our proposed regularizer. In more recent lines of work, regularization schemes have been proposed to directly impose priors on the function the neural network represents (Benjamin et al., 2019; Sun et al., 2019; Wang et al., 2019). This is in contrast to other methods such as dropout, batch normalization, or other methods that regularize the weight space. In this work, we also regularize neural networks in function space by imposing sparsity constraints on their Walsh-Hadamard transform. Closest to ours is the work of Aghazadeh et al. (2021). 
Inspired by studies showing that biological landscapes are sparse and contain high-degree frequencies (Sailer and Harms, 2017; Yang et al., 2019; Brookes et al., 2022; Ballal et al., 2020; Poelwijk et al., 2019), they propose a functional regularizer to enforce sparsity in the Fourier domain and report improvements in generalization scores. **Our contributions:** * We analyze the spectral behavior of a simple MLP during training through extensive experiments. We show that not only is the standard (unregularized) network unable to learn (more complex) high-degree frequencies, but it also starts learning erroneous low-degree frequencies and hence overfits on this part of the spectrum. * We introduce a new functional regularization scheme - HashWH (Hashed Walsh Hadamard) - to remedy the aforementioned phenomenon. The regularizer acts as a "sparsifier" on the Fourier (Walsh-Hadamard) basis. In the most extreme cases, it reduces to simply imposing an \(L_{1}\)-norm on the Fourier transform of the neural network. Since computing the exact Fourier transform of the neural net is intractable, our regularizer hashes the Fourier coefficients to buckets and imposes an \(L_{1}\)-norm on the buckets. By controlling the number of hash buckets, it offers a smooth trade-off between computational complexity and the quality of regularization. * We empirically show that HashWH aids the neural network in avoiding erroneous low-degree frequencies and also learning relevant high-degree frequencies. The regularizer guides the training procedure to allocate more energy to the high-frequency part of the spectrum when needed and allocate less energy to the lower frequencies when they are not present in the dataset. * We show on real-world datasets that, contrary to the popular belief about simplicity biases for neural networks, fitting a low-degree function does not imply better generalization. Rather, what is more important is keeping the _higher amplitude_ coefficients regardless of their degree.
We use our regularizer on four real-world datasets and provide state of the art results in terms of \(R^{2}\) score compared to standard neural networks and other baseline ML models, especially for the low-data regime. ## 2 Background In this section, we first review Walsh Hadamard transforms, and notions of degree and sparsity in the Fourier (Walsh-Hadamard) domain (O'Donnell, 2014). Next, we review the notion of simplicity biases in neural networks and discuss why they are spectrally biased toward low-degree functions. ### Walsh Hadamard transforms Let \(g:\{0,1\}^{n}\rightarrow\mathbb{R}\) be a function mapping Boolean zero-one vectors to the real numbers, also known as a "pseudo-boolean" function. The family of \(2^{n}\) functions \(\{\Psi_{f}:\{0,1\}^{n}\rightarrow\mathbb{R}\,|f\in\{0,1\}^{n}\}\) defined below consists of the Fourier basis functions. This family forms a basis over the vector space of all pseudo-boolean functions: \[\Psi_{f}(x)=\frac{1}{\sqrt{2^{n}}}(-1)^{\langle f,x\rangle},f,x\in\{0,1\}^{n}\] where \(\langle f,x\rangle=\sum_{i}f_{i}x_{i}\). Here, \(f\in\{0,1\}^{n}\) is called the _frequency_ of the basis function. For any frequency \(f\in\{0,1\}^{n}\) we denote its _degree_ by \(\text{deg}(f)\) which is defined as the number of non-zero elements. For example, \(f_{1}=[0,0,0,0,0]\) and \(f_{2}=[0,0,1,0,1]\) have degrees \(\text{deg}(f_{1})=0\) and \(\text{deg}(f_{2})=2\), respectively. One can think of the degree as a measure of the complexity of basis functions. For example, \(\Psi_{0}(x)\) is constant, and \(\Psi_{e_{i}}(x)\) where \(e_{i}\) is a standard basis vector (\(\text{deg}(e_{i})=1\)) only depends on feature \(i\) of the input. It is equal to \(+1\) when feature \(i\) is zero and equal to \(-1\) when feature \(i\) is one. More generally, a degree \(d\) basis function depends on exactly \(d\) input features. 
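As a small numerical illustration of these definitions (the variable names below are ours, purely for illustration), one can evaluate two basis functions on the whole cube and check their orthonormality and degrees:

```python
import itertools

import numpy as np

n = 4
X = np.array(list(itertools.product([0, 1], repeat=n)))  # all 2^n inputs

def psi(f, x):
    # Walsh-Hadamard basis function: Psi_f(x) = 2^{-n/2} * (-1)^{<f, x>}
    return (-1.0) ** int(np.dot(f, x)) / np.sqrt(2 ** n)

f1 = np.array([0, 0, 1, 0])  # degree-1 frequency: depends only on feature 3
f2 = np.array([1, 0, 1, 1])  # degree-3 frequency
u1 = np.array([psi(f1, x) for x in X])
u2 = np.array([psi(f2, x) for x in X])

# Orthonormality of the basis: <u_f, u_f> = 1 and <u_f, u_f'> = 0 for f != f'.
print(u1 @ u1, u1 @ u2)              # 1.0 0.0
print(int(f1.sum()), int(f2.sum()))  # degrees: 1 3
```

The cross inner product vanishes because \(\sum_{x}(-1)^{\langle f_{1}\oplus f_{2},x\rangle}=0\) whenever \(f_{1}\neq f_{2}\).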
Since the Fourier basis functions form a basis for the vector space of all pseudo-boolean functions, any function \(g:\{0,1\}^{n}\rightarrow\mathbb{R}\) can be written as a unique linear combination of these basis functions: \[g(x)=\frac{1}{\sqrt{2^{n}}}\sum_{f\in\{0,1\}^{n}}\widehat{g}(f)(-1)^{\langle f, x\rangle}\] The (unique) coefficients \(\widehat{g}(f)\) are called the "Fourier coefficients" or "Fourier amplitudes" and are computed as \(\widehat{g}(f)=\frac{1}{\sqrt{2^{n}}}\sum\limits_{x\in\{0,1\}^{n}}g(x)(-1)^{ \langle f,x\rangle}\). The _Fourier spectrum_ of \(g\) is the vector consisting of all of its \(2^{n}\) Fourier coefficients, which we denote by the bold symbol \(\widehat{\mathbf{g}}\in\mathbb{R}^{2^{n}}\). Assume \(\mathbf{X}\in\{0,1\}^{2^{n}\times n}\) to be the matrix of an enumeration over all possible \(n\)-dimensional binary sequences (\(\{0,1\}^{n}\)), and \(\mathbf{g}(\mathbf{X})\in\mathbb{R}^{2^{n}}\) to be the vector of \(g\) evaluated on the rows of \(\mathbf{X}\). We can compute the Fourier spectrum using Walsh-Hadamard transform as \(\widehat{\mathbf{g}}=\frac{1}{\sqrt{2^{n}}}\mathbf{H}_{n}\mathbf{g}(\mathbf{X})\), where \(\mathbf{H}_{n}\in\{\pm 1\}^{2^{n}\times 2^{n}}\) is the orthogonal Hadamard matrix (see Appendix A). Lastly, we define the _support_ of \(g\) as the set of frequencies with non-zero Fourier amplitudes \(\text{supp}(g):=\{f\in\{0,1\}^{n}|\widehat{g}(f)\neq 0\}\). The function \(g\) is called \(k\)-_sparse_ if \(|\text{supp}(g)|\leq k\). The function \(g\) is called _of degree_\(d\) if all frequencies in its support have degree at most \(d\). ### Spectral Bias Theory The function that a ReLU neural network represents at initialization can be seen as a sample from a GP \(N(0,K)\) in the infinite width limit (Daniely et al., 2016; Lee et al., 2018) (randomness is over the initialization of the weights and biases). 
The kernel \(K\) of the GP is called the "Conjugate Kernel" (Daniely et al., 2016) or the "nn-GP kernel" (Lee et al., 2018). Let the kernel Gram matrix \(\mathcal{K}\) be formed by evaluating the kernel on the Boolean cube, i.e., \(\{0,1\}^{n}\), and let \(\mathcal{K}\) have the following spectral decomposition: \(\mathcal{K}=\sum\limits_{i=1}^{2^{n}}\lambda_{i}u_{i}u_{i}^{\top}\), where we assume that the eigenvalues \(\lambda_{1}\geq\cdots\geq\lambda_{2^{n}}\) are in decreasing order. Each sample of the GP can be obtained as \(\sum\limits_{i=1}^{2^{n}}\sqrt{\lambda_{i}}\,\mathbf{w_{i}}u_{i},\mathbf{w_{i}}\sim\mathcal{N}(0,1)\). Say that \(\lambda_{1}\gg\sum_{i\geq 2}\lambda_{i}\). Then a sample from the GP will, roughly speaking, look very much like \(u_{1}\). Let \(u_{f},f\in\{0,1\}^{n}\) be obtained by evaluating the Fourier basis function \(\Psi_{f}\) at the \(2^{n}\) possible inputs on \(\{0,1\}^{n}\). Yang and Salman (2020) show that \(u_{f}\) is an eigenvector for \(\mathcal{K}\). Moreover, they show (weak) spectral bias results in terms of the degree of \(f\). Namely, the eigenvalues corresponding to higher degrees have smaller values (Footnote 1). The result is _weak_ as they do not provide a _rate_ at which the eigenvalues decrease with increasing degrees. Their results show that neural networks are similar to low-degree functions at initialization. Footnote 1: To be more precise, they show that the eigenvalues corresponding to even and odd degree frequencies form decreasing sequences. That is, even and odd degrees are considered separately. Other works show that in infinite-width neural networks weights after training via (stochastic) gradient descent do not end up too far from the initialization (Chizat et al., 2019; Jacot et al., 2018; Du et al., 2019; Allen-Zhu et al., 2019, 2019), referred to as "lazy training" by Chizat et al. (2019). Lee et al.
(2018, 2019) show that training the last layer of a randomly initialized neural network via full batch gradient descent for an infinite amount of time corresponds to GP posterior inference with the kernel \(K\). Jacot et al. (2018); Lee et al. (2019) proved that when training _all_ the layers of a neural network (not just the final layer), the evolution can be described by a kernel called the "Neural Tangent Kernel" and the trained network yields the mean prediction of GP \(N(0,K_{NTK})\)(Yang and Salman, 2020) after an infinite amount of time. Yang and Salman (2020) again show that \(u_{f}\) are eigenvectors and weak spectral bias holds. Furthermore, Yang and Salman (2020) provides empirical results for the generalization of neural nets of different depths on datasets arising from \(k=1\)-sparse functions of varying degrees. ## 3 Low-degree Spectral Bias In this section, we conduct experiments on synthetically generated datasets to show neural networks' spectral bias and their preference toward learning lower-degree functions over higher-degree ones. Firstly, we show that the neural network is not able to pick up the high-degree frequency components. Secondly, it can learn erroneous lower-degree frequency components. To address these issues, in Section 4, we introduce our regularization scheme called HashWH (Hashed Walsh Hadamard) and demonstrate how it can remedy both problems. ### Fourier Spectrum Evolution We analyze the evolution of the function learned by neural networks during training. We train a neural network on a dataset arising from a synthetically generated sparse function with a low-dimensional input domain. Since the input is low-dimensional it allows us to calculate the Fourier spectrum of the network (exactly) at the end of each epoch. 
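Concretely, for input dimensions as small as \(n=10\), computing the exact spectrum amounts to evaluating the model on all \(2^{n}\) inputs and applying the Walsh-Hadamard transform from Section 2.1. A sketch (the callable `net` and its coefficients are our own illustrative stand-ins, not the trained network from the experiments), using the Sylvester construction of \(\mathbf{H}_{n}\):

```python
import itertools

import numpy as np

n = 6  # small enough that all 2^n inputs can be enumerated
X = np.array(list(itertools.product([0, 1], repeat=n)))

def net(x):
    # Toy stand-in for a trained model on {0,1}^n; any callable works here.
    return 3.0 * (-1.0) ** (x[0] ^ x[3]) - 1.5 * (-1.0) ** x[2]

g = np.array([net(x) for x in X])  # evaluate the model on the whole cube

# Sylvester construction of the 2^n x 2^n Hadamard matrix H_n.
H = np.array([[1.0]])
for _ in range(n):
    H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)

spectrum = H @ g / np.sqrt(2 ** n)  # hat{g} = 2^{-n/2} H_n g(X)

support = [tuple(int(b) for b in X[i])
           for i in np.flatnonzero(np.abs(spectrum) > 1e-9)]
print(support)  # [(0, 0, 1, 0, 0, 0), (1, 0, 0, 1, 0, 0)]
```

The recovered support contains exactly the degree-1 and degree-2 frequencies that `net` was built from; tracking `spectrum` after every epoch of training (with `net` replaced by the neural network) gives the per-epoch spectra analyzed below.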
**Setup.** Let \(g^{*}:\{0,1\}^{10}\rightarrow\mathbb{R}\) be a synthetic function with five frequencies in its support, with degrees 1 to 5 (\(\text{supp}(g^{*})=\{f_{1},f_{2},f_{3},f_{4},f_{5}\},\text{deg}(f_{i})=i\)), all having equal Fourier amplitudes of \(\widehat{g}^{*}(f_{i})=1\). Each \(f_{i}\) is sampled uniformly at random from all possible frequencies of degree \(i\). The training set is formed by drawing uniform samples from the Boolean cube \(x\sim\mathcal{U}_{\{0,1\}^{10}}\) and evaluating \(g^{*}(x)\). We draw five such target functions \(g^{*}\) (with random support frequencies). For each draw of the target function, we create five different datasets, all with 200 training points sampled uniformly from the input domain but with different random seeds. We then train a standard five-layer fully connected neural network using five different random seeds for the randomness in the training procedure (such as weight initialization and SGD). We aggregate the results over the \(125\) experiments by averaging. We repeat the same setting with three other training set sizes. Results with training set sizes other than 200 and further setup details are reported in Appendices F.1 and D, respectively. **Results.** We first inspect the evolution of the learned Fourier spectrum over different epochs, limited to the target support (\(\text{supp}(g^{*})\)). Figure 1(a) shows the learned amplitudes for frequencies in the target support at each training epoch. Aligned with the literature on simplicity bias [22, 23], we observe that neural networks learn the low-degree frequencies earlier in the epochs. Moreover, we can see in the left-most plot in Figure 1(a) that despite eventually learning low-degree frequencies, the standard network is unable to learn high-degree frequencies. Next, we expand the investigation to the whole Fourier spectrum instead of just focusing on the support frequencies.
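A target of this kind can be drawn in a few lines. The sketch below assumes the \((-1)^{\langle f,x\rangle}\) parity basis; the function and variable names are illustrative and not from the paper's code:

```python
import random

def parity(f, x):  # Fourier character (-1)^{<f,x>} on the Boolean cube
    return -1 if sum(fi & xi for fi, xi in zip(f, x)) % 2 else 1

def sample_target(n, degrees=(1, 2, 3, 4, 5), seed=0):
    rng = random.Random(seed)
    support = []
    for d in degrees:
        ones = rng.sample(range(n), d)  # a uniformly random frequency of degree d
        support.append(tuple(1 if i in ones else 0 for i in range(n)))
    # g*(x) = sum of the five characters, all amplitudes equal to 1
    return support, lambda x: sum(parity(f, x) for f in support)

support, g = sample_target(10)
print(sorted(sum(f) for f in support))  # degrees [1, 2, 3, 4, 5]
print(g((0,) * 10))                     # every character equals 1 at x = 0 -> 5
```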
The first row of Figure 1(b) shows the evolution of the Fourier spectrum during training and compares it to the spectrum of the target function on the bottom row. We average the spectrum linked to one of the five target synthetic functions (over the randomness of the dataset sampling and training procedure) and report the other four in Appendix F.1. We observe that in addition to the network not being able to learn the high-degree frequencies, the standard network is prone to learning incorrect low-degree frequencies as well.

## 4 Overcoming the Spectral Bias via Regularization

Now, we introduce our regularization scheme HashWH (Hashed Walsh-Hadamard). Our regularizer is essentially a "sparsifier" in the Fourier domain. That is, it guides the neural network to have a sparse Fourier spectrum. We empirically show later how sparsifying the Fourier spectrum can both stop the network from learning erroneous low-degree frequencies and aid it in learning the higher-degree ones, hence remedying the two aforementioned problems. Assume \(\mathcal{L}_{net}\) is the loss function that a standard neural network minimizes, e.g., the MSE loss in the above case. We modify it by adding a regularization term \(\lambda\mathcal{L}_{sparsity}\).

Figure 1: Evolution of the Fourier spectrum during training. Standard is the unregularized neural network. FullWH imposes \(L_{1}\)-norm regularization on the exact Fourier spectrum and is intractable. EN-S alternates between computing a sparse Fourier approximation (computationally very expensive) and regularization. HashWH (ours) imposes \(L_{1}\) regularization on the hashed spectrum. Figure (a) is limited to the target support. The standard neural network is unable to learn higher degree frequencies. Our regularizer fixes this. Figure (b) is on the whole spectrum. The standard neural network picks up erroneous low-degree frequencies while not being able to learn the higher-degree frequencies. Our regularizer fixes both problems.
Hence the total loss is given by: \(\mathcal{L}=\mathcal{L}_{net}+\lambda\mathcal{L}_{sparsity}\). The most intuitive choice is \(\mathcal{L}_{sparsity}=\|\widehat{\mathbf{g_{N}}}\|_{0}\), where \(\widehat{\mathbf{g_{N}}}\) is the Fourier spectrum of the neural network function \(g_{N}:\{0,1\}^{n}\rightarrow\mathbb{R}\). Since the \(L_{0}\)-penalty's derivative is zero almost everywhere, one can use its tightest convex relaxation, the \(L_{1}\)-norm, which is also sparsity-inducing, as a surrogate loss. Aghazadeh et al. (2021) use this idea and name it Epistatic-Net or "EN" regularization: \(\mathcal{L}_{EN}:=\mathcal{L}_{net}+\lambda\|\widehat{\mathbf{g_{N}}}\|_{1}\). In this work, we call this regularization FullWH (Full Walsh Hadamard transform). FullWH requires the evaluation of the network output on all \(2^{n}\) possible inputs at each iteration of back-prop. Therefore, the computational complexity grows _exponentially_ with the number of dimensions \(n\), making it computationally intractable for \(n>20\) in all settings of practical importance. Aghazadeh et al. (2021) also suggest a more scalable version of FullWH, called "EN-S", which, roughly speaking, alternates between computing a sparse _approximate_ Fourier transform of the network at the end of each epoch and doing normal back-prop, as opposed to computing the exact Fourier spectrum when back-propagating the gradients. In our experiments, we show that EN-S can be computationally expensive because the sparse Fourier approximation primitive can be time-consuming. For a comprehensive comparison see Appendix B.3. Later, we show that, empirically, it is also less effective in overcoming the spectral bias as measured by the achievable final generalization error.

### HashWH

We avoid the exponential burden of computing the exact Fourier spectrum of the network by employing a hashing technique to approximate the regularization term \(\lambda\|\widehat{\mathbf{g_{N}}}\|_{1}\).
Let \(g:\{0,1\}^{n}\rightarrow\mathbb{R}\) be a pseudo-boolean function. We define the lower dimensional function \(u_{\sigma}:\{0,1\}^{b}\rightarrow\mathbb{R}\), where \(b\ll n\), by sub-sampling \(g\) on its domain: \(u_{\sigma}(\tilde{x})\triangleq\sqrt{\frac{2^{n}}{2^{b}}}\;g(\sigma\tilde{x}),\;\tilde{x}\in\{0,1\}^{b}\), where \(\sigma\in\{0,1\}^{n\times b}\) is some matrix which we call the _hashing matrix_. The matrix-vector multiplication \(\sigma\tilde{x}\) is taken modulo 2. \(u_{\sigma}\) is defined by sub-sampling \(g\) on all the points lying on the (at most) \(b\)-dimensional subspace spanned by the columns of the hashing matrix \(\sigma\). The special property of sub-sampling the input space from this subspace lies in the arising Fourier transform of \(u_{\sigma}\), which we explain next. The Fourier transform of \(u_{\sigma}\) can be derived as (see Appendix B.1): \[\widehat{u}_{\sigma}(\tilde{f})=\sum_{f\in\{0,1\}^{n}:\;\sigma^{\top}f=\tilde{f}}\widehat{g}(f),\;\tilde{f}\in\{0,1\}^{b} \tag{1}\] One can view \(\widehat{u}_{\sigma}(\tilde{f})\) as a "bucket" containing the sum of all Fourier coefficients \(\widehat{g}(f)\) that are "hashed" (mapped) into it by the linear hashing function \(h(f)=\sigma^{\top}f\). There are \(2^{b}\) such buckets, and each bucket contains the frequencies lying in the kernel (null space) of the hashing map plus some shift. In practice, we let \(\sigma\sim\mathcal{U}_{\{0,1\}^{n\times b}}\) be a uniformly sampled hash matrix that is re-sampled after each iteration of back-prop. Let \(\mathbf{X}_{b}\in\{0,1\}^{2^{b}\times b}\) be a matrix containing as rows the enumeration over all points on the Boolean cube \(\{0,1\}^{b}\).
Our regularization term approximates (4) and is given by: \[\mathcal{L}_{\text{{\sc HashWH}}}\triangleq\mathcal{L}_{net}+\lambda\|\mathbf{H}_{b}\mathbf{g_{N}}(\mathbf{X}_{b}\sigma^{T})\|_{1}=\mathcal{L}_{net}+\lambda\|\widehat{\mathbf{u_{\sigma}}}\|_{1}\] That is, instead of imposing the \(L_{1}\)-norm directly on the whole spectrum, this procedure imposes the norm on the "bucketed" (or partitioned) spectrum, where each bucket (partition) contains the sum of the coefficients mapped to it. The larger \(b\) is, the more partitions we have and the finer-grained the sparsity-inducing procedure is. Therefore, the quality of the approximation can be controlled by the choice of \(b\). A larger \(b\) allows for finer-grained regularization but, of course, comes at a higher computational cost, because a Walsh-Hadamard transform is computed for a higher dimensional sub-sampled function \(u_{\sigma}\). Note that \(b=n\) corresponds to hashing to \(2^{n}\) buckets. As long as the hashing matrix is invertible, this is precisely the case of FullWH regularization. A problem with the above procedure arises when, for example, two "important" frequencies \(f_{1}\) and \(f_{2}\) are hashed into the same bucket, i.e., \(\sigma^{\top}f_{1}=\sigma^{\top}f_{2}\), an event which we call a "collision". This can be problematic when the absolute values \(|\widehat{g}(f_{1})|\) and \(|\widehat{g}(f_{2})|\) are large (hence they are important frequencies) but their sum cancels out due to differing signs. In this case, the hashing procedure can zero out the sum of these coefficients. We can reduce the probability of a collision by increasing the number of buckets, i.e., increasing \(b\) (Alon et al., 1999). In Appendix B.2 we show that the expected number of collisions \(C\) is given by \(\mathbb{E}[C]=\frac{(k-1)^{2}}{2^{b}}\), which decreases linearly with the number of buckets \(2^{b}\).
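The bucketing identity (1) underlying this penalty is easy to verify numerically. The sketch below uses the unnormalized \((-1)^{\langle f,x\rangle}\) character basis (so the \(\sqrt{2^{n}/2^{b}}\) factor is dropped) and an illustrative random sparse spectrum:

```python
import itertools, random

def wht(vals):  # unnormalized fast Walsh-Hadamard transform, length 2^b
    a = list(vals); h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

n, b = 6, 3
rng = random.Random(1)
# an illustrative sparse spectrum: frequency tuple -> Fourier coefficient
g_hat = {tuple(rng.randint(0, 1) for _ in range(n)): rng.uniform(-1, 1)
         for _ in range(4)}

def g(x):  # synthesize g from its spectrum
    return sum(c * (-1) ** (sum(fi & xi for fi, xi in zip(f, x)) % 2)
               for f, c in g_hat.items())

sigma = [[rng.randint(0, 1) for _ in range(b)] for _ in range(n)]  # n-by-b

def hash_freq(f):  # sigma^T f (mod 2)
    return tuple(sum(sigma[i][j] * f[i] for i in range(n)) % 2 for j in range(b))

# u_sigma: g restricted to the subspace {sigma x~ (mod 2) : x~ in {0,1}^b}
u = [g(tuple(sum(sigma[i][j] * xt[j] for j in range(b)) % 2 for i in range(n)))
     for xt in itertools.product([0, 1], repeat=b)]
u_hat = [v / 2 ** b for v in wht(u)]

# each bucket u_hat(f~) equals the sum of g_hat(f) over f with sigma^T f = f~
for idx, ft in enumerate(itertools.product([0, 1], repeat=b)):
    assert abs(u_hat[idx] - sum(c for f, c in g_hat.items()
                                if hash_freq(f) == ft)) < 1e-9
print("bucketing identity (1) verified")
```

Re-sampling `sigma` and re-checking illustrates why collisions become rare as \(b\) grows: each bucket then covers fewer frequencies.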
Furthermore, we can upper bound the probability \(p\) that a given important frequency \(f_{i}\) collides with any other of the \(k-1\) important frequencies in one round of hashing. Since we independently sample a new hashing matrix \(\sigma\) at each round of back-prop, the number of collisions of a given frequency over the different rounds has a binomial distribution. In Appendix B.2 we show that picking \(b\geq\log_{2}(\frac{k-1}{\epsilon}),\epsilon>0\), guarantees that a collision of a given frequency happens in approximately an \(\epsilon\)-fraction of the \(T\) rounds, and not much more. **Fourier spectrum evolution of different regularization methods.** We analyze the effect of regularizing the network with various Fourier sparsity regularizers in the setting of the previous section. Our regularizers of interest are FullWH, EN-S with \(m=5\) (\(2^{m}\) is the number of buckets their sparse Fourier approximation algorithm hashes into), and HashWH with \(b\in\{5,7,8\}\). Returning to Figure 1(a), we see that despite the inability of the standard neural network to pick up the _high-degree_ frequencies, all sparsity-inducing regularization methods display the capacity for learning them. FullWH is capable of perfectly learning the entire target support. It can also be seen that increasing the size of the hashing matrix in HashWH (ours) boosts the learning of high-degree frequencies. Furthermore, Figure 1(b) shows that in addition to the better performance of the sparsity-inducing methods in learning the target support, they are also better at filtering out non-relevant _low-degree_ frequencies. We define a notion of approximation error which is basically the normalized energy of the error in the learned Fourier spectrum on an arbitrary subset of frequencies. **Metric 4.1** (Spectral Approximation Error (SAE)).: _Let \(g_{N}:\{0,1\}^{n}\rightarrow\mathbb{R}\) be an approximation of the target function \(g^{*}:\{0,1\}^{n}\rightarrow\mathbb{R}\).
Consider a subset of frequencies \(S\subseteq\{0,1\}^{n}\), and let \(\widehat{\mathbf{g_{N}}}_{S}\) and \(\widehat{\mathbf{g^{*}}}_{S}\) be the vectors of Fourier coefficients of the frequencies in \(S\), for \(g_{N}\) and \(g^{*}\) respectively. As a measure of the distance between \(g_{N}\) and \(g^{*}\) on the subset of frequencies \(S\), we define the Spectral Approximation Error as: \(\mathrm{SAE}=\frac{\|\widehat{\mathbf{g_{N}}}_{S}-\widehat{\mathbf{g^{*}}}_{S}\|_{2}^{2}}{\|\widehat{\mathbf{g^{*}}}_{S}\|_{2}^{2}}\)_ Figure 2 shows the SAE of the trained network using different regularization methods over the epochs, both when \(S\) is the target support and when \(S=\{0,1\}^{n}\) (the whole Fourier spectrum). The standard network displays a significantly higher (worse) SAE on the whole Fourier spectrum compared to the target support, while the Walsh-Hadamard regularizers exhibit consistent performance across both. This shows the importance of enforcing the neural network to have zero Fourier coefficients on the non-target frequencies. Moreover, we can see that HashWH (ours) leads to a reduction in SAE that can be smoothly controlled by the size of its hashing matrix. To gain more insight, we split the frequencies into subsets \(S\) consisting of frequencies with the same degree. We visualize the evolution of the SAE and also the Fourier energy of the network, defined as \(\|\widehat{\mathbf{g_{N}}}_{S}\|_{2}^{2}\), in Figure 3. Firstly, the energy of high-degree frequencies is essentially zero for the standard neural network when compared to the low-degree frequencies, which further substantiates the claim that standard neural network training does not learn any high-degree frequencies. We can see that our HashWH regularization scheme helps the neural network learn higher degree frequencies, as there is more energy in the high degree components.
Secondly, looking at the lower degrees 2 and 3, we can see that the standard neural network reduces the SAE up to some point but then starts overfitting. Looking at the energy plot, one can attribute the overfitting to picking up irrelevant degree 2 and 3 frequencies. We see that the regularization scheme helps prevent the neural net from overfitting on the low-degree frequencies, and their SAE decreases roughly monotonically. We observe that HashWH (ours) with a big enough hashing matrix size exhibits the best performance among tractable methods in terms of SAE on all degrees. We can also see that HashWH is distributing the energy to where it should be for this dataset: less in the low-degree and more in the high-degree frequencies. Finally, it is worth noting that our regularizer makes the neural network behave more like a _decision tree_. It is well known that ensembles of decision tree models have a sparse and low-degree Fourier transform [11]. Namely, let \(g:\{0,1\}^{n}\rightarrow\mathbb{R}\) be a function that can be represented as an ensemble of \(T\) trees, each of depth at most \(d\). Then \(g\) is \(k=O(T\cdot 4^{d})\)-sparse and of degree at most \(d\) (Appendix E.1). Importantly, their spectrum is _exactly sparse_: unlike standard neural networks, which seem to "fill up" the spectrum on the low-degree end, i.e., learn irrelevant low-degree coefficients, decision trees avoid this. Decision trees are well-known to be effective on discrete/tabular data [1], and our regularizer prunes the spectrum of the neural network so that it behaves similarly.

## 5 Experiments

In this section, we first evaluate our regularization method on higher dimensional input spaces (higher \(n\)) on synthetically generated datasets. In this setting, FullWH is not applicable due to its exponential runtime in \(n\).
In addition, we allow varying training set sizes to showcase the efficacy of the regularizer in improving generalization at varying levels in terms of the number of training points in the dataset, and especially in the low-data sample regime. Next, we move on to four real-world datasets. We first show the efficacy of our proposed regularizer HashWH on real-world datasets in terms of achieving better generalization errors, especially in the low-data sample regimes. Finally, using an ablation study, we experimentally convey that the low-degree bias does not result in lower generalization error.

Figure 2: Evolution of the spectral approximation error during training. The left plot limits the error to the target support, while the right one considers the whole Fourier spectrum. For the standard neural network, the SAE is considerably worse on the full spectrum, which shows the importance of eliminating the erroneous frequencies that are not in the support of the target function. We also see the graceful scaling of the SAE of HashWH (ours) with the hashing matrix size.

### Synthetic Data

**Setup.** Again, we consider a synthetic pseudo-boolean target function \(g^{*}:\{0,1\}^{n}\rightarrow\mathbb{R}\), which has \(25\) frequencies in its support, \(|\text{supp}(g^{*})|=25\), with degree at most five, i.e., \(\forall f\in\text{supp}(g^{*}):\text{deg}(f)\leq 5\). To draw a \(g^{*}\), we sample each of its support frequencies \(f_{i}\) by first uniformly sampling its degree \(d\sim\mathcal{U}_{\{1,2,3,4,5\}}\), based on which we then sample \(f_{i}\sim\{f\in\{0,1\}^{n}|\text{deg}(f)=d\}\) and its corresponding amplitude uniformly, \(\widehat{g^{*}}(f_{i})\sim\mathcal{U}_{[-1,1]}\). We draw \(g^{*}\) as above for different input dimensions \(n\in\{25,50,100\}\).
We pick points uniformly at random from the input domain \(\{0,1\}^{n}\) and evaluate \(g^{*}\) to generate datasets of various sizes: we generate five independently sampled datasets of size \(c\cdot 25n\) for different multipliers \(c\in\{1,\ldots,8\}\) (40 datasets for each \(g^{*}\)). We train a 5-layer fully-connected neural network on each dataset using five different random seeds to account for the randomness in the training procedure. Therefore, for each \(g^{*}\) and dataset size, we train and average over 25 models to capture the variance arising from the dataset generation and the training procedure. **Results.** Figure 4(a) shows the generalization performance of different methods in terms of their \(R^{2}\) score on a hold-out dataset (details of dataset splits in Appendix D) for different dataset sizes. Our regularization method, HashWH, outperforms the standard network and EN-S in all possible combinations of input dimension and dataset size. Here, EN-S does not show any significant improvements over the standard neural network, while HashWH (ours) improves generalization by a large margin. Moreover, its performance is tunable via the hashing matrix size \(b\). To demonstrate the computational scalability of HashWH (ours), Figure 4(b) shows the achievable \(R^{2}\)-score by the number of training epochs and training time for different methods, when \(n=50\) and \(c=5\) (see Appendix F.2 for other settings). The trade-off between the training time and generalization can be directly controlled with the choice of the hashing size \(b\). More importantly, comparing HashWH with EN-S, we see that for any given \(R^{2}\) we have runtimes that are orders of magnitude smaller. This is primarily due to the very time-consuming approximation of the Fourier transform of the network at each epoch in EN-S.

### Real Data

Next, we assess the performance of our regularization method on four different real-world datasets of varying nature and dimensionality.
For baselines, we include not only standard neural networks and EN-S regularization, but also other popular machine learning methods that work well on discrete data, such as ensembles of trees. Three of our datasets are related to protein landscapes [19, 20, 21], which are identical to the ones used by the proposers of EN-S [1], and one is a GPU-tuning [20] dataset. See Appendix C for dataset details.

Figure 3: Evolution of the Spectral Approximation Error (SAE) and energy of the network during training, split by frequency degree. Firstly, in a standard neural network, the energy of high-degree frequencies is essentially zero compared to low-degree frequencies. Secondly, for low degrees (2 and 3) the energy continues to increase while the SAE exhibits overfitting behavior. This implies the neural network starts learning erroneous low-degree frequencies after some epochs. Our regularizer prevents overfitting in lower degrees and enforces higher energy on higher-degree frequencies. Regularized networks show lower energies for lower degrees and higher energy for higher degrees when compared to the standard neural network.

**Results.** Figure 5(a) displays the generalization performance of different models in learning the four datasets mentioned, using training sets of small sizes. For each given dataset size, we randomly sample the original dataset with five different random seeds to account for the randomness of the dataset sub-sampling. Next, we fit five models with different random seeds to account for the randomness of the training procedure. One standard deviation error bars and averages are plotted accordingly over the 25 runs. It can be seen that our regularization method significantly outperforms the standard neural network as well as popular baseline methods on nearly all datasets and dataset sizes. The margin, however, is somewhat smaller than on the synthetic experiments in some cases.
Figure 4: (a) Generalization performance on learning a synthetic function \(g^{*}:\{0,1\}^{n}\rightarrow\mathbb{R}\) with train set size \(c\cdot 25n\). (b) Best achievable test \(R^{2}\) (I) at the end of each epoch, (II) up to a certain time (seconds). (III) shows the early-stopped \(R^{2}\) score vs. time (seconds). We provide significant improvements across all training sizes over EN-S and standard neural networks, while also showing an order of magnitude speed-up compared to EN-S.

Figure 5: (a) Generalization performance of standard and regularized neural networks and benchmark ML models on four real datasets. (b) Training times of different models on the GB1 dataset. (c) Results of an ablation study on the potential effect of simplicity bias in the generalization error. This figure shows that picking higher amplitude coefficients results in better generalization compared to picking the lower degree terms. (d) Distribution of the energy over degree-based sets of frequencies in Entacmaea’s top 100 Fourier coefficients. This shows high-degree components constitute a non-negligible portion of the energy of the function.

This may be partially explained by the distribution of energy in a real dataset (Figure 5d), compared to the uniform distribution of energy over different degrees in our synthetic setting. To highlight the importance of higher degree frequencies, we compute the exact Fourier spectrum of the Entacmaea dataset (which is possible, since all possible input combinations are evaluated in the dataset). Figure 5d shows the energy of the 100 frequencies with the highest amplitude (out of 8192 total frequencies), categorized into varying degrees. This shows that the energy of the higher degree frequencies 3 and 4 is comparable to frequencies of degree 1. However, as we showed in the previous section, the standard neural network may not be able to pick up the higher degree frequencies due to its simplicity bias (while also learning erroneous low-degree frequencies).
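A degree-wise energy breakdown such as the one reported in Figure 5d is straightforward to compute from any spectrum — a small sketch (the `g_hat` dictionary below is illustrative, not from the Entacmaea data):

```python
from collections import defaultdict

def energy_by_degree(g_hat):
    """Sum of squared Fourier coefficients, grouped by frequency degree."""
    energy = defaultdict(float)
    for f, c in g_hat.items():
        energy[sum(f)] += c ** 2  # degree of f = number of ones in it
    return dict(energy)

g_hat = {(0, 0, 0): 0.5, (1, 0, 0): 1.0, (1, 1, 0): 2.0, (1, 1, 1): 1.0}
print(energy_by_degree(g_hat))  # {0: 0.25, 1: 1.0, 2: 4.0, 3: 1.0}
```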
We also study the relationship between the low-degree spectral bias and generalization in Figure 5c. The study is conducted on the two datasets "Entacmaea" and "SGEMM". We first fit a sparse Fourier function to our training data (see Appendix E). We then start deleting coefficients, once according to their degree (highest to lowest, with ties broken randomly) and, in another setting, according to their amplitude (lowest to highest). To assess generalization, we evaluate the \(R^{2}\) of the resulting function on a hold-out (test) dataset. This study shows that among functions of equal complexity (in terms of the size of the support), functions that keep the higher amplitude frequencies, as opposed to ones that keep the low-degree ones, exhibit better generalization. This might seem evident according to Parseval's identity, which states that the time energy and the Fourier energy of a function are equal. However, considering the fact that the dataset distribution is not necessarily uniform, there is no reason for this to hold in practice. Furthermore, it shows the importance of our regularization scheme: deviating from low-degree functions and instead aiding the neural network to learn higher amplitude coefficients _regardless_ of the degree.

## Conclusion

We showed through extensive experiments how neural networks have a tendency to not learn high-degree frequencies and to overfit in the low-degree part of the spectrum. We proposed a computationally efficient regularizer that aids the network in not overfitting in the low-degree frequencies and also picking up the high-degree frequencies. Finally, we exhibited significant improvements in terms of the \(R^{2}\) score on four real-world datasets compared to various popular models in the low-data regime.

## Acknowledgements

This research was supported in part by the NCCR Catalysis (grant number 180544), a National Centre of Competence in Research funded by the Swiss National Science Foundation.
We would also like to thank Lars Lorch and Viacheslav Borovitskiy for their detailed and valuable feedback in writing the paper.
2305.11136
Design of the Impulsive Goodwin's Oscillator: A Case Study
The impulsive Goodwin's oscillator (IGO) is a hybrid model composed of a third-order continuous linear part and a pulse-modulated feedback. This paper introduces a design problem of the IGO to admit a desired periodic solution. The dynamics of the continuous states represent the plant to be controlled, whereas the parameters of the impulsive feedback constitute design degrees of freedom. The design objective is to select the free parameters so that the IGO exhibits a stable 1-cycle with desired characteristics. The impulse-to-impulse map of the oscillator is demonstrated to always possess a positive fixed point that corresponds to the desired periodic solution; the closed-form expressions to evaluate this fixed point are provided. Necessary and sufficient conditions for orbital stability of the 1-cycle are presented in terms of the oscillator parameters and exhibit similarity to the problem of static output control. An IGO design procedure is proposed and validated by simulation. The nonlinear dynamics of the designed IGO are reviewed by means of bifurcation analysis. Applications of the design procedure to dosing problems in chemical industry and biomedicine are envisioned.
Alexander Medvedev, Anton V. Proskurnikov, Zhanybai T. Zhusubaliyev
2023-05-18T17:32:23Z
http://arxiv.org/abs/2305.11136v1
# Design of the Impulsive Goodwin's Oscillator: A Case Study

###### Abstract

The impulsive Goodwin's oscillator (IGO) is a hybrid model composed of a third-order continuous linear part and a pulse-modulated feedback. This paper introduces a design problem of the IGO to admit a desired periodic solution. The dynamics of the continuous states represent the plant to be controlled, whereas the parameters of the impulsive feedback constitute design degrees of freedom. The design objective is to select the free parameters so that the IGO exhibits a stable 1-cycle with desired characteristics. The impulse-to-impulse map of the oscillator is demonstrated to always possess a positive fixed point that corresponds to the desired periodic solution; the closed-form expressions to evaluate this fixed point are provided. Necessary and sufficient conditions for orbital stability of the 1-cycle are presented in terms of the oscillator parameters and exhibit similarity to the problem of static output control. An IGO design procedure is proposed and validated by simulation. The nonlinear dynamics of the designed IGO are reviewed by means of bifurcation analysis. Applications of the design procedure to dosing problems in chemical industry and biomedicine are envisioned.

## I Introduction

In control of engineered systems, the objective is normally to keep the controlled variable in a vicinity of a predefined setpoint or to make it follow a certain trajectory. In contrast, the purpose of physiological control is, arguably, to maintain the involved biological quantities within a certain domain, and to achieve this with minimal energy. Impulsive feedback control is one of the most widespread strategies applied by nature in physiological, especially in neuroendocrine, systems.
In particular, the hypothalamic-pituitary adrenal and gonadal axes employ pulse-modulated control and encode information to target cells by manipulating both the amplitude and frequency of the hormone concentration pulses [2]. The problem of exerting a periodic control action that maintains a certain predefined level of effect in a dynamical plant often arises in process control and medicine. For instance, adding doses of chemicals to a reactor is typically done by means of logical (discrete) open-loop control [3]. Similarly, pharmaceuticals, in a tablet or an injection form, are predominantly administered according to a regimen that is prescribed by a physician. When the plant is dissipative and no feedback is involved, the resulting control system is simple and safe. However, the open-loop control cannot attenuate disturbances and handle plant uncertainty. Provided the actuators can be continuously manipulated and real-time measurements of the controlled variable are available, feedback control is routinely employed to achieve robust closed-loop stability or performance. When the control signal is however restricted to _impulsive_ action, the only currently available feedback strategy is Model Predictive Control (MPC) [4]. The utility and physiological coherence of impulsive MPC in drug delivery applications is readily recognized. A promising application of this control approach to insulin dosing in simulated diabetes patients is reported in e.g. [5]. In fact, impulsive insulin delivery mimics the physiological profile of secreting around ten major hormone pulses over 24 hours [6] with their temporal distribution related to meals. Impulsive feedback control is inherently nonlinear and adding an advanced control law to the closed-loop dynamics further complicates stability and performance analysis. Yet, simple pulse-modulated feedback solutions manipulating the amplitude and frequency of the control impulses are lacking at present. 
The Impulsive Goodwin's Oscillator (IGO) was proposed [7, 8] as a hybrid (continuous-discrete) model of testosterone regulation in males, generalizing the concept of the original (continuous) Goodwin's oscillator [9] to the case of pulsatile (non-basal) secretion. The IGO possesses a number of properties that are typically sought for in biomedical applications, e.g. positivity and boundedness of the solutions. By design, the IGO has no equilibria and can only exhibit periodic or non-periodic (chaotic and quasiperiodic) oscillations [10]. It is proven that the IGO always possesses a unique (stable or unstable) \(1\)-cycle, i.e. a periodic solution with only one firing of the pulse-modulated feedback on the least period [8]. Extensive bifurcation analysis of the IGO [10] suggests that the model, being equipped with the modulation functions of Hill's type, is monostable, even under a small delay present in the closed loop [11]. Thus, when in a stable periodic solution, the IGO is not likely to change to another type of solution due to a temporary exogenous disturbance. This paper addresses a novel problem of designing an IGO that exhibits a stable 1-cycle with desired characteristics. The main contributions of the paper are threefold: * the IGO design problem is formulated with respect to a desired solution, i.e. a 1-cycle; * necessary and sufficient orbital stability conditions of the 1-cycle in the IGO are provided; * bifurcation analysis of the nonlinear IGO dynamics in vicinity of the designed 1-cycle is performed. The paper is organized as follows. In Section II, known facts about the dynamics of the IGO are summarized to facilitate further reading. In Section III, the problem of designing an IGO that exhibits a stable predefined 1-cycle is formulated and solved. A numerical example is considered in Section IV to illustrate the proposed design concept. 
Section V provides bifurcation analysis of the designed IGO to discern nonlinear dynamic phenomena arising under deviations from the nominal parameter values. Finally, conclusions are drawn.

## II Background

This section summarizes the facts pertaining to the IGO model and its behaviors that are used in the rest of the paper.

### _The Impulsive Goodwin's Oscillator_

The IGO is given by the following equations [7, 8] \[\dot{x}(t)=Ax(t),\quad z(t)=Cx(t), \tag{1}\] \[x(t_{n}^{+})=x(t_{n}^{-})+\lambda_{n}B,\quad t_{n+1}=t_{n}+T_{n}, \tag{2}\] \[T_{n}=\Phi(z(t_{n})),\quad\lambda_{n}=F(z(t_{n})),\] where \(A,B,C\) are constant matrices, \(n=0,1,\ldots\), \[A=\left[\begin{smallmatrix}-a_{1}&0&0\\ g_{1}&-a_{2}&0\\ 0&g_{2}&-a_{3}\end{smallmatrix}\right],B=\left[\begin{smallmatrix}1\\ 0\\ 0\end{smallmatrix}\right],C=[0,0,1],\] \(z\) is the controlled output, and the state \(x=[x_{1},x_{2},x_{3}]^{\top}\) describes the concentrations of some chemical substances. In the continuous model part (1), \(a_{1},a_{2},a_{3}>0\) are distinct constants and \(g_{1},g_{2}>0\) are positive gains. It is readily observed that the matrix \(A\) is Hurwitz stable; also, \[CB=0,\,CAB=0,\,CA^{2}B\neq 0. \tag{3}\] The latter property implies, in particular, that \(z(t)\) is a smooth function despite the jumps in (2). The minus and plus superscripts in (2) denote the left-sided and right-sided limits, respectively. The amplitude modulation function \(F(\cdot)\) and the frequency modulation function \(\Phi(\cdot)\) are continuous and monotonic for positive arguments; \(F(\cdot)\) is non-increasing and \(\Phi(\cdot)\) is non-decreasing; also, \[\Phi_{1}\leq\Phi(\cdot)\leq\Phi_{2},\quad 0<F_{1}\leq F(\cdot)\leq F_{2}, \tag{4}\] where \(\Phi_{1}\), \(\Phi_{2}\), \(F_{1}\), \(F_{2}\) are positive constants1. Then control law (2) constitutes a frequency and amplitude pulse modulation operator [12] implementing an output feedback over (1).
The time instants \(t_{n}\) are called (impulse) firing times and \(\lambda_{n}\) represent the corresponding impulse weights. Footnote 1: Notably, with respect to dosing applications, the bounds \(F_{1}\) and \(F_{2}\) specify the least and largest dose that can be delivered by the control law, while \(\Phi_{1}\) and \(\Phi_{2}\) prescribe the shortest and longest interval between the administered doses. The explicit way of enforcing these safety limits is favorable in, e.g., healthcare applications. ### _Solution Properties_ The dynamics of the IGO are defined by differential equation (1) in between the feedback firing times and undergo jumps of the magnitude \(\lambda_{n}B\) at the times \(t_{n}\) in accordance with (2). Due to the positivity of \(F_{1}\), the IGO lacks equilibria and exhibits only oscillatory periodic or non-periodic (e.g. chaotic or quasiperiodic) solutions. The solutions of the IGO are positive under a positive initial condition \(x(t_{0}^{-})\), because \(A\) is Metzler2 and \(F(\cdot)\) is uniformly positive due to (4). It is proved in [8] that the solutions are bounded, because \(A\) is Hurwitz and the nonlinear characteristics \(F,\Phi\) are bounded. Footnote 2: A square matrix whose off-diagonal entries are all nonnegative is said to be Metzler. The exponential \(\mathrm{e}^{At}\), \(t\geq 0\), is nonnegative for a Metzler \(A\). Denoting \(X_{n}=x(t_{n}^{-})\), the evolution of the continuous state vector of the IGO from one firing time to the next one obeys the impulse-to-impulse map [8] \[X_{n+1} =Q(X_{n}), \tag{5}\] \[Q(\xi) =\mathrm{e}^{A\Phi(C\xi)}\left(\xi+F(C\xi)B\right).\] This paper focuses on periodic solutions of model (1),(2) that correspond to fixed points of the map \(Q\). A periodic solution with exactly \(m\) firings of the pulse-modulated feedback within the least period is called an \(m\)-cycle. In particular, for a 1-cycle with the initial condition \(X\), it applies \[X=Q(X). 
\tag{6}\] Since all the solutions of (1), (2) are positive, it holds that \(X>0\), where the inequality is understood element-wise. **Proposition 1** ([8]): _System (1), (2) has one and only one (positive) \(1\)-cycle, that is, (6) has a unique solution \(X>0\). The cycle parameters \(\lambda\), \(T\), and \(z_{0}\) can be evaluated by solving the following system of algebraic equations_ \[z_{0} =\lambda g_{1}g_{2}\sum_{i=1}^{3}\frac{\alpha_{i}}{\mathrm{e}^{a_{ i}T}-1},\quad\alpha_{i}=\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{3}\frac{1}{a_{j}-a_{i}}, \tag{7}\] \[\lambda =F(z_{0}),\quad T=\Phi(z_{0}). \tag{8}\] The key idea of proving Theorem 1 in [8] is to rewrite (6) in terms of the output variable \(z=CX=x_{3}\) as \[X=\mathrm{e}^{A\Phi(z)}\left(X+F(z)B\right),\,\,\,z=CX, \tag{9}\] which is subsequently reduced to the scalar equation \[z=C(\mathrm{e}^{-A\Phi(z)}-I)^{-1}BF(z).\] The right-hand side of this equation is a decreasing bounded function of \(z>0\), being strictly positive as \(z\to 0+\) [8], which entails the existence and uniqueness of the solution. The 1-cycle above is orbitally asymptotically stable [8] if and only if the fixed point \(X\) is asymptotically stable as the equilibrium of discrete-time dynamics (5), that is, the Jacobian matrix \(Q^{\prime}(X)\) is Schur stable3, where Footnote 3: A square matrix is said to be Schur (Schur stable) if all its eigenvalues \(\lambda_{j}\) belong to the unit disk \(|\lambda_{j}|<1\). \[Q^{\prime}(X)=\mathrm{e}^{A\Phi(z_{0})}\left(I+F^{\prime}(z_{0})BC\right)+ \Phi^{\prime}(z_{0})AXC. \tag{10}\] ## III Design The IGO _design_ problem treated here is formulated in the following way. Suppose that the dynamics of (1) given by the matrix \(A\) are known. In drug dosing, the elements of \(x(t)\) can belong to, e.g., a known pharmacokinetic-pharmacodynamic model [13]. 
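The fixed-point computation behind Proposition 1 can be sketched numerically: solve the scalar equation in \(z\) by bisection (its right-hand side is decreasing and bounded) and recover \(X\). The parameter values and modulation functions below are illustrative stand-ins, not taken from the paper's example, and SciPy's `expm` is assumed available.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative IGO parameters (assumed values, not from the paper).
a1, a2, a3, g1, g2 = 0.1, 0.2, 0.3, 1.0, 1.0
A = np.array([[-a1, 0, 0], [g1, -a2, 0], [0, g2, -a3]])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 0.0, 1.0])

# Hedged stand-ins for the bounded monotone modulation functions (4):
# Phi is non-decreasing in [1, 3), F is non-increasing in (0.5, 1.5].
Phi = lambda z: 1.0 + 2.0 * z / (1.0 + z)
F   = lambda z: 0.5 + 1.0 / (1.0 + z)

def Q(xi):
    """Impulse-to-impulse map (5)."""
    z = C @ xi
    return expm(A * Phi(z)) @ (xi + F(z) * B)

def z_residual(z):
    """lhs - rhs of the scalar equation z = C (e^{-A Phi(z)} - I)^{-1} B F(z)."""
    M = expm(-A * Phi(z)) - np.eye(3)
    return z - (C @ np.linalg.solve(M, B)) * F(z)

# Bisection: the residual is negative near 0 and positive for large z.
lo, hi = 1e-9, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if z_residual(mid) < 0:
        lo = mid
    else:
        hi = mid
z0 = 0.5 * (lo + hi)

# Recover the fixed point X = lam * (e^{-AT} - I)^{-1} B and verify X = Q(X).
T, lam = Phi(z0), F(z0)
X = np.linalg.solve(expm(-A * T) - np.eye(3), B) * lam
```

The recovered `X` is a positive fixed point of `Q`, as Proposition 1 asserts.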
Given the parameters of a 1-cycle, the IGO design task is to find the modulation functions that render, with orbital stability, the desired periodic solution. In terms of the model parameters, the problem in question can be summarized as follows. Given the parameters \(a_{1},a_{2},a_{3},g_{1}\), find \(\Phi(\cdot),F(\cdot)\) that provide the desired characteristics of a stable 1-cycle \(\lambda,T\). In the design procedure proposed below, \(g_{2}>0\) always appears in product with \(\lambda\) and can be selected as an arbitrary constant. From (8) and (10), the conditions for 1-cycle existence and stability in the IGO involve \(z_{0}\), i.e. the output value at the fixed point \(X\) in (6). Therefore, the modulation functions, as such, cannot be obtained in the design procedure, but only interpolation conditions that they and their derivatives have to satisfy to achieve the desired solution. ### _Divided differences and the Opitz formula_ To evaluate a function \(f(\cdot)\) of the matrix \(A\), the so-called Opitz formula will be used in the analysis to follow. The complex-valued function \(f(\cdot)\) is assumed to be well-defined and complex-analytic in a vicinity of the matrix spectrum \(\sigma(A)=\{-a_{1},-a_{2},-a_{3}\}\), where the eigenvalues are pairwise different; see [14] for a more general case. The first divided difference (1-DD) of a function \(f\) is introduced [15, 16] as a function of two variables \[f[z_{0},z_{1}]\triangleq\frac{f(z_{1})-f(z_{0})}{z_{1}-z_{0}},\] which is well defined if and only if \(f(z_{1})\), \(f(z_{0})\) exist and \(z_{0}\neq z_{1}\). The second divided difference (2-DD) is a function of three variables and is defined by \[f[z_{0},z_{1},z_{2}]\triangleq\frac{f[z_{1},z_{2}]-f[z_{0},z_{1}]}{z_{2}-z_{0}},\] where \(f(z_{0}),f(z_{1}),f(z_{2})\) exist and \(z_{0},z_{1},z_{2}\) are pairwise different. Remarkably, both 1-DD and 2-DD are _symmetric_ functions. 
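The divided-difference definitions above are easy to exercise in code; a minimal sketch, which also includes the symmetric Lagrange-type expansion used below:

```python
import math

def dd1(f, z0, z1):
    """First divided difference f[z0, z1] (requires z0 != z1)."""
    return (f(z1) - f(z0)) / (z1 - z0)

def dd2(f, z0, z1, z2):
    """Second divided difference f[z0, z1, z2] (pairwise distinct arguments)."""
    return (dd1(f, z1, z2) - dd1(f, z0, z1)) / (z2 - z0)

def dd2_lagrange(f, zs):
    """Equivalent symmetric form: sum of beta_i * f(z_i), where
    beta_i is the product over j != i of 1/(z_j - z_i)."""
    total = 0.0
    for i, zi in enumerate(zs):
        beta = 1.0
        for j, zj in enumerate(zs):
            if j != i:
                beta *= 1.0 / (zj - zi)
        total += beta * f(zi)
    return total
```

For the exponential function, the 2-DD is symmetric in its arguments, agrees with the Lagrange-type form, and lies between \(f''/2\) evaluated at the interval endpoints, as the mean value results state.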
Furthermore, for a scalar \(\xi\neq 0\) and \(f_{\xi}(z)\triangleq f(z\xi)\), it holds \[f_{\xi}[z_{0},z_{1}]=\xi f[z_{0}\xi,z_{1}\xi],\;f_{\xi}[z_{0},z_{1},z_{2}]= \xi^{2}f[z_{0}\xi,z_{1}\xi,z_{2}\xi].\] After some computations, it can be shown that \[f[z_{0},z_{1},z_{2}]=\sum_{i=0}^{2}\beta_{i}f(z_{i}),\quad\beta_{i}=\prod_{ \begin{subarray}{c}j=0\\ j\neq i\end{subarray}}^{2}\frac{1}{z_{j}-z_{i}}.\] The Lagrange mean value theorem implies that if \(f(\cdot)\) attains _real_ values on some real interval \(I=(\alpha,\beta)\), then, for each \(z_{0},z_{1}\in I\), \(z_{0}<z_{1}\), there exists \(\zeta\in[z_{0},z_{1}]\) such that \(f[z_{0},z_{1}]=f^{\prime}(\zeta)\). A similar result can be proved for the 2-DD [16, Corollary to Proposition 43]: for each triple \(z_{0},z_{1},z_{2}\in I\), one has \[f[z_{0},z_{1},z_{2}]=\frac{1}{2}f^{\prime\prime}(\zeta),\;\;\zeta\in[\min_{i} z_{i},\max_{i}z_{i}].\] For matrices of dimension three, a generalized4 Opitz formula in [14] gives the closed-form representation of \(f(A)\) Footnote 4: Typically, the Opitz formula is considered for the situation where the second main diagonal contains ones, that is, \(g_{1}=g_{2}=1\); the general case can be derived by a simple similarity transformation. \[\exp(At)=\left[\begin{array}{ccc}\mathrm{e}^{-a_{1}t}&0&0\\ g_{1}t\,\mathrm{e}[-a_{1}t,-a_{2}t]&\mathrm{e}^{-a_{2}t}&0\\ g_{1}g_{2}t^{2}\,\mathrm{e}[-a_{1}t,-a_{2}t,-a_{3}t]&g_{2}t\,\mathrm{e}[-a_{2}t,-a_{3}t]&\mathrm{e}^{-a_{3}t}\end{array}\right].\] Here, following standard notation, we use \(\mathrm{e}[z_{0},z_{1}]\) to denote the 1-DD of the exponential function \(\mathrm{e}^{z}=\exp(z)\); the same applies to the 2-DD \(\mathrm{e}[z_{0},z_{1},z_{2}]\). By virtue of the mean value theorem, all divided differences of the exponential function are positive. Consequently, all the elements of \(\exp(At)\) are non-negative. This is well in line with the fact of \(A\) being Metzler. 
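The entrywise Opitz formula for \(\exp(At)\) can be cross-checked against a general-purpose matrix exponential. The rate and gain values below are illustrative, and SciPy's `expm` is assumed available.

```python
import numpy as np
from scipy.linalg import expm

def dd1(f, x, y):
    return (f(y) - f(x)) / (y - x)

def dd2(f, x, y, z):
    return (dd1(f, y, z) - dd1(f, x, y)) / (z - x)

# Illustrative distinct rates and gains (assumed values).
a1, a2, a3, g1, g2, t = 0.08, 0.15, 0.12, 2.0, 0.5, 10.0
A = np.array([[-a1, 0, 0], [g1, -a2, 0], [0, g2, -a3]])

# Transition matrix assembled entrywise from the Opitz formula.
e = np.exp
opitz = np.array([
    [e(-a1 * t), 0.0, 0.0],
    [g1 * t * dd1(e, -a1 * t, -a2 * t), e(-a2 * t), 0.0],
    [g1 * g2 * t**2 * dd2(e, -a1 * t, -a2 * t, -a3 * t),
     g2 * t * dd1(e, -a2 * t, -a3 * t), e(-a3 * t)],
])
```

All entries are non-negative, consistent with \(A\) being Metzler.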
The obtained expression for the transition matrix generalizes to higher dimensions of the continuous dynamics when the two-diagonal structure of the matrix \(A\) is preserved [14]. ### _Fixed point_ Proposition 1, combined with the Opitz formula, enables the calculation of the parameters of the unique 1-cycle for a given model of the IGO. The 1-cycle corresponds to a fixed point of the map \(Q(\cdot)\), according to (6). The following converse statement, yielding the fixed point for a set of 1-cycle parameters, can then be proven. Denote, for brevity, \[\mu(z)\triangleq\frac{1}{\mathrm{e}^{-z}\,-1}=\frac{\mathrm{e}^{z}}{1-\mathrm{ e}^{z}},\quad z\neq 0.\] **Proposition 2**: _Given the parameters of 1-cycle \(T>0\), \(\lambda>0\), the fixed point \(X>0\) of map \(Q\) from (5) is calculated as_ \[x_{1} =\lambda\mu(-a_{1}T)=\frac{\lambda\mathrm{e}^{-a_{1}T}}{1-\mathrm{ e}^{-a_{1}T}},\] \[x_{2} =\lambda g_{1}T\mu[-a_{1}T,-a_{2}T]=\] \[=\frac{\lambda g_{1}T\,\mathrm{e}[-a_{1}T,-a_{2}T]}{(1-\mathrm{e} ^{-a_{1}T})(1-\mathrm{e}^{-a_{2}T})}, \tag{11}\] \[x_{3} =\lambda g_{1}g_{2}T^{2}\mu[-a_{1}T,-a_{2}T,-a_{3}T]=\] \[=\frac{\lambda g_{1}g_{2}T^{2}}{(1-\mathrm{e}^{-a_{1}T})(1-\mathrm{ e}^{-a_{2}T})(1-\mathrm{e}^{-a_{3}T})}\times\] (12) \[\times\Big{(}\,\mathrm{e}[-a_{1}T,-a_{2}T,-a_{3}T]\] \[+\mathrm{e}[-(a_{1}+a_{2})T,-(a_{1}+a_{3})T,-(a_{2}+a_{3})T]\Big{)}.\] _Proof:_ For \(\Phi(CX)=\Phi(x_{3})=T\) and \(F(CX)=F(x_{3})=\lambda\), \(X\) is a given fixed point satisfying (9) if and only if \[X=\lambda(\mathrm{e}^{-AT}-I)^{-1}B=\lambda\mu(AT)B,\] that is, \(X\) is \(\lambda\) times the first column of the matrix \(\mu(AT)\). The leftmost equalities in (11), relating \(x_{i},i=1,2,3\), to the divided differences of \(\mu\), follow immediately from the Opitz formula. The rightmost equalities are validated by a straightforward computation, which is omitted here. 
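A quick numerical check of Proposition 2: the component formulas built from divided differences of \(\mu\) agree with the direct solution \(X=\lambda(\mathrm{e}^{-AT}-I)^{-1}B\), which in turn satisfies the fixed-point identity \(X=\mathrm{e}^{AT}(X+\lambda B)\). Parameter values below are illustrative, and SciPy's `expm` is assumed available.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameters; T and lam are the prescribed 1-cycle period and weight.
a1, a2, a3, g1, g2 = 0.08, 0.15, 0.12, 2.0, 0.5
T, lam = 60.0, 5.0
A = np.array([[-a1, 0, 0], [g1, -a2, 0], [0, g2, -a3]])
B = np.array([1.0, 0.0, 0.0])

# Fixed point X = lam * (e^{-AT} - I)^{-1} B, i.e. lam times the first
# column of mu(AT).
X = lam * np.linalg.solve(expm(-A * T) - np.eye(3), B)

# Leftmost equalities of (11): components via divided differences of mu.
mu = lambda z: 1.0 / (np.exp(-z) - 1.0)
dd1 = lambda f, x, y: (f(y) - f(x)) / (y - x)
dd2 = lambda f, x, y, z: (dd1(f, y, z) - dd1(f, x, y)) / (z - x)
x1 = lam * mu(-a1 * T)
x2 = lam * g1 * T * dd1(mu, -a1 * T, -a2 * T)
x3 = lam * g1 * g2 * T**2 * dd2(mu, -a1 * T, -a2 * T, -a3 * T)
```

Both routes give the same positive vector, as the proposition asserts.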
Proposition 2 implies that \(z_{0}=x_{3}\) can be calculated for any choice of the distinct constants \(a_{1},a_{2},a_{3}\), which perfectly agrees with the result of Proposition 1. Then, for a given continuous part of the IGO in (1) and desired \(\lambda,T\), the value of \(z_{0}\) is obtained, and the values of the modulation functions at that point are specified according to (8). Further, since a 1-cycle is uniquely defined by the fixed point, the elements of the matrix \(A\) and \(\lambda,T\) determine the periodic solution of the IGO. ### _Stability of \(1\)-cycle_ Proposition 2 specifies the fixed point corresponding to the desired periodic solution but does not guarantee its stability. Then, additionally, matrix (10) needs to be Schur stable to ensure that the 1-cycle is relevant in a feedback control context. In the design problem at hand, the slopes of the modulation functions \(F(\cdot)\), \(\Phi(\cdot)\) at the fixed point corresponding to the desired 1-cycle constitute the degrees of freedom that can be utilized for the stabilization of the periodic solution. As the result below explicates, the design problem is similar to what is known as static output feedback stabilization in linear time-invariant (LTI) systems [17]. **Proposition 3**: _Jacobian (10) at the fixed point \(X\) admits the parameterization_ \[Q^{\prime}(X)=\mathrm{e}^{A\Phi(z_{0})}+\left(F^{\prime}(z_{0})J+\Phi^{\prime }(z_{0})D\right)C,\] _where \(J,D\in\mathbb{R}^{3}\) and \(J=\mathrm{e}^{A\Phi(z_{0})}\,B>0\), \(D=AX<0\), \(z_{0}=CX=x_{3}\)._ The expression for \(Q^{\prime}(X)\) and formulas for \(J,D\) are straightforward from (10). Furthermore, since \(g_{1},g_{2}>0\) and all divided differences of the exponential function are positive, the formula for \(\mathrm{e}^{At}\) derived in Section III-A ensures that the vector \(J\), being the first column of the matrix \(\mathrm{e}^{A\Phi(z_{0})}\), is strictly positive. 
In order to prove that \(D=AX<0\), notice that \[D=A(\mathrm{e}^{-A\Phi(z_{0})}-I)^{-1}B.\] Introducing the function \[\nu(z)\triangleq z\mu(z)=\frac{z}{\mathrm{e}^{-z}-1},\] one notices that \(D=T^{-1}\nu(TA)B\) is nothing else but the first column of the matrix \(T^{-1}\nu(TA)\). It can be demonstrated that the function \(\nu\) (see Fig. 1) is negative, decreasing, and strictly concave on the interval \(z\in(-\infty,0)\). Hence, in view of the mean value theorem, the divided differences \(\nu[-a_{1}T,-a_{2}T]\), \(\nu[-a_{2}T,-a_{3}T]\), \(\nu[-a_{1}T,-a_{2}T,-a_{3}T]\) are all negative, as well as the values \(\nu(-a_{i}T)\). By virtue of the Opitz formula, \(\nu(TA)B<0\), entailing that \(D<0\) and concluding the proof of Proposition 3. From the result of Proposition 3, \(Q^{\prime}(X)\) can be rendered Schur stable by the feedback gain \(K\in\mathbb{R}^{3}\) \[Q^{\prime}(X)=\mathrm{e}^{A\Phi(z_{0})}+KC, \tag{13}\] subject to \[K=\begin{bmatrix}J&D\end{bmatrix}\begin{bmatrix}F^{\prime}(z_{0})\\ \Phi^{\prime}(z_{0})\end{bmatrix}. \tag{14}\] Since the pair \((\mathrm{e}^{A\Phi(z_{0})},C)\) is observable, an arbitrary eigenvalue spectrum of \(Q^{\prime}(X)\) can be achieved with an unrestricted gain \(K\). However, due to (14), \(K\) has to be a linear combination of \(J\) and \(D\) with the coefficients \(F^{\prime}(z_{0})\leq 0\) and \(\Phi^{\prime}(z_{0})\geq 0\), correspondingly. This feedback structure also appears in the classical problem of static output feedback design, see [17] for an overview. A crucial distinction between the static output feedback in an LTI system and the pulse-modulated feedback of the IGO is that the former operates around a (constant) output setpoint whereas the latter stabilizes an LTI system along a periodic solution (a 1-cycle) expressed as a fixed point. **Remark 1**: _The last statement of Proposition 3 entails that \(JF^{\prime}(z_{0})+D\Phi^{\prime}(z_{0})\leq 0\), for all feasible values of \(F^{\prime}(z_{0}),\Phi^{\prime}(z_{0})\). 
Therefore, the feedback in the IGO is negative, despite the fact that all the involved quantities are positive. This property is natural given the underlying principle of the pulse-modulated feedback in the IGO where the impulses become of lower weight and sparser when the output values are higher than \(z_{0}\)._ It can also be noticed that the pair of slopes \(F^{\prime}(z_{0})=0,\Phi^{\prime}(z_{0})=0\) yields a Schur stable matrix \(Q^{\prime}(X)\). Even though constant modulation functions formally produce a stable 1-cycle, the feedback in the IGO is essentially eliminated, and the impulsive sequence is independent of the measured output. Fig. 1: The plot of function \(\nu(x)\) for \(x<0\). **Lemma 1** (Theorem 3.1, [18]): _Let \(A=[a_{ij}]_{i,j=1}^{3}\) be a \(3\times 3\) real matrix. Denote \(M(A)=m_{11}(A)+m_{22}(A)+m_{33}(A)\), where \(m_{ii}(A)\) stand for the principal minors_ \[m_{11}(A) =a_{22}a_{33}-a_{23}a_{32},\] \[m_{22}(A) =a_{11}a_{33}-a_{31}a_{13},\] \[m_{33}(A) =a_{11}a_{22}-a_{21}a_{12}.\] _Then, matrix \(A\) is Schur stable if and only if the following three conditions are satisfied:_ 1. \(|\det A|<1\)_,_ 2. \(|\operatorname{tr}A+\det A|<1+M(A)\)_,_ 3. \(|\operatorname{tr}A\det A-M(A)|<1-\det^{2}A\)_._ To analyse the Schur stability of matrix (10), one can find the characteristics employed by Lemma 1 as functions of \(\Phi^{\prime}(z_{0})\), \(F^{\prime}(z_{0})\). 
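Lemma 1 translates directly into a Schur stability test for \(3\times 3\) matrices; a minimal sketch, cross-checked against the eigenvalue definition:

```python
import numpy as np

def is_schur_stable_3x3(A):
    """Schur stability test for a 3x3 real matrix via the three
    determinant/trace/minor-sum conditions of Lemma 1."""
    A = np.asarray(A, dtype=float)
    det = np.linalg.det(A)
    tr = np.trace(A)
    # M(A): sum of the three principal 2x2 minors m_11 + m_22 + m_33.
    M = sum(np.linalg.det(np.delete(np.delete(A, i, 0), i, 1)) for i in range(3))
    return (abs(det) < 1.0
            and abs(tr + det) < 1.0 + M
            and abs(tr * det - M) < 1.0 - det**2)
```

The conditions are equivalent to all eigenvalues lying strictly inside the unit disk, which can be verified on random matrices.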
For instance, applying (13) and the well-known Sylvester determinant identity \(\det(I+XY)=\det(I+YX)\), where \(XY,YX\) are square matrices, but \(X,Y\) need not be square, one has \[\det Q^{\prime}(X)=\det(\operatorname{e}^{AT})\det(I_{3}+ \operatorname{e}^{-AT}KC)=\] \[=\operatorname{e}^{-(a_{1}+a_{2}+a_{3})T}\left(1+C\operatorname{e }^{-AT}K\right)=\] \[=\operatorname{e}^{-(a_{1}+a_{2}+a_{3})T}\left(1+C \operatorname{e}^{-AT}[J,D]\left[\begin{smallmatrix}F^{\prime}(z_{0})\\ \Phi^{\prime}(z_{0})\end{smallmatrix}\right]\right)=\] \[=\operatorname{e}^{-(a_{1}+a_{2}+a_{3})T}(1+C\operatorname{e}^{-AT}D\Phi^{ \prime}(z_{0})).\] To derive the latter equality, one has to notice that \(C\operatorname{e}^{-AT}J=C\operatorname{e}^{-AT}\operatorname{e}^{AT}B=CB=0\). Similarly, after some computations, one can obtain the two remaining characteristics. We formulate the following proposition. **Proposition 4**: _For \(Q^{\prime}(X)\) defined by (10), it applies_ \[\operatorname{tr}Q^{\prime}(X) =\operatorname{tr}\operatorname{e}^{AT}+C\begin{bmatrix} \operatorname{e}^{AT}B&AX\end{bmatrix}\begin{bmatrix}F^{\prime}(z_{0})\\ \Phi^{\prime}(z_{0})\end{bmatrix},\] \[\det Q^{\prime}(X) =\operatorname{e}^{-(a_{1}+a_{2}+a_{3})T}(1+C\operatorname{e}^{- AT}AX\Phi^{\prime}(z_{0})),\] \[M(Q^{\prime}(X)) =\operatorname{e}^{-(a_{1}+a_{2})T}+\operatorname{e}^{-(a_{1}+a_ {3})T}+\operatorname{e}^{-(a_{2}+a_{3})T}\] \[+\begin{bmatrix}\psi_{1}&\psi_{2}\end{bmatrix}\begin{bmatrix}F^{ \prime}(z_{0})\\ \Phi^{\prime}(z_{0})\end{bmatrix},\] \[\psi_{1} =(\operatorname{e}^{-a_{1}T}+\operatorname{e}^{-a_{2}T})j_{3}\] \[-g_{2}T\Big{(}\operatorname{e}[-a_{2}T,-a_{3}T]j_{2}\] \[+g_{1}T\operatorname{e}[-a_{1}T,-a_{2}T,-a_{3}T]j_{1}\Big{)},\] \[\psi_{2} =(\operatorname{e}^{-a_{1}T}+\operatorname{e}^{-a_{2}T})d_{3}\] \[-g_{2}T\Big{(}\operatorname{e}[-a_{2}T,-a_{3}T]d_{2}\] \[+g_{1}T\operatorname{e}[-a_{1}T,-a_{2}T,-a_{3}T]d_{1}\Big{)}.\] _Here \(j_{i},d_{i}\) are the elements of the vectors \(J\) and \(D\), respectively._ ### _Design algorithm_ The
results of Section III can be summarized in the form of the following procedure rendering the desired solution to the IGO. 1. Select the desired 1-cycle's characteristics \(\lambda\) and \(T\). 2. From plant model (1), obtain the parameters \(a_{1}\), \(a_{2}\), \(a_{3}\) and \(g_{1}\); \(g_{2}>0\) can be selected arbitrarily. 3. Calculate the fixed point (and \(z_{0}\)) from (11). 4. Define the structure of the modulation functions \(F\), \(\Phi\) and calculate their derivatives \(F^{\prime}\), \(\Phi^{\prime}\). 5. Evaluate the three stability conditions specified in Lemma 1 with respect to the Jacobian \(Q^{\prime}(X)\) using the expressions of the matrix functions in Proposition 4. 6. By selecting the parameters of the modulation functions, ensure that \(F^{\prime}(z_{0})\), \(\Phi^{\prime}(z_{0})\) satisfy the stability conditions of Step 5. 7. By scaling the modulation functions, ensure the equalities \(F(z_{0})=\lambda\), \(\Phi(z_{0})=T\). ## IV Design Example This section illustrates the use of the design algorithm outlined in Section III-D by a numerical example worked out step-by-step. Consider the design of a 1-cycle with \(\lambda=4.66\), \(T=66.75\) (Step 1) in the IGO given by (1), (2), where \(g_{1}=2.0\), \(g_{2}=0.5\), \(a_{1}=0.08\), \(a_{2}=0.15\), and \(a_{3}=0.12\) (Step 2). The corresponding fixed point (Step 3) \[X=\begin{bmatrix}0.0225&0.6360&6.8330\end{bmatrix}^{\top},\] thus \(z_{0}=6.8330\). 
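Step 3 can be reproduced with formula (7) of Proposition 1; a sketch with the example parameters, where the computed \(z_{0}\) agrees with the quoted 6.8330 up to rounding of the published values:

```python
import math

# Parameters of the worked example (Steps 1-2).
a = [0.08, 0.15, 0.12]
g1, g2 = 2.0, 0.5
lam, T = 4.66, 66.75

# z0 from formula (7): z0 = lam*g1*g2 * sum_i alpha_i / (e^{a_i T} - 1),
# where alpha_i is the product over j != i of 1/(a_j - a_i).
z0 = 0.0
for i in range(3):
    alpha = 1.0
    for j in range(3):
        if j != i:
            alpha *= 1.0 / (a[j] - a[i])
    z0 += alpha / (math.exp(a[i] * T) - 1.0)
z0 *= lam * g1 * g2
```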
Following [8], define the structure of the modulation functions (Step 4) as the Hill functions \[\Phi(z)=k_{1}+k_{2}\;\frac{(z/h_{\Phi})^{p_{\Phi}}}{1+(z/h_{\Phi})^{p_{\Phi}}}, \tag{15}\] \[F(z)=k_{3}+\frac{k_{4}}{1+(z/h_{F})^{p_{F}}}.\] The coefficients \(k_{i},i=1,\ldots,4\), explicitly specify the bounds on the minimal and maximal dose as well as the minimal and maximal time interval between the doses \[k_{3}<F(z)<k_{3}+k_{4},\quad k_{1}<\Phi(z)<k_{1}+k_{2}.\] The parameters \(k_{2}\) and \(k_{4}\) also influence the derivatives of the modulation functions \[\Phi^{\prime}(z)=\frac{k_{2}p_{\Phi}z^{p_{\Phi}-1}h_{\Phi}^{-p_{\Phi}}}{(1+(z/h_{ \Phi})^{p_{\Phi}})^{2}},\;F^{\prime}(z)=-\frac{k_{4}p_{F}z^{p_{F}-1}h_{F}^{-p_{F}}}{ (1+(z/h_{F})^{p_{F}})^{2}}.\] Therefore, besides \(k_{2}\), \(k_{4}\), the derivatives are also defined by \(h_{\Phi}\), \(p_{\Phi}\), \(h_{F}\), \(p_{F}\). From the parameters of continuous part (1) and \(T\), the stability conditions of the fixed point \(X\) are evaluated. Then the involved functions of the Jacobian amount to \[\operatorname{tr}Q^{\prime}(X) =0.0052+1.4574F^{\prime}(z_{0})-0.5020\Phi^{\prime}(z_{0}),\] \[\det Q^{\prime}(X) =7.1410\cdot 10^{-11}-0.172\cdot 10^{-14}\Phi^{\prime}(z_{0}),\] \[M(Q^{\prime}(X)) =2.1528\cdot 10^{-7}-0.1251\cdot 10^{-4}F^{\prime}(z_{0})\] \[+0.1460\cdot 10^{-4}\Phi^{\prime}(z_{0}).\] Notice that \(M(Q^{\prime}(X))>0\) for all admissible values of \(F^{\prime}(z_{0})\), \(\Phi^{\prime}(z_{0})\). Given the orders of the coefficients in the matrix functions of \(Q^{\prime}(X)\), stability of \(X\) is guaranteed if \[|\operatorname{tr}Q^{\prime}(X)| <1+M(Q^{\prime}(X)),\] \[M(Q^{\prime}(X)) <1,\] or, due to the positivity of \(M(Q^{\prime}(X))\), \[|\operatorname{tr}Q^{\prime}(X)|<1. \tag{16}\] The inequality above is satisfied for \(F^{\prime}(z_{0})=-0.1143\), \(\Phi^{\prime}(z_{0})=2.2852\). 
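The closed-form derivatives of the Hill functions (15) can be sanity-checked by central differences; with the example parameters (\(h_{\Phi}=h_{F}=4.112\), \(p_{\Phi}=p_{F}=2\), taken from the worked example below), they reproduce the quoted slopes \(\Phi^{\prime}(z_{0})\approx 2.2852\) and \(F^{\prime}(z_{0})\approx-0.1143\). A minimal sketch:

```python
import math

# Hill-type modulation functions (15) with the worked-example parameters.
k1, k2, k3, k4 = 60.0, 40.0, 3.0, 2.0
h_phi = h_f = 4.112
p_phi = p_f = 2

def Phi(z):
    return k1 + k2 * (z / h_phi)**p_phi / (1.0 + (z / h_phi)**p_phi)

def F(z):
    return k3 + k4 / (1.0 + (z / h_f)**p_f)

def dPhi(z):
    """Closed-form Phi'(z) as given in the text."""
    return k2 * p_phi * z**(p_phi - 1) * h_phi**(-p_phi) / (1.0 + (z / h_phi)**p_phi)**2

def dF(z):
    """Closed-form F'(z) as given in the text."""
    return -k4 * p_f * z**(p_f - 1) * h_f**(-p_f) / (1.0 + (z / h_f)**p_f)**2
```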
As expected, stability condition (16) limits the derivatives of the modulation functions that act as feedback gains, cf. (14). The parameters \(h\) and \(p\) also have to obey certain inequalities imposed by the parametrization in (15). Introduce the notation \[\eta_{\Phi}=\left(\frac{z_{0}}{h_{\Phi}}\right)^{p_{\Phi}},\quad\theta_{\Phi}= \frac{k_{2}p_{\Phi}}{2z_{0}\Phi^{\prime}(z_{0})}.\] Then, for \(\Phi^{\prime}(z)\) to take the desired value in \(z_{0}\), it applies \[\eta_{\Phi}^{2}+2(1-\theta_{\Phi})\eta_{\Phi}+1=0,\] and, therefore, \[\eta_{\Phi,1,2}=\theta_{\Phi}-1\pm\sqrt{\theta_{\Phi}(\theta_{\Phi}-2)}.\] When it is guaranteed that \[p_{\Phi}>\frac{4z_{0}\Phi^{\prime}(z_{0})}{k_{2}}>0, \tag{17}\] both \(\eta_{\Phi,1}\) and \(\eta_{\Phi,2}\) are positive, and then \[h_{\Phi,1,2}=\frac{z_{0}}{\sqrt[p_{\Phi}]{\eta_{\Phi,1,2}}}.\] Similarly, with \[\eta_{F}=\left(\frac{z_{0}}{h_{F}}\right)^{p_{F}},\quad\theta_{F}=\frac{k_{4}p _{F}}{2z_{0}F^{\prime}(z_{0})},\] one has \[\eta_{F}^{2}+2(1+\theta_{F})\eta_{F}+1=0,\] and then \[\eta_{F,1,2}=-(\theta_{F}+1)\pm\sqrt{\theta_{F}(\theta_{F}+2)}.\] When it is guaranteed that \[p_{F}>-\frac{4z_{0}F^{\prime}(z_{0})}{k_{4}}>0, \tag{18}\] both roots are positive. Notice that the condition \[\theta_{F}+1<0\] results in a weaker inequality \[p_{F}>-\frac{2z_{0}F^{\prime}(z_{0})}{k_{4}}>0.\] Conditions (17) and (18) are satisfied (Step 6) for \(p_{\Phi}=p_{F}=2\), thus yielding \(h_{\Phi}=h_{F}=h=4.112\), \(k_{2}=40\), \(k_{4}=2.0\). Now, \(k_{1}=60\) and \(k_{3}=3.0\) ensure (Step 7) that \[F(z_{0})=\lambda,\quad\Phi(z_{0})=T.\] The closed orbit of the designed 1-cycle is depicted in Fig. 2, along with trajectories resulting from deviations in initial conditions for the continuous part of the IGO (1). The evolution of the impulse weight (dose) sequence \(\lambda_{k}\) (see (2)) to the pre-defined 1-cycle amplitude \(\lambda\) is depicted in Fig. 3. A series of interchanging overdosing and underdosing events asymptotically converges to the desired value. 
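The recovery of \(h_{\Phi}\) from the quadratic in \(\eta_{\Phi}\) above can be verified numerically; with the worked-example values (assumed from the text), the smaller root reproduces \(h=4.112\), and both roots return the target slope exactly. A sketch:

```python
import math

# Worked-example values (assumed from the text): z0, target slope, k2, p.
z0 = 6.833
dPhi_target = 2.2852
k2, p = 40.0, 2

theta = k2 * p / (2.0 * z0 * dPhi_target)        # must exceed 2 for real roots
eta1 = theta - 1.0 + math.sqrt(theta * (theta - 2.0))
eta2 = theta - 1.0 - math.sqrt(theta * (theta - 2.0))
h1 = z0 / eta1**(1.0 / p)
h2 = z0 / eta2**(1.0 / p)

def dPhi(z, h):
    """Phi'(z) for the Hill function with half-saturation h and exponent p."""
    u = (z / h)**p
    return k2 * p * z**(p - 1) * h**(-p) / (1.0 + u)**2
```

Either root of the quadratic is an admissible choice of \(h_{\Phi}\); the design example uses the smaller one.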
This behavior could not be predicted from the design procedure since only a stable 1-cycle is sought. ## V Bifurcation Analysis To investigate the behaviors of the designed IGO under parameter variation, bifurcation analysis is performed. Over an interval of values of the parameter \(a_{3}\), and following the steps of the design procedure in Section III, the value of \(h=h_{\Phi}=h_{F}\) is found and the stability of the fixed point \(\mathcal{O}(a_{3},h)=X\) is evaluated. The condition \(h_{\Phi}=h_{F}\) is imposed to reduce the number of independent bifurcation parameters. From Section III-D, \(k_{1}=60\); \(k_{2}=40\); \(k_{3}=3.0\); \(k_{4}=2.0\); \(g_{1}=2.0\); \(g_{2}=0.5\). For each \(a_{3}\), the value of \(h\) is found by solving equations (15) with \(F(z_{0})=\lambda\), \(\Phi(z_{0})=T\), and the stability of the fixed point \(\mathcal{O}(a_{3},h)=X\) of mapping (5) given by Proposition 2 is analyzed. An example of such an analysis is shown in Fig. 4 (a),(b) for \(T=66.75\), \(\lambda=4.66\), and \(0.1505<a_{3}<0.54\). When the parameter \(a_{3}\) increases, the fixed point \(\mathcal{O}=X\) undergoes a period-doubling bifurcation: the maximal in absolute value multiplier \(\rho_{2}\) of the fixed point \(\mathcal{O}\) leaves the unit circle through \(-1\) (see Fig. 4(b)). In these figures, the stability region of the fixed point \(\mathcal{O}\) is in yellow. Fig. 4(b) depicts the dependence of \(h\) on \(a_{3}\) in the transition shown in Fig. 4(a). Fig. 4(c),(d) presents the results of the bifurcation analysis for other values of the cycle parameters: \(T=65.45\), \(\lambda=4.73\) and \(0.1505<a_{3}<0.612\). As pointed out earlier, the stability of the fixed point \(\mathcal{O}\) (1-cycle) is determined by \(F^{\prime}(z_{0})\) and \(\Phi^{\prime}(z_{0})\). Introduce \(\tau\) as \[\tau=1/|\Lambda|,\quad\Lambda=\ln r_{0},\quad r_{0}=\max_{1\leqslant i\leqslant 3}|\rho_{i}|.\] Fig. 3: The convergence of the sequence \(F(z_{k})\) to the desired \(\lambda\). 
Since all the multipliers are negative, \(-1<\rho_{i}<0\), \(1\leqslant i\leqslant 3\), the convergence is non-monotonous. To highlight the evolution, the point \(F(z_{k-1})\) is connected to the next one, \(F(z_{k})\), by blue lines. Fig. 2: The designed 1-cycle (\(\Gamma\), in red) corresponding to the fixed point (\(\mathcal{O}=X\)). Trajectories converging to \(\Gamma\) are in blue. The value of \(\tau\) characterizes the convergence time of a trajectory initiated at a point in the basin of attraction of the stable fixed point \(\mathcal{O}\) to the corresponding orbit. Fig. 5(a),(b) show the variation of \(\tau\) and \(\rho_{2}\) over the interval \(-0.6<F^{\prime}<0.0\), with \(\Phi^{\prime}=-\dfrac{k_{2}}{k_{4}}F^{\prime}\), for \(a_{3}=0.3005\) and \(a_{3}=0.2505\) (\(T=66.75\), \(\lambda=4.66\)), respectively. ## VI Conclusions A novel problem of designing the IGO to admit a pre-defined periodic solution is introduced. It is exemplified by the case of a stable 1-cycle with pre-defined solution parameters. It is demonstrated that the 1-cycle specifications are translated into a unique positive fixed point of the impulse-to-impulse discrete map. This fixed point can be rendered stable by selecting the modulation functions of the IGO. Further analysis is needed to control the type (monotonous, non-monotonous) and the speed of convergence of the IGO solutions to the orbit corresponding to the obtained fixed point.
2303.03864
Evidence for four-top quark production in proton-proton collisions at $\sqrt{s}$ = 13 TeV
The production of four top quarks ($\mathrm{t\bar{t}t\bar{t}}$) is studied with LHC proton-proton collision data samples collected by the CMS experiment at a center-of-mass energy of 13 TeV, and corresponding to integrated luminosities of up to 138 fb$^{-1}$. Events that have no leptons (all-hadronic), one lepton, or two opposite-sign leptons (where lepton refers only to prompt electrons or prompt muons) are considered. This is the first $\mathrm{t\bar{t}t\bar{t}}$ measurement that includes the all-hadronic final state. The observed significance of the $\mathrm{t\bar{t}t\bar{t}}$ signal in these final states of 3.9 standard deviations (1.5 expected) provides evidence for $\mathrm{t\bar{t}t\bar{t}}$ production, with a measured cross section of 36 $^{+12}_{-11}$ fb. Combined with earlier CMS results in other final states, the signal significance is 4.0 standard deviations (3.2 expected). The combination returns an observed cross section of 17 $\pm$ 4 (stat) $\pm$ 3 (syst) fb, which is consistent with the standard model prediction.
CMS Collaboration
2023-03-07T13:07:57Z
http://arxiv.org/abs/2303.03864v2
# Evidence for four-top quark production in proton-proton collisions at \(\sqrt{s}=13\,\mathrm{TeV}\) ###### Abstract The production of four top quarks (\(\mathrm{t}\bar{\mathrm{t}}\mathrm{t}\bar{\mathrm{t}}\)) is studied with LHC proton-proton collision data samples collected by the CMS experiment at a center-of-mass energy of \(13\,\mathrm{TeV}\), and corresponding to integrated luminosities of up to \(138\,\mathrm{fb}^{-1}\). Events that have no leptons (all-hadronic), one lepton, or two opposite-sign leptons (where lepton refers only to prompt electrons or prompt muons) are considered. This is the first \(\mathrm{t}\bar{\mathrm{t}}\mathrm{t}\bar{\mathrm{t}}\) measurement that includes the all-hadronic final state. The observed significance of the \(\mathrm{t}\bar{\mathrm{t}}\mathrm{t}\bar{\mathrm{t}}\) signal in these final states of 3.9 standard deviations (1.5 expected) provides evidence for \(\mathrm{t}\bar{\mathrm{t}}\mathrm{t}\bar{\mathrm{t}}\) production, with a measured cross section of \(36^{+12}_{-11}\,\mathrm{fb}\). Combined with earlier CMS results in other final states, the signal significance is 4.0 standard deviations (3.2 expected). The combination returns an observed cross section of \(17\pm 4\,\mathrm{(stat)}\pm 3\,\mathrm{(syst)}\,\mathrm{fb}\), which is consistent with the standard model prediction. _We dedicate this publication to our friends and colleagues Meenakshi Narain and Stephen Wimpenny, who passed away unexpectedly while this paper was in preparation. This work would not have been possible without their guidance and contributions._ The CMS Collaboration _Submitted to Physics Letters B_ ## 0.1 Introduction The production of four top quarks (\(\PQt\PQt\PQt\PQt\)) is predicted to occur very rarely in the standard model (SM). In proton-proton (\(\Pp\Pp\)) collisions at \(\sqrt{s}=13\TeV\), the production cross section has been calculated to be \(12.0^{+2.2}_{-2.5}\fb\) at next-to-leading order (NLO) in quantum chromodynamics (QCD) and electroweak (EW) contributions [1, 2, 3, 4, 5]. Examples of the SM lowest order contributions to \(\PQt\PQt\PQt\PQt\) production in \(\Pp\Pp\) collisions are shown in Fig. 1. Deviations from the predicted value occur in many proposed models of physics beyond the SM, such as supersymmetry [6, 7], composite models [8], top quark compositeness [9], two Higgs doublet models [10, 11, 12], and models with extra spatial dimensions [13, 14]. 
Measurements of \(\PQt\PQt\PQt\PQt\) production can also be used to constrain the top quark Yukawa coupling, \(CP\)-related parameters, and effective field theory operators [15]. In the SM, the top quark dominantly decays to a bottom quark and a \(\PW\) boson. Each \(\PW\) boson decays to either leptons or quarks, so the \(\PQt\PQt\PQt\PQt\) final state consists of four bottom quarks and up to four leptonic \(\PW\) boson decays. No analyses described or cited in this Letter attempt to explicitly identify \(\tau\) lepton products and hence hereafter "lepton" will refer only to \(\Pe\) and \(\mu\), whether produced directly in \(\PW\to\Pe/\mu+\nu\) decays or via \(\PW\to\tau+\nu\) with \(\tau\to\Pe/\mu+2\nu\). The production of \(\PQt\PQt\PQt\PQt\) has been searched for by both ATLAS [16, 17, 18] and CMS [19, 20, 21, 22] at the CERN LHC. The ATLAS Collaboration has reported evidence for \(\PQt\PQt\PQt\PQt\) production in final states with either two same-sign leptons or at least three leptons (referred to as "SSDL&ML") [18], with an observed significance of 4.3 standard deviations (2.4 expected) and a measured production cross section of \(24^{+7}_{-6}\fb\) (assuming SM branching ratios). The significance observed in searches for \(\PQt\PQt\PQt\PQt\) production by the CMS Collaboration in similar SSDL&ML final states is 2.6 standard deviations (2.7 expected), with a measured production cross section of \(12.6^{+5.8}_{-5.2}\fb\), using data collected in 2016-2018 with an integrated luminosity of \(138\fb^{-1}\)[21]. The CMS Collaboration has also reported an observed significance of 1.4 standard deviations and a measured production cross section of \(13^{+11}_{-9}\fb\) in final states with one lepton or two opposite-sign leptons, using data collected in 2016 with an integrated luminosity of \(38\fb^{-1}\)[22]. 
In this Letter, we present a search by the CMS Collaboration for the production of \(\PQt\PQt\PQt\PQt\) in final states with zero leptons (all-hadronic), one lepton (single-lepton), or two opposite-sign leptons (referred to as "opposite-sign dileptons" or "OSDL"). The search uses samples of \(\Pp\Pp\) collision data collected in 2016-2018 at \(\sqrt{s}=13\TeV\), with integrated luminosities of up to \(138\fb^{-1}\)[23, 24, 25]. To discriminate the \(\PQt\PQt\PQt\PQt\) signal events from the dominant background of \(\PQt\PAQt\) production we take advantage of the higher multiplicity of jets, particularly those produced by the hadronization of \(\PQb\) quarks. Events are required to have at least three candidate \(\PQb\) jets in the single-lepton and all-hadronic final states, and at least two candidate \(\PQb\) jets in the OSDL final state.

Figure 1: Examples of Feynman diagrams for \(\PQt\PQt\PQt\PQt\) production at leading order in the SM.
## 0.2 The CMS detector

The central feature of the CMS apparatus is a superconducting solenoid providing a magnetic field of 3.8 T, and enclosing a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter, and a brass and scintillator hadron calorimeter, each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity coverage provided by the barrel and endcap detectors. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. A two-level trigger system reduces the rate of events retained for further processing to around 1 kHz. The first-level trigger is composed of custom hardware processors, using information from the calorimeters and muon detectors [27]. The software-based high-level trigger [28] uses the full event information. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [29].

## 0.3 Simulated event samples

Signal and background processes are modeled using several Monte Carlo event generators. Multiple simulated minimum bias interactions within the same or nearby bunch crossings (pileup) are superimposed on the hard scattering process with the multiplicity distribution matched to the data. Signal events are generated using the MadGraph5_amc@nlo generator [1] versions 2.2.2 (2016) and 2.4.2 (2017-2018) in a tree-level approximation, with emission of up to two additional partons in matrix element calculations.
Inclusive \(\mathrm{t\overline{t}}\) production is generated at NLO precision using powheg v2 [30, 31, 32]. Smaller contributions from \(\mathrm{t\overline{t}}\) production in association with one or two bosons (H, W, Z, WH, ZH, WW, WZ, ZZ), W and Z production, single top quark (tW) production, and Drell-Yan processes constitute the remaining backgrounds, particularly where additional hadronic jets are produced by QCD radiation [21]. Single top and \(\mathrm{t\overline{t}}\) + H processes are simulated using powheg at NLO, while the other smaller contributions are simulated using MadGraph5_amc@nlo versions 2.2.2 (2016) and 2.4.2 (2017-2018) at leading order (LO). Fragmentation and parton showering are modelled by pythia 8 [33] versions 8.212 (2016) and 8.230 (2017-2018). For samples generated with LO (NLO) precision, the MLM [34] (FxFx [35]) matrix element to parton shower matching scheme is used. For the 2016 simulation, the underlying event tunes CUETP8M1 [36] and CUETP8M2T4 [37] are applied to Drell-Yan and \(\mathrm{t\overline{t}}\) production in association with two bosons, respectively. The tune CP5 [38] is applied to the remaining processes in the 2016 simulation and all processes in the 2017-2018 simulation. Simulations for the 2016 data-taking conditions are generated using the NNPDF3.0 [39] parton distribution functions (PDFs) with either LO or NLO accuracy, while for the 2017-2018 samples the NNPDF3.1 set is used.

## 0.4 Event reconstruction and data samples

A particle-flow algorithm [44] is used to reconstruct and identify each particle in an event, with an optimized combination of information from various CMS subdetectors. The objects identified by the algorithm comprise candidate electrons, muons, photons, and charged or neutral hadrons. Muon and electron candidates are restricted to the ranges \(|\eta|<2.4\) and \(|\eta|<2.5\), respectively [45, 46, 47], and are required to be isolated from other objects [45, 47, 48].
Jets are reconstructed from particle flow objects using the anti-\(k_{\mathrm{T}}\) algorithm [49, 50] with distance parameters of 0.4 (AK4) and 0.8 (AK8). Residual differences in the jet energy scale and resolution between data and simulation are corrected [51]. Charged particles identified as originating from pileup are discarded, and the measured energy is corrected to remove the estimated contribution from neutral pileup particles [52, 53]. The \(H_{\mathrm{T}}\) in an event is then defined as the scalar \(p_{\mathrm{T}}\) sum of all AK4 jets with \(p_{\mathrm{T}}>30\GeV\) and \(|\eta|<2.5\). The DeepCSV [54] and DeepJet algorithms [55] are used to discriminate (b tag) AK4 jets produced by the hadronization of bottom quarks from those produced by gluon and lighter quark hadronization. At a misidentification probability of 1%, b tagging efficiencies of approximately 68% for DeepCSV and 75% for DeepJet are measured in simulated \(\mathrm{t\bar{t}}\) events [56]. Hadronic top quark decays with a small or moderate Lorentz boost will typically produce three separate AK4 jets. These decays are referred to as "resolved" and are identified using multivariate top quark tagging (t tagging) algorithms. For the analysis of the single-lepton final state we use the deep neural network (DNN) based DeepResolved t tagger [57, 58] while for the all-hadronic final state we use a custom t tagger [59] based on boosted decision trees (BDTs) [60, 61]. In contrast, hadronically decaying top quarks with a large boost can result in a single merged jet. Such decays are identified in the all-hadronic channel by applying the CMS DeepAK8 algorithm [62] to AK8 jets with \(p_{\mathrm{T}}>400\GeV\). In order to avoid double counting of objects, we require merged top quark jets to be separated from resolved top quark candidates by \(\Delta R\equiv\sqrt{(\Delta\eta)^{2}+(\Delta\phi)^{2}}>0.8\). The OSDL events are collected using electron-muon, dimuon, and dielectron triggers.
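The \(H_{\mathrm{T}}\) definition and the \(\Delta R\) separation requirement above can be sketched in a few lines (an illustrative standalone sketch using hypothetical jet records; not CMS software):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt((d-eta)^2 + (d-phi)^2),
    with the phi difference wrapped into [-pi, pi]."""
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def ht(jets, pt_min=30.0, eta_max=2.5):
    """Scalar pT sum of AK4 jets with pT > 30 GeV and |eta| < 2.5."""
    return sum(j["pt"] for j in jets if j["pt"] > pt_min and abs(j["eta"]) < eta_max)

jets = [{"pt": 120.0, "eta": 0.4}, {"pt": 45.0, "eta": 2.1},
        {"pt": 25.0, "eta": 1.0}, {"pt": 80.0, "eta": 3.0}]
print(ht(jets))  # 165.0 -> only the first two jets pass the cuts
```

The \(\Delta\phi\) wrapping matters: two jets at \(\phi=\pm 3.0\) are separated by about 0.28 in azimuth, not 6.0.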
The \(\mathrm{e}\mu\) selection uses a combination of triggers that require either an electron with \(p_{\mathrm{T}}>23\GeV\) and a muon with \(p_{\mathrm{T}}>12\GeV\), or vice versa. For the dimuon channel, a trigger with \(p_{\mathrm{T}}\) thresholds of 17 and 8\GeV for the two highest \(p_{\mathrm{T}}\) muons is used. Similarly, the dielectron channel uses a trigger with \(p_{\mathrm{T}}\) thresholds of 23 and 12\GeV for the two highest \(p_{\mathrm{T}}\) electrons. A prioritized trigger strategy of \(\mathrm{e}\mu\), \(\mu\mu\), and \(\mathrm{ee}\) is used to ensure that events satisfying multiple triggers exclusively enter the appropriate OSDL final state. For the single-lepton channel, two different triggers are used. The first requires events to contain an isolated electron (muon) with \(p_{\mathrm{T}}>35\GeV\). The second requires a very loosely isolated electron or muon with \(p_{\mathrm{T}}>15\GeV\) in addition to the event having \(H_{\mathrm{T}}>450\GeV\). All-hadronic events are selected with a variety of triggers that require at least six AK4 jets and \(H_{\mathrm{T}}\) greater than thresholds in the range 380-450\GeV. A minimum of either one or two b-tagged jets is required, depending on the trigger. Scale factors are applied to simulated samples in all final states to correct for the differences in trigger efficiencies between data and simulation.

## 0.5 Opposite-sign dilepton final state

The OSDL channel contains data corresponding to an integrated luminosity of \(101\fb^{-1}\) collected in 2017-2018, with previously published results on 2016 data included in the final fit to all channels. Offline, events in the OSDL channel are required to have exactly two opposite-sign leptons, one with \(p_{\mathrm{T}}>25\GeV\) and the other with \(p_{\mathrm{T}}>15\GeV\), at least four jets with \(p_{\mathrm{T}}>30\GeV\), and \(H_{\mathrm{T}}>500\GeV\). Events with \(\Pe\Pe\) and \(\mu\mu\) invariant masses below 20\GeV or within 15\GeV of the \(\PZ\) boson mass are excluded.
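The prioritized \(\mathrm{e}\mu>\mu\mu>\mathrm{ee}\) trigger assignment and the same-flavor mass vetoes described above can be sketched as follows (illustrative only; the Z mass constant used here is an assumption, not taken from the Letter):

```python
def osdl_channel(fired):
    """Exclusive OSDL channel assignment with priority e-mu > mu-mu > ee.
    `fired` is the set of dilepton trigger groups the event satisfied."""
    for channel in ("emu", "mumu", "ee"):  # priority order
        if channel in fired:
            return channel
    return None

def passes_mass_veto(channel, mll, m_z=91.2):
    """Same-flavor events are rejected if m(ll) < 20 GeV or if m(ll) is
    within 15 GeV of the Z boson mass (m_z is an assumed constant)."""
    if channel == "emu":
        return True  # no Z veto for different-flavor pairs
    return mll >= 20.0 and abs(mll - m_z) >= 15.0

print(osdl_channel({"ee", "mumu"}), passes_mass_veto("mumu", 91.0))  # mumu False
```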
The distribution of \(H_{\mathrm{T}}\) is chosen as the input to the final fit, as this variable is sensitive to the presence of the extra two hadronic top quark decays in the signal as compared to the background. To increase sensitivity, events are categorized by lepton decay channel (\(\Pe\Pe\), \(\mu\mu\), \(\Pe\mu\)), the number of AK4 jets (\(N_{\mathrm{j}}=4\), 5, 6, 7, \(\geq\)8), and how many of them are b tagged (\(N_{\mathrm{b}}=2\), \(3\), \(\geq\)4). The signal regions (SRs) contain seven or more jets, three or more of which are b tagged by DeepJet, while categories containing fewer jets or exactly two b-tagged jets serve as control regions (CRs). The number of simulated events with no jets from the hadronization of additional b quarks is corrected to ensure consistency with the observed data. The correction is determined from CRs which contain exactly one b-tagged jet and hence are depleted of events with extra b jets. The required correction is determined to be a scaling factor of 0.78 \(\pm\) 0.05 (statistical uncertainty only), independent of lepton and jet kinematic properties. The OSDL signal and background normalizations are obtained from a simultaneous binned maximum likelihood fit to all the categories, where a template is created for each jet and b tag category, leptonic decay channel, and year. Figure 2 shows the jet multiplicity distributions for the \(\geq\)4 b tag categories after the fit to data. The dominant background contribution changes with tag multiplicity. With increasing \(N_{\mathrm{b}}\), backgrounds such as \(\ttbar+\geq\)1b, \(\ttbar+\) H, and \(\ttbar+\) V (V = W, Z) become more important. In the most sensitive categories, \(\ttbar+\geq\)1b becomes the dominant background.

## 0.6 Single-lepton final state

Single-lepton events are required to have exactly one lepton with \(p_{\mathrm{T}}>20\GeV\), at least four AK4 jets, at least two of which must be b tagged using the DeepCSV algorithm, and \(H_{\mathrm{T}}>500\GeV\).
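The single-lepton selection just described can be sketched as a simple event filter (hypothetical event record; additional requirements such as jet \(p_{\mathrm{T}}\) and \(\eta\) cuts are omitted):

```python
def passes_single_lepton_selection(event):
    """Selection sketch for the single-lepton channel: exactly one lepton
    with pT > 20 GeV, at least four AK4 jets, at least two of them b tagged,
    and HT > 500 GeV. `event` is a hypothetical dict, not a CMS data format."""
    leptons = [l for l in event["leptons"] if l["pt"] > 20.0]
    n_btags = sum(1 for j in event["jets"] if j["btag"])
    return (
        len(leptons) == 1
        and len(event["jets"]) >= 4
        and n_btags >= 2
        and event["ht"] > 500.0
    )

evt = {"leptons": [{"pt": 35.0}],
       "jets": [{"btag": True}, {"btag": True}, {"btag": False}, {"btag": False}],
       "ht": 650.0}
print(passes_single_lepton_selection(evt))  # True
```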
A BDT is trained using the TMVA package [63] to discriminate between signal and background in regions with large \(N_{\mathrm{j}}\) and \(N_{\mathrm{b}}\). More than 70 input variables are constructed, based on kinematic variables such as \(p_{\mathrm{T}}\), b tagging discriminants, resolved tagging discriminants, mass, and angular separations of various objects and their combinations (e.g., jets, dijets, trijets, b-tagged jets, lepton-b pairs, and \(\ttbar\) pairs). Information about the event topology is incorporated via event shape variables, such as centrality, planarity, sphericity, and the second Fox-Wolfram moment [64] calculated using all AK4 jets. Variables are included in the BDT classifier only if they improve the discrimination and if they are well modeled in the simulation, based on multiple goodness of fit tests. Signal and background samples are randomly divided into three equally populated parts: one for training, one for testing the performance, and one for evaluating the classifier for the maximum likelihood fit. The contributions from the dominant background and all other SM processes are included in the training.
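The three-way sample split described above (training, performance testing, and evaluation for the fit) can be sketched as follows (illustrative; the actual analysis uses TMVA):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def three_way_split(n_events):
    """Randomly divide event indices into three equally populated, disjoint
    parts: one for training the classifier, one for testing its performance,
    and one for evaluating it in the maximum likelihood fit."""
    idx = rng.permutation(n_events)
    return np.array_split(idx, 3)

train, test, evaluate = three_way_split(9000)
print(len(train), len(test), len(evaluate))  # 3000 3000 3000
```

Keeping the three parts disjoint avoids biasing the fit with events the classifier was trained on.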
## 0.7 All-hadronic final state

Events in the SR are subdivided into 12 categories based on the number of resolved top quark tags (\(N_{\text{RT}}\)), the number of boosted top quark tags (\(N_{\text{BT}}\)), and \(H_{\text{T}}\). The categorization by top quark tags defines three groups: \(N_{\text{RT}}\geq 2\); \(N_{\text{RT}}=1\) and \(N_{\text{BT}}\geq 1\); and \(N_{\text{RT}}=1\) and \(N_{\text{BT}}=0\). The first two groups are each further categorized into two ranges in \(H_{\text{T}}\): 700-1100\GeV and \(>\)1100\GeV. For the third group, there are six equally spaced bins in the range \(700<H_{\text{T}}<1300\GeV\), and two additional bins with \(H_{\text{T}}\) in the ranges 1300-1500\GeV and \(\geq\)1500\GeV. The SR categories were chosen to optimize sensitivity to \(\mathrm{t\bar{t}t\bar{t}}\) production. An event-level BDT is trained using CatBoost [65] in each category of the SR to further distinguish between \(\mathrm{t\bar{t}t\bar{t}}\) signal events and the dominant backgrounds originating from \(\mathrm{t\bar{t}}\) and QCD multijet production, by exploiting differences in kinematic distributions of reconstructed objects. The 20 optimized BDT input variables include \(N_{\text{j}}\) and \(N_{\text{b}}\); the kinematic distributions of jets, b-tagged jets, and t-tagged candidates; and variables related to the angular distributions of jets. Techniques using CRs in data are employed to estimate the dominant backgrounds from QCD multijet and \(\mathrm{t\bar{t}}\) + jets production, as described in the following text. The ratio of the QCD multijet to the \(\mathrm{t\bar{t}}\) + jets background is expected to be approximately 3:2 in the SR.
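The \(H_{\text{T}}\) binning of the \(N_{\text{RT}}=1\), \(N_{\text{BT}}=0\) group described above can be sketched as follows (illustrative):

```python
import numpy as np

# Bin edges for the N_RT = 1, N_BT = 0 group: six equal 100 GeV bins in
# 700-1300 GeV, then 1300-1500 GeV and >= 1500 GeV (eight bins in total).
EDGES = np.array([700, 800, 900, 1000, 1100, 1200, 1300, 1500], dtype=float)

def ht_bin(ht):
    """Return the 0-based H_T bin index; -1 means below the 700 GeV threshold.
    Index 7 is the overflow bin (>= 1500 GeV)."""
    return int(np.searchsorted(EDGES, ht, side="right")) - 1

print([ht_bin(x) for x in (750.0, 1250.0, 1400.0, 2000.0)])  # [0, 5, 6, 7]
```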
Estimates of the absolute normalization of the background and the shape of the BDT distributions in the 12 SR categories are obtained from an extrapolation based on five CRs. While the SR categories have \(N_{\text{j}}\geq 9\) and \(N_{\text{b}}\geq 3\), these five CRs are defined to have \((N_{\text{j}},N_{\text{b}})=(7,2)\), \((8,2)\), \((\geq 9,2)\), \((7,3)\), and \((8,3)\). Figure 4 illustrates how these control regions are related as a function of \(N_{\text{j}}\) and \(N_{\text{b}}\). The absolute normalization of the background is estimated using an "extended ABCD" method [66], where the number of events in the SR is derived from the number of events in several independent CRs. This method improves the accuracy of background yield estimates in cases where control variables are weakly correlated (such as \(N_{\text{j}}\) and \(N_{\text{b}}\)), compared to the traditional ABCD method in which only three CRs are used. Specifically, this method is used to predict the number of \(\mathrm{t\bar{t}}\) and QCD multijet events in the SR from the number of events observed in the five \((N_{\text{j}},N_{\text{b}})\) CRs, after subtracting the number of events from minor backgrounds. The shape of the BDT distribution in each SR is predicted using a DNN trained on the same five CRs that are used to estimate the absolute normalization. A normalizing autoregressive flow [67] is trained on the CRs to learn a bijective transformation of the \(H_{\text{T}}\) and BDT output distributions between a source and a target. The source used is a simulated \(\mathrm{t\bar{t}}\) sample, and the target is the total \(\mathrm{t\bar{t}}\) and multijet background (estimated by subtracting the other, simulated, background contributions from the data). The QCD simulation is not included as an input source due to the very small number of simulated events in the regions of interest.
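Ref. [66] defines the precise extended ABCD construction; the sketch below shows one plausible realization using the five listed CRs, in which the \(N_{\text{b}}=2\to 3\) transfer factor and its trend versus \(N_{\text{j}}\) are extrapolated to the SR (the formula and the event counts are hypothetical, for illustration only):

```python
def extended_abcd(n):
    """Extended-ABCD-style estimate of the background yield in the signal
    region (N_j >= 9, N_b >= 3) from five control regions keyed by (N_j, N_b);
    the key (9, 2) stands in for the (>=9, 2) region. Hypothetical realization:
    extrapolate the N_b = 2 -> 3 transfer factor R(N_j) = N(N_j,3)/N(N_j,2)
    from N_j = 7, 8 to N_j >= 9."""
    r7 = n[(7, 3)] / n[(7, 2)]
    r8 = n[(8, 3)] / n[(8, 2)]
    r9 = r8 * (r8 / r7)  # assume the multiplicative trend in N_j continues
    return n[(9, 2)] * r9

# Hypothetical event counts in the five control regions.
counts = {(7, 2): 8000.0, (7, 3): 1600.0, (8, 2): 2000.0, (8, 3): 360.0,
          (9, 2): 500.0}
print(extended_abcd(counts))  # ~81.0: R(7)=0.20, R(8)=0.18 -> R(>=9)=0.162
```

The double ratio \(R(8)/R(7)\) is the correction beyond the traditional ABCD estimate, which would simply apply \(R(8)\) (or an average transfer factor) to the \((\geq 9,2)\) yield.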
In each of the CRs, the normalizing flow algorithm learns a transformation from the source distribution to the target distribution, before applying that transformation to the source distribution in the SR. The algorithm is trained iteratively, starting from the less SR-like CRs (in terms of \(N_{\text{j}}\) and \(N_{\text{b}}\)) and including weights from previous training cycles in subsequent more SR-like training cycles. For each data-taking period (2016, 2017, and 2018), the BDT output and \(H_{\text{T}}\) distributions are predicted simultaneously in the three SR \(N_{\text{RT}}\) and \(N_{\text{BT}}\) categories. The predicted BDT distributions are then split according to the predicted \(H_{\text{T}}\), resulting in predicted BDT output distributions for each of the 12 SR categories per data-taking period. The predictions are checked in validation regions (VRs) defined in parallel to those for the SR, but with \(N_{\text{j}}=8\), \(N_{\text{b}}\geq 3\) and extrapolating from CRs in which \((N_{\text{j}},N_{\text{b}})=(6,2)\), \((7,2)\), \((8,2)\), \((6,\geq 3)\), and \((7,\geq 3)\). Shape and normalization background modeling uncertainties are derived from discrepancies in the VRs.

## 0.8 Systematic uncertainties

The largest contribution to the systematic uncertainty in the fitted signal strength is the uncertainty in the modeling of \(\mathrm{t\bar{t}}\) with additional b jets (3.7%). The background estimation in the all-hadronic final state contributes up to 2.7% per SR category, dominated by statistical fluctuations in the CRs. The jet energy scale contributes up to 2.4% (depending on the year and channel), renormalization and factorization scales 2.1%, and leptonic fake rates 1.9%. The largest components of the b tagging and light quark mistagging efficiency uncertainties each contribute 1.8%.
Further nuisance parameters with smaller individual contributions include those relating to lepton reconstruction and trigger efficiencies, the input cross section for \(\mathrm{t\bar{t}}\) + additional b quark production, jet energy resolution, matrix element to parton shower matching, the modeling of the resolved \(\mathrm{t}\) tagging in the single-lepton channel, PDFs, pileup, the theoretical uncertainty in the \(\mathrm{t\bar{t}}\) cross section, the theoretical modeling of initial-state radiation in \(\mathrm{t\bar{t}}\) production, and the delivered luminosity. The total systematic uncertainty, considering the effects of all nuisance parameters, is 17%.

## 0.9 Results

Table 1 shows the fitted values of the signal strength (the ratio of the measured cross section to the prediction), the measured cross section, and the expected and observed significance of \(\mathrm{t\bar{t}}\mathrm{t\bar{t}}\) production from 2017-2018 data in the OSDL channel, and 2016-2018 data in the single-lepton and all-hadronic channels. The signal strength is calculated with all systematic uncertainties, including all theoretical uncertainties. The cross section measurement is performed with all systematic uncertainties except theoretical uncertainties that affect the rate of the signal process. The table also includes the combination of these new results with the CMS OSDL analysis of 2016 data [22], and the same-sign dilepton and multilepton analysis of the 2016-2018 data [21], following the procedure described in Refs. [68, 69]. The expected and observed significances for this combination are 3.2 and 4.0 standard deviations, respectively.

Figure 5: The distribution of the BDT discriminants for the full 2016–2018 data set in the all-hadronic channel. The two most sensitive SR categories are shown, defined by \(N_{\mathrm{RT}}=1\), \(N_{\mathrm{BT}}\geq 1\), \(H_{\mathrm{T}}>1400\GeV\) (left), and \(N_{\mathrm{BT}}\geq 2\), \(H_{\mathrm{T}}>1100\GeV\) (right).
The background from QCD multijet and \(\mathrm{t\bar{t}}\) production is derived from control regions in the data. Estimates for the signal and other backgrounds are shown using simulated samples. The hatched bands correspond to the estimated total uncertainty after the fit.

## 0.10 Summary

We have measured the cross section for the simultaneous production of four top quarks (\(\PQt\PQt\PQt\PQt\)) in proton-proton collisions. The data were collected by the CMS experiment at the LHC in 2016-2018, and correspond to an integrated luminosity of up to \(138\,\mathrm{fb}^{-1}\) at a center-of-mass energy of \(13\,\mathrm{TeV}\). The all-hadronic final state has been studied for the first time in a \(\PQt\PQt\PQt\PQt\) production analysis, using a background estimation strategy based on a deep neural network trained using control regions in data. Final states with one lepton (electron or muon) or two opposite-sign leptons have also been analyzed. The observed and expected significances obtained from the combination of the new analyses described here are 3.9 and 1.5 standard deviations, respectively. When combined with published CMS results in other final states, the significances increase to 4.0 (observed) and 3.2 (expected) standard deviations. This is a significant improvement compared to previous CMS results and the first CMS evidence for \(\PQt\PQt\PQt\PQt\) production with a significance above three standard deviations.
2307.00642
Multiclass Boosting: Simple and Intuitive Weak Learning Criteria
We study a generalization of boosting to the multiclass setting. We introduce a weak learning condition for multiclass classification that captures the original notion of weak learnability as being "slightly better than random guessing". We give a simple and efficient boosting algorithm, that does not require realizability assumptions and its sample and oracle complexity bounds are independent of the number of classes. In addition, we utilize our new boosting technique in several theoretical applications within the context of List PAC Learning. First, we establish an equivalence to weak PAC learning. Furthermore, we present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning and List PAC learning. Notably, our technique gives rise to a simplified analysis, and also implies an improved error bound for large list sizes, compared to previous results.
Nataly Brukhim, Amit Daniely, Yishay Mansour, Shay Moran
2023-07-02T19:26:58Z
http://arxiv.org/abs/2307.00642v1
# Multiclass Boosting: Simple and Intuitive Weak Learning Criteria

###### Abstract

We study a generalization of boosting to the multiclass setting. We introduce a weak learning condition for multiclass classification that captures the original notion of weak learnability as being "slightly better than random guessing". We give a simple and efficient boosting algorithm, that does not require realizability assumptions and its sample and oracle complexity bounds are independent of the number of classes. In addition, we utilize our new boosting technique in several theoretical applications within the context of List PAC Learning. First, we establish an equivalence to weak PAC learning. Furthermore, we present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning and List PAC learning. Notably, our technique gives rise to a simplified analysis, and also implies an improved error bound for large list sizes, compared to previous results.

## 1 Introduction

Boosting is a powerful algorithmic approach used to boost the accuracy of weak learning models, transforming them into strong learners. Boosting was first studied in the context of binary classification in a line of seminal works which include the celebrated Adaboost algorithm, as well as many other algorithms with various applications (see e.g. [14; 22; 11; 12]). The fundamental assumption underlying boosting is that a method already exists for finding poor, yet not entirely trivial, classifiers. Concretely, binary boosting assumes there exists a learning algorithm that, when presented with training examples, can find a classifier \(h:\mathcal{X}\mapsto\{0,1\}\) that has classification error less than \(1/2\). That is, it performs slightly better than random guessing. The intuition is that this is the most minimal assumption one can make about a learning algorithm without it being impractical.
This assumption is called the _weak learning assumption_, and it is central to the study of boosting. While binary boosting theory has been extensively studied, extending it to the multiclass setting has proven to be challenging. In particular, it turns out that the original notion of weak learnability as being "slightly better than a random guess", does not easily extend to the multiclass case. For example, perhaps the most natural extension is to assume that the learner has accuracy that is slightly better than \(1/k\), where \(\mathcal{Y}=\{1,...,k\}\). However, this naive extension is in fact known to be too weak for boosting (see Section 2 below for a detailed discussion). Instead, previous works [19; 7; 23] have formulated various complex weak learning assumptions with respect to carefully tailored loss functions, and rely on restrictive realizability assumptions, making them less useful in practice. A weak learning assumption.In this work, we generalize the classic formulation of boosting to the multiclass setting. We introduce a weak learning condition that captures the original intuition of weak learnability as "slightly better-than-random guessing". The key idea that renders this condition useful compared to previous attempts, is based on a "hint" given to the weak learner. The hint takes the form of a list of \(k\) labels per example, where \(k\) is possibly smaller than \(|\mathcal{Y}|\). Then, the assumption is that there exists a learner capable of producing not entirely trivial classifiers, if it was provided with a "good hint". In other words, if the list provided to the learner happens to contain the correct label, we expect the learner to perform slightly better than randomly guessing a label from the list. 
Specifically, the assumption is that for any \(k\geq 2\), if the given lists of size \(k\) contain the true labels, the learner will output a classifier \(h:\mathcal{X}\mapsto\mathcal{Y}\) with error slightly better than random guessing among the \(k\) labels. Notice that this encompasses both the binary case when \(k=2\), as well as the naive extension mentioned above, when \(k=|\mathcal{Y}|\). We call this new condition the "better-than-random guess", or _BRG_ condition. The BRG condition also generalizes the classic binary case condition in a practical sense. Previous methods on multiclass boosting are framed within the PAC (Probably Approximately Correct) setting, correspond to a _known_ hypothesis class \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\), and assume that weak learning hold for every distribution \(\mathcal{D}\) over the _entire_ domain \(\mathcal{X}\). Practically, these requirements can be very difficult to check or guarantee. In contrast, as in binary boosting, the BRG condition can be relaxed to a more benign empirical weak learning assumption, that can be verified immediately in an actual learning setting. Recursive boosting.Our main contribution is a new boosting algorithm that is founded on the BRG condition. Our boosting methodology yields a simple and efficient algorithm. It is based on the key observation that even a naive weak learner can produce a useful hint. Recall that when given no hint at all, the naive weak learner can still find a hypothesis with a slight edge (\(\gamma>0\)), over random guessing among \(|\mathcal{Y}|\) labels. Although this may result in a poor predictor, we prove that it effectively reduces the label space per example to approximately \(1/\gamma\) labels. This initial hint serves as the starting point for subsequent iterations of the boosting algorithm. 
The process continues recursively until the label list per example is reduced to size \(2\), at which point any classic binary boosting method can yield a strong classifier. Unlike previous methods, our boosting algorithm and guarantees do not rely on realizability assumptions, nor do they scale with \(|\mathcal{Y}|\). In fact, we show that the sample and oracle-call complexity of our algorithm are entirely independent of \(|\mathcal{Y}|\), which implies our approach is effective even in cases where the label space \(\mathcal{Y}\) is possibly infinite. Moreover, the overall running time of our algorithm is polynomial in the size of its input. An important insight that underlies our approach is the link between the naive weak learning condition, which we term _weak-BRG_ learning, and that of _list learning_. In list learning [5; 8; 18], rather than predicting a single outcome for a given unseen input, the goal is to provide a short list of predictions. Here we use this technique as an intermediate goal of the algorithm, thereby effectively reducing the size of the label space in each round. The generalization analysis relies on sample compression arguments which result in efficient bounds on the sample and oracle complexities. Perhaps surprisingly, the connection between weak learnability and list learnability is even more fundamental. We prove that there is an equivalence between these two notions. Specifically, we establish that a \(\gamma\)-weak learner is equivalent to a \((1/\gamma)\)-list learner. Lastly, we demonstrate the strength of our boosting framework. First, we give a generalization of our boosting technique to hold for list PAC learning algorithms. Then, we showcase the effectiveness of the weak learning criteria in capturing learnability in two fundamental learning settings: PAC learning, and List PAC learning. Recently, [5] proved a characterization of multiclass PAC learning using the Daniely-Shwartz (DS) dimension.
In a subsequent study, [8] gave a characterization of _list_ learnability using a natural extension of the DS dimension. Here we show that in both cases, assuming the appropriate dimension is bounded, one can devise a simple weak learning algorithm. Thus, it is also amenable to a boosting method similar to our approach, leading to a novel and alternative proof of the characterization of learnability. We note that for cases where the dimension is much smaller than the list size, we have an improved result over the previous bound. Moreover, our approach offers a simpler algorithm and analysis technique, potentially benefiting future applications as well.

### Main result

The main contributions in this work are as follows.

1. **Multiclass boosting framework.** Our main result is a boosting framework for the multi-class setting, which is a natural generalization of binary boosting theory. We give a simple weak learning assumption that retains the notion of weak learnability as "slightly-better-than-random-guess" from the binary case. Furthermore, we give an efficient multiclass boosting algorithm, as formally stated in Theorem 1 below. Our boosting algorithm is given in Section 3 (Algorithm 3).

2. **Applications: List PAC learning.** First, we establish an equivalence between List PAC learning and Weak PAC learning, demonstrating the strong ties between List PAC learning and multiclass boosting theory. Furthermore, we present a new result on boosting for list learners. Lastly, we give a novel and alternative proof for the characterization of multiclass PAC learning and List PAC learning. In particular, the results imply a simplified algorithmic approach compared to previous works, and an improved error bound for large list sizes, compared to previous results.

We will now introduce the main weak learning assumption, which we call the "better-than-random guess", or BRG condition, and state our main result in Theorem 1 below.
In its original form, the boosting question begins by assuming that a given hypothesis class \(\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}\) is _weakly-PAC_ learnable. Similarly, here we present the BRG condition framed in the weak (multiclass) PAC setting, followed by a relaxation to an _empirical_ variant of the assumption.

**Definition 1** (BRG condition).: _We say that a hypothesis \(h:\mathcal{X}\to\mathcal{Y}\) satisfies the \(\gamma\)-BRG condition with respect to a list function \(\mu:\mathcal{X}\to\mathcal{Y}^{k}\) on a distribution \(\mathcal{D}\) over examples if_

\[\Pr_{(x,y)\sim\mathcal{D}}[h(x)=y]\geq\left(\frac{1}{k}+\gamma\right)\Pr_{(x,y)\sim\mathcal{D}}[y\in\mu(x)]. \tag{1}\]

_We say that a learning rule \(\mathcal{W}\) satisfies the \(\gamma\)-BRG condition for a hypothesis class \(\mathcal{H}\) if for every \(\mathcal{H}\)-realizable distribution \(\mathcal{D}\), every \(k\geq 2\), and every list function \(\mu:\mathcal{X}\to\mathcal{Y}^{k}\), the hypothesis \(h\) output by \(\mathcal{W}\) satisfies Equation (1) with probability \(1-\delta\), when given \(m_{0}(\delta)\) i.i.d. examples from \(\mathcal{D}\) and given \(\mu\)._

In words, the condition states that if \(y\) belongs to the set \(\mu(x)\), then \(h\) has a higher probability of correctly classifying \(x\), by an additional factor of \(\gamma\), compared to a random guess from the list \(\mu(x)\). However, requiring that the labels be deterministic according to a target function from a known class \(\mathcal{H}\), and that weak learning holds for every distribution \(\mathcal{D}\) over the entire domain \(\mathcal{X}\), is impractical, as these requirements can be very difficult to check or guarantee. Instead, as in the binary boosting setting, our condition can be relaxed to a more benign _empirical_ weak learning assumption, as given next.

**Definition 2** (Empirical BRG condition).: _Let \(S\in(\mathcal{X}\times\mathcal{Y})^{m}\). 
We say that a learning rule \(\mathcal{W}\) satisfies the empirical \(\gamma\)-BRG condition for \(S\) if there is an integer \(m_{0}\) such that for every distribution \(p\) over \([m]\), every \(k\geq 2\), and every list function \(\mu:\mathcal{X}|_{S}\to\mathcal{Y}^{k}\), when given \(m_{0}\) examples from \(S\) drawn i.i.d. according to \(p\), and given \(\mu\), it outputs a hypothesis \(h\) such that,_

\[\sum_{i=1}^{m}p_{i}\cdot\mathbbm{1}[h(x_{i})=y_{i}]\geq\left(\frac{1}{k}+\gamma\right)\sum_{i=1}^{m}p_{i}\cdot\mathbbm{1}[y_{i}\in\mu(x_{i})]. \tag{2}\]

Footnote 1: We denote \(\mathcal{X}|_{S}=\{x\in\mathcal{X}:\exists y\in\mathcal{Y}\text{ s.t. }(x,y)\in S\}\).

Next, we give our main result, an efficient boosting algorithm, as stated in Theorem 1 below.

**Theorem 1** (**Boosting** (Informal)).: _There exists a multiclass boosting algorithm \(\mathcal{B}\) such that for any \(\epsilon,\delta>0\) and any distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Y}\), when given a training set \(S\sim\mathcal{D}^{m}\) with \(m=\tilde{O}\left(\frac{m_{0}}{\gamma^{3}\epsilon}\right)\) and oracle access to a learning rule \(\mathcal{W}\), and applying \(\mathcal{B}\) with a total of \(\tilde{O}\left(1/\gamma^{3}\right)\) oracle calls to \(\mathcal{W}\), it outputs a predictor \(\bar{H}:\mathcal{X}\mapsto\mathcal{Y}\) such that with probability at least \(1-\delta\), we get that if \(\mathcal{W}\) satisfies the empirical \(\gamma\)-BRG condition for \(S\), then_

\[\Pr_{(x,y)\sim\mathcal{D}}\left[\bar{H}(x)\neq y\right]\leq\epsilon.\]

Footnote 2: The \(\tilde{O}\) notation conceals \(\operatorname{polylog}(m,1/\delta)\) factors.

### Related work

Boosting theory has been extensively studied, originally designed for binary classification (e.g., AdaBoost and similar variants) [23]. There are various extensions of boosting to the multiclass setting.
The early extensions include AdaBoost.MH, AdaBoost.MR, and approaches based on Error-Correcting Output Codes (ECOC) [24; 1]. These works often reduce the \(k\)-class task to a single binary task. The binary reduction can have various problems, including increased complexity and a lack of guarantees of an optimal joint predictor. Other works on multiclass boosting focus on practical considerations and demonstrate empirical performance improvements across various applications [29; 16; 15; 3; 6; 4; 21]. However, they lack a comprehensive theoretical framework for the multiclass boosting problem and often rely on earlier formulations such as one-versus-all reductions to the binary setting or multi-dimensional predictors and codewords. Notably, a work by [19] established a theoretical framework for multiclass boosting, which generalizes previous learning conditions. However, it requires the assumption that the weak learner minimizes a complicated loss function that is significantly different from the simple classification error. Moreover, it is based on a restrictive realizability assumption with respect to a _known_ hypothesis class. In contrast, we do not require realizability, and only consider the standard classification loss. More recently, [7] followed a formulation for multiclass boosting similar to that of [19]. They proved a hardness result showing that a broad, yet restricted, class of boosting algorithms must incur a cost which scales polynomially with \(|\mathcal{Y}|\). Our approach does not fall into this class of algorithms. Moreover, our algorithm has sample and oracle complexity bounds that are entirely independent of \(|\mathcal{Y}|\).

## 2 Warmup: _too-weak_ weak learning

When there are only \(2\) labels, the weak learner must find a hypothesis that predicts the correct label a bit better than a random guess. That is, with a success probability that is slightly more than \(1/2\).
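This "slightly better than random" requirement, here against \(1/2\) and later against \(1/k+\gamma\), always boils down to comparing a (weighted) empirical accuracy with a threshold. A minimal sketch of that check (the function names are ours, not the paper's):

```python
def weighted_accuracy(S, p, h):
    """Weighted fraction of examples in S = [(x, y), ...] that h labels correctly."""
    return sum(pi for pi, (x, y) in zip(p, S) if h(x) == y)

def beats_random_guess(S, p, h, k, gamma):
    """Empirical better-than-random check: weighted accuracy >= 1/k + gamma."""
    return weighted_accuracy(S, p, h) >= 1.0 / k + gamma

S = [(0, "a"), (1, "b"), (2, "c")]
p = [1 / 3, 1 / 3, 1 / 3]
h = lambda x: "a" if x < 2 else "c"          # correct on examples 0 and 2
assert abs(weighted_accuracy(S, p, h) - 2 / 3) < 1e-9
assert beats_random_guess(S, p, h, k=3, gamma=0.25)      # 2/3 >= 1/3 + 1/4
assert not beats_random_guess(S, p, h, k=3, gamma=0.4)   # 2/3 <  1/3 + 0.4
```

The distribution \(p\) matters: a boosting algorithm will ask the weak learner to clear the same threshold under adversarially reweighted versions of the data, not just the uniform one.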
When the number of labels \(k\) is more than \(2\), perhaps the most natural extension requires that the weak learner outputs hypotheses that predict the correct label a bit better than a random guess _among \(k\) labels_. That is, with a success probability that is slightly more than \(1/k\). However, this is in fact known to be too weak for boosting (see e.g., [23], Chapter 10). Here we first give a simple example that demonstrates that fact. However, we also show that all is not yet lost for the "better-than-random-guess" intuition. Specifically, we describe how this condition can still allow us to extract valuable knowledge about which labels are _incorrect_. This observation will serve as a foundation for our main results, which we will elaborate on in the next section. We start by defining the notion of better-than-random weak learner that we term weak-BRG learning. **Definition 3** (weak-BRG learning).: _A learning algorithm \(\mathcal{W}\) is a weak-BRG learner for a hypothesis class \(\mathcal{H}\subseteq[k]^{\mathcal{X}}\) if there is \(\gamma>0\) and \(m_{0}:(0,1)\mapsto\mathbb{N}\) such that for any \(\delta_{0}>0\), and any \(\mathcal{H}\)-realizable distribution \(\mathcal{D}\) over \(\mathcal{X}\times[k]\), when given \(m_{0}\geq m_{0}(\delta_{0})\) samples from \(\mathcal{D}\), it returns \(h:\mathcal{X}\to\mathcal{Y}\) such that with probability \(1-\delta_{0}\),_ \[\Pr_{(x,y)\sim D}[h(x)=y]\geq\frac{1}{k}+\gamma. \tag{3}\] To get an intuition for why this definition is indeed too weak for boosting, consider the following simple example. Suppose that \(\mathcal{X}=\{a,b,c\}\), \(\mathcal{Y}=\{1,2,3\}\), and that the training set consists of the three labeled examples \((a,1),(b,2)\), and \((c,3)\). Further, we suppose that we are using a weak learner which chooses weak classifiers that never distinguish between \(a\) and \(b\). In particular, the weak learner always chooses one of two weak classifiers: \(h_{1}\) and \(h_{2}\), defined as follows. 
For \(x\in\{a,b\}\), \(h_{1}\) always returns \(1\) and \(h_{2}\) always returns \(2\). For \(x=c\) they both return \(3\). Then, notice that for any distribution over the training set, either \(h_{1}\) or \(h_{2}\) must achieve an accuracy of at least \(1/2\), which is significantly higher than the accuracy of \(1/k=1/3\). However, regardless of how the weak classifiers are aggregated, any final classifier \(H\) that relies solely on the predictions of the weak hypotheses will unavoidably misclassify either \(a\) or \(b\). As a result, the training accuracy of \(H\) on the three examples can never exceed \(2/3\), making it impossible to achieve perfect accuracy through any boosting method. Furthermore, we note that this simple example can also be extended to a case where the data is realizable by a hypothesis class which is not learnable by any learning algorithm (let alone boosting). For example, consider the hypothesis class \(\mathcal{H}=\{1,2,3\}^{\mathcal{X}}\) for \(\mathcal{X}=\mathbb{N}\). Then, \(\mathcal{H}\) is not PAC learnable (e.g., via No-Free-Lunch ([26], Theorem 5.1)). However, similarly to the above, one can construct a learning rule that returns a hypothesis with accuracy \(1/2>1/k\) over an \(\mathcal{H}\)-realizable distribution.

Next, we will examine a useful observation that will form the basic building block of our algorithmic methodology. We demonstrate that the natural weak learner given in Definition 3, while weak, is nonetheless useful. This can be shown by examining the guarantees obtained through its application in boosting. Specifically, we consider the following classic variant of boosting via the Hedge algorithm.

```
Input: Training data \(S\in(\mathcal{X}\times[k])^{m}\), parameter \(\eta>0\).
Output: A predictor \(H:\mathcal{X}\times\mathcal{Y}\mapsto\mathbb{R}\).
1: Initialize: \(w_{1}(i)=1\) for all \(i=1,...,m\).
2: for \(t=1,\ldots,T\) do
3:   Denote by \(\mathcal{D}_{t}\) the distribution over \([m]\) obtained by normalizing \(w_{t}\).
4:   Draw \(m_{0}\) examples from \(\mathcal{D}_{t}\) and pass to the weak learner.
5:   Get weak hypothesis \(h_{t}:\mathcal{X}\mapsto\mathcal{Y}\), and update for \(i=1,...,m\):
       \(w_{t+1}(i)=w_{t}(i)e^{-\eta\cdot\mathbb{1}[h_{t}(x_{i})=y_{i}]}\).
6: end for
7: Output \(H\) such that for all \((x,y)\in\mathcal{X}\times[k]\),
       \(H(x,y)=\sum_{t=1}^{T}\mathbb{1}\left[h_{t}(x)=y\right]\).
```
**Algorithm 1** Boosting via Hedge

Notice that the output of Algorithm 1 is not a classifier, but a scoring function with the aim of predicting the likelihood of a given label candidate \(y\in[k]\) for some \(x\in\mathcal{X}\). Typically, boosting algorithms combine the weak hypotheses into such a scoring function, yet their final output applies an _argmax_ over it to yield a valid classifier. However, since the weak learning assumption is too weak, as we have shown above, taking the argmax is useless in this setting. Instead, the following lemma shows that by boosting the "too-weak" learner, we can guarantee to eliminate one label for each example in the data. Towards that end, we consider a relaxed variant of the weak-BRG learner, defined over a data set \(S\), which we term the _empirical weak-BRG_ learner. Specifically, we say that a learner satisfies the empirical weak-BRG condition if there is an integer \(m_{0}\) such that for any distribution over the training examples, when given \(m_{0}\) examples drawn i.i.d. from it, the learner outputs a hypothesis that satisfies Equation (3). Proofs are deferred to the appendix.

**Lemma 1** (Remove one label).: _Let \(S\in(\mathcal{X}\times[k])^{m}\). Let \(\mathcal{W}\) be an empirical weak-BRG learner for \(S\) with respect to some \(\gamma\) and sample size \(m_{0}\). 
Then, the output \(H:(\mathcal{X}\times[k])\mapsto[0,T]\) obtained by running Algorithm 1 with \(T\geq\frac{8\log(m)}{\gamma^{2}}\) and \(\eta=\sqrt{\frac{\ln(m)}{2T}}\) guarantees that for all \((x,y)\in S\), \(\frac{H(x,y)}{T}\geq\frac{1}{k}+\frac{\gamma}{2}\). Moreover, for all \((x,y)\in S\) the minimally scored label \(\hat{\ell}=\arg\min_{\ell\in[k]}H(x,\ell)\) must be incorrect. That is, \(\hat{\ell}\neq y\)._

Notice that if we were to take the argmax of \(H\), as is typically done in boosting, the guarantees given in Lemma 1 do not suggest this will result in the correct prediction. In fact, this approach might yield a rather bad classifier even for the set \(S\) on which it was trained. In other words, for any \((x,y)\in S\) it may be that there is some incorrect label \(y^{\prime}\neq y\) with \(H(x,y^{\prime})>H(x,y)\). However, notice that the lemma does suggest a good classifier of _incorrect_ labels. That is, the lowest scored label will always be an incorrect one over the training data. This property can be shown to generalize via compression arguments, as discussed in Section 5. This allows us to effectively reduce the size of the label space by one, and is used as the basic building block of our algorithm, as detailed in the next section.

## 3 Multiclass boosting results

We start by introducing the notion of weak learnability that is assumed by our boosting algorithm. We note that it is a relaxation of the empirical BRG condition introduced in Definition 2, in the sense that it does not make any guarantees for the case that the given hint list does not contain the correct label. This may seem like significantly weakening the assumption, yet it turns out to be sufficient for our boosting approach to hold. In the resulting fully relaxed framework, no assumptions at all are made about the data. Although the BRG condition is not explicitly assumed to hold, when this is the case, our final bound given in Theorem 2 implies a high generalization accuracy.
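Lemma 1's label-elimination effect can be checked concretely on the warmup example from Section 2. The sketch below is ours: for determinism the weak learner is handed the exact reweighted distribution rather than \(m_{0}\) i.i.d. draws, and \(T\) and \(\eta\) are illustrative constants, not the ones from the lemma.

```python
import math

def hedge_boost(S, weak_learner, T, eta):
    """Sketch of Algorithm 1 (Hedge): reweight examples, query the weak learner,
    and score each label by how many weak hypotheses voted for it."""
    w = [1.0] * len(S)
    hyps = []
    for _ in range(T):
        total = sum(w)
        p = [wi / total for wi in w]          # the distribution D_t
        h = weak_learner(p)                   # deterministic stand-in for m_0 draws
        hyps.append(h)
        # down-weight the examples the weak hypothesis got right
        w = [wi * math.exp(-eta * (1 if h(x) == y else 0))
             for wi, (x, y) in zip(w, S)]
    return lambda x, y: sum(1 for h in hyps if h(x) == y)

# The warmup data set, with a weak learner that never separates a from b.
S = [("a", 1), ("b", 2), ("c", 3)]
h1 = lambda x: 3 if x == "c" else 1
h2 = lambda x: 3 if x == "c" else 2

def weak_learner(p):
    acc = lambda h: sum(pi for pi, (x, y) in zip(p, S) if h(x) == y)
    return h1 if acc(h1) >= acc(h2) else h2   # accuracy always >= 1/2 > 1/3

H = hedge_boost(S, weak_learner, T=40, eta=0.1)
# Lemma 1: on every training example the minimally scored label is incorrect.
for x, y in S:
    assert min([1, 2, 3], key=lambda l: H(x, l)) != y
```

Taking the argmax of \(H\) would still misclassify \(a\) or \(b\), but the minimum reliably identifies a wrong label, which is exactly the elimination step the recursion builds on.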
**Definition 4** (Relaxed Empirical \(\gamma\)-BRG learning).: _Let \(S\in(\mathcal{X}\times\mathcal{Y})^{m}\), \(\gamma>0\), and integer \(m_{0}\). Let \(M\) be a set of list functions of the form \(\mu:\mathcal{X}\mapsto\mathcal{Y}^{k}\) for any integer \(k\), such that for each \(\mu\in M\) and \(i\in[m]\), we have \(y_{i}\in\mu(x_{i})\). A learning algorithm satisfies this condition with respect to \((S,\gamma,m_{0},M)\) if for any distribution \(p\) over \(S\) and any \(\mu:\mathcal{X}\mapsto\mathcal{Y}^{k}\) such that \(\mu\in M\), when given a sample \(S^{\prime}\sim p^{m_{0}}\) and access to \(\mu\), it returns \(h:\mathcal{X}\mapsto\mathcal{Y}\) such that,_

\[\Pr_{(x,y)\sim p}[h(x)=y]\geq\frac{1}{k}+\gamma. \tag{4}\]

Footnote 3: We also allow lists that return an infinite subset of labels. In that case we simply have that the weak hypothesis satisfies \(\Pr_{(x,y)\sim p}[h(x)=y]\geq\gamma\).

Notice that when the list \(\mu\) returns the set of all possible labels \(\mathcal{Y}\) and it is of size \(k\), this condition is essentially equivalent to the empirical weak-BRG condition, which as shown above is too weak for boosting. Requiring that the condition hold for _any_ list size \(k\leq|\mathcal{Y}|\) is sufficient to facilitate boosting, as shown in Theorem 2. The starting point of our overall boosting algorithm (given in Algorithm 3) is a simple learning procedure, specified in Algorithm 2, that is used to effectively reduce the size of the label space. In particular, it is used to produce the initial "hint" function that is used by the boosting method.

```
Input: \(S\in(\mathcal{X}\times\mathcal{Y})^{m}\), parameters \(m_{0},p>0\).
Output: A function \(\mu:\mathcal{X}\mapsto\mathcal{Y}^{p}\).
1: Set \(S_{1}:=S\).
2: for \(j=1,...,p\) do
3:   Let \(\mathcal{U}_{j}\) denote the uniform distribution over \(S_{j}\).
4:   Draw \(m_{0}\) examples from \(\mathcal{U}_{j}\) and pass to the weak learner, with \(\mu_{0}\equiv\mathcal{Y}\).
5:   Get weak hypothesis \(h_{j}:\mathcal{X}\mapsto\mathcal{Y}\).
6:   Set \(S_{j+1}\) to be all the points in \(S_{j}\) which \(h_{j}\) predicts incorrectly.
7: end for
8: Output \(\mu\) defined by \(\mu(x)=\big{\{}h_{1}(x),\ldots,h_{p}(x)\big{\}}\).
```
**Algorithm 2** Initial hint

We can now present our main boosting method in Algorithm 3, and state its guarantees in Theorem 2. The following theorem formally states the main result given in Theorem 1.

**Theorem 2** (Boosting).: _Let \(\mathcal{W}\) denote a learning rule that, when given any set of labeled examples and a list function, returns some hypothesis \(h:\mathcal{X}\mapsto\mathcal{Y}\). Let \(\epsilon,\delta,\gamma,m_{0}>0\), and let \(\mathcal{D}\) be a distribution over \(\mathcal{X}\times\mathcal{Y}\). Then, when given a sample \(S\sim\mathcal{D}^{m}\) for \(m\geq\frac{10^{2}\ m_{0}\ \left(\ln^{2}(m)\ln(\frac{m}{\gamma})\right)}{\gamma^{3}\ \epsilon}\), oracle access to \(\mathcal{W}\), and \(T\geq\frac{8\ln(m)}{\gamma^{2}}\), \(p\geq\frac{2\ln(m)}{\gamma}\), and \(\eta=\sqrt{\frac{\ln(m)}{2T}}\), Algorithm 3 outputs a predictor \(\bar{H}\) such that the following holds. Denote by \(M\) the set of list functions on which \(\mathcal{W}\) was trained throughout Algorithm 3. Then, with probability at least \(1-\delta\), we get that if \(\mathcal{W}\) satisfies the \(\gamma\)-BRG condition (as given in Definition 4) with respect to \((S,\gamma,m_{0},M)\), then_

\[\Pr_{(x,y)\sim\mathcal{D}}\left[\bar{H}(x)\neq y\right]\leq\epsilon.\]

Observe that Theorem 2 implicitly assumes that the sample complexity of the weak learner \(m_{0}\) is not strongly dependent on the overall sample size \(m\) and scales at most poly-logarithmically with \(m\); although the statement holds for any \(m_{0}\), the result becomes vacuous otherwise. In addition, notice that Theorem 2 is quite agnostic in the sense that we have made no prior assumptions about the data distribution.
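Algorithm 2's label-space reduction admits a compact sketch (our code; for simplicity the weak learner here receives the remaining points directly rather than \(m_{0}\) uniform draws):

```python
def initial_hint(S, weak_learner, p):
    """Sketch of Algorithm 2: build mu(x) = {h_1(x), ..., h_p(x)} by repeatedly
    training on the points that every previous hypothesis got wrong."""
    hyps = []
    remaining = list(S)
    for _ in range(p):
        if not remaining:
            break
        h = weak_learner(remaining)      # stands in for m_0 draws from U_j
        hyps.append(h)
        remaining = [(x, y) for x, y in remaining if h(x) != y]
    return lambda x: {h(x) for h in hyps}

# Toy weak learner: predict the majority label of its training points.
def majority_learner(sample):
    labels = [y for _, y in sample]
    top = max(set(labels), key=labels.count)
    return lambda x: top

S = [(i, i % 4 + 1) for i in range(8)]   # labels 1..4, two points each
mu = initial_hint(S, majority_learner, p=4)
# After p rounds every true label appears in its example's hint list.
assert all(y in mu(x) for x, y in S)
```

Each round, the surviving points are exactly those not yet covered by any hypothesis, so a weak learner with edge \(\gamma\) shrinks this set geometrically, which is why \(p\approx\ln(m)/\gamma\) rounds suffice in Theorem 2.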
Concretely, Theorem 2 tells us that the generalization error will be small _if_ the given oracle learner \(\mathcal{W}\) happens to satisfy the \(\gamma\)-BRG condition with respect to the particular inputs it receives throughout our boosting procedure.

Adaptive boosting. Boosting algorithms typically do not assume that the value of \(\gamma\) is known and instead adapt to it on the fly, as in the well-known Adaboost algorithm [23]. However, the boosting algorithm given in Algorithm 1, as well as our boosting method as a whole, requires feeding the algorithm a value estimating \(\gamma\). If the estimate of \(\gamma\) provided to the algorithm is too large, the algorithm may fail. This can be resolved by a simple preliminary binary-search-type procedure, in which we guess \(\gamma\) and possibly halve it based on the observed outcome. This procedure only increases the overall runtime by a logarithmic factor of \(O(\ln(1/\gamma))\), and has no effect on the sample complexity bounds.

```
Input: Training data \(S\in(\mathcal{X}\times\mathcal{Y})^{m}\), edge \(\gamma>0\), parameters \(T,\eta,p>0\).
Output: A predictor \(\bar{H}:\mathcal{X}\mapsto\mathcal{Y}\).
1: Initialize: get \(\mu_{1}\) by applying Algorithm 2 over \(S\).
2: for \(j=1,\dots,p-1\) do
3:   Call Hedge (Algorithm 1) with \(S\) and \(\mu_{j}\), and parameters \(\eta,T\), to get \(H_{j}:\mathcal{X}\times\mathcal{Y}\mapsto\mathbb{R}\).
4:   Construct \(\mu_{j+1}:\mathcal{X}|_{S}\mapsto\mathcal{Y}^{p-j}\) such that for all \(x\),
       \(\mu_{j+1}(x)=\left\{\ y\ :\ y\in\mu_{j}(x)\ \land\ H_{j}(x,y)>\frac{T}{p-j+1}\right\}\).
5: end for
6: Output the final hypothesis \(\bar{H}:=\mu_{p}\).
```
**Algorithm 3** Recursive Boosting

## 4 Applications to List PAC learning

The applications given in this section are based on the framework of _List PAC learning_ [5; 8], and demonstrate that it is in fact closely related to the multiclass boosting theory. First, we establish an equivalence between list learnability and weak learnability in the context of the PAC model. Furthermore, we present a new result on boosting for list PAC learners. Lastly, we give a novel and alternative proof for the characterization of PAC learnability and List PAC learnability. In particular, these imply a simplified algorithmic approach compared to previous works [5; 8]. We start by introducing list learning in Definition 5, followed by the definition of weak PAC learning, similarly to the weak-BRG learning definition we give in this work.
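Before moving on, the list-shrinking step of Algorithm 3 (step 4) is simple enough to sketch directly; in this sketch (our naming), the score function stands in for whatever the Hedge subroutine produced:

```python
def prune_hints(xs, mu_j, H_j, T, p, j):
    """Sketch of step 4 of Algorithm 3: keep only the labels of mu_j(x) whose
    Hedge score H_j(x, y) clears the threshold T / (p - j + 1)."""
    thresh = T / (p - j + 1)
    return {x: {y for y in mu_j(x) if H_j(x, y) > thresh} for x in xs}

# Toy scores: with T = 30, p = 3, j = 1 the threshold is 30 / 3 = 10, so the
# lowest-scored label (the provably wrong one, by Lemma 1) is dropped.
scores = {("a", 1): 20, ("a", 2): 15, ("a", 3): 5}
mu2 = prune_hints(["a"], lambda x: {1, 2, 3},
                  lambda x, y: scores[(x, y)], T=30, p=3, j=1)
assert mu2["a"] == {1, 2}
```

Since Lemma 1 guarantees every true label scores above the threshold while the minimal label does not, each round removes at least one wrong candidate and never the correct one, until binary lists remain.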
**Definition 5** (\(k\)-List PAC Learning).: _We say that a hypothesis class \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\) is \(k\)-list PAC learnable, if there is an algorithm such that for every \(\mathcal{H}\)-realizable distribution \(\mathcal{D}\), and every \(\epsilon,\delta>0\), when given \(S\sim\mathcal{D}^{m}\) for \(m\geq m(\epsilon,\delta)\), it returns \(\mu_{S}:\mathcal{X}\to\mathcal{Y}^{k}\) such that with probability \(1-\delta\),_ \[\Pr_{(x,y)\sim\mathcal{D}}\bigl{[}y\in\mu_{S}(x)\bigr{]}\geq 1-\epsilon.\] **Definition 6** (\(\gamma\)-weak PAC Learning).: _We say that a hypothesis class \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\) is \(\gamma\)-weak PAC learnable, if there is an algorithm such that for every \(\mathcal{H}\)-realizable distribution \(\mathcal{D}\), and every \(\delta>0\), when given \(S\sim\mathcal{D}^{m}\) for \(m\geq m(\delta)\), it returns \(h_{S}:\mathcal{X}\to\mathcal{Y}\) such that with probability \(1-\delta\),_ \[\Pr_{(x,y)\sim\mathcal{D}}\bigl{[}y=h_{S}(x)\bigr{]}\geq\gamma.\] Next, in the following lemmas we show the strong connection between these two notions. Specifically, we give an explicit construction of a list learner given oracle access to a weak learner, and vice versa. **Lemma 2** (Weak \(\Rightarrow\) List Learning).: _Let \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\) be a hypothesis class. Assume \(\mathcal{W}\) is a \(\gamma\)-weak PAC learner for \(\mathcal{H}\) with sample complexity \(m_{w}:(0,1)\mapsto\mathbb{N}\). Let \(k\) be the smallest integer such that \(\frac{1}{k}<\gamma\), and denote \(\sigma=\gamma-\frac{1}{k}\). 
Then, there is a \((k-1)\)-List PAC learner with sample complexity \(m(\epsilon,\delta)=\tilde{O}\left(\frac{m_{w}(\delta/T)}{\sigma^{2}\epsilon}\right)\) where \(T=\tilde{O}(\frac{1}{\sigma^{2}})\) is the number of its oracle calls to \(\mathcal{W}\)._ **Lemma 3** (List \(\Rightarrow\) Weak Learning).: _Let \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\) be a hypothesis class. Assume \(\mathcal{L}\) is a \(k\)-List PAC learner for \(\mathcal{H}\) with sample complexity \(m_{\ell}:(0,1)\mapsto\mathbb{N}\). Then, for any \(\epsilon>0\) there is a \(\gamma\)-weak PAC learner with \(\gamma=\frac{1-2\epsilon}{k}\) and sample complexity \(m(\delta)=\tilde{O}\left(m_{\ell}(\epsilon,1/2)\cdot k+(k/\epsilon)^{2}\right)\), where \(q=2k\log(2/\delta)\) is the number of its oracle calls to \(\mathcal{L}\)._ Lastly, Theorem 3 concludes this section, demonstrating the strong ties between weak and list learnability. Concretely, it combines the results of Lemma 2 and Lemma 3 above to show that when the parameters \(\gamma\) and \(k\) are optimal, \(\gamma\)-weak PAC learnability and \(k\)-list PAC learnability are in fact equivalent. **Theorem 3** (Optimal accuracy \(\iff\) Optimal list size).: _Let \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\). Denote by \(k(\mathcal{H})\) the smallest integer \(k\) for which \(\mathcal{H}\) is \(k\)-list PAC learnable, assuming that \(k(\mathcal{H})<\infty\). Denote by \(\gamma(\mathcal{H})\) the supremum over \(\gamma\in[0,1]\) for which \(\mathcal{H}\) is \(\gamma\)-weak PAC learnable. Then, it holds that \(k(\mathcal{H})\cdot\gamma(\mathcal{H})=1\)._ ### List boosting and conformal learning List prediction rules naturally arise in the setting of _conformal learning_. In this model, algorithms make their predictions while also offering some indication of the level of reliable confidence in those predictions. 
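The easy direction of this equivalence (Lemma 3) can be illustrated concretely: given a \(k\)-list predictor whose list contains the true label with probability at least \(1-\epsilon\), outputting a uniformly random label from the list is correct with probability at least \((1-\epsilon)/k\). The sketch below is a hypothetical illustration of that reduction, not the paper's algorithm; the toy predictor and points are made up.

```python
import random
from fractions import Fraction

def weak_from_list(list_predictor, x, rng=random):
    # The reduction of Lemma 3, in spirit: predict a uniformly
    # random label from the k-sized list.
    return rng.choice(list_predictor(x))

def exact_weak_accuracy(list_predictor, labeled_points):
    """Expected accuracy of the uniform pick, computed by enumeration
    rather than sampling, so the result is exact."""
    total = Fraction(0)
    for x, y in labeled_points:
        lst = list_predictor(x)
        total += Fraction(lst.count(y), len(lst))
    return total / len(labeled_points)

# Toy 3-list predictor that always covers the true label (eps = 0, k = 3),
# so the induced weak learner has accuracy exactly 1/3:
points = [(0, "a"), (1, "b"), (2, "c")]
lists = {0: ["a", "x", "y"], 1: ["b", "x", "y"], 2: ["c", "x", "y"]}
gamma = exact_weak_accuracy(lambda x: lists[x], points)  # Fraction(1, 3)
```

The other direction (Lemma 2) is where the boosting machinery does real work, since it must amplify the \(\sigma=\gamma-1/k\) edge into list coverage \(1-\epsilon\).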
For example in multiclass classification, given an unlabeled test point \(x\), the conformal learner might output a list of all possible classes along with scores which reflect the probability that \(x\) belongs to each class. This list can then be truncated to a shorter one which contains only the classes with the highest score. See the book by [28] and surveys by [25, 2] for more details. We now consider a closely related notion of List PAC learnability, that similarly to conformal learning allows the list size to depend on the desired confidence. This was also defined in [8], termed weak List PAC Learning, due to the dependence of the list size on the input parameter. Indeed, it is natural to expect that the list size will increase when we require a more refined accuracy, and perhaps that this is a weaker notion of learnability than that of List PAC learning, which corresponds to a fixed list size. Interestingly, it turns out that weak List PAC Learning is in fact equivalent to strong List PAC Learning. In other words, a list learner with a list size that varies with the desired accuracy parameter can be _boosted_ to a list learner with a fixed list size, and arbitrarily good accuracy. The proof is by way of a generalization of our boosting technique to lists, as stated in Theorem 4. **Theorem 4** (List boosting).: _Let \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\). 
Let \(\epsilon_{0},\delta_{0}>0\), and assume that there exists an algorithm such that for every \(\mathcal{H}\)-realizable distribution \(\mathcal{D}\), and for some integer \(k_{0}:=k_{0}(\epsilon_{0})\), when given \(S\sim\mathcal{D}^{m}\) for \(m\geq m(\epsilon_{0},\delta_{0})\), it returns \(\mu_{S}:\mathcal{X}\to\mathcal{Y}^{k_{0}}\) such that with probability \(1-\delta_{0}\),_ \[\Pr_{(x,y)\sim\mathcal{D}}\bigl{[}y\in\mu_{S}(x)\bigr{]}\geq 1-\epsilon_{0}.\] _Then, there is a \(k\)-List PAC learning algorithm for \(\mathcal{H}\) for a fixed list size \(k=\left\lfloor\frac{k_{0}}{1-2\epsilon_{0}}\right\rfloor\)._ Observe that Theorem 4 indeed generalizes classic boosting. Specifically, consider the binary setting and notice that when \(k_{0}=1\) and \(\epsilon_{0}\) is slightly smaller than \(1/2\), Theorem 4 implies that weak learning with edge \(\approx\frac{1}{2}-\epsilon_{0}\) is equivalent to strong learning with arbitrarily small error. The following corollary shows that weak List PAC Learning implies strong List PAC Learning. **Corollary 1**.: _If a class \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\) is weakly-List PAC learnable then it is also List PAC learnable._ ### Characterization of List PAC learnability We now focus on the characterization of List PAC learnability, which also implies the characterization of PAC learnability. Towards that end, we define the Daniely-Shwartz (DS) dimension [9]. Specifically, we give the natural generalization of it to \(k\)-sized lists, called the \(k\)-DS dimension. **Definition 7** (\(k\)-DS dimension [8]).: _Let \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\) be a hypothesis class and let \(S\in\mathcal{X}^{d}\) be a sequence. We say that \(\mathcal{H}\) \(k\)-DS shatters \(S\) if there exists \(\mathcal{F}\subseteq\mathcal{H}\) with \(|\mathcal{F}|<\infty\) such that for all \(f\in\mathcal{F}|_{S}\) and all \(i\in[d]\), \(f\) has at least \(k\) \(i\)-neighbors. 
The \(k\)-DS dimension of \(\mathcal{H}\), denoted as \(d^{k}_{DS}=d^{k}_{DS}(\mathcal{H})\), is the largest integer \(d\) such that \(\mathcal{H}\) \(k\)-DS shatters some sequence \(S\in\mathcal{X}^{d}\)._ We note that when \(k=1\), this captures the standard DS dimension [9]. We show that when the \(k\)-DS dimension is bounded, one can construct a simple weak learner which satisfies our BRG condition. Thus, it is also amenable to our boosting method, leading to qualitatively similar results for the characterization of learnability as in [5, 8]. The result is given in the next theorem. **Theorem 5** (PAC and List-PAC learnability).: _Let \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\) be a hypothesis class with \(k\)-DS dimension \(d<\infty\). Then, \(\mathcal{H}\) is List PAC learnable. Furthermore, there is a learning algorithm \(A\) for \(\mathcal{H}\) with the following guarantees. For every \(\mathcal{H}\)-realizable distribution \(\mathcal{D}\), every \(\delta>0\) and every integer \(m\), given an input sample \(S\sim\mathcal{D}^{m}\), the algorithm \(A\) outputs \(\mu=A(S)\) such that4_ Footnote 4: The \(\tilde{O}\) notation conceals \(\operatorname{polylog}(m,1/\gamma)\) factors. \[\Pr_{(x,y)\sim\mathcal{D}}[\mu(x)\not\ni y]\leq\tilde{O}\Bigg{(}\frac{d^{5}k^{4}+\log(1/\delta)}{m}\Bigg{)},\] _with probability at least \(1-\delta\) over \(S\). In particular, if \(k=1\), then \(\mathcal{H}\) is PAC learnable._ We remark that for cases where \(d\ll k\) we have an improved result over the bound given by [8]. For comparison, the error bound given in [8, Theorem 2] is \(\tilde{O}\left(\frac{d^{1.5}k^{6}+\log(1/\delta)}{m}\right)\). Thus, Theorem 5 demonstrates that our boosting-based approach gives rise to an alternative proof for the characterization of PAC learnability and List PAC learnability. Moreover, our approach offers a simpler algorithm and analysis technique that can perhaps be of use in future applications as well. 
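On small, explicitly enumerated instances, the shattering condition of Definition 7 can be verified by brute force. The sketch below is illustrative only: it operates on an already-projected finite class \(\mathcal{F}|_{S}\) given as a set of tuples, and it takes the \(i\)-neighbor notion from the standard DS dimension (a pattern differing from \(f\) exactly at coordinate \(i\)).

```python
from itertools import product

def k_ds_shatters(patterns, k):
    """Brute-force check of the k-DS shattering condition.

    patterns: the finite projected class F|_S, as a set of equal-length
    tuples. Returns True iff every pattern f has, for every coordinate i,
    at least k i-neighbors, i.e. patterns differing from f exactly at i.
    """
    patterns = set(patterns)
    d = len(next(iter(patterns)))
    for f in patterns:
        for i in range(d):
            neighbors = sum(
                1
                for g in patterns
                if g[i] != f[i] and all(g[j] == f[j] for j in range(d) if j != i)
            )
            if neighbors < k:
                return False
    return True

# Toy check: on d = 2 points with 3 labels, the full cube {0, 1, 2}^2
# gives every pattern exactly two neighbors in each coordinate, so it is
# 2-DS shattered but not 3-DS shattered.
cube = set(product(range(3), repeat=2))
```

The \(k\)-DS dimension is then the largest \(d\) for which some \(d\)-point sequence admits such a projected class, which this checker searches for only one candidate at a time.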
## 5 Generalization via compression This section is concerned with the analysis of our boosting method given in Algorithm 3, and the proof of our main result given in Theorem 1 (and formally in Theorem 2). The boosting algorithm given in this work is best thought of as a _sample compression scheme_ [17]. A sample compression scheme (Definition 8) is an abstraction of a property common to many learning algorithms. It can be viewed as a two-party protocol between a _compressor_ and a _reconstructor_. The compressor gets as input a sample \(S\), picks a small subsample \(S^{\prime}\) of \(S\), and sends it to the reconstructor. The reconstructor outputs a hypothesis \(h\). The correctness criterion is that \(h\) needs to correctly classify _all_ examples in the input sample \(S\). We formally define it next. **Definition 8** (Sample Compression Scheme [17]).: _Let \(r\leq m\) be integers. An \(m\to r\) sample compression scheme consists of a reconstruction function_ \[\rho:(\mathcal{X}\times\mathcal{Y})^{r}\rightarrow\mathcal{Y}^{\mathcal{X}}\] _such that for every \(S\in(\mathcal{X}\times\mathcal{Y})^{m}\), there exists \(S^{\prime}\subseteq S\) of size \(r\), such that for all \((x,y)\in S\) it holds that \(h(x)=y\), where \(h=\rho(S^{\prime})\)._ We are now ready to prove the main result, given in Theorem 2. The next paragraph highlights the assumptions that were made, followed by the proof of the theorem. Specifically, we assume for simplicity that the learning algorithm does not employ internal randomization. Thus, it can be regarded as a fixed, deterministic mapping from a sequence of \(m_{0}\) unweighted examples, and a function \(\mu:\mathcal{X}\mapsto\mathcal{Y}^{k}\), to a hypothesis \(h:\mathcal{X}\mapsto\mathcal{Y}\). We note that our results remain valid for a randomized learner as well, yet we assume the above for ease of exposition. 
Proof of Theorem 2.: The proof is given via a sample compression scheme, demonstrating that if weak learnability holds, then the final predictor \(\bar{H}\) can be represented using a small number of training examples, and that it is consistent with the entire training set. First, we fix the sample \(S\) and assume that the \(\gamma\)-BRG condition holds for \(S\) as in the theorem statement. We will then show for each \(j=1...p\) that \(\mu_{j+1}\) satisfies the following 3 properties: (a) for each \(x\in\mathcal{X}\) it returns at most \(p-j\) labels, (b) for all \((x,y)\in S\), it holds that \(y\in\mu_{j+1}(x)\), and (c) it can be represented using only a small number of training examples. First, note that \(\mu_{1}\) is indeed a mapping to at most \(p\) labels by its construction in Algorithm 2. Moreover, recall that by the above assumption, the weak learner is a deterministic mapping from its input to a hypothesis. Therefore, any hypothesis produced by the weak learner within Algorithm 2 can be represented simply by the sequence of \(m_{0}\) examples on which it was trained. Lemma 4 implies that there is a subset \(S^{\prime}\subsetneq S\) of size at most \(m_{0}\cdot p\), where \(p=\lceil\log(m)/\gamma\rceil\) such that the following holds: There are \(p\) hypotheses \(h^{\prime}_{i}:\mathcal{X}\mapsto\mathcal{Y}\), that comprise the list \(\mu_{1}\), where each \(h^{\prime}_{i}\) can be represented by \(m_{0}\) examples in \(S^{\prime}\). It is also guaranteed by Lemma 4 that for all \((x,y)\in S\), it holds that \(y\in\mu_{1}(x)\). Next, we will show that the 3 properties (a)-(c) above holds for \(\mu_{2}\) (and similarly for all \(j\geq 2\)). Consider the first \(T\) weak hypotheses \(h^{(1)}_{1},...,h^{(1)}_{T}\) generated by Algorithm 3, within its first call to Algorithm 1. 
Notice that each \(h^{(1)}_{t}\) can now be represented by the sequence of \(m_{0}\) examples on which it was trained, as well as the same \(m_{0}\cdot p\) examples from above that correspond to \(\mu_{1}\). Therefore, we can represent the mapping \(\mu_{2}\) by a total of \(T\cdot m_{0}+m_{0}\cdot p\) examples. Next, we will show that (a) and (b) hold, by applying Lemma 1. Specifically, we use it to prove that for each \((x,y)\in S\), \(\mu_{2}(x)\) returns \(p-1\) labels, and also that \(y\in\mu_{2}(x)\). We first show that the conditions of Lemma 1 are met, by considering a simple conversion of all the labels according to \(\mu_{1}\). Specifically, since both Lemma 1 and Algorithm 1 assume the labels are in \([k]\), yet both Algorithm 3 and our weak learner assume the labels in \(S\) are in \(\mathcal{Y}\), we can think of mapping each \(y\in\mathcal{Y}\) to \([p+1]\) according to \(\mu_{1}\), getting its corresponding label \(\ell\in[p+1]\) in the mapped space, and then remapping back to the \(\mathcal{Y}\) space when returning to Algorithm 3. Concretely, for each pair \((x,y)\in S\), convert it to \((x,\ell)\in\mathcal{X}\times[p]\), such that the \(\ell\)-th entry of \(\mu_{1}(x)\) is \(y\), denoted \(\mu_{1}(x)_{\ell}=y\). By Definition 4 we obtain a hypothesis \(h:\mathcal{X}\mapsto\mathcal{Y}\). For its internal use in Algorithm 1, we convert it into a hypothesis \(h^{\prime}:\mathcal{X}\mapsto[p+1]\) such that if \(h(x)\in\mu(x)\), set \(h^{\prime}(x)=\ell\) for \(\ell\) that satisfies \(\mu_{1}(x)_{\ell}=h(x)\), or \(h^{\prime}(x)=p+1\) if there is no such \(\ell\in[p]\). Finally, we set the output \(H_{1}\) of Algorithm 1 to be defined with respect to the original, remapped, weak hypotheses \(h\). Now applying Lemma 1 with \(k:=p\), we get that for all \((x,y)\in S\), we have \(H_{1}(x,y)>T/p\). Therefore, it must hold that \(y\in\mu_{2}(x)\). 
Moreover, since \(\sum_{y^{\prime}\in\mu_{1}(x)}H_{1}(x,y^{\prime})\leq\sum_{y^{\prime}\in \mathcal{Y}}H_{1}(x,y^{\prime})\leq T\), Lemma 1 implies that there must be a label \(y^{\prime}\neq y\) such that \(y^{\prime}\in\mu_{1}(x)\) for which \(H_{1}(x,y^{\prime})<T/p\). Therefore, by construction of \(\mu_{2}\) we get that \(y^{\prime}\notin\mu_{2}(x)\), and \(|\mu_{2}(x)|\leq|\mu_{1}(x)\setminus\{y^{\prime}\}|=p-1\). Next, we continue in a similar fashion for all rounds \(j=3,...,p-1\). Namely, the same arguments as above show that by applying Lemma 1 with \(k:=p-j+2\), we get that \(\mu_{j+1}\) satisfies the above conditions over \(S\). Moreover, to represent each weak hypotheses \(h^{(j)}_{t}\) generated by Algorithm 3 within its \(j\)-th call to Algorithm 1, we use the sequence of \(m_{0}\) examples on which it was trained, as well as the same \((j-1)\cdot T\cdot m_{0}+m_{0}\cdot p\) examples from above that correspond to \(\mu_{j}\). Overall, we have shown that if \(\mathcal{W}\) satisfies the \(\gamma\)-BRG condition (as given in Definition 4) with respect to \((S,\gamma,m_{0},M)\) then the final predictor \(\bar{H}:=\mu_{p}\) is both consistent with the sample \(S\), and can be represented using only \(r\) examples, where, \[r=(p-1)\cdot T\cdot m_{0}+m_{0}\cdot p=O\left(\frac{p\cdot m_{0}\ln(m)}{\gamma^ {2}}\right)=O\left(\frac{m_{0}\ln^{2}(m)}{\gamma^{3}}\right). \tag{5}\] We can now apply a sample compression scheme bound to obtain the final result. Specifically, we apply Theorem 6 (for \(k=1\)), for a \(m\to r\) sample compression scheme algorithm \(\mathcal{A}\) equipped with a reconstruction function \(\rho\) (see Definition 8). We denote \(\text{err}_{\mathcal{D}}(\bar{H})=\Pr_{(x,y)\sim\mathcal{D}}[\bar{H}(x)\neq y]\). 
Then, by Theorem 6 we get that, \[\Pr_{S\sim\mathcal{D}^{m},\mathcal{A}}\left[\bar{H}\text{ consistent with }S\Rightarrow\text{err}_{\mathcal{D}}(\bar{H})>\frac{r\ln(m)+\ln(1/\delta)}{m-r} \right]\leq\delta,\] where the overall randomness of our algorithm is denoted by \(\mathcal{A}\). Plugging in \(r\) from Equation (5), and \(m\) given in the theorem statement, yields the desired bound.
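As a rough numeric illustration of this accounting, the compression size \(r\) from Equation (5) and the resulting generalization bound can be evaluated directly. The constants hidden inside \(p\) and \(T\) below are placeholder assumptions (we simply drop the \(O(\cdot)\) and \(\tilde{O}(\cdot)\) factors), so the numbers indicate scaling only, not the paper's exact guarantees.

```python
import math

def compression_size(m, m0, gamma):
    """r = (p - 1) * T * m0 + m0 * p as in Equation (5), with
    p = ceil(log(m) / gamma) rounds and T ~ log(m) / gamma^2 calls per
    round (the constant in T is a placeholder assumption)."""
    p = math.ceil(math.log(m) / gamma)
    T = math.ceil(math.log(m) / gamma ** 2)
    return (p - 1) * T * m0 + m0 * p

def compression_bound(m, r, delta):
    """Generalization error of an m -> r scheme that is consistent with
    the whole sample: (r ln(m) + ln(1/delta)) / (m - r)."""
    return (r * math.log(m) + math.log(1 / delta)) / (m - r)

# Hypothetical parameters: a million samples, m0 = 5 examples per weak
# hypothesis, edge gamma = 0.5; r stays far below m, so the bound is small.
r = compression_size(m=10 ** 6, m0=5, gamma=0.5)
err = compression_bound(m=10 ** 6, r=r, delta=0.01)
```

The key point is that \(r\) grows only polylogarithmically in \(m\) (times \(m_{0}/\gamma^{3}\)), so the bound \(\approx r\ln(m)/m\) vanishes as the sample grows.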
2305.16659
Experimental demonstration of robotic active matter micellization
Active matter composed of self-propelled particles features a fascinating set of self-organization phenomena, spanning from motility-induced phase separation to phototaxis to topological excitations depending on the nature and parameters of the system. In the present Letter, we consider the formation of micelles from particles with a broken symmetry having a circular back and a sharpened nose and moving towards the cusp. As we demonstrate in experiments with robotic swarms, such particles can either remain in the isotropic phase or form micelles depending on the location of their center of inertia in accordance with a recent theoretical proposal [T. Kruglov, A. Borisov, Particles 2021 (2021)]. Crucially, the predicted micellization does not involve any charge asymmetry, in contrast to that observed in surfactants, and is governed by an interplay of activity and particle shape asymmetry. This renders the observed ordering reversible upon switching of the particles' activity and opens the route towards novel applications in tunable structuring of materials.
Anastasia A. Molodtsova, Mikhail K. Buzakov, Alina D. Rozenblit, Vyacheslav A. Smirnov, Daria V. Sennikova, Vadim A. Porvatov, Oleg I. Burmistrov, Ekaterina M. Puhtina, Alexey A. Dmitriev, Nikita A. Olekhno
2023-05-26T06:08:27Z
http://arxiv.org/abs/2305.16659v1
# Experimental demonstration of robotic active matter micellization ###### Abstract Active matter composed of self-propelled particles features a fascinating set of self-organization phenomena, spanning from motility-induced phase separation to phototaxis to topological excitations depending on the nature and parameters of the system. In the present Letter, we consider the formation of micelles from particles with a broken symmetry having a circular back and a sharpened nose and moving towards the cusp. As we demonstrate in experiments with robotic swarms, such particles can either remain in the isotropic phase or form micelles depending on the location of their center of inertia in accordance with a recent theoretical proposal [T. Kruglov, A. Borisov, Particles 2021 (2021)]. Crucially, the predicted micellization does not involve any charge asymmetry, in contrast to that observed in surfactants, and is governed by an interplay of activity and particle shape asymmetry. This renders the observed ordering reversible upon switching of the particles' activity and opens the route towards novel applications in tunable structuring of materials. Large assemblies of particles able to self-propel or self-rotate by converting either internal or ambient energy resources into a directed motion demonstrate the emergence of collective phenomena [1; 2] and are referred to as _active matter_[3]. Such systems span the entire range of condensed matter, from tissues [4] and bacterial colonies [5; 6] to colloidal particles [7] or even collections of simple moving robots [8; 9; 10; 11; 12]. There is a rich variety of self-organization phenomena and clustering effects in active matter systems, including motility-induced phase separation [13], formation of colloidal crystals [14; 15; 16], chiral edge states [17; 18; 19], and topological defects [20], to name a few. 
However, the formation of micelles - round- or spherical-shaped assemblies of elongated particles in which they orient one of their non-equivalent edges to the inner region of a cluster and the other one to the outer region - has been demonstrated only in Janus particles covered with surfactants [21]. This phenomenon seemed elusive in self-propelled particles without charge displacement, until the recent theoretical proposal of active matter micellization driven purely by particle shape asymmetry [22]. In the present Letter, we consider ensembles of self-propelled particles having the shape shown in Fig. 1(a) and dubbed _circulangles_ [22], as they are formed by combining a circle and an angle. To demonstrate experimentally that such particles feature a micellization transition depending on the center of inertia location at the particle axis, we construct a swarm of circulangle-shaped vibrating robots (bristle-bots) shown in Fig. 1(b,c) and based on the Swarmodroid 1.0 platform [23]. Such robots move with controlled activity and are placed in a shallow parabolic potential of a satellite dish to prevent the condensation at the border characteristic of self-propelled particles [8; 24; 25], Fig. 1(a), as such condensation may lead to the formation of structures not related to actual bulk transitions in the active medium. In accordance with the predictions of Ref. [22], we report the emergence of micelles and address the efficiency of their formation depending on the robot packing density, their center of inertia location, motion velocity, and the friction between side surfaces of robots.
Figure 1: (a) Robotic swarm in a parabolic arena demonstrating the formation of a single micelle (shaded with purple). The inset shows a photograph of the Swarmodroid implementing a circulangle with a possible center of inertia at points \(O\) and \(S\). (b) 3D model of the circuit board with a vibration motor which enables the robot movement by elastic bristles. (c) Explosion diagram showing the geometry of the plastic parts composing a single robot.
Experimental setup. In our experiments, we implement circulangles as self-propelled bristle-bots converting vibration into directed motion, Fig. 1(b). Such robots consist of 3D-printed bodies with elastic bristles at the bottom and a printed circuit board carrying a vibration motor, a battery, and circuitry for infrared remote control, which allows turning the robots on and off simultaneously as well as varying their vibration activity (i.e., self-propulsion velocity) by means of pulse width modulation (PWM) of the motor voltage [23]. The system dynamics is extracted using a tracking pipeline based on custom recognition software1 and ArUco markers. Footnote 1: [https://github.com/swarmtronics/ampy](https://github.com/swarmtronics/ampy) The dimensions of each robot are \(L=85.3\) mm, \(d=47.7\) mm, and the angle is \(\theta=45^{\circ}\), corresponding to \(M=8\) circulangles in a complete micelle. We choose such a value of \(M\) to clearly distinguish micellization from crystallization, as the \(C_{8}\) point symmetry is incompatible with crystalline order. The height of the robots including bristles is 26 mm. The circulangle's center of inertia lies on its line of symmetry at a distance \(l\) from the angle point. By fastening an extra load near the angle point, we are able to move the center of inertia from \(l=45\pm 1\) mm (point \(O\)) for an unloaded robot to \(l=35\pm 1\) mm (point \(S\)) for a robot with a load, Fig. 1(a). We study ensembles composed of \(N=15,30\) and \(45\) robots. The limited size of the system would likely result in the condensation of self-propelled particles on the boundary [24; 25; 8] and easily cloak phase transitions in the bulk. 
To prevent such a condensation, we place the robotic swarm in a parabolic dish with dimensions of \(120\times 110\times 12\) cm\({}^{3}\), which creates a soft localizing potential in contrast to a hard wall boundary, Fig. 1(c). Individual robots in such harmonic traps either point outwards while moving at an angle with respect to the center of the potential or follow circular orbits [26]. Order parameter. To quantify the formation of micelles, we introduce the order parameter \[P_{\mathrm{m}}=\frac{1}{N}\sum_{i=1}^{N}\exp\left(-\left|\sum_{j=1}^{N}e^{-|\mathbf{r}_{\mathrm{i}}-\mathbf{r}_{\mathrm{j}}|/\lambda}-M\right|\right), \tag{1}\] where \(\mathbf{r}_{\mathrm{i}}\) and \(\mathbf{r}_{\mathrm{j}}\) are the coordinates of particles' noses, \(M\) is the number of circulangles in a complete micelle, \(\lambda\) is a characteristic size of a particle (here, we choose \(\lambda=L/5\)), and \(N\) is the total number of particles in the system. Fig. 2(a) shows the values of the order parameter calculated for a single complete micelle, single incomplete micelles, an inverse micelle, a cluster consisting of two robots, and a structure of 36 robots characterized by crystalline order. Terms in the sum over index \(i\) reach unity for a complete micelle while rendering lower values for other configurations. However, due to the exponential dependence on the distance between the robot noses, the order parameter yields lower values for experimentally observed micelles, with most corresponding to \(P_{\mathrm{m}}=0.2\ldots 0.4\). For incomplete micelles and non-micellar structures the order parameter \(P_{\mathrm{m}}\) yields exponentially lower values. The normalizing pre-factor \(1/N\) ensures the invariance of the order parameter with respect to the number of robots. For fully chaotic systems, the order parameter yields \(P_{\mathrm{m}}=(0.9\ldots 1.1)\cdot 10^{-3}\), independently of the number of robots in the system. Fig. 
2(b) shows the values of the order parameter calculated for systems with numbers of particles reaching one hundred. A fully micellized system corresponds to robots being added to the system to sequentially form full micelles. In this case, the order parameter is at its maximum for numbers of robots that are multiples of 8, with lower values corresponding to the presence of incomplete micelles. One micelle among chaos corresponds to adding the robots randomly around a complete micelle within a distance of several robot lengths. In this case, the order parameter decreases as \(1/N\) with the number of robots. Therefore, the introduced order parameter effectively corresponds to the fraction of micelles in the robotic swarm. We also study the dependence of the order parameter on the distance between the robots in a single micelle by introducing random shifts of the robots' noses from the center of a micelle. The components of the shift vectors have a normal distribution with zero mean and a dispersion \(\Delta r\), which we take from \(0\) to \(5\) mm (\(1/10\) of the robot diameter). It is seen that the order parameter decreases exponentially with this distance, yielding the same value \(P_{\mathrm{m}}=0.1\) for a full micelle with \(5\) mm gaps between the robots and a dense micelle but without one robot.
Figure 2: Order parameter verification for different robotic swarm configurations. (a) The values of the order parameter \(P_{\mathrm{m}}\), Eq. 1, extracted from the experimental setup configurations shown in the insets. (b) Dependence of the order parameter \(P_{\mathrm{m}}\) of a robotic ensemble on the number of robots. 
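A direct transcription of Eq. (1) into code makes these benchmark values easy to reproduce. The parameter values (\(M=8\), \(\lambda=L/5\) with \(L=85.3\) mm) follow the text, but the point configurations below are idealized test cases, not experimental data.

```python
import math

def order_parameter(noses, M=8, lam=85.3 / 5):
    """P_m from Eq. (1); noses is a list of (x, y) nose coordinates in mm."""
    N = len(noses)
    total = 0.0
    for xi, yi in noses:
        # Inner sum over all particles j, including the self term (j = i).
        inner = sum(
            math.exp(-math.hypot(xi - xj, yi - yj) / lam) for xj, yj in noses
        )
        total += math.exp(-abs(inner - M))
    return total / N

# An idealized complete micelle with all eight noses meeting at one point
# gives P_m = 1; eight robots placed far apart give roughly e^(-7) ~ 9e-4,
# consistent with the chaotic-phase value quoted in the text.
micelle = [(0.0, 0.0)] * 8
scattered = [(1000.0 * i, 0.0) for i in range(8)]
```

For the scattered configuration the inner sum reduces to the self term \(e^{0}=1\), so each summand is \(e^{-|1-8|}=e^{-7}\approx 9\cdot 10^{-4}\), matching the quoted \((0.9\ldots 1.1)\cdot 10^{-3}\) range.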
Experimental results. We start by implementing a system of \(N=45\) robots with the center of mass located at point \(O\) and vibrating with the intensity produced by the motor voltage pulse with the modulation duty cycle set to \(10\%\) (denoted as \(\mathrm{PWM}=10\%\) for brevity), which corresponds to a mean self-propulsion speed of \(5\) cm/s, Fig. 3(a). It is seen that the order parameter \(P_{\mathrm{m}}\) fluctuates during the system evolution, but does not feature any pronounced growth characteristic of micellization. The robots in this case are predominantly directed away from the vertex of the parabolic potential [26]. Adding an abrasive to increase the friction between the robots does not lead to any increase in the order parameter. The situation changes qualitatively when the center of mass is moved to point \(S\), Fig. 3(b). In this case, the robots are predominantly directed towards the center of the trap. Initially randomized, the system first demonstrates a low order parameter corresponding to a system without micelles, with a robot distribution shown for \(t=36\) s, followed by an increase in the order parameter at \(t=173\) s corresponding to the formation of partial micelles. Finally, at \(t=190\ldots 270\) s a plateau of the order parameter is observed, corresponding to the formation of a stable micelle shown in the inset at \(t=267\) s, which is directly demonstrated in Supplementary Video [27]. Such a result supports the theoretical prediction [22] of the dependence of micellization on the center of mass location even in a more complex experimental scenario with a low number of particles, the parabolic potential, and the 3D nature of individual robots that can flip and differ in their motion characteristics from the idealized particles. Increasing the robot vibration activity to \(\mathrm{PWM}=20\%\) leads to transient formation of micelles that disassemble spontaneously. 
Next, we consider the dependence on density by varying the number of robots \(N\) in the harmonic trap, as well as the dependence on vibration activity and the position of the center of inertia. For systems with \(\mathrm{PWM}=10\%\) and the center of inertia at point \(O\), shown in Fig. 4(a), the order parameter only fluctuates, independently of density. Increasing \(\mathrm{PWM}\) to \(20\%\) leads to instability of robots at higher densities (\(N=30,45\)) due to the robots beginning to flip. The corresponding order parameter curves are therefore not shown, see Fig. 4(b). In contrast to Fig. 4(b), the system with the center of mass located closer to the nose of a circulangle does not become unstable, Fig. 4(c,d), and formation of micelles at higher densities (\(N=45\)) is observed instead for both values of robot vibration activity. Next, we modify the system by covering the side surfaces of all robots with abrasive paper to consider how micellization changes in the case of high friction between the particles, which may prove important for microscale implementations. As seen in Fig. 4(e,f), for the center of mass located at point \(O\) the behavior is similar to the case of circulangles with low side friction, Fig. 4(a,b). The order parameter slightly fluctuates, indicating the absence of micellization, and the systems with \(\mathrm{PWM}=20\%\) and \(N=30,45\) become unstable. However, when the center of mass is located at point \(S\), the situation becomes different from the case of smooth circulangles. While for the \(10\%\) activity level the micellization is observed only for \(N=45\) and is less pronounced than for smooth particles, Fig. 4(g), for \(20\%\) activity the formation of micelles is considerably increased, as seen in Fig. 4(d), and becomes observable for lower densities with \(N=30,45\), in contrast to the absence of micellization in smooth circulangles at \(N=30\). Such a behavior reveals an intriguing interplay of individual particles' symmetry breaking, activity, and friction in the considered system. Moreover, an increase in micellization efficiency for higher inter-particle friction points towards the possibility of microscale realizations, for example, with Janus particles [21; 28].
Figure 3: Order parameter for robotic swarms composed of \(N=45\) robots with the center of mass at point \(O\) (a): \(\mathrm{PWM}=10\%\) with abrasive (blue curve) and \(\mathrm{PWM}=10\%\) without abrasive (gray curve); and with the center of mass at point \(S\) (b): \(\mathrm{PWM}=10\%\) and no abrasive (blue curve) and \(\mathrm{PWM}=20\%\) and no abrasive (gray curve).
_Conclusion and Outlook_. In this Letter, we have demonstrated the emergence of micellization in swarms of self-propelled particles which is governed by an interplay of particles' activity and their symmetry breaking, in contrast to micellization phenomena caused by electrical polarization in surfactant systems. Considering microscale setups, Janus particles look the most promising, as they allow switching the activity of particles by external illumination [14; 29]. Moreover, active particles with asymmetric shapes have been readily demonstrated [30; 31; 32], which makes the proposed design feasible. Finally, higher inter-particle friction even enhances the micellization, which may prove crucial in microscale setups featuring a set of potential applications [33; 34] but characterized by a dominating role of surface interactions. ###### Acknowledgements. We acknowledge fruitful discussions with Dr. Timofey Kruglov, who brought the idea of active matter micellization to our attention, Dr. Alexander Borisov, and Prof. Anton Souslov. The work is partially supported by Robert Bosch, Research and Technology Office Russian Federation, and by School of Physics and Engineering, ITMO University (RPMA grant).
2306.17017
Growth of the Higgs Field for Kapustin-Witten solutions on ALE and ALF gravitational instantons
The $\theta$-Kapustin-Witten equations are a family of equations for a connection $A$ on a principal $G$-bundle $E \to W^4$ and a one-form $\phi$, called the Higgs field, with values in the adjoint bundle $\operatorname{ad} E$. They give rise to second-order partial differential equations that can be studied more generally on Riemannian manifolds $W^n$ of dimension $n$. For $G=SU(2)$, we report a dichotomy that is satisfied by solutions of the second-order equations on Ricci-flat ALX spaces with sectional curvature bounded from below. This dichotomy was originally established by Taubes for $W^n=\mathbb{R}^n$; the alternatives are: either the asymptotic growth of the averaged norm of the Higgs field over geodesic spheres is larger than a positive power of the radius, or the commutator $[\phi\wedge\phi]$ vanishes everywhere. As a consequence, we are able to confirm a conjecture by Nagy and Oliveira, namely, that finite energy solutions of the $\theta$-Kapustin-Witten equations on ALE and ALF gravitational instantons with $\theta\neq 0$ are such that $[\phi\wedge\phi]=0$, $\nabla^A \phi=0$, and $A$ is flat.
Michael Bleher
2023-06-29T15:09:22Z
http://arxiv.org/abs/2306.17017v1
# Growth of the Higgs Field for Kapustin-Witten Solutions on ALE and ALF Gravitational Instantons ###### Abstract The \(\theta\)-Kapustin-Witten equations are a family of equations for a connection \(A\) on a principal \(G\)-bundle \(E\to W^{4}\) and a one-form \(\phi\), called the Higgs field, with values in the adjoint bundle \(\operatorname{ad}E\). They give rise to second-order partial differential equations that can be studied more generally on Riemannian manifolds \(W^{n}\) of dimension \(n\). For \(G=SU(2)\), we report a dichotomy that is satisfied by solutions of the second-order equations on Ricci-flat ALX spaces with sectional curvature bounded from below. This dichotomy was originally established by Taubes for \(W^{n}=\mathbb{R}^{n}\); the alternatives are: either the asymptotic growth of the averaged norm of the Higgs field over geodesic spheres is larger than a positive power of the radius, or the commutator \([\phi\wedge\phi]\) vanishes everywhere. As a consequence, we are able to confirm a conjecture by Nagy and Oliveira, namely, that finite energy solutions of the \(\theta\)-Kapustin-Witten equations on ALE and ALF gravitational instantons with \(\theta\neq 0\) are such that \([\phi\wedge\phi]=0\), \(\nabla^{A}\phi=0\), and \(A\) is flat. ## 1 Introduction Let \(G=SU(2)\) and consider a principal \(G\)-bundle \(E\) over a complete Riemannian manifold \((W^{n},g)\) of dimension \(n\). Throughout, we assume that \(W^{n}\) is an ALX manifold. Suffice it to say for now that we take this to mean \(W^{n}\) is a non-compact manifold with fibered ends such that the \(k\)-dimensional fibers have bounded volume. Consequently, the volume of geodesic balls asymptotically grows like \(r^{n-k}\). Denote by \((A,\phi)\in\mathcal{A}(E)\times\Omega^{1}(W^{n},\operatorname{ad}E)\) a pair consisting of a connection on \(E\) and an \(\operatorname{ad}E\)-valued one-form. 
We write \(\star\) for the Hodge star operator and equip \(\Omega^{k}(W^{n},\operatorname{ad}E)\) with the density-valued inner product \(\langle a,b\rangle=\operatorname{Tr}a\wedge\star b\). Upon integration this provides the usual \(L^{2}\)-product \(\langle a,b\rangle_{L^{2}(W)}=\int_{W^{n}}\langle a,b\rangle\) on \(\Omega^{k}(W^{n},\operatorname{ad}E)\). Throughout, we assume that \(A\) and \(\phi\) have enough derivatives and are locally square-integrable. In this article we report on a property of the pair \((A,\phi)\) whenever it satisfies the following second order differential equation. \[\nabla^{A\dagger}\nabla^{A}\phi+\frac{1}{2}\star[\star[\phi\wedge\phi] \wedge\phi]+\operatorname{Ric}\phi=0. \tag{1}\] Here \(\nabla^{A\dagger}\) is the formal adjoint of \(\nabla^{A}\) with respect to the \(L^{2}\)-product and the Ricci curvature is viewed as an endomorphism of \(\Omega^{1}(W,\operatorname{ad}E)\). The differential equation (1) is of particular relevance in the context of the Kapustin-Witten equations. To see this, consider for the moment the case of a four-manifold \(W^{4}\) and define the Laplace-type differential operator on \(\Omega^{1}(W,\operatorname{ad}E)\): \[\tilde{\Delta}_{A}(\phi)=-d_{A}d_{A}^{*}\phi+2\star d_{A}(d_{A}\phi)^{-}\,\] where \(d^{*}_{A}=\star d_{A}\star\) is the usual codifferential and \((\cdot)^{\pm}\) denotes the (anti-)self-dual part of a given two-form. Compare this operator with the \(\theta\)-Kapustin-Witten equations for \((A,\phi)\), which are given by \[\big{(}\cos\tfrac{\theta}{2}\,(F_{A}-\tfrac{1}{2}[\phi\wedge\phi])- \sin\tfrac{\theta}{2}\,d_{A}\phi\,\big{)}^{+} =0\] \[\big{(}\sin\tfrac{\theta}{2}\,(F_{A}-\tfrac{1}{2}[\phi\wedge\phi] )+\cos\tfrac{\theta}{2}\,d_{A}\phi\,\big{)}^{-} =0\] \[d^{*}_{A}\phi =0\;.\] Clearly, if \((A,\phi)\) is a solution of the \(\theta=0\) version of the Kapustin-Witten equations, then \(\tilde{\Delta}_{A}\phi=0\). 
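The preceding claim can be made explicit with a one-line check (spelled out here for convenience; it uses only the displayed equations): at \(\theta=0\) the third Kapustin-Witten equation gives \(d_{A}^{*}\phi=0\), while the second reduces to \((d_{A}\phi)^{-}=0\), so both summands of \(\tilde{\Delta}_{A}\phi\) vanish separately.

```latex
% Verification that theta = 0 Kapustin-Witten solutions are harmonic
% for the Laplace-type operator defined above:
%   d_A^* phi = 0       (third KW equation)
%   (d_A phi)^- = 0     (second KW equation at theta = 0)
\tilde{\Delta}_{A}(\phi)
  = -\,d_{A}\underbrace{d_{A}^{*}\phi}_{=\,0}
    \;+\; 2\star d_{A}\underbrace{(d_{A}\phi)^{-}}_{=\,0}
  \;=\; 0\;.
```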
Moreover, using a Bochner-Weitzenböck identity that relates \(\tilde{\Delta}_{A}\) and the Bochner Laplacian \(\nabla^{A\dagger}\nabla^{A}\), as well as the remaining part of the \(0\)-Kapustin-Witten equations \(F^{+}_{A}=\tfrac{1}{2}[\phi\wedge\phi]^{+}\), one finds that harmonicity of \(\phi\) with respect to \(\tilde{\Delta}_{A}\) is equivalent to equation (1). In fact, a very similar argument shows that the same is true if \((A,\phi)\) is a solution of the \(\theta\)-Kapustin-Witten equations [13, 14, 15]. The Kapustin-Witten equations arise from a family of topologically twisted, four-dimensional, supersymmetric gauge theories that exhibit surprisingly deep connections to several complementary areas of mathematics. They were first studied by Kapustin and Witten in the context of the geometric Langlands program [12]. Witten later realized that solutions of the Kapustin-Witten equations are also related to topological invariants of knots, specifically to Khovanov homology and its generalizations [16, 17]. Since then the moduli space of solutions to the Kapustin-Witten equations has been subject to extensive study [14, 13, 15, 16, 17, 18, 19, 20, 21]. _Remark_.: Note that we use a slightly different normalization than is otherwise common in the literature: our \(\theta\) coincides with \(2\theta_{GU}\) in [17]. Though this is mostly a matter of taste, when viewed as dimensional reductions of the five-dimensional Haydys-Witten equations, the normalization used here has a geometric interpretation as the incidence angle between Haydys' preferred vector field and the hyperplane orthogonal to the reduction. Let us now return to general \(n\)-manifolds. In what follows, we are guided by the intuition that if \(\phi\) satisfies (1), then it is harmonic with respect to some well-behaved Laplace-type operator. In particular, one should expect that it satisfies an appropriate analogue of the mean-value principle. 
Hence, fix some point \(p\in W^{n}\) and denote by \(B_{r}\) the closed geodesic ball of radius \(r\) centered at \(p\). Consider the non-negative function \(\kappa\) defined on \([0,\infty)\) by \[\kappa^{2}(r)=\frac{1}{r^{n-k-1}}\int_{\partial B_{r}}\big{|}\phi\big{|}^{2}\;.\] As a consequence of the asymptotic volume growth on \(W^{n}\), \(\kappa(r)\) is related to the average value of \(\big{|}\phi\big{|}\) on geodesic spheres \(\partial B_{r}\) with large radius \(r\). The mean-value principle for Laplace-type differential operators then suggests that \(\phi\) should satisfy an inequality of the form \(\big{|}\phi(p)\big{|}\leq\kappa(r)\) for \(r>0\). Although contributions from non-trivial curvature in the interior of \(B_{r}\) in general preclude this naive mean-value inequality, the controlled asymptotics of ALX spaces retain enough structure to deduce analogous bounds for points that are far away from \(p\). A classical consequence of the mean-value principle is a relation between the asymptotic behaviour of \(\kappa\) at large radius and the values of \(\phi\) in the interior of the ball. For example, if the naive mean-value inequality were satisfied at every point \(p\in W^{n}\) and \(\kappa(r)\to 0\) as \(r\to\infty\), then \(\phi\) would be identically zero everywhere. For \(W^{n}=\mathbb{R}^{n}\), a result by Taubes generalizes this kind of statement to a dichotomy between the growth of \(\kappa(r)\) at infinity and the vanishing of \([\phi\wedge\phi]\) on all of \(W^{n}\)[14]. Here we prove that this dichotomy holds more generally if \(W^{n}\) is an \(\mathrm{ALX}\) gravitational instanton (this is Theorem 8.1 below): **Theorem A**.: _Let \(W^{n}\) be a complete, Ricci flat \(\mathrm{ALX}\) manifold of dimension \(n\geq 2\) with asymptotic fibers of dimension \(k\leq n-1\) and sectional curvature bounded from below. Consider \((A,\phi)\) as above and assume the pair satisfies the second-order differential equation (1). 
Then either_ * _there is an_ \(a>0\) _such that_ \(\liminf_{r\to\infty}\frac{\kappa(r)}{r^{a}}>0\)_, or_ * \([\phi\wedge\phi]=0\)_._ If the fields \((A,\phi)\) are solutions of the \(\theta\)-Kapustin-Witten equations and have square-integrable field strength we can say slightly more (cf. Theorem 9.1). **Theorem B**.: _Let \(W^{4}\) be a complete, Ricci flat \(\mathrm{ALX}\) manifold of dimension \(4\) with asymptotic fibers of dimension \(k\leq 3\) and sectional curvature bounded from below. Assume \((A,\phi)\) are solutions of the \(\theta\)-Kapustin-Witten equations and satisfy \(\int_{W^{4}}|F_{A}|^{2}<\infty\), then either_ * _there is an_ \(a>0\) _such that_ \(\liminf_{r\to\infty}\frac{\kappa(r)}{r^{a}}>0\)_, or_ * \([\phi\wedge\phi]=0\)_,_ \(\nabla^{A}\phi=0\)_, and_ \(A\) _is self-dual if_ \(\theta=0\)_, flat if_ \(\theta\in(0,\pi)\)_, and anti-self-dual if_ \(\theta=\pi\)_._ As an immediate consequence of Theorem B we are able to confirm a conjecture of Nagy and Oliveira. For this we introduce the Kapustin-Witten energy \[E_{\mathrm{KW}}=\int_{W^{4}}\left(\left|F_{A}\right|^{2}+\left|\nabla^{A}\phi\right|^{2}+\left|\left[\phi\wedge\phi\right]\right|^{2} \right)\.\] **Corollary** (Nagy-Oliveira Conjecture [13]).: _Let \((A,\phi)\) be a finite energy solution of the \(\theta\)-Kapustin-Witten equations with \(\theta\neq 0\pmod{\pi}\) on an ALE or ALF gravitational instanton and let \(G=SU(2)\). Then \(A\) is flat, \(\phi\) is \(\nabla^{A}\)-parallel, and \([\phi\wedge\phi]=0\)._ Proof.: Under the given assumptions, the main result of Nagy and Oliveira [13, Main Theorem 1] states that \(\phi\) has bounded norm and thus, in particular, bounded average over spheres. It follows that \(\liminf_{r\to\infty}\frac{\kappa(r)}{r^{a}}=0\) for any \(a>0\), while the finite energy condition subsumes square-integrability of \(F_{A}\). Therefore, Theorem B implies that \([\phi\wedge\phi]=0\), \(\nabla^{A}\phi=0\), and that \(A\) is flat. 
_Remark_.: The preceding argument is due to Nagy and Oliveira, who established the corollary for \(W^{4}=\mathbb{R}^{4}\) and \(S^{1}\times\mathbb{R}^{3}\)[13, Corollary 1.3]. Nagy and Oliveira relied on a version of Theorem B that applies to \(W^{4}=\mathbb{R}^{4}\) and was provided by Taubes alongside the original dichotomy [14]. Their conjecture stemmed from the expectation that Taubes' results can be extended to \(\mathrm{ALX}\) spaces in general. The main insight of this article is that Taubes' proof strategy for the special case \(W^{n}=\mathbb{R}^{n}\) carries over to general \(\mathrm{ALX}\) spaces. This is a consequence of the well-behaved asymptotic volume growth, where problems that arise from non-zero curvature in the interior can be excised. Hence, the proof of Theorem A closely follows the one provided by Taubes in [14]. We proceed as follows: In section 2 we collect the relevant definitions and recall several classical results that will be used throughout. Then, in section 3, we investigate the derivative of \(\kappa\) and introduce the relevant analogue of Almgren's frequency function, as well as a function that captures contributions from the mean curvature of the geodesic sphere. The key finding of that section is that \(\kappa\) is asymptotically almost non-decreasing, which is a prerequisite for most of the heavy lifting in subsequent sections. In section 4, we present a somewhat unusual version of unique continuation that is satisfied by \(\kappa\). The main insight is the content of section 5, where we explain that slow asymptotic growth of \(\kappa\) results in bounds for the frequency function. All these results are refined with respect to the components of the one-form \(\phi=\sum_{i}\phi_{i}dx^{i}\) by introducing in section 6 what we call the correlation tensor. 
Using a second line of argument, we also determine a priori bounds of the type \(\left|\phi(x)\right|\leq\kappa(r)\) in section 7, which are the anticipated analogues of the mean value inequalities mentioned already above. Finally, all these ingredients are combined into a proof of Theorem A in section 8, while the proof of Theorem B occupies section 9. Acknowledgements. I thank Fabian Hahner for helpful comments on a draft of this article. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster). ## 2 Background For the purposes of this article, an ALX space is a non-compact complete Riemannian manifold that asymptotically looks like a fibration of closed manifolds, where the fibers have bounded volume. This is made precise in the following definition. **Definition 2.1** (\(\mathrm{ALX}_{k}\) Manifold).: Let \((W^{n},g)\) be a complete Riemannian manifold of dimension \(n\) and fix \(p\in W^{n}\). Let \(\pi_{Y}\,:\,Y^{n-1}\to B^{n-k-1}\) be a fibration over an \((n-k-1)\)-dimensional closed Riemannian base \((B,g_{B})\) with \(k\)-dimensional closed Riemannian fibers \((X,g_{X})\). Equip \((0,\infty)_{r}\times Y\) with the model metric \(g_{\infty}=dr^{2}+g_{X}+r^{2}g_{B}\). We say \(W^{n}\) is an \(\mathrm{ALX}_{k}\) manifold if its end is modeled on \((0,\infty)\times Y\), that is, if there exists \(R>0\) such that there is a diffeomorphism \(\varphi\,:\,W^{n}\setminus B_{R}(p)\to(R,\infty)\times Y^{n-1}\) that satisfies for \(j=0,1\), and \(2\) \[\lim_{r\to\infty}r^{j}\left|(\nabla^{\mathrm{LC}})^{j}\left(g-\varphi^{*}g_{\infty}\right)\right|_{L^{\infty}(\partial B_{r})}=0. 
\tag{2}\] **Proposition 2.2**.: _If \((W^{n},g)\) is an \(\mathrm{ALX}_{k}\) space then_ \[\mathrm{vol}\,B_{r}(p)\sim r^{n-k}\,\mathrm{vol}\,X\quad(r\to\infty)\.\] **Definition 2.3**.: We call an \(\mathrm{ALX}\) space \(W^{n}\) a _gravitational instanton_ if it is Ricci flat and its sectional curvature is bounded from below. _Remarks_.: * \(\mathrm{ALX}\) spaces are usually considered in the context of four-manifolds and the "X" is a place-holder for the following cases: \(\mathrm{ALE}\) or Asymptotically Locally Euclidean (\(k=0\)), \(\mathrm{ALF}\) or Asymptotically Locally Flat (\(k=1\)), \(\mathrm{ALG}\) (\(k=2\)), \(\mathrm{ALH}\) (\(k=3\)), where the last two are named by induction. * We do not demand that ALX gravitational instantons are hyper-Kähler, while we add the possibly non-standard condition of bounded sectional curvature. _Examples_.: * The prototypical example of an ALE manifold is Euclidean space \(\mathbb{R}^{n}\). In this case there is the obvious diffeomorphism \(\mathbb{R}^{n}\setminus\{0\}\simeq(0,\infty)\times S^{n-1}\) via spherical coordinates, the metric is \(\varphi_{*}g=dr^{2}+r^{2}g_{S^{n-1}}=g_{\infty}\), and the volume of balls grows with the radius as \(r^{n}\). * A prototypical example of an ALF space is \(S^{1}\times\mathbb{R}^{n-1}\) with the product metric \(g=dt^{2}+g_{\mathbb{R}^{n-1}}\). Again, spherical coordinates on the \(\mathbb{R}^{n-1}\) factor provide a diffeomorphism to \((0,\infty)\times S^{1}\times S^{n-2}\) with metric \(\varphi_{*}g=dr^{2}+dt^{2}+r^{2}g_{S^{n-2}}\), such that \(S^{1}\) has constant size, while \(S^{n-2}_{r}\) is the sphere of radius \(r\) centered at \(0\in\mathbb{R}^{n-1}\). Once the volume of \(S^{1}\) is "filled", the volume of a geodesic ball approaches a growth of order \(r^{n-1}\). * Famous examples of less trivial four-dimensional ALF gravitational instantons are (multi-centered) Taub-NUT spaces. 
These are \(S^{1}\)-fibrations over \(\mathbb{R}^{3}\), where the \(S^{1}\)-fiber has asymptotically finite volume. **Theorem 2.4** (Global Laplacian Comparison Theorem [1]).: _If \(\operatorname{Ric}\geq(n-1)K\) and \(r(x)=d(p,x)\) denotes the geodesic distance function based at a point \(p\), then_ \[\Delta r\leq\Delta_{K}r\,\] _where on the right hand side \(\Delta_{K}\) is the Laplacian on the unique complete, \(n\)-dimensional, simply connected space of constant sectional curvature \(K\)._ _Remark_.: Since \(r\) is not necessarily differentiable the global Laplacian comparison must be understood in a weak sense, e.g. in the weak sense of barriers as in the work of Calabi [1]. However, for our purposes it is sufficient to consider the smooth locus of \(r\), where the inequality holds as stated. **Proposition 2.5** (Mean Curvature Comparison on ALX spaces).: _Let \(W^{n}\) be an \(\operatorname{ALX}_{k}\) space and fix a point \(p\in W^{n}\). The Laplacian of the distance function \(r(x)=d(p,x)\), or equivalently the mean curvature of the geodesic sphere of radius \(r\) based at \(p\), has the following asymptotic behaviour._ \[\Delta r\sim\left\{\begin{array}{cl}\frac{n-1}{r}&(r\to 0)\\ \frac{n-k-1}{r}&(r\to\infty)\end{array}\right.\] _Furthermore, if \(\operatorname{Ric}\geq 0\), then it is bounded from above by_ \[\Delta r\leq\frac{n-1}{r}\.\] Proof.: For a start, note that \(r(x)\) is smooth on \(W^{n}\setminus(\{p\}\cup\operatorname{Cut}(p))\), where \(\operatorname{Cut}(p)\) is the cut locus of \(p\). It is a standard result that the cut locus on a complete Riemannian manifold has measure zero, so \(r\) is differentiable almost everywhere. The Gauss lemma tells us that \(\nabla^{\mathrm{LC}}r=\partial_{r}\) is the radial vector field of unit norm and is normal to geodesic spheres. 
As an aside, note that \(\Delta r=\operatorname{tr}(\nabla^{\mathrm{LC}})^{2}r\) is the trace of the second fundamental form of the geodesic sphere and as such is identical to its mean curvature. The asymptotic behaviour for \(r\to 0\) follows e.g. by a direct calculation in Riemann normal coordinates. In particular, use \(g_{ij}=\delta_{ij}+\mathcal{O}(r^{2})\) and \(\Gamma^{i}_{jk}=\mathcal{O}(r)\) and then observe that at leading order the result is identical to the Euclidean case, while higher order corrections are \(\mathcal{O}(r)\): \[\Delta r=\frac{n-1}{r}+\mathcal{O}(r)\.\] When \(r\to\infty\) the \(\mathrm{ALX}_{k}\) condition (2) implies that \(\Delta r\sim\Delta_{\infty}r\), where \(\Delta_{\infty}\) denotes the Laplacian associated to \(g_{\infty}\) on \((0,\infty)\times Y^{n-1}\). Under the diffeomorphism to \((0,\infty)\times Y^{n-1}\) the distance function is identified with the coordinate on the first factor. Since the model metric is block diagonal and only depends on \(r\) via the \(r^{2}\) factor in front of \(g_{B}\), we can calculate \(\Delta_{\infty}r\) explicitly. Let \((e_{i})_{i=1,\ldots,n}\) be an orthonormal frame of \((0,\infty)\times Y\) such that \(e_{1}=\partial_{r}\), \(e_{2},\ldots,e_{k+1}\) are tangent to the fibers, and \(e_{k+2},\ldots,e_{n}\) are tangent to the base. Write \(\nabla\) for the Levi-Civita connection associated to \(g_{\infty}\). By a direct calculation \(\nabla_{e_{i}}\partial_{r}=\frac{1}{r}e_{i}\) for \(i=k+2,\ldots,n\) and zero otherwise. Hence, \[\Delta_{\infty}r=\operatorname{tr}\nabla^{2}r=\sum_{i=1}^{n}g(\nabla_{e_{i}}\partial_{r}\,,e_{i})=\frac{\operatorname{tr}g_{B}}{r}=\frac{n-k-1}{r}\.\] Finally, the upper bound in the case that \(\mathrm{Ric}\) is non-negative follows directly from the Laplacian Comparison Theorem (Theorem 2.4). **Theorem 2.6** (Bishop-Gromov's Volume Comparison).: _Let \((M,g)\) be a complete Riemannian manifold and assume \(\mathrm{Ric}\geq(n-1)K\). 
Denote by \(\operatorname{vol}B_{r}(p)\) the volume of the geodesic ball of radius \(r\) based at \(p\in M\). Similarly write \(\operatorname{vol}_{K}B_{r}(p_{K})\) for the volume of a geodesic ball with the same radius inside the unique complete, \(n\)-dimensional, simply connected space of constant sectional curvature \(K\) at an arbitrary point \(p_{K}\). Then the function defined by_ \[r\mapsto\frac{\operatorname{vol}B_{r}(p)}{\operatorname{vol}_{K}B_{r}(p_{K})}\] _is non-decreasing and approaches \(1\) as \(r\to 0\). In particular \(\operatorname{vol}B_{r}(p)\leq\operatorname{vol}_{K}B_{r}(p_{K})\) and \(\operatorname{vol}\partial B_{r}(p)\leq\operatorname{vol}_{K}\partial B_{r}(p _{K})\)._ **Lemma 2.7**.: _For any point \(x\) in the interior of \(B_{r}(p)\) there is a smooth, positive Green's function \(G_{x}\) for the Dirichlet-Laplace problem on \(B_{r}(p)\) with singularity at \(x\), i.e. \(\Delta G_{x}(y)=\delta_{x}(y)\) and \(G_{x}(\partial B_{r}(p))=0\). If \(W^{n}\) is a Ricci non-negative \(\mathrm{ALX}_{k}\) space with effective dimension \(n-k>2\), then for any \(\epsilon>0\) there is a distance \(D\) such that whenever \(d(x,y)>D\) the Green's function is bounded by_ \[G_{x}(y)\leq\frac{(1+\epsilon)\,c}{\operatorname{vol}X\,d(x,y)^{n-k-2}}\,\] _where the constant \(c\) depends only on \(n\)._ Proof.: The existence of a positive Green's function on compact, connected manifolds with boundary is standard. The bound follows immediately from Theorem 5.2 in Li-Yau's seminal work [13]. Their theorem states \[G_{x}(y)\leq c\int_{r^{2}}^{\infty}\frac{1}{\operatorname{vol}B_{\sqrt{t}}(x)}dt\,\] where \(r=d(x,y)\) denotes the Riemannian distance between \(x\) and \(y\) and the constant \(c\) depends only on \(n\). Let \(\epsilon>0\). 
By Proposition 2.2 there is a distance \(R\geq 0\), such that whenever \(r\geq R\) we find \[G_{x}(y)\leq c\int_{r^{2}}^{\infty}\frac{1+\epsilon}{\operatorname{vol}X\,t^{(n-k)/2}}dt=\frac{(1+\epsilon)\,c}{\operatorname{vol}X\,r^{n-k-2}}\.\] ## 3 The Frequency Function On our way to show that \(\kappa\) must have some minimal asymptotic growth, the first step is to realize that its decay rate becomes arbitrarily small at large radii. To see this we investigate the derivative of \(\kappa\), which is given in the upcoming proposition. The function \(N(r)\) that arises in that context is an analogue of the frequency function as introduced by Almgren [1] and we will refer to it by that name. The function \(D(r)\) captures the average deviation of the mean curvature of the geodesic sphere from its limit at infinity. **Proposition 3.1**.: _Assume the pair \((A,\phi)\) satisfies (1). Whenever \(\kappa\) is non-zero its derivative is_ \[\frac{d\kappa}{dr}=\frac{(N+D)\,\kappa}{r}\,\] _where \(N\) and \(D\) are given by_ \[N(r) =\frac{1}{r^{n-k-2}\kappa^{2}}\int_{B_{r}}\left(\left|\nabla^{A}\phi \right|^{2}+\left|\left[\phi\wedge\phi\right]\right|^{2}+\langle\operatorname{Ric}\phi,\phi\rangle\right)\] \[D(r) =\frac{1}{2r^{n-k-2}\kappa^{2}}\int_{\partial B_{r}}\left(\Delta r -\frac{n-k-1}{r}\right)\,\left|\phi\right|^{2}\.\] _Moreover, if \(\operatorname{Ric}\geq 0\), then \(N\) is non-negative, \(D\) is bounded from above by \(k/2\), \(\lim_{r\to 0}D=k/2\) and \(\lim_{r\to\infty}D=0\). As a consequence, if \(\kappa\) is not identically zero near \(r=0\), then it is increasing on small enough neighbourhoods of \(0\). 
Similarly, if \(\kappa\) is not asymptotically zero as \(r\to\infty\), then it is asymptotically almost non-decreasing in the sense that for any \(\epsilon>0\) there is some (large) radius \(R\), such that \(\frac{d\kappa}{dr}\geq-\frac{\epsilon\kappa}{r}\) for all \(r\geq R\)._ For notational convenience we will say that \(\kappa\) is \(\epsilon\)_-almost non-decreasing_ whenever its derivative is bounded below by \(\frac{d\kappa}{dr}\geq-\frac{\epsilon\kappa}{r}\). Proof.: Denote by \(X\) the radial unit vector field on \(B_{r}\) and observe that \[\kappa^{2}(r)=\frac{1}{r^{n-k-1}}\int_{\partial B_{r}}\left|\phi\right|^{2}= \frac{1}{r^{n-k-1}}\int_{B_{r}}\mathcal{L}_{X}\,\left|\phi\right|^{2}\.\] By the product and Leibniz' integral rule the derivative is then given by \[\frac{d}{dr}\kappa^{2}(r)=-\frac{n-k-1}{r}\kappa^{2}+\frac{1}{r^{n-k-1}}\int_ {B_{r}}\mathcal{L}_{X}\circ\mathcal{L}_{X}\,\left|\phi\right|^{2}\.\] We can write the integral on the right hand side equivalently as an integral over the trace of the (asymmetric) second Lie derivative \(\mathcal{L}^{2}_{Y,Z}:=\mathcal{L}_{Y}\circ\mathcal{L}_{Z}\). To see this denote by \((r,\theta_{i})\) polar normal coordinates on \(B_{r}\) and note that in these coordinates the metric is block-diagonal, i.e. \(g=dr^{2}+g_{S^{n-1}}\). Since for any top-form \(\omega\) the pullback of \(\iota_{\partial_{\theta_{i}}}\omega\) to the boundary of the geodesic ball is zero, one finds \[\int_{B_{r}}\mathcal{L}^{2}_{X,X}\left|\phi\right|^{2}=\int_{B_{r}}(\mathcal{ L}^{2}_{X,X}+g_{S^{n-1}}^{ij}\mathcal{L}^{2}_{\partial_{\theta_{i}},\partial_{ \theta_{j}}})\left|\phi\right|^{2}=\int_{B_{r}}\operatorname{tr}_{TM}\mathcal{ L}^{2}\left|\phi\right|^{2}\.\] Next, for any vector field \(Y\) and top-form \(\omega\) we may express the action of the Lie derivative in terms of the Levi-Civita connection as \(\mathcal{L}_{Y}\omega=\nabla_{Y}\omega+\operatorname{div}Y\,\omega\). 
Using this we may write the second Lie derivative as \[\mathcal{L}^{2}_{Y,Z}\left|\phi\right|^{2}=(\nabla_{Y}+ \operatorname{div}Y)\,\nabla_{Z}\left|\phi\right|^{2}+\mathcal{L}_{Y}( \operatorname{div}Z\,\left|\phi\right|^{2})\.\] Furthermore, we use \(\operatorname{ad}\)-invariance and metric compatibility to write \(\nabla_{Y}\langle\phi,\phi\rangle=2\langle\phi,\nabla_{Y}^{A}\phi\rangle\), and use that the formal adjoint is given by \(\nabla_{Y}^{A}+\operatorname{div}Y=-(\nabla_{Y}^{A})^{\dagger}\). This leads to \[\int_{B_{r}}\operatorname{tr}_{TM}\mathcal{L}^{2}\left|\phi\right| ^{2} =2\int_{B_{r}}\left(\left|\nabla^{A}\phi\right|^{2}-\langle \phi,\nabla^{A\dagger}\nabla^{A}\phi\rangle\right)+\int_{B_{r}} \operatorname{tr}_{TM}\left(\mathcal{L}_{(\cdot)}\operatorname{div}(\cdot)\,\left|\phi \right|^{2}\right)\] \[=2\int_{B_{r}}\left(\left|\nabla^{A}\phi\right|^{2}+\left| \left[\phi\wedge\phi\right]\right|^{2}+\langle\phi,\operatorname{Ric}\phi \rangle\right)+\int_{\partial B_{r}}\Delta r\,\left|\phi\right|^{2}\,\] where we used the second order differential equation (1) in the first term and that the only non-zero contribution in the second term contains the mean curvature of the geodesic sphere since \(\operatorname{div}X=\Delta r\). All in all, as long as \(\kappa\neq 0\), the derivative is given by \[\frac{d\kappa}{dr}=\frac{1}{2\kappa}\frac{d}{dr}\kappa^{2}=\frac{1}{\kappa r^{ n-k-1}}\int_{B_{r}}\left(\left|\nabla^{A}\phi\right|^{2}+\left|\left[\phi \wedge\phi\right]\right|^{2}+\langle\phi,\operatorname{Ric}\phi\rangle\right)+ \frac{1}{2\kappa r^{n-k-1}}\int_{\partial B_{r}}\left(\Delta r-\frac{n-k-1}{r} \right)\,\left|\phi\right|^{2}\,\] which upon identifying the terms on the right hand side with \(N\) and \(D\) is the desired result. Now assume \(\operatorname{Ric}\geq 0\). On the one hand, \(N\) is then clearly non-negative. 
On the other hand, the results for \(\Delta r\) from Proposition 2.5 immediately provide both the global upper bound and the limits of \(D\). Combining these facts with the formula for \(\frac{d\kappa}{dr}\) leads to the conclusion that \(\kappa\) is (almost) non-decreasing at both ends: Since \(D\) is continuous and \(\lim_{r\to 0}D=k/2\), \(D\) must be positive on some small interval \([0,s)\). Thus, if \(\kappa\) is non-zero somewhere in that interval then it is increasing. The asymptotic bound for \(r\to\infty\) works out similarly. In that case there is an interval \([R,\infty)\) for any \(\delta>0\) on which \[D\geq-\frac{\delta}{1+\delta}\frac{n-k-1}{2}\.\] After a suitable choice of \(\delta\) this provides the desired bound \(\frac{d\kappa}{dr}\geq-\frac{\epsilon\kappa}{r}\) for any \(\epsilon>0\), which concludes the proof. In the preceding proposition we already encountered lower bounds for \(\frac{d\kappa}{dr}\) near \(r=0\) and \(r\to\infty\). But when we keep track of \(N\) it becomes clear that \(\frac{d\kappa}{dr}\) satisfies stronger bounds than recorded so far. This is the content of the following two corollaries. The first records a global growth limitation, while the second determines asymptotic lower and upper bounds, both in dependence of the frequency function \(N\). **Corollary 3.2**.: _Assume \(\kappa\neq 0\) on \([r_{0},r_{1}]\), then_ \[\kappa(r_{1})\;\leq\;\kappa(r_{0})\;\exp\int_{r_{0}}^{r_{1}}\frac{N(t)+k/2}{t}dt\;.\] Proof.: Recall from Proposition 3.1 that \(D\leq k/2\), so the derivative of \(\kappa\) is bounded by \(\frac{d\kappa}{dr}\leq\frac{(N+k/2)\kappa}{r}\). By Gronwall's inequality \(\kappa\) then can't become larger than a solution of the underlying differential equation, which is the stated bound. **Corollary 3.3**.: _Let \(\epsilon>0\) and \(R\) be such that \(|D|\leq\epsilon\) on \([R,\infty)\). 
If \(\kappa\) has no zeroes in \([r_{0},r_{1}]\subset[R,\infty)\) then it is bounded at \(r_{1}\) from both sides as follows_ \[\kappa(r_{0})\;\exp\int_{r_{0}}^{r_{1}}\frac{N(t)-\epsilon}{t}dt\;\leq\; \kappa(r_{1})\;\leq\;\kappa(r_{0})\;\exp\int_{r_{0}}^{r_{1}}\frac{N(t)+ \epsilon}{t}dt\;. \tag{3}\] _Consequently, bounds for the frequency function on \([r_{0},r_{1}]\) have the following effect:_ * _if_ \(a\leq N\) _then_ \(\kappa(r_{1})\geq\left(\frac{r_{1}}{r_{0}}\right)^{a-\epsilon}\kappa(r_{0})\)__ * _if_ \(N\leq b\) _then_ \(\kappa(r_{1})\leq\left(\frac{r_{1}}{r_{0}}\right)^{b+\epsilon}\kappa(r_{0})\)__ Proof.: Since \(|D|\leq\epsilon\) and \(\kappa\) is non-zero on \([r_{0},r_{1}]\), its derivative is bounded in both directions as follows \[\frac{(N-\epsilon)\;\kappa}{r}\leq\frac{d\kappa}{dr}\leq\frac{(N+\epsilon)\; \kappa}{r}\;.\] Gronwall's inequality states that \(\kappa\) is then bounded in either direction by the corresponding solutions of the underlying differential equations, which is exactly (3). Clearly, if the frequency function is bounded below by some \(a\geq 0\) the first inequality in (3) reduces to \[\kappa(r_{1})\geq\kappa(r_{0})\;\exp\int_{r_{0}}^{r_{1}}\frac{a-\epsilon}{s} \;ds=\left(\frac{r_{1}}{r_{0}}\right)^{a-\epsilon}\kappa(r_{0})\;.\] The same argument, but based on the second inequality, yields the corresponding upper bound if \(N\) is bounded from above. We will later also need the derivative of \(N\), which is given directly by the product and Leibniz' integral rule. \[\frac{d}{dr}N=\frac{1}{r^{n-k-2}\kappa^{2}}\;\int_{\partial B_{r}}\left(| \nabla^{A}\phi|^{2}+|[\phi\wedge\phi]|^{2}+\langle \operatorname{Ric}\phi,\phi\rangle\right)-\left(n-k-2+2(N+D)\right)\frac{N}{r} \tag{4}\] As an immediate consequence, we see that if \(N\) is ever small, then it can't have been very much larger at nearby smaller radii \(s<r\). This observation is recorded more precisely in the following proposition. 
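As an illustrative aside (a standard Euclidean example, not part of the original argument), the exponents in Corollary 3.3 are sharp in the model case \(W^{n}=\mathbb{R}^{n}\) with \(k=0\), \(D\equiv 0\), trivial bundle, \(A=0\), and \(\phi\) a homogeneous harmonic polynomial of degree \(d\):

```latex
% Homogeneity gives |phi(r omega)|^2 = r^{2d} |phi(omega)|^2, so
\kappa^{2}(r)=\frac{1}{r^{n-1}}\int_{\partial B_{r}}|\phi|^{2}
  = r^{2d}\int_{S^{n-1}}|\phi|^{2}\,,
\qquad \kappa(r)=C\,r^{d}\;.
% Harmonicity and Euler's relation  \partial_{r}\phi=(d/r)\,\phi  yield
\int_{B_{r}}|\nabla\phi|^{2}
  =\int_{\partial B_{r}}\phi\,\partial_{r}\phi
  = d\,r^{\,n-2+2d}\int_{S^{n-1}}|\phi|^{2}\,,
% so the frequency function is constant and equals the degree:
N(r)=\frac{1}{r^{n-2}\kappa^{2}}\int_{B_{r}}|\nabla\phi|^{2}=d\,,
\qquad \frac{d\kappa}{dr}=\frac{N\kappa}{r}\;.
```

In particular, the upper bound \(\kappa(r_{1})\leq(r_{1}/r_{0})^{b+\epsilon}\kappa(r_{0})\) is attained with equality here for \(N\equiv b=d\) and \(\epsilon=0\).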
**Proposition 3.4**.: _Assume \(\operatorname{Ric}\geq 0\). If \(N\leq 1\) on some interval \([r_{0},r_{1}]\), then \(N(r_{0})\leq\left(\frac{r_{1}}{r_{0}}\right)^{n}N(r_{1})\). Moreover, whenever \(N(r)<1\) at some \(r\in(0,\infty)\) then \(N\leq 1\) on the interval \([N(r)^{1/n}\;r,r]\)._ Proof.: Since \(\operatorname{Ric}\) is non-negative the same is true for the first term in (4). Moreover, in that case \(D\leq\frac{k}{2}\) by Proposition 3.1. Assume now that \(N\leq 1\) on all of \([r_{0},r_{1}]\). Then \(N\) satisfies the following differential inequality for any \(r\in[r_{0},r_{1}]\) \[\frac{dN}{dr}\geq-\frac{n\,N}{r}\.\] Gronwall's inequality states that then for any pair \(s\leq r\) in \([r_{0},r_{1}]\) the following inequality holds \[N(r)\geq\left(\frac{s}{r}\right)^{n}N(s)\,\] which proves the first part of the statement. Now assume \(N(r)<1\) for some \(r\in(0,\infty)\). By continuity \(r\) must be contained in some interval \([r_{0},r_{1}]\) on which \(N\leq 1\), so we can use the preceding inequality in the form \(N(s)\leq(r/s)^{n}N(r)\). The right hand side is less than or equal to \(1\) as long as \(s\geq N(r)^{1/n}\,r\). ## 4 Asymptotically Unique Continuation Any function that is both non-negative and non-decreasing (trivially) has the following property: if it is non-zero at a particular point \(r_{0}\), it will remain non-zero for any subsequent point \(r>r_{0}\). Although \(\kappa\) is not non-decreasing, there is some (large) radius \(R\) beyond which it behaves in that same way. This is a consequence of the fact that the decay rate of \(\kappa\) becomes arbitrarily small at infinity. To make this precise, recall that \(\kappa\) is continuous, non-negative, and \(\epsilon\)-almost non-decreasing at large radius. With respect to the last property fix some \(\epsilon>0\) with associated radius \(R\geq 0\). Assume there is an \(r_{1}\in[R,\infty)\) at which \(\kappa(r_{1})\neq 0\). 
This is the case, for example, if \(\kappa\) is not asymptotically equivalent to the zero function. Since \(\kappa\) is \(\epsilon\)-almost non-decreasing, Corollary 3.3 provides a strictly positive lower bound for any larger radius \(r\geq r_{1}\) \[\kappa(r)\geq\left(\frac{r_{1}}{r}\right)^{\epsilon}\kappa(r_{1})\,\] which prevents \(\kappa\) from vanishing at any larger radius, at least as long as \(r_{1}\neq 0\). Note that, if \(D\) is bounded from below, then there is a choice of \(\epsilon\) for which \(R=0\) such that the conclusion holds for any \(r_{1}\in(0,\infty)\). In any case, the set on which \(\kappa\) is non-zero must include an interval \((r_{0},\infty)\) with some (possibly infinite) \(r_{0}\geq 0\). Let us now investigate the behaviour near \(r_{0}\), where the growth rate of \(\kappa\) is controlled by the frequency function \(N\) and the mean curvature deviation \(D\). Observe that if \(r_{0}\neq 0\) and \(N\) is bounded from above on \((r_{0},r_{1}]\), then the assumptions \(\kappa(r_{0})=0\) and \(\kappa(r_{1})\neq 0\) lead to a contradiction. To see this use Corollary 3.2 with the upper bound \(N\leq b\), which yields \[\kappa(r)\geq\left(\frac{r}{r_{1}}\right)^{b+k/2}\kappa(r_{1})\quad\text{for all }r\in(r_{0},r_{1}]\.\] The right hand side is strictly positive and thus prevents \(\kappa\) from vanishing at \(r_{0}\). This is an instance of Aronszajn's unique continuation theorem, which states that a non-trivial function that satisfies a second order, elliptic differential inequality cannot exhibit zeroes of infinite order [1]. A particular consequence of this is that a non-negative and non-decreasing such function on \((0,\infty)\) is then either identically zero or strictly positive across the entire domain. It was noted by Taubes that \(\kappa\) - in fact its defining integral \(\int_{\partial B_{r}}\big{|}\phi\big{|}^{2}\) - satisfies the hypotheses of Aronszajn's unique continuation theorem [14, Sec. 3].
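The two mechanisms just described can be collected into one display (a recap of the bounds already derived above, with \(b\) denoting the assumed upper bound for \(N\) on \((r_{0},r_{1}]\)):

```latex
\kappa(r)\;\geq\;
\begin{cases}
\left(\dfrac{r_{1}}{r}\right)^{\epsilon}\kappa(r_{1}), & r\geq r_{1},\\[2ex]
\left(\dfrac{r}{r_{1}}\right)^{b+k/2}\kappa(r_{1}), & r\in(r_{0},r_{1}],
\end{cases}
```

so a single non-zero value \(\kappa(r_{1})\neq 0\) with \(r_{1}\geq R\) propagates to strict positivity in both directions.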
We obtain the following, slightly weaker version of the preceding statement: **Lemma 4.1**.: _Assume \(\operatorname{Ric}\geq 0\) and that sectional curvature is bounded from below. There is a radius \(R\geq 0\), such that if \(\kappa\) is non-zero at any point in \([R,\infty)\), then it is strictly positive on all of \((0,\infty)\). Moreover, \(R=0\) if the mean curvature deviation \(D\) is bounded from below._ Here it is possible that \(\kappa\) is compactly supported, but only as long as its support is restricted to a region where the mean curvature of the geodesic spheres is (still) large. In view of the discussion above, the Lemma also follows directly from a proof that \(N\) is a priori bounded on any interval \((r_{0},r_{1}]\) in which \(\kappa\) does not have zeroes. ## 5 Slow Growth and Bounded Frequency We have previously seen that an upper bound for \(N\) leads to bounded growth of \(\kappa\). The goal of this section is to show that the converse is true when \(r\to\infty\). More precisely, we show that whenever \(\kappa\) grows slower than \(\mathcal{O}(r^{a})\) between two large radii, then \(N\) must have been bounded from above on an interval leading up to the violation. Note that such violations must occur for arbitrarily large radii when \(\kappa\) is not asymptotically bounded below by \(r^{a}\), which is the situation of the second alternative in Theorem A. Accordingly, the upcoming lemma and its refinement in section 6 play a crucial role in the proof of the main theorem. **Lemma 5.1**.: _Assume \(\kappa\) is not asymptotically zero. Fix an \(\epsilon>0\) and denote by \(R\) the radius beyond which \(|D|\leq\epsilon\). If there is a pair of radii \(r_{0}\leq r_{1}\) in \([R,\infty)\) for which \(\kappa(r_{1})\leq\left(\frac{r_{1}}{r_{0}}\right)^{a-\epsilon}\kappa(r_{0})\), then there exists a radius \(t\in[r_{0},r_{1}]\) such that_ 1. 
\(N(t)\leq a\)_._ _Moreover, if \(a<1\) the following holds on the interval \([\tilde{R},t]\), where \(\tilde{R}=\max\big{(}a^{\frac{1}{2n}}\,t,\,\,R\big{)}\)._ 2. \(N<\sqrt{a}\)_,_ 3. \(\kappa\geq a^{\frac{\sqrt{a}+\epsilon}{2n}}\kappa(t)\)_._ Proof.: To see _(i)_ assume to the contrary that \(N>a\) on all of \([r_{0},r_{1}]\). Then the first bullet in Corollary 3.3 states that \[\kappa(r_{1})>\left(\frac{r_{1}}{r_{0}}\right)^{a-\epsilon}\kappa(r_{0})\,\] which violates the assumption that \(\kappa(r_{1})\) satisfies exactly the opposite inequality. Hence, there is a \(t\in[r_{0},r_{1}]\) at which \(N(t)\leq a\). Assume now that \(a<1\) and note that then the same is true for \(N(t)\). In that case we conclude via the second part of Proposition 3.4 that \(N\leq 1\) on \([N(t)^{1/n}\,t,t]\). Since \(N(t)\leq a<\sqrt{a}\), this interval contains as a subinterval \([a^{1/2n}\,t,t]\). Then the first part of Proposition 3.4 for \(s\in[a^{1/2n}\,t,t]\) yields \[N(s)\leq\left(\frac{t}{s}\right)^{n}N(t)\leq\frac{1}{\sqrt{a}}N(t)\leq\sqrt {a}\,\] which proves that the same is true on the (possibly smaller) interval \([\tilde{R},t]\) with \(\tilde{R}\,:=\max\big{(}a^{1/2n}\,t,R\big{)}\). Since \(N\leq\sqrt{a}\) on \([\tilde{R},t]\) the second bullet of Corollary 3.3 provides the bound \[\kappa(t)\leq\left(\frac{t}{r}\right)^{\sqrt{a}+\epsilon}\kappa(r)\leq a^{-\frac{ \sqrt{a}+\epsilon}{2n}}\kappa(r)\,\] where in the last step we used \(a^{1/2n}\,t\leq\tilde{R}\leq r\) for any \(r\in[\tilde{R},t]\). ## 6 The Correlation Tensor There is an \(\Omega^{1}_{p}\otimes\Omega^{1}_{p}\)-valued function \(T\) that refines \(\kappa^{2}\) to the effect that it resolves the behaviour of the components of \(\phi\). Note that, being a one-form, \(\phi\) can be evaluated in particular on the covariantly constant unit vector field on \(B_{r}(p)\) that is defined by parallel transport of a unit vector \(v\in T_{p}W\) along radial geodesics.
The output is a smooth function \(\phi_{v}\) on \(B_{r}(p)\setminus\operatorname{Cut}(p)\) that captures the evolution of the \(v\)-component of \(\phi\) along the geodesics emanating from \(p\). This allows the definition of what we will call the correlation tensor \(T\,:\,W^{n}\times(0,\infty)\to\Omega^{1}_{p}\otimes\Omega^{1}_{p}\). Its value for \(v,w\in T_{p}W\) is defined by \[T(p,r)(v,w)=\frac{1}{r^{n-k-1}}\int_{\partial B_{r}(p)}\langle\phi_{v},\phi_{ w}\rangle\.\] Note, in particular, that \(\operatorname{tr}_{T_{p}W}T(p,r)=\kappa^{2}(p,r)\), while the induced quadratic form \(\kappa^{2}_{v}\,:=T(v,v)\) returns a version of the function \(\kappa^{2}\) that is based on the component \(\phi_{v}\). Just like \(\kappa_{v}(p,r)\) has an interpretation as the mean value of \(\phi_{v}\) on the geodesic sphere, the value of \(T(v,w)\) measures the correlation between the components \(\phi_{v}\) and \(\phi_{w}\) on the geodesic sphere; hence its name. An important observation is that \(\kappa_{v}\) satisfies essentially the same properties as \(\kappa\). As a short motivation of that fact observe that if \(\tilde{\Delta}_{A}\phi=0\) and \(v\) denotes the covariantly constant vector field described above, then also \(\iota_{v}\tilde{\Delta}_{A}\phi=\tilde{\Delta}_{A}\phi_{v}=0\). So it is reasonable to expect that \(\kappa_{v}\) behaves like a harmonic function in exactly the same way that \(\kappa\) does. To make this more precise, if \(\phi\) satisfies our main assumption (1) then \(\phi_{v}\) satisfies the following analogous second order equation \[\nabla^{A\dagger}\nabla^{A}\phi_{v}+\frac{1}{2}\star[\star[\phi\wedge \phi_{v}]\wedge\phi]+\operatorname{Ric}(\phi)(v)=0\.\] As a consequence the derivative of \(\kappa_{v}\) comes with its own version of the frequency function and mean curvature deviation, denoted \(N_{v}\) and \(D_{v}\).
\[\frac{d\kappa_{v}}{dr}=\frac{N_{v}+D_{v}}{r}\kappa_{v}\] The functions \(N_{v}\) and \(D_{v}\) are given by essentially the same expressions as before, but with \(\phi\) replaced by \(\phi_{v}\) as follows. \[N_{v} =\frac{1}{r^{n-k-2}\kappa_{v}^{2}}\int_{B_{r}}\left(\left|\nabla^{A} \phi_{v}\right|^{2}+\left|\left[\phi\wedge\phi_{v}\right]\right|^{2}+\left<\text {Ric}(\phi)(v),\phi_{v}\right>\right) \tag{5}\] \[D_{v} =\frac{1}{r^{n-k-1}\kappa_{v}^{2}}\int_{\partial B_{r}}\left( \Delta r-\frac{n-k-1}{r}\right)\left|\phi_{v}\right|^{2}\] **Proposition 6.1**.: _Let \(v\in T_{p}W\). All previous results hold verbatim when we replace \(\kappa\) and \(N\) by \(\kappa_{v}\) and \(N_{v}\), respectively._ There are in fact analogous results for the correlation tensor \(T\). To see this define the (Frobenius) norm of \(T\) with respect to the inner product on \(\Omega_{p}^{1}\otimes\Omega_{p}^{1}\) induced by the metric, i.e. \[\left|T\right|^{2}=g^{\mu\rho}g^{\nu\sigma}T_{\mu\nu}T_{\rho\sigma}\;.\] In this expression the metric is evaluated at the point \(p\). The norm of \(T\) satisfies \(\frac{1}{c}\kappa^{2}\leq\left|T\right|\leq\kappa^{2}\) for some constant \(c\). The tensor \(T\) is differentiable with respect to \(r\) and there is then a (possibly larger) \(c\) such that the following inequality holds. \[\left|\frac{dT}{dr}\right|\leq c\;\frac{N+D}{r}\left|T\right|\] Below we will also use the notation \(T^{\prime}\;:=\frac{dT}{dr}\). From now on view \(T\) as a linear map \(T_{p}W\to T_{p}W\). If \(T\) has a zero eigenvalue at some radius \(r\) and \(v\) denotes the associated eigenvector, then \(\kappa_{v}^{2}(r)=0\). As a consequence of the unique continuation property in Lemma 4.1, \(\kappa_{v}\) must then be identically zero on an interval of the form \([R,\infty)\). This in turn implies that the component \(\phi_{v}\) vanishes on \(W^{n}\setminus B_{R}\). We will deal with such compactly supported components of \(\phi\) later.
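The link between the spectrum of \(T\) and the family \(\{\kappa_{v}\}\) used in this argument is simply the quadratic-form identity from the definition of \(T\): for a unit eigenvector \(v\) of \(T(r)\) with eigenvalue \(\mu\),

```latex
\kappa_{v}^{2}(r)\;=\;T(r)(v,v)\;=\;\langle v,\,T(r)\,v\rangle\;=\;\mu\,|v|^{2}\;=\;\mu\;,
```

so a zero eigenvalue of \(T\) at radius \(r\) is the same thing as a zero of the corresponding \(\kappa_{v}\) at \(r\).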
In the definition of \(T\) we restrict ourselves to the subspace of \(T_{p}W\) that is orthogonal to the zero eigenspace of \(T\) at infinity. The correlation tensor then has strictly positive eigenvalues on all of \((0,\infty)\), which will be assumed henceforth. Denote by \(\lambda\;:\;(0,\infty)\to\mathbb{R}\) the function that assigns to a radius \(r\) the smallest eigenvalue of \(T(r)\). By the preceding paragraph we may assume that this function is non-zero on all of \((0,\infty)\). Clearly \(\lambda\leq\kappa^{2}\) as the latter is the trace of \(T\), so it should be noted that under this assumption \(\kappa\) is necessarily everywhere non-zero. The following proposition states that \(\lambda\) is nearly differentiable. **Proposition 6.2**.: _The finite differences of the smallest-eigenvalue function \(\lambda\) at any \(r\in(0,\infty)\) satisfy the following bounds: Denote by \(v\) a unit length eigenvector of \(T(r)\) with eigenvalue \(\lambda(r)\) and let \(\Delta>0\)._ \[\lambda(r+\Delta)-\lambda(r) \leq\left<v,T^{\prime}(r)\;v\right>\Delta+\mathcal{O}(\Delta^{2})\] \[\lambda(r)-\lambda(r-\Delta) \geq\left<v,T^{\prime}(r)\;v\right>\Delta+\mathcal{O}(\Delta^{2})\] _Moreover, \(\lambda\) is locally Lipschitz continuous on \((0,\infty)\)._ Proof.: For any \(r\in(0,\infty)\) denote by \(v_{r}\) an eigenvector of \(T(r)\) with eigenvalue \(\lambda(r)\). For \(\Delta\geq 0\) we then have \[\lambda(r+\Delta)-\lambda(r)=\langle v_{r+\Delta},T(r+\Delta)v_{r+\Delta} \rangle-\langle v_{r},T(r)v_{r}\rangle\leq\langle v_{r},\big{(}T(r+\Delta)-T(r )\big{)}v_{r}\rangle\;.\] To get the upper bound on the right hand side note that the smallest eigenvalue at \(r+\Delta\) is \(\lambda(r+\Delta)\), so the evaluation of \(T(r+\Delta)\) on \(v_{r}\) can only ever result in the same or larger values. The upper bound for the forwards finite difference is then a consequence of Taylor's theorem. 
Up to a minus sign the lower bound for the backwards finite difference in the second inequality follows in exactly the same way. The Lipschitz property of \(\lambda\) follows by paying closer attention to the remainder in Taylor's theorem. Consider a pair \(s<r\in(0,\infty)\) and Taylor's theorem at zeroth order \[T(r)=T(s)+R_{0}(r)\;.\] A standard estimate for the remainder states \(\big{|}R_{0}(r)\big{|}\leq\sup_{(s,r)}\big{|}T^{\prime}\big{|}\;\;(r-s)\), as long as the derivative is bounded on the given interval. As mentioned before, the derivative of \(T\) at any radius \(\tilde{r}\) is bounded by a multiple of \(\frac{N+D}{\tilde{r}}\big{|}T(\tilde{r})\big{|}\). Recall from Proposition 3.1 that \(D\leq\frac{k}{2}\). Also, as was discussed in the context of the unique continuation property, \(N\) is bounded on any compact interval on which \(\kappa\) is non-zero. It follows that \(\lambda\) is Lipschitz on any compact \([s,t]\subset(0,\infty)\), since \[\big{|}\lambda(r)-\lambda(s)\big{|}\leq\left|\langle v_{s},\big{(}T(r)-T(s) \big{)}v_{s}\rangle\right|\leq c\left|r-s\right|\;.\] Equivalently (since \((0,\infty)\) is locally compact): \(\lambda\) is locally Lipschitz. We can use the preceding proposition to show that, similar to \(\kappa\), the function \(\lambda\) is asymptotically almost non-decreasing. The idea here is that \(\lambda\) consists of piecewise smooth segments, each coinciding with a function \(\kappa_{v}^{2}\) for some \(v\). Any \(\kappa_{v}\) is eventually \(\epsilon\)-almost non-decreasing, so an analogous statement must be true for \(\lambda\). Observe that asymptotic bounds for the mean curvature deviation \(D_{v}\) only depend on the asymptotics of the mean curvature \(\Delta r\), which is independent of \(v\). Thus, if \(D\) is bounded from below by \(-\epsilon\), then the same is true for \(D_{v}\). Hence, let \(R\) be such that \(|D|\leq\epsilon\) for any larger radius and consider points \(r>s\geq R\).
Define an equidistant partition of the interval \([s,r]=\bigcup_{k=0}^{M-1}[r_{k},r_{k+1}]\) where \(r_{k}=s+\frac{k}{M}(r-s)\). Then, denoting by \(v_{k}\) an eigenvector with eigenvalue \(\lambda(r_{k})\), the two estimates in Proposition 6.2 imply \[\lambda(r)=\lambda(s)+\sum_{k=1}^{M}\langle v_{k},T^{\prime}(r_{k})\;v_{k} \rangle\frac{r-s}{M}+\mathcal{O}\left(\frac{1}{M^{2}}\right)\;.\] This can be rewritten by using \(\langle v,T^{\prime}v\rangle=\frac{d}{dr}\kappa_{v}^{2}\) and the fact that each \(v_{k}\) is an eigenvector with the smallest eigenvalue at \(r_{k}\), i.e. \(\kappa_{v_{k}}^{2}(r_{k})=\lambda(r_{k})\). \[\lambda(r)=\lambda(s)+2\sum_{k=1}^{M}\lambda(r_{k})\frac{\big{(}N_{v_{k}}(r_{k })\;+D_{v_{k}}(r_{k})\big{)}}{r_{k}}\;\frac{r-s}{M}+\mathcal{O}\left( \frac{1}{M^{2}}\right) \tag{6}\] Notably, since \(|D_{v}|\leq\epsilon\) we find that \(\lambda\) satisfies the following inequality on \([R,\infty)\) \[\frac{\lambda(r)-\lambda(s)}{r-s}\geq-\frac{2\epsilon}{M}\sum_{k=1}^{M}\frac{ \lambda(r_{k})}{r_{k}}+\mathcal{O}\left(\frac{1}{M^{2}}\right)\.\] This is the finite difference analogue of the differential inequality satisfied by \(\kappa\) when it is \(\epsilon\)-almost non-decreasing. To make this more apparent, observe that the right hand side of the last inequality contains the arithmetic mean of the function \(\lambda/r\) for the given partition of \([s,r]\). If one takes \(M\to\infty\) this expression approaches the mean value of \(\lambda/r\) on the given interval. Seeing that \(\lambda\) is continuous, it is then a consequence of the (integral) mean value theorem that there is a point \(t\in[s,r]\) at which the right hand side is given by \(\lambda(t)/t\), such that \[\frac{\lambda(r)-\lambda(s)}{r-s}\geq-2\epsilon\frac{\lambda(t)}{t}\.\] If \(\lambda\) is differentiable at \(r\) the limit \(s\to r\) exactly recovers the differential inequality, as promised.
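For orientation, in the differentiable case the differential inequality just recovered integrates, exactly as for \(\kappa\) in Corollary 3.3 with the lower bound \(N\geq 0\), to a Gronwall-type bound (a sketch, not needed verbatim in what follows):

```latex
\frac{d\lambda}{dr}\;\geq\;-\,\frac{2\epsilon\,\lambda}{r}
\qquad\Longrightarrow\qquad
\lambda(r)\;\geq\;\left(\frac{s}{r}\right)^{2\epsilon}\lambda(s)
\quad\text{for all }r\geq s\geq R\;.
```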
We will correspondingly say that \(\lambda\) is \(\epsilon\)-almost non-decreasing whenever it satisfies the finite difference inequality from above. The fact that \(\lambda\) is eventually \(\epsilon\)-almost non-decreasing allows us to extend Lemma 5.1 in such a way that it also provides bounds for \(N_{v}\) and \(\kappa_{v}\), where \(v\) is a unit eigenvector of \(T(t)\) associated to the smallest eigenvalue \(\lambda(t)\) at some distinguished \(t\in[r_{0},r_{1}]\). **Lemma 6.3**.: _Fix \(\epsilon>0\) and denote by \(R\) the radius beyond which \(|D|\leq\epsilon\). Let \(r_{0},r_{1}\in[R,\infty)\) be a pair of radii such that \(r_{1}\) is larger than any of \(4r_{0}\), \(\left(\kappa^{2}(r_{0})/\lambda(r_{0})\right)^{\frac{1}{2a}}r_{0}\), and \(r_{0}^{1/(1-100\sqrt{a})}\). If \(\kappa(r_{1})\leq\left(\frac{r_{1}}{r_{0}}\right)^{a-\epsilon}\kappa(r_{0})\) and \(a\) is sufficiently small (e.g. \(a^{1/4n}<0.1\)), then there exists a radius \(t\in[r_{1}^{1-100\sqrt{a}},r_{1}]\) of the following significance:_ _Let \(v\) be an eigenvector of \(T(t)\) associated to the smallest eigenvalue \(\lambda(t)\) and write \(\tilde{R}=\max\big{(}a^{\frac{1}{8n}}\,t,\,R\big{)}\). On all of \([\tilde{R},t]\)_ 1. \(N\leq a^{1/4}\) _and_ \(\kappa\geq a^{\frac{a^{1/4}+\epsilon}{4n}}\kappa(t)\)__ 2. \(N_{v}\leq a^{1/4}\) _and_ \(\kappa_{v}\geq a^{\frac{a^{1/4}+\epsilon}{4n}}\kappa_{v}(t)\)__ Proof.: Let \(\epsilon>0\), denote by \(R\) the radius beyond which \(|D|\leq\epsilon\), and assume \(\sqrt{a}<1\). The proof proceeds by establishing the existence of regions in \([r_{0},r_{1}]\) where both \(N\) and \(N_{v}\) are less than or equal to \(\sqrt{a}\) at the same time. The stated bounds then follow from Lemma 5.1 (with \(a\) replaced by \(\sqrt{a}\) everywhere).
First, regarding the condition \(N\leq\sqrt{a}\), observe that the set \(I=\left\{\ r\in[\log r_{0},\log r_{1}]\ |\ N(\exp r)>\sqrt{a}\ \right\}\) makes up at most \(\sqrt{a}\) of the length of the surrounding interval. To see this, write \(I=\coprod_{k}(\log a_{k},\log b_{k})\) and then go from \(\kappa(r_{0})\) to \(\kappa(r_{1})\) by iteratively using Corollary 3.3, with bounds \(N>\sqrt{a}\) on each \((a_{k},b_{k})\) and \(N\geq 0\) on the intervals \([b_{k},a_{k+1}]\) in between. This leads to \[\kappa(r_{1})\geq\left(\frac{r_{1}}{r_{0}}\right)^{-\epsilon}\prod_{k}\left( \frac{b_{k}}{a_{k}}\right)^{\sqrt{a}}\kappa(r_{0})\.\] This inequality is only compatible with the assumption that \(\kappa(r_{1})\leq(r_{1}/r_{0})^{a-\epsilon}\kappa(r_{0})\) if \[\sum_{k}(\log b_{k}-\log a_{k})\leq\sqrt{a}\ (\log r_{1}-\log r_{0})\.\] Equivalently, if \(|I|\leq\sqrt{a}\,\big{|}[\log r_{0},\log r_{1}]\big{|}\). An analogous statement for points that satisfy the condition \(N_{v}\leq\sqrt{a}\) makes use of a slightly longer argument. As before, we set out to investigate the measure of the set on which \(N_{v}>\sqrt{a}\). Denote by \(L\) the largest integer such that \(2^{L}r_{0}<r_{1}\) and consider the sequence \(\{2^{\ell}r_{0}\}_{\ell=0,1,\ldots,L-1}\). For each pair of neighbours in this sequence we can apply equation (6) in the form \[\lambda(2s)\geq\lambda(s)+\frac{1}{M}\sum_{k=1}^{M}\lambda(s_{k})\left(N_{ v_{k}}(s_{k})-\epsilon\right)+\mathcal{O}\left(\frac{1}{M^{2}}\right)\, \tag{7}\] where \(s_{k}=(1+k/M)\ s\) and \(v_{k}\) denotes an eigenvector of \(T(s_{k})\) with eigenvalue \(\lambda(s_{k})\).
Next, note that Corollary 3.3 with the lower bound \(N\geq 0\) provides \[\lambda(s_{k})=\kappa_{v_{k}}^{2}(s_{k})\geq\left(\frac{s_{k}}{s} \right)^{-2\epsilon}\kappa_{v_{k}}^{2}(s)\geq\frac{1}{4}\lambda(s)\.\] Here the last inequality holds by virtue of \(s_{k}/s<2\) and because \(\lambda(s)\) is the smallest eigenvalue of \(T\) at \(s\), such that \(\kappa_{v_{k}}^{2}(s)=T(s)(v_{k},v_{k})\) can't be smaller. Also introduce for the moment the notation \(\langle N_{v}\rangle_{\ell}\) for the average of \(\{N_{v_{k}}(s_{k})\}_{k=1,\ldots,M}\) in the interval \([2^{\ell}r_{0},2^{\ell+1}r_{0}]\). The inequality in (7) becomes \[\frac{\lambda(2^{\ell+1}r_{0})}{\lambda(2^{\ell}r_{0})}\geq 1+\frac{1}{4} \left(\langle N_{v}\rangle_{\ell}-\epsilon\right)+\mathcal{O}\left(\frac{1}{M ^{2}}\right)\.\] In the last of these pairings we can use \(r_{1}\) as endpoint of the interval instead of \(2^{L}r_{0}\) without changing the inequality, since \(r_{1}-2^{L-1}r_{0}>2^{L-1}r_{0}\) and \(r_{1}\leq 2^{L+1}r_{0}\). Hence, the product of all these ratios yields \[\frac{\lambda(r_{1})}{\lambda(r_{0})}\geq\prod_{\ell=0}^{L-1}\left(1+\frac{ \langle N_{v}\rangle_{\ell}-\epsilon}{4}+\mathcal{O}\left(\frac{1}{M^{2}} \right)\right)\.\] Let \(f\) denote the fraction of \(\{0,1,...,L-1\}\) for which \(\langle N_{v}\rangle_{\ell}+\mathcal{O}(1/M^{2})>\sqrt{a}\). Then the product simplifies to \[\frac{\lambda(r_{1})}{\lambda(r_{0})}\geq\left(1+\frac{\sqrt{a}-\epsilon}{4} \right)^{fL}\left(1-\frac{\epsilon}{4}+\mathcal{O}\left(\frac{1}{M^{2 }}\right)\right)^{(1-f)L}\.\] On the left use that \(\lambda(r_{1})\leq\kappa^{2}(r_{1})\leq\left(\frac{r_{1}}{r_{0}}\right)^{2(a- \epsilon)}\kappa^{2}(r_{0})\) to replace \(\lambda(r_{1})\).
Upon taking logarithms on both sides the resulting inequality reads \[2(a-\epsilon)\log\left(\frac{r_{1}}{r_{0}}\right)+\log\left(\frac{\kappa^{2}(r _{0})}{\lambda(r_{0})}\right)\geq fL\log\left(1+\frac{\sqrt{a}-\epsilon}{4} \right)+(1-f)L\log\left(1-\frac{\epsilon}{4}+\mathcal{O}\left(\frac{1}{M^{2}} \right)\right)\.\] This can be simplified in several ways: (1) Because \(2^{L+1}r_{0}>r_{1}\) and as long as we make sure that \(r_{1}\geq 4r_{0}\) we can assume \(L>\frac{1}{2}\log(r_{1}/r_{0})\). (2) Since \(\sqrt{a}-\epsilon<1\) the term \(\log\left(1+(\sqrt{a}-\epsilon)/4\right)\) is no smaller than \((\sqrt{a}-\epsilon)/8\). (3) By taking \(M\) large, we can make sure that \(\log\left(1-\epsilon/4+\mathcal{O}(1/M^{2})\right)\) is larger than \(-\epsilon\). Plugging everything in and solving for \(f\) leads to the following upper bound \[f<32\sqrt{a}\left(1+\frac{\log\left(\kappa^{2}(r_{0})/\lambda(r_{0})\right)}{2a \log\left(r_{1}/r_{0}\right)}\right)\.\] In particular, if \(r_{1}>\left(\kappa^{2}(r_{0})/\lambda(r_{0})\right)^{\frac{1}{2a}}r_{0}\) we find that \(f<64\sqrt{a}\). Consider the subset \(J\subset[\log r_{0},\log r_{1}]\) that consists of those intervals \([\log 2^{\ell}r_{0},\log 2^{\ell+1}r_{0}]\) on which \(\langle N_{v}\rangle_{\ell}\) is larger than \(\sqrt{a}\). The inequality for \(f\) now tells us that the length of \(J\) can be at most \(64\sqrt{a}\,\big{|}[\log r_{0},\log r_{1}]\big{|}\). This means that the union of \(J\) with the interval \(I\) from the first part has bounded measure \[\left|I\cup J\right|<65\sqrt{a}\ \left|\left[\log r_{0},\log r_{1}\right]\right|\.\] We conclude that whenever \(\sqrt{a}\) is sufficiently small (for example \(\sqrt{a}<0.01\)), then there exists a point \(\log(t_{1})\) that is neither in \(I\) nor in \(J\).
In fact, if \(r_{0}<r_{1}^{1-100\sqrt{a}}\), we can moreover find a \(t_{1}\) that is larger than \(r_{1}^{1-100\sqrt{a}}\), since the measure of \([r_{1}^{1-100\sqrt{a}},r_{1}]\) is still larger than \(I\) and \(J\) combined. Choose and fix a \(t_{1}\) with these properties. On the one hand this means that \(N(t_{1})\leq\sqrt{a}\) and Lemma 5.1 implies that on the interval \([\tilde{R}_{1},t_{1}]\) we have \(N<a^{1/4}\) and \(\kappa\geq a^{\frac{a^{1/4}+\epsilon}{4n}}\kappa(t_{1})\). Here, \(\tilde{R}_{1}\) is the larger of \(a^{1/4n}t_{1}\) and \(R\). On the other hand it means that \(t_{1}\) is from an interval \([2^{\ell}r_{0},2^{\ell+1}r_{0}]\) on which the average value \(\langle N_{v}\rangle_{\ell}\) is less than \(\sqrt{a}\). Consequently, the partition of that interval contains a radius \(t_{2}\) at which \(N_{v}(t_{2})<\sqrt{a}\). We again deduce that \(N_{v}<a^{1/4}\) and \(\kappa_{v}\geq a^{\frac{a^{1/4}+\epsilon}{4n}}\kappa_{v}(t_{2})\) on \([\tilde{R}_{2},t_{2}]\), where \(\tilde{R}_{2}\) is the larger of \(a^{1/4n}t_{2}\) and \(R\). Finally, \([\tilde{R}_{1},t_{1}]\) and \([\tilde{R}_{2},t_{2}]\) intersect whenever \(a\) is small enough. To see this explicitly, assume w.l.o.g. that \(t_{1}>t_{2}\) and note that \(t_{1}/t_{2}\leq 2\) since both are contained in an interval of the form \([2^{\ell}r_{0},2^{\ell+1}r_{0}]\). It follows that \(\tilde{R}_{1}/t_{2}<2a^{1/4n}\), which is less than one if for example \(a^{1/4n}<0.1\). The upper bound of \(0.1\) is convenient, because we then also have \(\tilde{R}_{1}<a^{1/8n}t_{2}\), such that choosing \(t=t_{2}\) and \(\tilde{R}=\max(a^{1/8n}t,R)\) concludes the proof. ## 7 A Priori Bounds The final ingredients for the proof of Theorem A are pointwise a priori bounds for \(\left|\phi\right|\) in terms of \(\kappa\), as well as estimates for the volume of the subsets of \(B_{r}\) on which \(\left|\phi\right|\) is small compared to that upper bound. These are the promised analogues of the mean value inequality.
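For comparison, the classical model statement that these bounds emulate is the mean value inequality on \(\mathbb{R}^{m}\): a function \(u\) with \(\Delta_{B}u\leq 0\), i.e. subharmonic in the sign convention used in the proof below, satisfies

```latex
u(x)\;\leq\;\frac{1}{\operatorname{vol}B_{r}(x)}\int_{B_{r}(x)}u\;.
```

Part _(i)_ of the next lemma plays the role of this pointwise bound for \(|\phi|^{2}\), with the Euclidean ball volume replaced by the \(\operatorname{ALX}_{k}\) volume growth \(\operatorname{vol}X\,r^{n-k}\).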
The proof relies mainly on the standard approach to the mean-value inequality and the fact that according to Proposition 3.1, \(\kappa\) can't decrease too rapidly at infinity. However, due to the asymptotic nature of the latter statement and the related lack of control in the interior, these bounds only hold for points that lie in some large spherical geodesic shell surrounding \(p\). Below we use the notation \(S(r_{1},r_{2})\) for the closed spherical geodesic shell based at \(p\) with inner radius \(r_{1}\) and outer radius \(r_{2}\), i.e. all points that satisfy \(r_{1}\leq d(p,x)\leq r_{2}\). Moreover, we write \(f_{\Omega_{i}}\ :=\frac{\operatorname{vol}\Omega_{i}}{\operatorname{vol}\Omega}\) for the fraction that a subset \(\Omega_{i}\) occupies within its corresponding surrounding set \(\Omega\). The following result holds verbatim if we replace \(\phi\), \(\kappa\) and \(N\) by the versions \(\phi_{v}\), \(\kappa_{v}\) and \(N_{v}\) associated to a unit vector \(v\in T_{p}W\). **Lemma 7.1**.: _Let \(W^{n}\) be an \(\operatorname{ALX}_{k}\) space of dimension \(n\), \(p\in W^{n}\), \(\operatorname{Ric}\geq 0\), and assume \((A,\phi)\) is a solution of (1). For any \(\epsilon\in(0,1/2)\) there exists a radius \(R\geq 0\) and constants \(c_{i}>1\) that only depend on \(n\) and \(k\), such that for sufficiently large \(r\geq R\) the following holds._ 1. \(\big{|}\phi(x)\big{|}\leq c_{0}\ \frac{\kappa(r)}{\sqrt{\operatorname{vol}X}}\) _for any_ \(x\in S\left(R+\frac{r-R}{8},\ r-\frac{r-R}{8}\right)\)_,_ 2. _Let_ \(t\in[R+\frac{6}{8}(r-R),R+\frac{7}{8}(r-R)]\) _and assume that_ \(N\leq 1\) _on_ \([t,r]\)_.
Then the subset_ \(\Omega_{1}\subseteq\partial B_{t}\) _on which_ \(\big{|}\phi(x)\big{|}\leq\frac{1}{2}\frac{\kappa(r)}{\sqrt{\operatorname{vol}X}}\) _has relative volume_ \(f_{\Omega_{1}}\leq c_{1}\epsilon+c_{2}\sqrt{N}\) _._ Proof.: Throughout the proof we fix some \(\epsilon\in(0,1/2)\) and choose \(R\) to be some large enough radius, such that on the one hand \(\kappa\) is \(\epsilon\)-almost non-decreasing as provided by Proposition 3.1 and on the other hand \(\operatorname{vol}B_{r}\leq(1+\epsilon)\operatorname{vol}X\ r^{n-k}\) for any \(r\geq R\) as in Proposition 2.2. Assume for now simply that \(r>R\) is some fixed outer radius. We will collect conditions on \(r\) as we go and will find that they can always be met by choosing some larger \(r\) to start with. _ad (i)_ In this part of the proof we use a specific bump function \(\beta\) on \(W^{n}\) with compact support inside the geodesic shell \(S(R,r)\), constructed as follows. Denote by \(\beta_{\mathbb{R}}\) a non-increasing function on \(\mathbb{R}\) that is equal to \(1\) on \(\left(-\infty,\frac{14}{16}\right)\) and equal to \(0\) on \(\left(\frac{15}{16},\infty\right)\). Use this to define a corresponding function on \(W^{n}\) by the rule \[\beta(x)=\beta_{\mathbb{R}}\left(\left|\ \frac{d(p,x)-\frac{r+R}{2}}{\frac{r-R}{2}} \right|\ \right)\.\] The function \(\beta\) vanishes on the inner ball \(B_{R+\frac{r-R}{32}}(p)\), is equal to \(1\) on the geodesic shell with inner radius \(R+\frac{r-R}{16}\) and outer radius \(r-\frac{r-R}{16}\), and is zero again outside of \(B_{r-\frac{r-R}{32}}(p)\). Note in particular that \(\beta\) and its derivatives have compact support in the interior of \(S(R,r)\). Furthermore, the gradient and Laplacian of \(\beta\) are bounded as follows \[\big{|}d\beta\big{|} \leq\frac{32}{(r-R)} \big{|}\Delta\beta\big{|} \leq\frac{32}{(r-R)^{2}}\.\] Denote by \(G_{x}\) the positive Dirichlet Green's function of the Laplacian on \(B_{r}(p)\) based at \(x\in B_{r}(p)\).
Recall from Lemma 2.7 that, due to the volume growth of \(W^{n}\), for large enough distances the Green's function and its derivative are bounded from above as follows \[G_{x}(y) \leq\frac{(1+\epsilon)c}{\operatorname{vol}X\ d(x,y)^{n-k-2}} \big{|}dG_{x}(y)\big{|} \leq\frac{(1+\epsilon)c}{\operatorname{vol}X\ d(x,y)^{n-k-1}}\,\] where \(c\) depends only on the dimension \(n\). With that in mind we now note that contracting equation (1) with \(\langle\cdot,\phi\rangle\) leads to \[\frac{1}{2}\Delta_{B}\big{|}\phi\big{|}^{2}+\big{|}\nabla^{A}\phi\big{|}^{2}+ \big{|}[\phi\wedge\phi]\big{|}^{2}+\big{\langle}\operatorname{Ric} \phi,\phi\big{\rangle}=0\, \tag{8}\] where \(\Delta_{B}=\nabla^{\dagger}\nabla\) is the Bochner Laplacian associated to the Levi-Civita connection. This equation implies \(\Delta_{B}\big{|}\phi\big{|}^{2}\leq 0\), so the function \(\big{|}\phi\big{|}^{2}\) is subharmonic1 and accordingly satisfies a version of the mean-value inequality. To see this directly, multiply (8) by \(\beta G_{x}\) and integrate over \(B_{r}(p)\) \[\int_{B_{r}(p)}\ \beta G_{x}\left(\Delta_{B}\left|\phi\right|^{2}+\left|\nabla^{A }\phi\right|^{2}+\left|\left[\phi\wedge\phi\right]\right|^{2}+\left\langle \operatorname{Ric}\phi,\phi\right\rangle\right)=0\.\] Upon integration by parts in the first term, using that \(\beta G_{x}=0\) on \(\partial B_{r}(p)\), and assuming that \(x\) is an element of the geodesic shell on which \(\beta=1\), we find \[\left|\phi(x)\right|^{2}+\int_{B_{r}(p)}\ \left(g^{-1}(d\beta,dG_{x})+(\Delta_{B} \beta)G_{x}\right)\left|\phi\right|^{2}+\int_{B_{r}(p)}\ \beta G_{x}\left(\left|\nabla^{A}\phi\right|^{2}+\left| \left[\phi\wedge\phi\right]\right|^{2}+\left\langle\operatorname{Ric}\phi, \phi\right\rangle\right)=0\.\] Since \(\operatorname{Ric}\geq 0\), the last term on the left hand side is non-negative, so the equation provides the following estimate.
\[\left|\phi(x)\right|^{2}\leq\left|\int_{B_{r}(p)}\ \left(g^{-1}(dG_{x},d \beta)+G_{x}\Delta_{B}\beta\right)\left|\phi\right|^{2}\right|\] Assume \(x\in S(R+\frac{r-R}{8},r-\frac{r-R}{8})\), such that the distance from \(x\) to the support of \(d\beta\) and \(\Delta\beta\) is greater than or equal to \(\frac{r-R}{16}\). Furthermore, make \(r\) large enough that the previously mentioned bounds on \(G_{x}(y)\) hold for all points with distance \(d(x,y)\geq\frac{r-R}{16}\). Then, using the Cauchy-Schwarz inequality on the first term and the estimates for \(G\), \(\left|dG\right|\), \(\left|d\beta\right|\) and \(\left|\Delta_{B}\beta\right|\), we arrive at \[\left|\phi(x)\right|^{2}\leq\frac{32(1+\epsilon)c}{\operatorname{vol}X\left(r -R\right)^{n-k}}\int_{S(R,r)}\left|\phi\right|^{2}\.\] As a final step use that in the given domain of integration \(\kappa\) is \(\epsilon\)-almost non-decreasing. Hence, Corollary 3.3 with \(N\geq 0\) provides the estimate \(\kappa(t)\leq\left(\frac{r}{t}\right)^{\epsilon}\kappa(r)\) for all \(t\leq r\). The integral in the last equation is thus bounded from above by \[\int_{S(R,r)}\left|\phi\right|^{2}=\int_{R}^{r}dt\ t^{n-k-1}\kappa^{2}(t)\leq \frac{r^{n-k}}{n-k-2\epsilon}\ \kappa^{2}(r)\.\] Plugging this in, assuming \(r>2R\), and using \(\epsilon<1/2\) leads to \[\left|\phi(x)\right|^{2}\leq\frac{2^{n-k+6}c}{(n-k-1)\ \operatorname{vol}X} \kappa^{2}(r)\leq c_{0}^{2}\frac{\kappa^{2}(r)}{\operatorname{vol}X}\.\] We note that \(c_{0}\) only depends on the dimension of \(W^{n}\) and the dimension of the fibers at infinity. _ad (ii)_ Fix some radius \(t\in[R+\frac{6}{8}(r-R),R+\frac{7}{8}(r-R)]\) and consider the associated geodesic sphere \(\partial B_{t}\). We are interested in the volume of the subset of \(\partial B_{t}\) where \(\left|\phi\right|\) is small compared to the bound in _(i)_. Hence, write \(\Omega_{1}\subseteq\partial B_{t}\) for the points where \(\left|\phi(x)\right|\leq\frac{\kappa(r)}{2\sqrt{\operatorname{vol}X}}\).
Also introduce \(\Omega_{2}\subseteq\partial B_{t}\) to denote the set of points at which \(\left|\phi(x)\right|\leq(1+\sqrt{N(r)})\frac{\kappa(r)}{\sqrt{\operatorname{vol }X}}\). Since \(N\geq 0\), \(\Omega_{2}\) contains \(\Omega_{1}\). Split up the contributions to \(\kappa^{2}(t)\) that arise from integration over \(\Omega_{1}\), \(\Omega_{2}\setminus\Omega_{1}\), and their complement \(\Omega_{3}=\partial B_{t}\setminus\Omega_{2}\). \[\kappa^{2}(t)=\frac{1}{t^{n-k-1}}\left(\int_{\Omega_{1}}\big{|}\phi\big{|}^{2}+ \int_{\Omega_{2}\setminus\Omega_{1}}\big{|}\phi\big{|}^{2}+\int_{\Omega_{3}} \big{|}\phi\big{|}^{2}\right)\] On \(\Omega_{1}\) the integrand is bounded by \(\frac{\kappa^{2}(r)}{4\operatorname{vol}X}\), on \(\Omega_{2}\setminus\Omega_{1}\) we use \((1+\sqrt{N})^{2}\frac{\kappa^{2}(r)}{\operatorname{vol}X}\), and the integral over \(\Omega_{3}\) can't be larger than \(\kappa^{2}(t)\) in any case. With regard to the last of these bounds we now make use of the fact that \(\kappa\) is almost non-decreasing, which allows us to compare \(\kappa^{2}(t)\) to \(\kappa^{2}(r)\). To that end consider the derivative of \(\kappa^{2}\). \[\left.\frac{d\kappa^{2}}{dr}\right|_{\tilde{r}}=2\frac{N+D}{\tilde{r}}\kappa^{2}( \tilde{r})\geq-2\epsilon\frac{\tilde{r}^{1+2\epsilon}}{r^{2+2\epsilon}}\kappa ^{2}(r)\] For the estimate on the right hand side we have used on the one hand that \(N\geq 0\) and \(D\geq-\epsilon\), and on the other hand that \(N\leq 1\) such that Corollary 3.3 implies that \(\kappa^{2}(\tilde{r})\geq(\tilde{r}/r)^{2+2\epsilon}\kappa^{2}(r)\). Integration from \(t\) to \(r\) leads to \[\kappa^{2}(t)\leq\frac{\epsilon}{1+\epsilon}\left(2-\left(\frac{t}{r}\right)^ {2+2\epsilon}\right)\kappa^{2}(r)\;.\] Since \(t\leq r\), we may as well use the somewhat simpler statement \(\kappa^{2}(t)\leq 2\epsilon\kappa^{2}(r)\).
Plugging in the corresponding bounds on each of the \(\Omega_{i}\), writing \(\operatorname{vol}\Omega_{i}=f_{\Omega_{i}}\operatorname{vol}\partial B_{t}\), \(f_{\Omega_{i}}\leq 1\), and using \(\operatorname{vol}\partial B_{t}\leq(1+\epsilon)\operatorname{vol}Xt^{n-k-1}\) thus leads to \[\kappa^{2}(t) \leq\left(\frac{1}{4}f_{\Omega_{1}}+(1-f_{\Omega_{1}})\left(1+ \sqrt{N(r)}\ \right)^{2}+2\epsilon\right)(1+\epsilon)\kappa^{2}(r)\] \[\leq\left(1+2\epsilon-\frac{3}{4}f_{\Omega_{1}}\right)(1+\sqrt{N (r)}\ )^{2}(1+\epsilon)\kappa^{2}(r)\;,\] which can be rearranged to \[f_{\Omega_{1}}\leq\frac{4}{3}\left(1+2\epsilon-\frac{\kappa^{2}(t)}{\left(1+ \sqrt{N(r)}\ \right)^{2}(1+\epsilon)\kappa^{2}(r)}\right)\;. \tag{9}\] To make this expression more useful we'll now also need a lower bound for \(\kappa^{2}(t)\) in terms of \(\kappa^{2}(r)\). This can again be achieved by considering the derivative of \(\kappa^{2}\). In this case we observe that the following function is non-decreasing in \(\tilde{r}\). \[\tilde{r}\mapsto\tilde{r}^{n-k-2}\kappa^{2}(\tilde{r})N(\tilde{r})=\int_{B_{\tilde{r}}(p)}\left|\nabla^{A}\phi\right|^{2}+\left|[\phi\wedge\phi]\right|^{2}+\langle\operatorname{Ric}(\phi),\phi\rangle\] Moreover, since \(\kappa\) is \(\epsilon\)-almost non-decreasing for \(\tilde{r}>R\), we have \(|D|\leq N+\epsilon\). This yields the following upper bound for the derivative of \(\kappa^{2}\): \[\left.\frac{d\kappa^{2}}{dr}\right|_{\tilde{r}}=2\frac{N+D}{\tilde{r}}\kappa^{2}( \tilde{r})\leq 4\;\frac{r^{n-k-2}}{\tilde{r}^{n-k-1}}N(r)\kappa^{2}(r)+2 \epsilon\;\frac{r^{2\epsilon}}{\tilde{r}^{1+2\epsilon}}\kappa^{2}(r)\;.\] Integration from \(t\) to \(r\) (for notational simplicity we assume here that \(k\neq n-2\); the result holds similarly for \(k=n-2\), where the only difference is that upon integration the formulas contain logarithms) now leads to
\[\kappa^{2}(r)-\kappa^{2}(t)\leq\frac{4}{n-k-2}\left(\frac{r}{t}\right)^{n-k-2}N (r)\kappa^{2}(r)+\left(\left(\frac{r}{t}\right)^{2\epsilon}-1\right)\kappa^{2 }(r)\;.\] Recall that \(r/t<4/3\) and observe that thus \((r/t)^{2\epsilon}-1<\epsilon\) for any choice of \(\epsilon\in(0,1/2)\). It follows that \[\kappa^{2}(t)\geq\left(1-\epsilon-cN(r)\right)\kappa^{2}(r)\;,\] where the constant \(c\) only depends on \(n\) and \(k\). Plugging this lower bound for \(\kappa^{2}(t)\) into (9) and using that \(\epsilon,N<1\), we conclude that \[f_{\Omega_{1}}\leq c_{2}\epsilon+c_{3}\sqrt{N(r)}\;,\] where \(c_{2}\) and \(c_{3}\) only depend on \(n\) and \(k\). ## 8 Proof of Taubes' Dichotomy on ALX spaces We are now in a position to prove Theorem A. As advertised before, the arguments are in complete analogy to Taubes' original proof [14]. **Theorem 8.1**.: _Let \(W^{n}\) be an \(\mathit{ALX}_{k}\) gravitational instanton of dimension \(n\geq 2\) with asymptotic fibers of dimension \(k\leq n-1\) and fix a point \(p\in W^{n}\). Assume \((A,\phi)\) satisfies the second-order differential equation (1). Then_ _(i) either there is an_ \(a>0\) _such that_ \(\liminf_{r\to\infty}\frac{\kappa(r)}{r^{a}}>0\)_, (ii) or_ \([\phi\wedge\phi]=0\)_._ Proof.: It is sufficient to consider the case where \(\kappa\) is not asymptotically zero, since otherwise \(\kappa\) - and thus \(\phi\) - has compact support due to the (asymptotically) unique continuation property of Lemma 4.1 and in that case \([\phi\wedge\phi]\) must vanish. To see this, assume to the contrary that \([\phi\wedge\phi]\) is non-zero on some neighbourhood of a point \(p\). Since \(\phi\) is compactly supported, there is some radius \(R\) such that \(\phi\) vanishes on \(W^{n}\setminus B_{R}(p)\) and \(\kappa\) vanishes on \([R,\infty)\).
As a consequence, there is some positive constant \(c\) such that for any \(r>R\) we have \[\int_{B_{r}(p)}\nu\geq\int_{B_{r}(p)}\left|\left[\phi\wedge\phi\right]\right| ^{2}=c>0\;.\] Here, \(\nu=\left|\nabla^{A}\phi\right|^{2}+\left|\left[\phi\wedge\phi\right]\right| ^{2}+\left\langle\mathrm{Ric}(\phi),\phi\right\rangle\) denotes the integrand that defines \(N\). The derivative of \(\kappa^{2}\) at any \(r>R\) is given by \[\frac{d\kappa^{2}}{dr}=\frac{1}{r^{n-k-1}}\left(\int_{B_{r}(p)}\nu+\int_{ \partial B_{r}(p)}\left(\Delta r-\frac{n-k-1}{r}\right)\left|\phi\right|^{2} \right)\;.\] The second integral vanishes, since \(\phi=0\) at points with \(d(x,p)>R\). However, the first term is bounded from below by \(\frac{c}{r^{n-k-1}}>0\), which is in contradiction to the assumption that \(\kappa\) is constant (namely \(0\)) on all of \([R,\infty)\). By the same reasoning one also finds that \([\phi\wedge\phi_{v}]=0\) whenever \(\kappa_{v}\) has compact support. As a consequence it is sufficient to consider the components of \(\phi\) that are in the complement of the zero eigenspace of \(T\) at infinity (see the related discussion in section 6). Note in particular that when only a single component \(\phi_{v}\) is non-zero at infinity, then \([\phi\wedge\phi]=0\) everywhere. Hence, assume from now on that \(T\) acts on a vector space of dimension at least \(2\) and that \(\lambda>0\) on all of \((0,\infty)\). To prove the dichotomy stated in the theorem, assume that \(\kappa\) is not asymptotically bounded below by any positive power of \(r\). This means that for any small \(a>0\) (say, for example, small enough that \(a^{1/8n}<0.1\)) we can do the following: Set \(\epsilon=a/2\) and let \(R>1\) be a radius beyond which \(|D|\leq\epsilon\) and \(\operatorname{vol}B_{r}(p)\leq(1+\epsilon)\operatorname{vol}Xr^{n-k}\). Then we can find an arbitrarily large \(r_{1}\in[R,\infty)\) such that \(\kappa(r_{1})\leq\left(\frac{r_{1}}{R}\right)^{a-\epsilon}\kappa(R)\).
In particular we may choose some \(r_{1}\) that is larger than each of the four numbers \(4R\), \(\left(\kappa^{2}(R)/\lambda(R)\right)^{1/2a}R\), \(\kappa(R)^{-1/a}\), and \(R^{1/(1-100\sqrt{a})}\). We are then in the situation in which we can rely on Lemma 6.3. The arguments in the upcoming five parts show that this leads to a contradiction if \(N\neq 0\) and \(a\) is too small. **Part 1.** Recall that Lemma 6.3 provides the existence of a distinguished radius \(t\in[r_{1}^{1-100\sqrt{a}},r_{1}]\subseteq[R,r_{1}]\). In this part we collect our previous results and slightly expand on our knowledge about the eigenvalues of \(T\) at and below \(t\). Write \(u\) and \(v\) for unit eigenvectors associated to the _largest_ and _smallest_ eigenvalue of \(T(t)\), respectively. Recall that these eigenvalues coincide at \(t\) with the values of \(\kappa_{u}^{2}(t)\) and \(\kappa_{v}^{2}(t)\), so they satisfy \(\kappa_{v}^{2}(t)\leq\kappa_{u}^{2}(t)\). If \(\kappa_{v}^{2}(t)\neq\kappa_{u}^{2}(t)\) the eigenvectors \(v\) and \(u\) are guaranteed to be orthogonal. Otherwise \(T(t)\) is a multiple of the identity matrix and we choose an arbitrary pair of orthonormal vectors. Denote by \(\tilde{R}\) the larger of \(a^{\frac{1}{8n}}t\) and \(R\). Lemma 6.3 establishes that on the interval \(I:=[\tilde{R},t]\) the frequency functions \(N\) and \(N_{v}\) are bounded from above by \(a^{1/4}\) and provides associated lower bounds for \(\kappa\) and \(\kappa_{v}\). As we show now, analogous estimates hold for \(N_{u}\) and \(\kappa_{u}\). First observe that the largest eigenvalue satisfies \(\kappa_{u}^{2}(t)\geq\frac{1}{n}\kappa^{2}(t)\) since \(\kappa^{2}\) is the trace of \(T\). From this and the definition of \(N\) and its \(N_{u}\) version (cf. Proposition 3.1 and (5), respectively) it follows that \(N_{u}(t)\leq nN(t)\).
As a consequence \(N_{u}\leq na^{1/4}\) and, as long as we make sure that \(a^{1/4}<1/n\), a small variation of the second part of Proposition 3.4 finds that \(N_{u}\leq a^{1/8}\) on all of \(I\). Since \(N_{u}\) is bounded from above, we can now deduce bounds on \(\kappa_{u}\) as usual via Corollary 3.3. \[\kappa_{u}\geq a^{\frac{a^{1/8}}{8n}}\kappa_{u}(t)\;.\] In the current situation with \(\epsilon=a/2\) this bound and its analogue for \(\kappa_{v}\) can be simplified considerably. For that observe that \(a^{\epsilon}=a^{a/2}\) is certainly larger than \(1/2\), and similarly \(a^{\frac{a^{1/4}}{4n}}>1/2\) and \(a^{\frac{a^{1/8}}{8n}}>1/2\). Applying these observations to the bounds for \(\kappa_{u}\) and \(\kappa_{v}\) results in the main conclusions of this part. The following estimates hold on all of \(I=[\tilde{R},t]\): * \(N_{v}\leq a^{1/4}\) and \(\kappa_{v}\geq\frac{1}{4}\kappa_{v}(t)\) * \(N_{u}\leq a^{1/8}\) and \(\kappa_{u}\geq\frac{1}{4}\kappa_{u}(t)\) **Part 2.** The goal of this part is to show that there exist points in \(J=\left[\tilde{R}+\frac{6}{8}(t-\tilde{R}),\tilde{R}+\frac{7}{8}(t-\tilde{R})\right]\) for which the integrals that appear in \(N_{u}\) and \(N_{v}\) are both small. This interval is of significance, because it corresponds to radii that are contained in the geodesic shell that appears in item _(iii)_ of Lemma 7.1. Denote by \(\nu_{u}:=\left|\nabla^{A}\phi_{u}\right|^{2}+\left|\left[\phi\wedge\phi_{u}\right]\right|^{2}\) the integrand in \(N_{u}\) and analogously for \(\nu_{v}\). Consider the following sets of radii in \(J\).
\[J_{u}:=\left\{s\in J\;\middle|\;\int_{\partial B_{s}}\nu_{u}\leq\frac{1}{2\left|J\right|}\,t^{n-k-2}\,\kappa_{u}^{2}(t)\;a^{1/16}\right\} \tag{10}\] \[J_{v}:=\left\{s\in J\;\middle|\;\int_{\partial B_{s}}\nu_{v}\leq\frac{1}{2\left|J\right|}\,t^{n-k-2}\,\kappa_{v}^{2}(t)\;a^{1/16}\right\} \tag{11}\] Note for later that \(\left|J\right|=\left|I\right|/8\), where \(I=\left[\tilde{R},t\right]\), so the fact that \(a^{1/8n}t\leq\tilde{R}\) and using \(a^{1/8n}<0.5\) yields \(\left|J\right|\geq\frac{t}{16}\). The measure of \(J_{u}\) is bounded from below, since \[t^{n-k-2}\kappa_{u}^{2}(t)N_{u}(t)\geq\int_{J}ds\int_{\partial B_{s}}\nu_{u}\geq \left(\left|J\right|-\left|J_{u}\right|\right)\;\frac{1}{2\left|J\right|}\,t^{ n-k-2}\;\kappa_{u}^{2}(t)\;a^{1/16}\;,\] and similarly for \(J_{v}\). Since both \(N_{u}(t),N_{v}(t)\leq a^{1/8}\), we find \[\left|J_{u}\right|,\left|J_{v}\right|>\left(1-2a^{1/16}\right)\left|J\right|\;.\] Since \(2a^{1/16}<0.5\) (recall that \(a^{1/8n}<0.1\) and \(n\geq 2\) in any case), it follows that \(J_{u}\) and \(J_{v}\) must intersect. Hence, choose and fix from now on a point \(r\in J\) at which both (10) and (11) are satisfied. **Part 3.** As a next step we establish an \(L^{2}\)-bound for the function \(\operatorname{Tr}\phi_{u}\phi_{v}\) on \(\partial B_{r}\), where \(r\) is the fixed radius we found at the end of Part 2. For this we view \(\partial B_{r}\) with the induced metric as a compact Riemannian manifold, where we can rely on a Hölder-Sobolev inequality.
First note that the derivative of \(\operatorname{Tr}\phi_{u}\phi_{v}\) is bounded by \[\left|d\operatorname{Tr}\phi_{u}\phi_{v}\right|\leq\left|\nabla^{A}\phi_{u}\right|\,\left|\phi_{v}\right|+\left|\phi_{u}\right|\,\left|\nabla^{A}\phi_{v}\right|\;.\] Because \(r\) lies inside the geodesic shell of Lemma 7.1 we know that \(\left|\phi_{u}\right|\leq c_{0}\kappa_{u}(t)\) and \(\left|\phi_{v}\right|\leq c_{0}\kappa_{v}(t)\). In light of the fact that both (10) and (11) are satisfied at \(r\) (and using the Cauchy-Schwarz inequality), we find that \[\int_{\partial B_{r}}\left|d\operatorname{Tr}\phi_{u}\phi_{v}\right|^{2}\leq 16c_{0}^{2}t^{n-k-3}\kappa_{u}^{2}(t)\kappa_{v}^{2}(t)a^{1/16}\;.\] On the \(n-1\) dimensional compact manifold \(\partial B_{r}\) the Gagliardo-Nirenberg-Sobolev inequality holds, such that \[\left\|\operatorname{Tr}\phi_{u}\phi_{v}\right\|_{L^{q}(\partial B_{r})}\leq C\,\left\|d\operatorname{Tr}\phi_{u}\phi_{v}\right\|_{L^{2}(\partial B_{r})}\;,\] where \(\frac{1}{q}=\frac{1}{2}-\frac{1}{n-1}\) and the constant \(C\) only depends on \(n\). On the other hand, since \(\partial B_{r}\) has finite measure we can also use Hölder's inequality with \(\frac{1}{p}=\frac{1}{q}+\frac{1}{n-1}(=\frac{1}{2})\), such that we obtain \[\left\|\operatorname{Tr}\phi_{u}\phi_{v}\right\|_{L^{2}(\partial B_{r})}\leq\mathrm{meas}(\partial B_{r})^{\frac{1}{n-1}}\left\|\operatorname{Tr}\phi_{u}\phi_{v}\right\|_{L^{q}(\partial B_{r})}\leq C_{HS}\,r\,\left\|\operatorname{Tr}\phi_{u}\phi_{v}\right\|_{L^{q}(\partial B_{r})}\;,\] where we have used Bishop-Gromov's volume comparison Theorem 2.6 to estimate the area of the geodesic sphere and absorbed any radius-independent contributions into the Hölder-Sobolev constant \(C_{HS}\), which then still depends only on \(n\).
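The exponent bookkeeping in this Hölder step can be checked on a discrete measure space: with \(1/q=1/2-1/(n-1)\), Hölder's inequality gives \(\|f\|_{L^{2}}\leq\mathrm{meas}^{1/(n-1)}\|f\|_{L^{q}}\). The following randomized sanity check is purely illustrative and not part of the argument:

```python
import random

def lp_norm(vals, weights, p):
    """Discrete L^p norm with cell measures `weights`."""
    return sum(w * abs(v) ** p for v, w in zip(vals, weights)) ** (1.0 / p)

n = 4                                  # ambient dimension; the sphere has dimension n-1
q = 1.0 / (0.5 - 1.0 / (n - 1))        # 1/q = 1/2 - 1/(n-1), so q = 6 for n = 4

random.seed(0)
vals = [random.uniform(-1.0, 1.0) for _ in range(1000)]
weights = [random.uniform(0.0, 0.01) for _ in range(1000)]
meas = sum(weights)

# Hoelder with the exponent pair (q/2, (q/2)'): ||f||_2 <= meas^(1/2 - 1/q) ||f||_q,
# and by construction 1/2 - 1/q = 1/(n-1).
assert lp_norm(vals, weights, 2) <= meas ** (1.0 / (n - 1)) * lp_norm(vals, weights, q) + 1e-12
```

For the Kapustin-Witten case \(n=4\) the Sobolev exponent comes out as \(q=6\), the familiar value for a three-dimensional boundary sphere.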
More explicitly, we conclude that we have the following \(L^{2}\)-bound for \(\operatorname{Tr}\phi_{u}\phi_{v}\) over \(\partial B_{r}\): \[\int_{\partial B_{r}}\left|\operatorname{Tr}\phi_{u}\phi_{v}\right|^{2}\leq C_{HS}\,c_{0}^{2}t^{n-k-1}\kappa_{u}^{2}(t)\kappa_{v}^{2}(t)a^{1/16}. \tag{12}\] **Part 4.** Our next goal is to similarly determine an \(L^{2}\)-bound for the function \(\left|\phi_{u}\right|\cdot\left|\phi_{v}\right|\) on \(\partial B_{r}\), where \(\left|\phi_{u}\right|\) denotes pointwise application of the norm induced by the Killing form. It is a property of \(\mathfrak{su}(2)\) that \[\left|\left[\phi_{u},\phi_{v}\right]\right|^{2}=4\,\left|\phi_{u}\right|^{2}\,\left|\phi_{v}\right|^{2}-4\,\operatorname{Tr}\left(\phi_{u}\phi_{v}\right)^{2}\;.\] Moreover, since \(u\) and \(v\) are orthonormal, \(\left|\left[\phi\wedge\phi_{v}\right]\right|^{2}\geq\left|\left[\phi_{u},\phi_{v}\right]\right|^{2}\), so we can bound the following integral with the help of (11). \[\int_{\partial B_{r}}\left|\phi_{u}\right|^{2}\,\left|\phi_{v}\right|^{2}-\int_{\partial B_{r}}\left|\operatorname{Tr}(\phi_{u}\phi_{v})\right|^{2}=\frac{1}{4}\int_{\partial B_{r}}\left|\left[\phi_{u},\phi_{v}\right]\right|^{2}\leq\frac{1}{4}\int_{\partial B_{r}}\nu_{v}\leq 2\,t^{n-k-3}\,\kappa_{v}^{2}(t)\,a^{1/16}\] Together with the main result of Part 3 in equation (12) this leads to \[\int_{\partial B_{r}}\left|\phi_{u}\right|^{2}\,\left|\phi_{v}\right|^{2}\leq\left(C_{HS}+\frac{2}{c_{0}^{2}t^{2}\kappa_{u}^{2}(t)}\right)c_{0}^{2}t^{n-k-1}\ \kappa_{u}^{2}(t)\ \kappa_{v}^{2}(t)\ a^{1/16}\;.\] Observe that things have been set up in such a way that \(t^{2}\kappa_{u}^{2}(t)>1\): First, \(\kappa_{u}\) is almost non-decreasing, so \(\kappa_{u}^{2}(t)\geq(t/R)^{2\epsilon}\kappa_{u}^{2}(R)\). Second, we know that \(\kappa_{u}^{2}(R)\geq\lambda(R)\) since the latter is the smallest eigenvalue of \(T(R)\).
Third, we have previously chosen \(r_{1}\) large enough that \(\lambda(R)>(R/r_{1})^{2a}\kappa^{2}(R)\). Fourth, \(t\) is larger than \(r_{1}^{1-100\sqrt{a}}\) while \(R>1\). Plugging everything together yields \[t^{2}\kappa_{u}^{2}(t)\geq r_{1}^{2(1-100\sqrt{a}-4a)}\kappa^{2}(r_{1})>r_{1}^ {2a}\kappa^{2}(r_{1})\;,\] where the last step follows via \(a^{1/4n}<0.1\) and \(n\geq 2\). Finally, recall that we have also explicitly chosen \(r_{1}\) to be large enough that the rightmost expression is larger than \(1\). The upshot of this part is that there exists a constant \(C>1\) that only depends on \(n\) and \(k\), such that \[\int_{\partial B_{r}}\left|\phi_{u}\right|^{2}\,\left|\phi_{v}\right|^{2}\leq Ct^{n-k-1}\ \kappa_{u}^{2}(t)\ \kappa_{v}^{2}(t)\ a^{1/16}. \tag{13}\] From now on we allow the value of \(C\) to increase from one equation to the next. **Part 5.** In this final part, we combine the results of the previous four parts with item _(ii)_ of Lemma 7.1. Thus, write \(\Omega_{1}\) for the subset of \(\partial B_{r}\) where \(\left|\phi_{u}\right|\leq\frac{\kappa_{u}(t)}{2\,\mathrm{vol}\,X}\). The inequality in (13) remains true if we restrict the domain of integration to \(\partial B_{r}\setminus\Omega_{1}\), such that \[\frac{\kappa_{u}^{2}(t)}{4\operatorname{vol}X}\int_{\partial B_{r} \setminus\Omega_{1}}\left|\phi_{v}\right|^{2}\leq\int_{\partial B_{r} \setminus\Omega_{1}}\left|\phi_{u}\right|^{2}\left|\phi_{v}\right|^{2}\leq C^ {2}t^{n-k-1}\;\kappa_{u}^{2}(t)\;\kappa_{v}^{2}(t)\;a^{1/16}\;.\] More concisely, this leads to the inequality \[\int_{\partial B_{r}\setminus\Omega_{1}}\left|\phi_{v}\right| ^{2}\leq C\operatorname{vol}Xt^{n-k-1}\kappa_{v}^{2}(t)a^{1/16}\;.\] Meanwhile, item \((i)\) of Lemma 7.1 provides the upper bound \(\left|\phi_{v}\right|^{2}\leq c_{0}^{2}\kappa_{v}^{2}(t)\).
Writing \(\operatorname{vol}\Omega_{1}=f_{\Omega_{1}}\operatorname{vol}\partial B_{r}\) this leads to \[\int_{\Omega_{1}}\left|\phi_{v}\right|^{2}\leq f_{\Omega_{1}}( 1+\epsilon)\operatorname{vol}X\;r^{n-k-1}c_{0}^{2}\kappa_{v}^{2}(t)\;.\] Item \((ii)\) of Lemma 7.1 states that \(f_{\Omega_{1}}\leq c_{1}\epsilon+c_{2}\sqrt{N_{u}(t)}\). Recall from Part 1 that \(N_{u}(t)\leq a^{1/8}\) while \(\epsilon=a/2\), such that \(f_{\Omega_{1}}\) is bounded by some multiple of \(a^{1/16}\). It follows that the sum of the two integrals satisfies \[\int_{\partial B_{r}}\left|\phi_{v}\right|^{2}\leq Ct^{n-k-1} \kappa_{v}^{2}(t)a^{1/16}\;.\] This is equivalent to the statement that \(\kappa_{v}^{2}(r)\leq Ca^{1/16}\kappa_{v}^{2}(t)\). Finally, combining this with the bound \(\kappa_{v}^{2}(r)\geq\frac{1}{4}\kappa_{v}^{2}(t)\) from Part 1 culminates in the inequality \[\kappa_{v}^{2}(t)\leq 4Ca^{1/16}\kappa_{v}^{2}(t)\;,\] which is absurd, as we are free to choose \(a\) arbitrarily small and in particular such that \(a^{1/16}<\frac{1}{4C}\). ## 9 Proof of Taubes' Dichotomy for Kapustin-Witten Solutions In this section we prove Theorem B, which enhances Theorem 8.1 for solutions of the Kapustin-Witten equations on four-manifolds. We again closely follow Taubes' arguments, who proved an analogous statement in case the four-manifold is Euclidean space \(\mathbb{R}^{4}\). **Theorem 9.1**.: _Let \(W^{4}\) be an \(ALX\) gravitational instanton of dimension \(4\), with asymptotic fibers of dimension \(k\leq 3\), and such that the sectional curvature is bounded from below. Assume \((A,\phi)\) are solutions of the \(\theta\)-Kapustin-Witten equations and if \(\theta\not\equiv 0,\pi\) also assume that \(\int_{W^{4}}\left|F_{A}\right|^{2}<\infty\), then_ _(i) either there is an_ \(a>0\) _such that_ \(\liminf_{r\to\infty}\frac{\kappa(r)}{r^{a}}>0\)_,_
_(ii) or_ \([\phi\wedge\phi]=0\)_, \(\nabla^{A}\phi=0\), and_ \(A\) _is self-dual if_ \(\theta=0\)_, flat if_ \(\theta\in(0,\pi)\)_, and anti-self-dual if_ \(\theta=\pi\)_._ Proof.: Since solutions of the Kapustin-Witten equations also satisfy equation (1), the dichotomy of Theorem 8.1 holds. It remains to show that in the case where \([\phi\wedge\phi]\) is identically zero, also \(\nabla^{A}\phi\) vanishes and \(A\) is either (anti-)self-dual or flat as stated. Hence, assume from now on that \([\phi\wedge\phi]=0\). At points where \(\phi\) is non-zero the Higgs field can then be written as \(\phi=\omega\otimes\sigma\), where \(\omega\in\Omega^{1}(M)\) and \(\sigma\in\Gamma(M,\mathrm{ad}\,E)\), normalized such that \(|\sigma|=1\). Consider first the case where \(\theta=0\) (this also covers the case \(\theta=\pi\) by a reversal of orientation). The Kapustin-Witten equations then state on the one hand \(F_{A}^{+}=0\), so \(A\) is anti-self-dual, and on the other hand \((d_{A}\phi)^{-}=0\) and \(d_{A}\star\phi=0\). The latter two equations can only be satisfied if the constituents of \(\phi\) satisfy \(\nabla^{A}\sigma=0\) and \((d\omega)^{-}=0=d\star\omega\). In particular \(\sigma\) is guaranteed to be covariantly constant at points where \(\phi\neq 0\). The zero locus of \(\phi\) coincides with the zero locus of \(\omega\). Since the one-form \(\omega\) satisfies the first order differential equations from above, it is an example of what Taubes calls a \(\mathbb{Z}/2\) harmonic spinor in [14]. In that article he investigates the zero locus of such \(\mathbb{Z}/2\) harmonic spinors in general and Theorem 1.3 of [14] states that the zero locus has Hausdorff dimension \(2\). The relevance for us is that the complement of the zeroes of \(\omega\) in any given ball in \(W^{4}\) is path connected. This means that \(\sigma\) can be defined at points where \(\omega=0\) by parallel transport along paths where \(\omega\) is non-zero.
Since \(A\) is smooth and \(\sigma\) is \(\nabla^{A}\)-parallel, parallel transport along two different paths results in the same value. Since \(\sigma\) is defined everywhere, the same is true for \(\omega=\frac{1}{2}\operatorname{Tr}(\phi\sigma)\). The elliptic differential equations \((d\omega)^{-}=0=d\star\omega\) imply that \(|\omega|^{2}\) is subharmonic. Thus, by a classical result of Yau [15, Theorem 3 & Appendix (ii)], either \(|\omega|^{2}\) is constant or \(\lim_{r\to\infty}r^{-1}\int_{B_{r}}|\omega|^{2}>0\). The latter is precluded by our assumptions and we conclude that \(\nabla^{A}\phi=0\). In the case that \(\theta\neq 0\pmod{\pi}\), neither of the terms \((d_{A}\phi)^{\pm}\) vanishes automatically. However, if we additionally assume that \(F_{A}\) is square-integrable, we can employ Uhlenbeck's compactness theorem for (anti-)self-dual connections to deduce that \(A\) must be flat. To see this, first note that we can find a coefficient \(c(\theta)\) such that the connection \(A+c(\theta)\phi\) satisfies the \(\pi/2\) version of the Kapustin-Witten equations, so it is sufficient to consider the case \(\theta=\pi/2\). With that in mind and since \([\phi\wedge\phi]=0\), the Kapustin-Witten equations state that \(F_{A}=\star d_{A}\phi\). As a consequence the connection \(\hat{A}:=A+\phi\) is self-dual. Since \(\int_{W^{4}}|F_{\hat{A}}|^{2}<\infty\), this connection \(\hat{A}\) is the pullback of a smooth, regular connection on the one-point compactification of \(W^{4}\), by Uhlenbeck's compactness theorem. It follows that at large radius the field strength falls off faster than the volume of geodesic balls grows, i.e. \(|F_{\hat{A}}|\leq\frac{c}{r^{4-k}}\).
Keeping this in mind, note that \(\nabla^{\hat{A}}\phi=\nabla^{A}\phi\), such that \(F_{\hat{A}}=2(d_{\hat{A}}\phi)^{+}\) and \[\int_{B_{r}}|F_{\hat{A}}|^{2}=\int_{B_{r}}\operatorname{Tr}F_{\hat{A}} \wedge d_{\hat{A}}\phi=\int_{\partial B_{r}}\operatorname{Tr}F_{\hat{A}} \wedge\phi\;,\] where we have used Stokes' theorem and the Bianchi identity in the last equality. With the bound on \(|F_{\hat{A}}|\) the integral on the right is bounded by a multiple of \(\frac{\kappa}{r}\), which approaches \(0\) for \(r\to\infty\), so \(\hat{A}\) is flat. From here we are back in the situation where \((d_{A}\phi)^{+}=0=d_{A}\star\phi\) and the same argument as before leads to \(\nabla^{A}\phi=0\).
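The power counting behind this final estimate can be spelled out. Assuming the scalings used above (area of \(\partial B_{r}\) of order \(r^{3-k}\), \(|F_{\hat{A}}|\) of order \(r^{-(4-k)}\), and, via Cauchy-Schwarz, \(\int_{\partial B_{r}}|\phi|\) of order \(r^{3-k}\kappa(r)\)), a small bookkeeping script confirms that the boundary term scales like \(\kappa(r)/r\) for every fiber dimension \(k\):

```python
def boundary_term_exponent(k):
    """Power of r in  sup|F| * \\int_{\\partial B_r}|phi|  on an ALX_k space.

    Uses |F| ~ r^-(4-k) and, via Cauchy-Schwarz,
    \\int_{\\partial B_r}|phi| <= area^(1/2) * (\\int |phi|^2)^(1/2)
                              ~ (r^(3-k))^(1/2) * (r^(3-k) kappa^2)^(1/2)
                              =  r^(3-k) * kappa.
    Returns the power of r multiplying kappa(r)."""
    area_exp = 3 - k              # area(\partial B_r) ~ r^(3-k)
    phi_int_exp = area_exp        # \int |phi| ~ r^(3-k) * kappa
    field_exp = -(4 - k)          # |F| ~ r^-(4-k)
    return field_exp + phi_int_exp

for k in range(0, 4):             # asymptotic fiber dimension k = 0, ..., 3
    assert boundary_term_exponent(k) == -1   # i.e. the term is O(kappa(r)/r)
```

The fiber dimension drops out entirely: the extra decay of \(|F_{\hat{A}}|\) always beats the growth of the sphere by exactly one power of \(r\).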
2302.13786
Mathematical modeling for the synchronization of two interacting active rotors
We investigate the synchronization of active rotors. A rotor is composed of a free-rotating arm with a particle that releases a surface-active chemical compound. It exhibits self-rotation due to the surface tension gradient originating from the concentration field of the surface-active compound released from the rotor. In a system with two active rotors, they should interact through the concentration field. Thus, the interaction between them does not depend only on the instantaneous positions but also on the dynamics of the concentration field. By numerical simulations, we show that in-phase and anti-phase synchronizations occur depending on the distance between the two rotors. The stability of the synchronization mode is analyzed based on phase reduction theorem through the calculation of the concentration field in the co-rotating frame with the active rotor. We also confirm that the numerical results meet the prediction by theoretical analyses.
Hiroyuki Kitahata, Yuki Koyano
2023-02-27T14:06:53Z
http://arxiv.org/abs/2302.13786v2
# Mathematical modeling for the synchronization of two interacting active rotors ###### Abstract We investigate the synchronization of active rotors. A rotor is composed of a free-rotating arm with a particle that releases a surface-active chemical compound. It exhibits self-rotation due to the surface tension gradient originating from the concentration field of the surface-active compound released from the rotor. In a system with two active rotors, they should interact through the concentration field. Thus, the interaction between them does not depend only on the instantaneous positions but also on the dynamics of the concentration field. By numerical simulations, we show that in-phase and anti-phase synchronizations occur depending on the distance between the two rotors. The stability of the synchronization mode is analyzed based on the phase reduction theorem through the calculation of the concentration field in the frame co-rotating with the active rotor. We also confirm that the numerical results meet the prediction by the theoretical analyses. ## I Introduction Self-propelled particles have been intensively investigated for decades both experimentally and theoretically [1; 2]. Motions of living organisms like cells, bacteria, fish, birds, and insects attract much interest as examples of self-propulsion. For such motions, several mechanisms have been suggested; e.g., hydrodynamic interaction due to surface deformation [3; 4], momentum exchange with the substrate [5; 6], and tactic motion [7; 8; 9]. As for tactic motions, they are classified into several types such as chemotaxis, phototaxis, mechanotaxis, geotaxis, and so on. Here, we focus on chemotactic motion, in which the direction of motion is determined by the concentration gradient. For positive and negative chemotaxis, the object moves in the positive and negative directions of the gradient of the concentration field, respectively.
If the object releases a chemical compound around itself and exhibits negative chemotaxis, then the rest state where the object stands still can become unstable, since the object is likely to move away from the original position with higher concentration. The motion can be sustained since the particle motion can maintain the anisotropy of the concentration field around the particle. This is one of the mechanisms for self-propulsion with taxis [10]. An experimental example of such self-propulsion with negative chemotaxis is a camphor particle floating at a water surface. The camphor particle releases camphor molecules at the water surface, and the molecules reduce the surface tension. The object is pulled toward the region with higher surface tension reflecting lower camphor concentration, which can be understood as negative chemotaxis [11; 12; 13; 14; 15; 16]. Several types of active rotors, or self-propelled rotors, were recently reported using camphor or some other chemicals with surface activity [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. For example, an elliptic camphor paper can rotate around an axis penetrating a hole at the center of the paper [17; 18]. We also reported a camphor rotor, in which a plastic plate with two or more camphor particles can freely rotate around an axis penetrating the center of the plate [19; 20]. There have been reported some other types of rotors using camphor or other surface-active compounds. Sharma and coauthors investigated a camphor rotor in which a plastic plate with a camphor particle at one tip can rotate around the other tip, which is pinned [21]. They investigated the interaction between multiple rotors and reported many interesting states like synchronous, quasi-periodic, and chaotic states [22; 23; 24; 25].
As theoretical approaches to camphor particle motion, a reaction-diffusion equation for the camphor concentration coupled with a Newtonian equation for the camphor particle motion has often been adopted [14; 15; 30; 31]. We previously discussed the bifurcation of a camphor rotor by considering the time evolution of the camphor concentration coupled with the Newtonian equation for the rotation of the arm attached with camphor particles [19; 20]. We theoretically derived a simplified ordinary differential equation for the angle of the camphor rotor using the perturbation method, and showed that the self-propelled rotation emerges through a supercritical pitchfork bifurcation with the friction coefficient as a bifurcation parameter. Sharma and coauthors recently reported the results of numerical simulations based on a simple mathematical model. In their model, they do not directly consider the time evolution of the concentration field but assume that a camphor rotor has an intrinsic angular velocity; i.e., multiple rotors interact through a Yukawa-type potential depending on their relative positions [21]. Their model could reproduce the experimental results to some extent, but systematic analyses are missing. We suppose that the dynamics of the concentration field is important for the motion of an object composed of a surface-active compound like camphor, and that an instantaneous interaction is insufficient for describing the detailed dynamics. Thus, we here discuss the dynamics of camphor rotors including the time evolution of the concentration field. Our manuscript is organized as follows: We first describe the mathematical model for the active rotors in Sec. II, and show the results of numerical simulation based on the model in Sec. III. Then, we perform theoretical analyses using phase reduction. The procedure and results of the analyses are described in Sec. IV.
Then, we discuss the validity of the phase description by directly calculating the phase response function using numerical simulation in Sec. V. Finally, we summarize the results and show possible extensions of our model in Sec. VI. ## II Mathematical model We construct a mathematical model for a system with symmetric active rotors, which can rotate in either the clockwise or counterclockwise direction, based on the previous studies [14; 15; 31]. We mainly consider a two-rotor system but also consider a single-rotor one to clarify the characteristics of a composing rotor. Our model comprises the time evolution equations for the configurations of the rotors with particles (camphor particles) that release a surface-active compound, and for the concentration of the surface-active compound (camphor molecules). The \(i\)th particle can move along a circle with a radius of \(a\) centered at \(\mathbf{\ell}_{i}\). Therefore, the particle position can be described using only one variable \(\phi_{i}\), which is called the phase of the \(i\)th particle. We define the origin and positive direction of each phase as schematically shown in Fig. 1. For a single-rotor system as in Fig. 1(a), we set \(\mathbf{\ell}_{1}=\mathbf{0}\) and thus we can express the position \(\mathbf{r}_{1}\) of the particle as \[\mathbf{r}_{1}=\mathbf{\ell}_{1}+a\mathbf{e}(\phi_{1})=a\mathbf{e}(\phi_{1}). \tag{1}\] For a two-rotor system as in Figs. 1(b) and (c), we set \(\mathbf{\ell}_{1}=-(L/2)\mathbf{e}_{x}\) and \(\mathbf{\ell}_{2}=(L/2)\mathbf{e}_{x}\). To make the expressions for the time evolution of the two phases symmetric between the first and second rotors, the origins of the phases are set so that \(\phi_{i}=0\) corresponds to the direction toward the center of the other rotor, and the positive directions of the phases are set as the rotation direction of each rotor.
That is to say, the positions of the two particles \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) are expressed using the phases \(\phi_{1}\) and \(\phi_{2}\) as \[\mathbf{r}_{1}=\mathbf{\ell}_{1}+a\mathbf{e}(\phi_{1})=-\frac{L}{2}\mathbf{e}_{x}+a\mathbf{e}(\phi _{1}), \tag{2}\] \[\mathbf{r}_{2}=\mathbf{\ell}_{2}-a\mathbf{e}(\pm\phi_{2})=\frac{L}{2}\mathbf{e}_{x}-a\mathbf{e}( \pm\phi_{2}). \tag{3}\] In Eq. (3), the positive and negative signs correspond to rotation in the same direction (Fig. 1(b)) and in the opposite direction (Fig. 1(c)), respectively. Here, we set the Cartesian coordinates so that the origin coincides with the midpoint of the centers of the two rotors and the line connecting the centers of the two rotors coincides with the \(x\)-axis. The unit vectors along the \(x\)- and \(y\)-axes are denoted by \(\mathbf{e}_{x}\) and \(\mathbf{e}_{y}\), respectively, and \(\mathbf{e}(\theta)\) is a unit vector in the direction of \(\theta\), i.e., \(\mathbf{e}(\theta)=\cos\theta\,\mathbf{e}_{x}+\sin\theta\,\mathbf{e}_{y}\). The time evolution equation for each rotor is obtained from the Newtonian equation in the overdamped limit. That is to say, the equation is written as \[\eta A\frac{d\mathbf{r}_{i}}{dt}=\eta A\mathbf{v}_{i}=\mathbf{F}_{u,i}+\mathbf{F}_{c,i}, \tag{4}\] where \(\mathbf{v}_{i}\) is the velocity of the particle composing the \(i\)th rotor, \(\eta\) is the friction coefficient per area, and \(A\) is the area of the particle. The velocity \(\mathbf{v}_{i}\) is expressed using the phase \(\phi_{i}\) as \[\mathbf{v}_{1}=a\frac{d\phi_{1}}{dt}\mathbf{e}\left(\phi_{1}+\frac{\pi}{2}\right), \tag{5}\] \[\mathbf{v}_{2}=a\frac{d\phi_{2}}{dt}\mathbf{e}\left(\phi_{2}\pm\frac{\pi}{2}\right), \tag{6}\] where the positive and negative signs correspond to the cases with the same and opposite rotation directions, respectively.
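For concreteness, the parameterization of Eqs. (2) and (3) can be coded directly. The sketch below uses illustrative values of \(L\) and \(a\) (not taken from the paper's simulations) and checks that each particle stays on its circle of radius \(a\) and that \(\phi_{i}=0\) points toward the center of the other rotor:

```python
import math

def e(theta):
    """Unit vector e(theta) = (cos(theta), sin(theta))."""
    return (math.cos(theta), math.sin(theta))

def rotor_positions(phi1, phi2, L=5.0, a=1.0, same_direction=True):
    """Particle positions r1, r2 of the two rotors, following Eqs. (2)-(3).
    Rotor centers: l1 = (-L/2, 0), l2 = (L/2, 0); the sign in Eq. (3) is +
    for rotation in the same direction and - for opposite directions."""
    s = 1.0 if same_direction else -1.0
    e1, e2 = e(phi1), e(s * phi2)
    r1 = (-L / 2 + a * e1[0], a * e1[1])
    r2 = (L / 2 - a * e2[0], -a * e2[1])
    return r1, r2

# phi_i = 0 points each particle toward the center of the other rotor:
r1, r2 = rotor_positions(0.0, 0.0)
assert r1 == (-1.5, 0.0) and r2 == (1.5, 0.0)
# each particle stays on its circle of radius a around its own center:
for phi1, phi2 in [(0.3, 1.2), (2.0, -0.7)]:
    r1, r2 = rotor_positions(phi1, phi2, same_direction=False)
    assert abs(math.hypot(r1[0] + 2.5, r1[1]) - 1.0) < 1e-12
    assert abs(math.hypot(r2[0] - 2.5, r2[1]) - 1.0) < 1e-12
```

The same two functions can serve as the geometric layer of a numerical integration of Eqs. (9) and (10) once the concentration field is available.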
\(\mathbf{F}_{u,i}\) and \(\mathbf{F}_{c,i}\) are the force exerted on the \(i\)th particle due to the surface tension gradient and the constraint force along \(\mathbf{e}(\phi_{i})\), respectively. The force due to the surface tension gradient is expressed as an area integral, \[\mathbf{F}_{u,i}= \int_{\partial\Omega_{i}}\left[-\Gamma u\mathbf{n}\right]d\ell\] \[= \iint_{\Omega_{i}}\left[-\Gamma\nabla u\right]dA\] \[= -\Gamma\iint_{\mathbb{R}^{2}}\left(\nabla u\right)S\left(\mathbf{r}-\mathbf{r}_{i}\right)dA, \tag{7}\] where \(u\) is the concentration field of the surface-active chemical compound, \(\Omega_{i}\) is the region of the particle composing the \(i\)th rotor, \(\partial\Omega_{i}\) is the periphery of \(\Omega_{i}\), and \(\mathbf{n}\) is the outward normal unit vector at the particle periphery. \(dA\) and \(d\ell\) are the area and line elements. The surface tension should be a decreasing function of the concentration of the surface-active compound. Here, we assume a linear relation between the surface tension and the concentration, where the proportionality constant is \(-\Gamma\). \(S(\cdot)\) is a smoothed (\(C^{1}\)-class) level function.

Figure 1: Schematic illustration for the active rotors and definition of phases. (a) A single rotor. (b) Two coupled rotors which rotate in the same direction (counterclockwise). (c) Two coupled rotors which rotate in different directions. The first (left) one and second (right) one rotate counterclockwise and clockwise, respectively.
In the numerical simulation, we adopt a smoothed step function which has values close to unity inside the particle and values close to zero outside of it, i.e., \[S(\mathbf{\ell})=\left\{\begin{array}{ll}1,&|\mathbf{\ell}|\leq R-\varepsilon,\\ 1-\left(|\mathbf{\ell}|-R+\varepsilon\right)^{2}/\left(2\varepsilon^{2}\right),&R-\varepsilon<|\mathbf{\ell}|\leq R,\\ \left(|\mathbf{\ell}|-R-\varepsilon\right)^{2}/\left(2\varepsilon^{2}\right),&R<|\mathbf{\ell}|<R+\varepsilon,\\ 0,&|\mathbf{\ell}|\geq R+\varepsilon,\end{array}\right. \tag{8}\] where \(R\) is the radius of a particle of the surface-active compound, and \(\varepsilon\) is a smoothing factor. Then, we obtain the evolution equations by taking the vector product of \(a\mathbf{e}(\phi_{i})\) with Eq. (4): \[\eta Aa^{2}\frac{d\phi_{1}}{dt}= \left(\mathbf{r}_{1}-\mathbf{\ell}_{1}\right)\times\mathbf{F}_{1}\] \[= a\mathbf{e}(\phi_{1})\times\left(\mathbf{F}_{u,1}+\mathbf{F}_{c,1}\right)\] \[= a\mathbf{e}\left(\phi_{1}+\frac{\pi}{2}\right)\cdot\mathbf{F}_{u,1}, \tag{9}\] and \[\eta Aa^{2}\frac{d\phi_{2}}{dt}= \pm\left(\mathbf{r}_{2}-\mathbf{\ell}_{2}\right)\times\mathbf{F}_{2}\] \[= \mp a\mathbf{e}(\pm\phi_{2})\times\left(\mathbf{F}_{u,2}+\mathbf{F}_{c,2}\right)\] \[= \mp a\mathbf{e}\left(\pm\phi_{2}+\frac{\pi}{2}\right)\cdot\mathbf{F}_{u,2}\] \[= a\mathbf{e}\left(\pm\left(\phi_{2}-\frac{\pi}{2}\right)\right)\cdot\mathbf{F}_{u,2}. \tag{10}\] Here, the upper and lower signs correspond to the same and opposite rotation directions, respectively. The operator "\(\times\)" denotes the vector product in two dimensions, i.e., \(\mathbf{\alpha}\times\mathbf{\beta}=\alpha_{x}\beta_{y}-\alpha_{y}\beta_{x}\) for \(\mathbf{\alpha}=\alpha_{x}\mathbf{e}_{x}+\alpha_{y}\mathbf{e}_{y}\) and \(\mathbf{\beta}=\beta_{x}\mathbf{e}_{x}+\beta_{y}\mathbf{e}_{y}\).
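The piecewise profile of Eq. (8) can be implemented directly; a minimal sketch (with the simulation values \(R=0.1\) and \(\varepsilon=0.025\) used below as defaults) is:

```python
def smoothed_step(dist, R=0.1, eps=0.025):
    """Smoothed step function S of Eq. (8), as a function of |l|.

    Close to unity inside the particle, close to zero outside,
    with a C^1 quadratic blend of width 2*eps around |l| = R.
    """
    d = abs(dist)
    if d <= R - eps:
        return 1.0
    if d <= R:
        return 1.0 - (d - R + eps) ** 2 / (2 * eps ** 2)
    if d < R + eps:
        return (d - R - eps) ** 2 / (2 * eps ** 2)
    return 0.0
```

The two quadratic branches meet at \(|\mathbf{\ell}|=R\) with the common value \(1/2\) and the common slope \(-1/\varepsilon\), which is what makes \(S\) a \(C^{1}\)-class function.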
In the calculation, we used the fact that the constraint force \(\mathbf{F}_{c,1}\) is parallel to \(\mathbf{e}(\phi_{1})\) and thus \(\mathbf{F}_{c,1}\times\mathbf{e}(\phi_{1})=0\). We also used the fact that \(\mathbf{F}_{c,2}\) is parallel to \(\mathbf{e}(\pm\phi_{2})\) and thus \(\mathbf{F}_{c,2}\times\mathbf{e}(\pm\phi_{2})=0\). The dynamics of the concentration field is described as \[\frac{\partial u}{\partial t}=\nabla^{2}u-u+\frac{1}{A}\sum_{i=1}^{N}S(\mathbf{r}-\mathbf{r}_{i}), \tag{11}\] where the first, second, and third terms on the right-hand side correspond to the diffusion, evaporation, and supply of the surface-active chemical compound. \(S(\mathbf{r}-\mathbf{r}_{i})/A\) denotes the supply of the surface-active compound from the \(i\)th particle located at \(\mathbf{r}_{i}\), and \(N\) is 1 for a single-rotor system and 2 for a two-coupled-rotor system. It should be noted that the equations are written with dimensionless variables. The length, time, and concentration are normalized with the diffusion length \(\sqrt{D/\kappa}\), the characteristic time of sublimation \(1/\kappa\), and the ratio between the supply rate and sublimation rate, \(f/\kappa\). Here, \(D\) is the effective diffusion constant of the surface-active chemical compound [32; 33], \(\kappa\) is the sublimation rate, and \(f\) is the supply rate of the compound for each disk.

## III Numerical simulation

Numerical simulation was performed based on the model in Sec. II. We adopted the Euler method for the dynamics of the rotor in Eqs. (9) and (10) and an explicit method for the dynamics of the concentration in Eq. (11). The calculation was performed in the region with \(-X/2\leq x\leq X/2\) and \(-Y/2\leq y\leq Y/2\). The Robin boundary condition \(\nabla u\cdot\mathbf{n}_{b}+u=0\) was adopted [34], where \(\mathbf{n}_{b}\) is the outward normal unit vector at the boundary. The initial condition was set as \(u=0\) in the whole calculation region.
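A single explicit time step of the dimensionless reaction-diffusion equation (11) can be sketched as follows. This is a minimal illustration on a periodic grid (the paper instead uses the Robin boundary condition; the names are ours):

```python
import numpy as np

def step_concentration(u, source, dx, dt):
    """One explicit Euler step of du/dt = laplacian(u) - u + source, Eq. (11).

    The five-point Laplacian is built with np.roll, i.e., periodic
    boundaries; stability requires dt <= dx**2 / 4.
    """
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
           np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4 * u) / dx ** 2
    return u + dt * (lap - u + source)
```

With a spatially uniform unit source, iterating from \(u=0\) reproduces the balance between supply and evaporation, \(u(t)=1-e^{-t}\), approaching \(u=1\).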
In order to stabilize the rotation direction, we fixed \(d\phi_{i}/dt=20\) for \(0\leq t\leq 1\). \(\phi_{1}\) and \(\phi_{2}\) at \(t=0\) were set to be 0 and \(\pi/2\), respectively. The parameters were fixed as \(\Gamma=1\), \(R=0.1\), \(a=0.2\), and \(\varepsilon=0.025\). The spatial mesh and time step were \(\Delta x=0.025\) and \(\Delta t=0.0001\). The calculation region was \(X=16\) and \(Y=10\). First, we performed numerical simulation for a single rotor. In Fig. 2(a), we show the time series of \(d\phi_{1}/dt\) for \(\eta=0.05\), \(0.1\), \(0.15\), and \(0.2\). For \(\eta=0.05\), \(0.1\), and \(0.15\), \(d\phi_{1}/dt\) converged to a finite positive value, while \(d\phi_{1}/dt\) decayed to zero for \(\eta=0.2\). We also confirmed that \(d\phi_{1}/dt\) reached a steady value by \(t=10\). Therefore, we defined the stable angular velocity \(\omega\) as \(d\phi_{1}/dt\) at \(t=100\). In Fig. 2(b), the plot of \(\omega\) against the friction coefficient \(\eta\) is shown. The results suggest that a single rotor exhibited a stable rotation at a finite constant angular velocity for \(\eta<\eta_{c}\), while it stopped for \(\eta>\eta_{c}\), where \(\eta_{c}\simeq 0.17\). This can be understood as a supercritical pitchfork bifurcation, just as shown in the previous study [19]. Then, we fixed \(\eta=0.1\) and calculated the behavior of the coupled system. We examined two cases: (i) the two rotors rotate in the same direction (cf. Fig. 1(b)) and (ii) they rotate in opposite directions (cf. Fig. 1(c)).

Figure 2: Numerical results for a single rotor. (a) Time series of \(d\phi_{1}/dt\) for \(\eta=0.05\) (red), \(0.1\) (green), \(0.15\) (cyan) and \(0.2\) (blue). (b) The stable angular velocity \(\omega\) depending on the friction coefficient \(\eta\). A single rotor exhibited a rotation with a finite angular velocity for \(\eta<\eta_{c}\simeq 0.17\), while it stopped for \(\eta>\eta_{c}\).
In order to clarify the mode of synchronization, we detected the time \(\tau_{\mu}^{(i)}\) at which rotor \(i\) passes through the line segment connecting \(\mathbf{\ell}_{1}\) and \(\mathbf{\ell}_{2}\) for the \(\mu\)th time. Then, the phase difference is defined as \[\Delta\phi=2\pi\frac{\tau_{\mu}^{(2)}-\tau_{\nu}^{(1)}}{\tau_{\nu+1}^{(1)}-\tau_{\nu}^{(1)}}, \tag{12}\] where \(\mu\) and \(\nu\) satisfy \(\tau_{\nu}^{(1)}\leq\tau_{\mu}^{(2)}<\tau_{\nu+1}^{(1)}\). We changed \(L\) and calculated the dynamics of the coupled rotors until \(t=10000\). The snapshots after the synchronized state became stable are shown for \(L=2\) and \(3\) in each rotation direction in Fig. 3. The time evolution of \(\Delta\phi\) is shown in Fig. 4. In both the cases with the same and opposite rotation directions, in-phase synchronization (\(\Delta\phi=0\)) was observed for \(L=2\) and \(4\), while anti-phase synchronization (\(\Delta\phi=\pi\)) was observed for \(L=3\). The relaxation time to the synchronized state strongly depended on the distance \(L\) between the two rotors. In order to clearly show the dependence of the synchronization mode on the distance \(L\), we plotted \(\Delta\phi\) at \(t=5000\), \(10000\), \(15000\), and \(20000\) against \(L\) in Fig. 5. It clearly exhibits that the in-phase and anti-phase synchronization alternate with an increase in \(L\). The preferred synchronization mode was almost the same in the two cases. In Fig. 5, the phase difference did not converge for \(L\) greater than \(4\) or for \(L\) close to the boundaries between the in-phase and anti-phase synchronization regions. This should be because in these regions the interaction between the two rotors was so small that it took much time to reach the stable synchronized state of the rotors.
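The phase difference of Eq. (12) is computed from the recorded crossing times; a minimal sketch (function names are ours) is:

```python
import numpy as np

def phase_difference(tau1, tau2_mu):
    """Phase difference of Eq. (12).

    tau1: sorted crossing times tau^(1)_nu of rotor 1;
    tau2_mu: one crossing time tau^(2)_mu of rotor 2.
    nu is chosen so that tau1[nu] <= tau2_mu < tau1[nu + 1].
    """
    nu = np.searchsorted(tau1, tau2_mu, side='right') - 1
    return 2 * np.pi * (tau2_mu - tau1[nu]) / (tau1[nu + 1] - tau1[nu])
```

A rotor-2 crossing a quarter period after a rotor-1 crossing gives \(\Delta\phi=\pi/2\); coincident crossings give \(\Delta\phi=0\), i.e., in-phase synchronization.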
## IV Theoretical analysis

Figure 3: Snapshots representing the particle position and camphor concentration at \(t=1000\), at which the coupled rotors for \(L=2\) and \(3\) reach the stable synchronization mode. (a) \(L=2\) and (b) \(L=3\) in the case with the same rotation direction. (c) \(L=2\) and (d) \(L=3\) in the case with the opposite rotation direction. The yellow arrows show the rotation direction. The cross points of the yellow dotted lines correspond to the centers of the rotors, \(\mathbf{\ell}_{1}\) and \(\mathbf{\ell}_{2}\).

Figure 4: Numerical results for coupled rotors. Time series of \(\Delta\phi\) are plotted for (a) the case with the same rotation direction and for (b) the case with the opposite rotation direction. The distance between the two rotor centers was \(L=2\) (red), \(3\) (green), and \(4\) (cyan). In both the cases, in-phase synchronization was observed for \(L=2\) and \(4\), while anti-phase synchronization was observed for \(L=3\).

Figure 5: Numerical results for coupled rotors. The phase differences \(\Delta\phi\) at \(t=5000\) (light green), \(10000\) (yellow), \(15000\) (orange), and \(20000\) (red) are plotted in (a) the case with the same rotation direction and in (b) the case with the opposite rotation direction. In-phase synchronization (\(\Delta\phi=0\)) and anti-phase synchronization (\(\Delta\phi=\pi\)) alternate with an increase in \(L\).

In order to discuss the mechanism of the alternation of the stable synchronization modes depending on \(L\), we perform a theoretical analysis of the synchronization of the two active rotors based on the phase reduction method. The model in Eqs. (1)-(11) is used, but a point source is adopted, i.e., \(R\rightarrow+0\), as \[\frac{\partial u}{\partial t}=\nabla^{2}u-u+\sum_{i=1}^{N}\delta(\mathbf{r}-\mathbf{r}_{i}), \tag{13}\] in place of Eq. (11), where \(\delta(\cdot)\) is the Dirac delta function. Since the time evolution equation for the concentration field \(u\) in Eq. (11) is linear, \(u\) is described as the summation of \(u_{1}\) and \(u_{2}\), which originate from the supply from the particles 1 and 2, respectively. That is to say, \[u=u_{1}+u_{2}, \tag{14}\] where \[\frac{\partial u_{i}}{\partial t}=\nabla^{2}u_{i}-u_{i}+\delta(\mathbf{r}-\mathbf{r}_{i}), \tag{15}\] for \(i=1,2\). As for the time evolution of \(\phi_{i}\), Eqs. (9) and (10) give \[\eta a^{2}\frac{d\phi_{1}}{dt}= a\mathbf{e}\left(\phi_{1}+\frac{\pi}{2}\right)\cdot\frac{1}{A}\mathbf{F}_{u,1}, \tag{16}\] and \[\eta a^{2}\frac{d\phi_{2}}{dt}= a\mathbf{e}\left(\pm\left(\phi_{2}-\frac{\pi}{2}\right)\right)\cdot\frac{1}{A}\mathbf{F}_{u,2}. \tag{17}\] The force originating from the surface tension, \(\mathbf{F}_{u,i}\), is also decomposed into two terms, \[\mathbf{F}_{u,i}=\mathbf{F}_{u,i,1}+\mathbf{F}_{u,i,2}, \tag{18}\] in the same way as the concentration \(u\). Under the point-source approximation, \[\frac{1}{A}\mathbf{F}_{u,i,j}= -\frac{\Gamma}{A}\iint_{\mathbb{R}^{2}}\left(\nabla u_{j}\right)\delta\left(\mathbf{r}-\mathbf{r}_{i}\right)dA,\] \[\to -\Gamma\left.\nabla u_{j}\right|_{\mathbf{r}=\mathbf{r}_{i}}, \tag{19}\] for \(i\neq j\). It should be noted that the expression in Eq. (19) does not hold for \(i=j\), since the force \(\mathbf{F}_{u,i,i}/A\) shows a logarithmic divergence and we should introduce a small positive value corresponding to the particle radius [19; 35]. Nevertheless, from the physical point of view, the \(i\)th rotor should rotate with a constant angular velocity for \(t\to\infty\) when it is driven only by \(\mathbf{F}_{u,i,i}\). We define the terminal angular velocity to be \(\omega\), and then the following equations hold: \[\frac{1}{\eta Aa}\mathbf{e}\left(\phi_{1}+\frac{\pi}{2}\right)\cdot\mathbf{F}_{u,1,1}=\omega, \tag{20}\] for rotor 1, and \[\frac{1}{\eta Aa}\mathbf{e}\left(\pm\left(\phi_{2}-\frac{\pi}{2}\right)\right)\cdot\mathbf{F}_{u,2,2}=\omega, \tag{21}\] for rotor 2.
The positive and negative signs correspond to the same and opposite rotation direction, respectively. Hereinafter, the effect of the concentration field of the surface-active compound released from the other rotor is treated as a perturbation. We first calculate the concentration field of the chemical compound released from one rotor and then the force originating from the concentration field exerting on the other particle. For the construction of the concentration field generated by one camphor rotor, we consider that a single rotor is rotating at a constant angular velocity \(\omega\), that is, the position is described as \(\mathbf{r}=a\mathbf{e}(\phi_{1})=a\mathbf{e}(\omega t+\phi_{0})\). We introduce a co-rotating frame with the rotor, where the variables in the frame are expressed with tildes. The single point source is located at \(\tilde{\mathbf{r}}=a\tilde{\mathbf{e}}_{x}\), and the concentration field \(\tilde{u}\) in the co-rotating frame should be a steady state. Here, \(\tilde{\mathbf{e}}_{x}\) is a unit vector in the \(\tilde{x}\)-direction in the co-rotating frame. Then, the steady-state concentration field should hold \[-\omega\frac{\partial\tilde{u}}{\partial\tilde{\theta}}=\tilde{\nabla}^{2} \tilde{u}-\tilde{u}+\delta\left(\tilde{\mathbf{r}}-a\tilde{\mathbf{e}}_{x}\right), \tag{22}\] where \(\tilde{\nabla}\) is the nabla operator in the co-rotating frame. After lengthy calculation, we obtain \[\tilde{u}(\tilde{r},\tilde{\theta})=\left\{\begin{array}{ll} \tilde{u}_{\text{in}}(\tilde{r},\tilde{\theta}),&\tilde{r}<a,\\ \tilde{u}_{\text{out}}(\tilde{r},\tilde{\theta}),&\tilde{r}\geq a,\end{array}\right. 
\tag{23}\] where \[\tilde{u}_{\text{in}}(\tilde{r},\tilde{\theta})\\ =\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}\mathcal{K}_{n}\left(a \sqrt{1-in\omega}\right)\mathcal{I}_{n}\left(\tilde{r}\sqrt{1-in\omega}\right) e^{in\tilde{\theta}}, \tag{24}\] and \[\tilde{u}_{\text{out}}(\tilde{r},\tilde{\theta})\\ =\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}\mathcal{I}_{n}\left(a \sqrt{1-in\omega}\right)\mathcal{K}_{n}\left(\tilde{r}\sqrt{1-in\omega}\right) e^{in\tilde{\theta}}. \tag{25}\] Here, \(\mathcal{I}_{n}(\cdot)\) and \(\mathcal{K}_{n}(\cdot)\) are the modified Bessel functions of the first and second kinds with the degree of \(n\), respectively. The detailed derivation and notes for the Bessel function with complex parameters are shown in Appendix A. Based on Eq. (25), we obtain the asymptotic form of the concentration field far from the rotor. Here we consider \(\tilde{r}\gg 1\gg a\). That is to say, we consider the case that the distance between two rotors is much greater than the diffusion length and the arm of the rotor is much less than the diffusion length. We take into account the order up to \(\mathcal{O}(a)\). Using Eq. (25), we obtain the asymptotic form as \[\tilde{u}(\tilde{r},\tilde{\theta})\simeq \frac{1}{2\sqrt{2\pi\tilde{r}}}e^{-\tilde{r}}+\frac{a}{2\sqrt{2 \pi\tilde{r}}}\rho^{1/4}e^{-\tilde{r}\sqrt{\rho}\cos(\chi/2)}\] \[\times\cos\left(\tilde{\theta}-\frac{\chi}{4}+\tilde{r}\sqrt{ \rho}\sin\frac{\chi}{2}\right)+\mathcal{O}\left(a^{2}\right). \tag{26}\] Here, we set \(1-i\omega=\rho e^{-i\chi}\), that is, \(\rho=\sqrt{1+\omega^{2}}\) and \(\chi=\arctan\omega\). The detailed calculation is found in Appendix B. 
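The polar decomposition \(1-i\omega=\rho e^{-i\chi}\) used here, together with the resulting decay rate \(\sqrt{\rho}\cos(\chi/2)\) and wavenumber \(\sqrt{\rho}\sin(\chi/2)\) of the asymptotic field in Eq. (26), can be checked numerically; the value \(\omega=13.23\) below is the stable angular velocity obtained in the simulations:

```python
import numpy as np

omega = 13.23                  # stable angular velocity for eta = 0.1
rho = np.sqrt(1 + omega ** 2)  # modulus of 1 - i*omega
chi = np.arctan(omega)         # argument, with the e^{-i*chi} convention

# sqrt(1 - i*omega) = sqrt(rho) * e^{-i*chi/2}; its real part is the
# radial decay rate and the magnitude of its imaginary part the radial
# wavenumber of the O(a) term in Eq. (26).
decay_rate = np.sqrt(rho) * np.cos(chi / 2)
wavenumber = np.sqrt(rho) * np.sin(chi / 2)
```

Both quantities are positive, so the \(\mathcal{O}(a)\) term is a decaying wave in \(\tilde{r}\).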
In the laboratory frame, the asymptotic form of the concentration field generated by a rotor, which is located at the origin, whose phase is \(\phi\), and which rotates in the counterclockwise rotation, is described as \[u(r,\theta,\phi)\] \[\simeq\frac{1}{2\sqrt{2\pi r}}e^{-r}+\frac{a}{2\sqrt{2\pi r}}\rho^{1 /4}e^{-r\sqrt{\rho}\cos(\chi/2)}\] \[\qquad\times\cos\left(\theta-\phi-\frac{\chi}{4}+r\sqrt{\rho}\sin \frac{\chi}{2}\right)+\mathcal{O}\left(a^{2}\right), \tag{27}\] for \(r\gg 1\) and \(t\to\infty\). It should be noted that the first term with \(\mathcal{O}(1)\) does not depend on time and that the second term with \(\mathcal{O}(a)\) depends on time which can induce the synchronization between multiple rotors. We adopt the asymptotic form in Eq. (27) for \(u_{j}\) to calculate \(\mathbf{F}_{u,i,j}\) (\(i\neq j\)) and consider the interaction between two rotors in Eq. (19). Considering that the asymptotic form of \(u_{j}\) is described as a function of \(\phi_{j}\) and that the position \(\mathbf{r}_{i}\) is a function of \(\phi_{i}\), the force \(\mathbf{F}_{u,i,j}\) is a function of \(\phi_{i}\) and \(\phi_{j}\), i.e., \(\mathbf{F}_{u,i,j}(\phi_{i},\phi_{j})\). We assume that the interaction is so weak that the phase difference hardly changes in one period \(2\pi/\omega\) and we can adopt the averaging method in the phase description [36]. First, we calculate the time evolution of \(\phi_{1}\) from Eqs. 
(9) and (20) as \[\frac{d\phi_{1}}{dt}= \frac{1}{\eta Aa}\mathbf{e}\left(\phi_{1}+\frac{\pi}{2}\right)\cdot \mathbf{F}_{u,1},\] \[= \omega+\frac{1}{\eta Aa}\mathbf{e}\left(\phi_{1}+\frac{\pi}{2} \right)\cdot\mathbf{F}_{u,1,2}(\phi_{1},\phi_{2}),\] \[\simeq \omega+\frac{\omega}{2\pi\eta a}\int_{0}^{2\pi/\omega}\mathbf{e} \left(\phi_{1}+\frac{\pi}{2}\right)\cdot\mathbf{F}_{u,1,2}(\phi_{1},\phi_{2})dt\] \[= \omega+\frac{1}{2\pi\eta a}\] \[\times\int_{0}^{2\pi}\mathbf{e}\left(\phi_{1}+\frac{\pi}{2}\right) \cdot\mathbf{F}_{u,1,2}(\phi_{1},\phi_{1}+\Delta\phi)d\phi_{1}\] \[\equiv \left\{\begin{array}{ll}\omega+G_{s}(\Delta\phi),&\mbox{( same rotation direction)},\\ \omega+G_{o}(\Delta\phi),&\mbox{(opposite rotation direction)}.\end{array}\right. \tag{28}\] Here, we set \(\Delta\phi=\phi_{2}-\phi_{1}\) and calculate the integral under the assumption that \(\Delta\phi\) is constant. It should be noted that \(G_{s}(\Delta\phi)\) and \(G_{o}(\Delta\phi)\) are different since \(\mathbf{F}_{u,1,2}\) depends on the rotation direction of the rotor 2. In the same manner, we obtain from Eqs. (10) and (21) as \[\frac{d\phi_{2}}{dt}\simeq \omega+\frac{1}{2\pi\eta a}\] \[\times\int_{0}^{2\pi}\mathbf{e}\left(\pm\left(\phi_{2}-\frac{\pi}{2} \right)\right)\cdot\mathbf{F}_{u,2,1}(\phi_{2}-\Delta\phi,\phi_{2})d\phi_{2}\] \[= \left\{\begin{array}{ll}\omega+G_{s}(-\Delta\phi),&\mbox{( same rotation direction)},\\ \omega+G_{o}(-\Delta\phi),&\mbox{(opposite rotation direction)},\end{array}\right. \tag{29}\] where the positive and negative signs in the second term of the right side correspond to the cases with same and opposite rotation directions, respectively. \(G_{s}(\Delta\phi)\) and \(G_{o}(\Delta\phi)\) are the so-called phase response functions in terms of the coupled oscillators. In the case with the same rotation direction, the second term in the right side in Eq. 
(29) is calculated as \[G_{s}(\Delta\phi) \simeq\frac{\Gamma\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{4\eta \sqrt{2\pi L}}\] \[\times\left\{\frac{1}{2L}\sin\left[\Delta\phi+\frac{\chi}{4}-L \sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\left.\qquad-\sqrt{\rho}\sin\left[\Delta\phi+\frac{3\chi}{4}-L \sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right\}\] \[\equiv g_{s}(\Delta\phi), \tag{30}\] which is plotted in Fig. 6(a). The detailed calculation is shown in Appendix C. Then, we have \[\frac{d\phi_{1}}{dt}\simeq \omega+g_{s}(\Delta\phi), \tag{31}\] \[\frac{d\phi_{2}}{dt}\simeq \omega+g_{s}(-\Delta\phi). \tag{32}\] By calculating the difference between two equations, we obtain the time-evolution equations for the slow dynamics of \(\Delta\phi\) as \[\frac{d\Delta\phi}{dt}\simeq g_{s}(-\Delta\phi)-g_{s}(\Delta\phi)\equiv h_{s}(\Delta\phi), \tag{33}\] where \[h_{s}(\Delta\phi)= \frac{\Gamma\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{2\eta\sqrt{2 \pi L}}\sin\Delta\phi\] \[\times\left\{-\frac{1}{2L}\cos\left[-\frac{\chi}{4}+L\sqrt{\rho} \sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\left.\qquad+\sqrt{\rho}\cos\left[-\frac{3\chi}{4}+L\sqrt{\rho} \sin\left(\frac{\chi}{2}\right)\right]\right\}\] \[\equiv \frac{\Gamma\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{2\eta\sqrt{2 \pi L}}C_{s}\sin\Delta\phi\] \[\equiv h_{s}^{(1)}\sin\Delta\phi. \tag{34}\] As shown in Fig. 6(c), \(\Delta\phi=0\) and \(\pi\) are fixed points of Eq. (33), and their stability is determined by the sign of \(C_{s}\) since the factors other than \(C_{s}\) are positive. That is to say, the fixed point at \(\Delta\phi=0\) is stable when \(C_{s}<0\) and it is unstable when \(C_{s}>0\). On the other hand, the fixed point at \(\Delta\phi=\pi\) is unstable when \(C_{s}<0\) and it is stable when \(C_{s}>0\). Thus, when \(C_{s}<0\) and \(C_{s}>0\), in-phase and anti-phase synchronization should occur, respectively. 
It should be noted that \(C_{s}\) depends only on \(\omega\) and \(L\), since \(\rho\) and \(\chi\) are functions of \(\omega\) as shown just below Eq. (26). \(C_{s}\) and \(h_{s}^{(1)}\) for \(\eta=0.1\) are plotted as a function of \(L\) in Fig. 7, where \(\omega\) is set to be constant at \(13.23\) from the numerical results in Fig 2. In the case with the opposite rotation direction, we obtain in the same manner as in the case with the same rotation direction: \[G_{o}(\Delta\phi) \simeq-\frac{\Gamma\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{4\eta \sqrt{2\pi L}}\] \[\quad\times\left\{\frac{3}{2L}\sin\left[\Delta\phi+\frac{\chi}{4}- L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\quad\left.+\sqrt{\rho}\sin\left[\Delta\phi+\frac{3\chi}{4}-L \sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right\}\] \[\equiv g_{o}(\Delta\phi), \tag{35}\] which leads \[\frac{d\Delta\phi}{dt}\simeq g_{o}(-\Delta\phi)-g_{o}(\Delta\phi)\equiv h_{o}( \Delta\phi). \tag{36}\] Here, we have calculated \(h_{o}(\Delta\phi)\) as \[h_{o}(\Delta\phi)= \frac{\Gamma\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{2\eta\sqrt{2 \pi L}}\sin\Delta\phi\] \[\quad\times\left\{\frac{3}{2L}\cos\left[-\frac{\chi}{4}+L\sqrt{ \rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\quad\left.+\sqrt{\rho}\cos\left[-\frac{3\chi}{4}+L\sqrt{\rho} \sin\left(\frac{\chi}{2}\right)\right]\right\}\] \[\equiv \frac{\Gamma\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{2\eta\sqrt{2 \pi L}}C_{o}\sin\Delta\phi\] \[\equiv h_{o}^{(1)}\sin\Delta\phi. \tag{37}\] The plots of \(g_{o}(\Delta\phi)\) and \(h_{o}(\Delta\phi)\) are shown in Figs. 6(b) and (d). In this case, \(\Delta\phi=0\) and \(\pi\) are also the fixed points. We can discuss the stability of the synchronization mode in the parallel manner, and thus the sign of the coefficient \(C_{o}\) plays an important role. To exemplify the stable synchronization mode, \(C_{o}\) are also plotted against \(L\) in Fig. 7. 
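The stability coefficients \(C_{s}\) and \(C_{o}\) defined in Eqs. (34) and (37) are elementary functions of \(\omega\) and \(L\) and can be evaluated directly; a minimal sketch with \(\omega=13.23\) (negative values predict in-phase, positive values anti-phase synchronization):

```python
import numpy as np

omega = 13.23
rho = np.sqrt(1 + omega ** 2)
chi = np.arctan(omega)

def C_s(L):
    """Coefficient C_s of Eq. (34) (same rotation direction)."""
    arg = L * np.sqrt(rho) * np.sin(chi / 2)
    return (-np.cos(-chi / 4 + arg) / (2 * L)
            + np.sqrt(rho) * np.cos(-3 * chi / 4 + arg))

def C_o(L):
    """Coefficient C_o of Eq. (37) (opposite rotation directions)."""
    arg = L * np.sqrt(rho) * np.sin(chi / 2)
    return (3 * np.cos(-chi / 4 + arg) / (2 * L)
            + np.sqrt(rho) * np.cos(-3 * chi / 4 + arg))
```

Scanning \(L\) reproduces the alternation seen in Figs. 5 and 7: both coefficients are negative (in-phase) near \(L=2\) and \(4\) and positive (anti-phase) near \(L=3\).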
The signs of \(C_{s}\) and \(C_{o}\) almost coincide for each \(L\), which means that the stable synchronization mode is the same in the cases with the same and opposite rotation directions for each \(L\).

## V Discussion

Here, we discuss the mechanism of the synchronization of the coupled rotors based on the theoretical results. The concentration field of the surface-active compound that one rotor releases is expressed in Eq. (27). The first term on the right-hand side does not depend on the phase, but the second term does. The effect of the dynamics of the other rotor is approximately obtained by averaging the effect over one period. In such an averaging process, the effect of the first term is canceled out and only the second term matters. This means that only the periodically changing component of the concentration field affects the stability of the synchronization mode. To visualize the time-dependent component of the concentration field, we numerically calculated the averaged concentration field \(\bar{u}\): \[\bar{u}(x,y)=\frac{1}{T}\int_{t_{0}}^{t_{0}+T}u(x,y)dt, \tag{38}\] where \(t_{0}\) was a time sufficiently after the behavior of the rotor reached the stationary state, chosen so that it corresponded to \(\phi_{1}\simeq 0\). Figure 8 shows the plot of \(\Delta u(x,y,t_{0})\), where \(\Delta u(x,y,t)=u(x,y,t)-\bar{u}(x,y)\). The time-dependent component \(\Delta u\) has a spiral structure, which is consistent with the second term on the right-hand side of Eq. (27). The pitch of the spiral, \(L_{0}\), is almost double the interval of \(L\) over which the stable synchronization mode changes.

Figure 7: Plots of (a) \(C_{s}\), (b) \(C_{o}\), (c) \(h_{s}^{(1)}\), and (d) \(h_{o}^{(1)}\) against \(L\). Here, we adopt \(\omega=13.23\). Positive (colored with red) and negative (colored with cyan) signs represent the preference for the anti-phase and in-phase synchronization, respectively.

Considering that such a spiral structure mainly comes from the second term of the right-hand side in Eq.
(27), the pitch \(L_{0}\) is estimated from \[L_{0}\sqrt{\rho}\sin\frac{\chi}{2}=2\pi. \tag{39}\] Actually, \(L_{0}\) is calculated as \(2.54\) with \(\omega=13.23\). This value for \(L_{0}\) well corresponds to both the results by numerical simulations and theoretical analyses, where the stability of synchronization mode changes every \(\simeq 1.2\) in \(L\). Next, we directly calculated the phase response function using the numerical simulation in order to justify the approximation adopted in the theoretical analysis. We directly calculated the functions \(G_{s}(\Delta\phi)\) and \(G_{o}(\Delta\phi)\) in Eqs. (28) and (29). \(H_{s}(\Delta\phi)\) and \(H_{o}(\Delta\phi)\) are defined as \[H_{i}(\Delta\phi)=G_{i}(-\Delta\phi)-G_{i}(\Delta\phi) \tag{40}\] where \(i\) denotes \(s\) or \(o\). The plots for \(G_{s}(\Delta\phi)\), \(G_{o}(\Delta\phi)\), \(H_{s}(\Delta\phi)\), and \(H_{o}(\Delta\phi)\) obtained from the numerical calculation are shown in Fig. 9. In the calculation, we first calculated the dynamics of the \(i\)th rotor only considering the concentration field released from itself until it reached a stationary angular velocity, and then calculated the force \(\mathbf{F}_{u,j,i}\) working on the \(j\)th particle rotating at the given angular velocity, which was the same as the \(i\)th rotor's. Using the obtained force, \(G_{s}(\Delta\phi)\) and \(G_{o}(\Delta\phi)\) were calculated by averaging over a period. The numerical simulation was performed in the same procedure as in Sec. II. We detected \(\tau_{\nu}^{(1)}\) just after \(t=100\) and calculated the average from \(t=\tau_{\nu}^{(1)}\) to \(\tau_{\nu+1}^{(1)}\). \(G_{i}\) and \(H_{i}\) were calculated for \(\Delta\phi=2\pi\lambda/360\) where \(\lambda=0,\cdots,359\), and the averaging was performed for each time step during one period. 
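The estimate of Eq. (39) is a one-line computation; with \(\omega=13.23\) it can be verified as follows:

```python
import numpy as np

omega = 13.23
rho = np.sqrt(1 + omega ** 2)
chi = np.arctan(omega)

# Spiral pitch from Eq. (39): L0 * sqrt(rho) * sin(chi / 2) = 2 * pi
L0 = 2 * np.pi / (np.sqrt(rho) * np.sin(chi / 2))
print(round(L0, 2))  # → 2.54
```

Half of this pitch, \(L_{0}/2\simeq 1.27\), matches the interval \(\simeq 1.2\) in \(L\) at which the stable synchronization mode alternates.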
The functions \(H_{s}(\Delta\phi)\) and \(H_{o}(\Delta\phi)\) are odd functions, which take the value of zero at \(\Delta\phi=0,\pi\) and have one positive and one negative peak. In order to discuss the stability of the synchronization mode, the slopes at \(\Delta\phi=0,\pi\) are important. Therefore, we consider the Fourier sine expansion of the functions as \[H_{i}(\Delta\phi)=\sum_{k=1}^{\infty}\hat{H}_{i}^{(k)}\sin k\Delta\phi, \tag{41}\] where \(i\) denotes \(s\) or \(o\). The first-mode coefficient determines the stability of the synchronization mode; the in-phase and anti-phase synchronization is stable for \(\hat{H}_{i}^{(1)}<0\) and \(\hat{H}_{i}^{(1)}>0\), respectively. In Fig. 10, \(\hat{H}_{s}^{(1)}\) and \(\hat{H}_{o}^{(1)}\) are plotted against \(L\); the plots are close to those in Figs. 7(c) and (d) obtained by the theoretical analysis.

Figure 8: (a) Snapshot of the concentration field for a single rotor rotating at a constant angular velocity at \(t=t_{0}\simeq 100.1436\), when \(\phi_{1}=0\) first holds after \(t=100\). (b) Averaged concentration field \(\bar{u}\) over a period. (c) Difference \(\Delta u\) of the concentration field \(u\) in (a) from the averaged field \(\bar{u}\) in (b). (d) Enhanced profile of (c), where the color range for concentration is magnified. The region with the size \(8\times 8\) is shown.

In the calculation of the phase response function shown in the previous paragraph, we assumed that the two rotors rotate at a constant angular velocity, i.e., the intrinsic angular velocity; however, the angular velocity of each rotor should be affected by the concentration field originating from the other rotor. Therefore, we also calculated the phase response function including the effect of the change in the angular velocity, and found that the time evolution of the phase difference is qualitatively the same as the one shown in Figs. 9 and 10. The details are shown in Appendix D.
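The first sine coefficient in Eq. (41) can be extracted from the sampled response functions by discrete orthogonality; a minimal sketch (the sampling convention \(\Delta\phi=2\pi\lambda/360\) follows the text, the function name is ours):

```python
import numpy as np

def first_sine_coefficient(H):
    """First coefficient hat{H}^(1) of the sine expansion in Eq. (41),
    from samples H[k] taken at Delta_phi = 2*pi*k/len(H)."""
    n = len(H)
    dphi = 2 * np.pi * np.arange(n) / n
    # (2/n) * sum H sin(dphi) is the discrete version of
    # (1/pi) * integral_0^{2 pi} H(x) sin(x) dx
    return 2.0 * np.mean(H * np.sin(dphi))
```

A negative coefficient means a negative slope of \(H_{i}\) at \(\Delta\phi=0\), i.e., in-phase synchronization is stable; a positive one favors anti-phase synchronization.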
In the present system, the stable synchronization mode changes depending on the distance between the two rotors. There have been several studies on similar behaviors in other systems, such as the cell-thickness pattern in slime mold [37] and the coupled system of flickering candle flames [38; 39]. The time delay in the interaction plays an important role in the former case, while the nonlinear coupling manner is dominant in the latter case. In the coupling between the camphor rotors discussed in the present paper, the interaction between the two rotors is through the concentration field that obeys a linear equation, and thus the time delay seems to play an important role in the present system. Actually, the spiral structure shown in Fig. 8 is the result of the supply from the rotating rotor and diffusion. Due to the linearity of the equation, we succeeded in writing down the time-delay effect directly through the concentration field, and this time-delay effect is reduced to the interaction term in the phase dynamics.

## VI Conclusion

We investigated coupled active rotors, which spontaneously exhibit rotation due to the surface tension gradient originating from the surface-active chemicals released from themselves. Experiments have reported that such a coupled-rotor system shows both in-phase and anti-phase synchronization, but the mechanism had not been fully clarified. We considered a mathematical model which includes the time evolution of the concentration field and the motion of the rotors, and obtained the result that the stable synchronization mode alternates between in-phase and anti-phase synchronization with an increase in the distance between the two rotors. By adopting the phase description, which has often been used for coupled oscillator systems, we derived the time evolution equation for the phase difference between the two rotors.
The theoretical results suggest that the stable synchronization mode alternates depending on \(L\), which corresponds well to the results of the numerical calculation. We also evaluated the phase response directly from the numerical calculation and confirmed that our theoretical approach works well. As an extension of the present study, systems of three or more rotors should be interesting, since such systems have many possible stable modes and may exhibit chaotic behaviors.

###### Acknowledgements.

This work was supported by JSPS KAKENHI Grants Nos. JP19K03765, JP20K14370, JP20H02712, and JP21H01004, and also the Cooperative Research Program of "Network Joint Research Center for Materials and Devices" (Nos. 20224003 and 20221173). This work was also supported by JSPS and PAN under the Japan-Poland Research Cooperative Program (No. JPJSBP120204602).

## Appendix A Derivation of concentration field in a co-rotating frame with a single rotor

In this section, we derive the steady concentration field in a co-rotating frame with a single rotor with an angular velocity \(\omega\). The positions in the original system and in the co-rotating system are denoted as \(\mathbf{r}={}^{t}(r\cos\theta,r\sin\theta)\) and \(\tilde{\mathbf{r}}={}^{t}(\tilde{r}\cos\tilde{\theta},\tilde{r}\sin\tilde{\theta})\), respectively. Then, we have \[\tilde{\mathbf{r}}=\mathcal{R}(-\omega t)\mathbf{r}, \tag{10}\] where \(\mathcal{R}(\psi)\) is a matrix for rotation in a two-dimensional system, i.e., \[\mathcal{R}(\psi)=\left(\begin{array}{cc}\cos\psi&-\sin\psi\\ \sin\psi&\cos\psi\end{array}\right). \tag{11}\] In other words, we have the relation \[\tilde{\theta}=\theta-\omega t.
\tag{12}\] Then, the time-derivative operator is rewritten in the \(\tilde{\mathbf{r}}\)-system as \[\frac{\partial}{\partial t}+\omega\left(\begin{array}{cc}-\sin\omega t&\cos\omega t\\ -\cos\omega t&-\sin\omega t\end{array}\right)\mathbf{r}\cdot\tilde{\nabla}\] \[= \frac{\partial}{\partial t}+r\omega\left(\begin{array}{c}\sin(\theta-\omega t)\\ -\cos(\theta-\omega t)\end{array}\right)\cdot\tilde{\nabla}\] \[= \frac{\partial}{\partial t}+\tilde{r}\omega\left(\begin{array}{c}\sin\tilde{\theta}\\ -\cos\tilde{\theta}\end{array}\right)\cdot\tilde{\nabla}\] \[= \frac{\partial}{\partial t}-\omega\tilde{r}\tilde{\mathbf{e}}_{\theta}\cdot\tilde{\nabla}. \tag{13}\] Here \(\tilde{\nabla}\) is the nabla operator in the \(\tilde{\mathbf{r}}\)-system, and \(\tilde{\mathbf{e}}_{\theta}\) is a unit vector in the co-rotating system, \(\tilde{\mathbf{e}}_{\theta}={}^{t}(-\sin\tilde{\theta},\cos\tilde{\theta})\). The position of the point source \(\tilde{\mathbf{a}}\) in the co-rotating frame no longer depends on time and is written as \[\tilde{\mathbf{a}}=a\tilde{\mathbf{e}}_{x}, \tag{14}\] where \(\tilde{\mathbf{e}}_{x}\) is a unit vector along the \(\tilde{x}\)-axis in the \(\tilde{\mathbf{r}}\)-system. Hereafter, we omit the tilde (\(\tilde{\cdot}\)) for the variables in the co-moving frame. In order to obtain the steady-state concentration field \(u(r,\theta)\) in the co-rotating system, we set the time derivative to be \(0\). Therefore, the equation to be considered is explicitly written as \[-\omega\frac{\partial u}{\partial\theta}=\nabla^{2}u-u+\delta\left(\mathbf{r}-\mathbf{a}\right). \tag{100}\] The homogeneous equation for (100) is expressed as \[-\omega\frac{\partial u}{\partial\theta}=\nabla^{2}u-u. \tag{101}\] By assuming that Eq. (101) has a solution of the form \[u(r,\theta)=f_{n}(r)e^{in\theta}, \tag{102}\] we obtain \[\frac{d^{2}f_{n}}{dr^{2}}+\frac{1}{r}\frac{df_{n}}{dr}-\frac{n^{2}}{r^{2}}f_{n}-(1-in\omega)f_{n}=0.
\tag{103}\] By setting \(\hat{r}=r\sqrt{1-in\omega}\), we get \[\frac{d^{2}f_{n}}{d\hat{r}^{2}}+\frac{1}{\hat{r}}\frac{df_{n}}{d\hat{r}}-\frac{n^{2}}{\hat{r}^{2}}f_{n}-f_{n}=0, \tag{104}\] which is the so-called modified Bessel equation, whose solution is given as \[f_{n}(\hat{r})=A_{n}\mathcal{I}_{n}\left(\hat{r}\right)+B_{n}\mathcal{K}_{n}\left(\hat{r}\right). \tag{105}\] Thus, \(f_{n}(r)\) is described as \[f_{n}(r)= A_{n}\mathcal{I}_{n}\left(r\sqrt{1-in\omega}\right)+B_{n}\mathcal{K}_{n}\left(r\sqrt{1-in\omega}\right). \tag{106}\] Then, the general solution of Eq. (101) is given as \[u(r,\theta)= \sum_{n=-\infty}^{\infty}\left[A_{n}\mathcal{I}_{n}\left(r\sqrt{1-in\omega}\right)\right.\] \[\left.\qquad+B_{n}\mathcal{K}_{n}\left(r\sqrt{1-in\omega}\right)\right]e^{in\theta}, \tag{107}\] where \(A_{n}\) and \(B_{n}\) are complex constants. Considering that \(u\) is real, \(A_{n}=A_{-n}^{*}\) and \(B_{n}=B_{-n}^{*}\) should hold, where "\(*\)" indicates the complex conjugate. It is notable that the solution of Eq. (103) should be complex. If we set it to be \[f_{n}(r)=p_{n}(r)+iq_{n}(r), \tag{108}\] then we obtain \[\frac{d^{2}p_{n}}{dr^{2}}+\frac{1}{r}\frac{dp_{n}}{dr}-\frac{n^{2}}{r^{2}}p_{n}-p_{n}-n\omega q_{n}=0, \tag{109}\] \[\frac{d^{2}q_{n}}{dr^{2}}+\frac{1}{r}\frac{dq_{n}}{dr}-\frac{n^{2}}{r^{2}}q_{n}-q_{n}+n\omega p_{n}=0. \tag{110}\] Considering that the modified Bessel function of the first kind \(\mathcal{I}_{n}(\cdot)\) is analytic in \(\mathbb{C}\), and the modified Bessel function of the second kind \(\mathcal{K}_{n}(\cdot)\) is analytic in \(\mathbb{C}\) except on the negative real axis, the following relations hold \[\mathcal{I}_{n}\left(z^{*}\right)=\left[\mathcal{I}_{n}(z)\right]^{*}, \tag{111}\] \[\mathcal{K}_{n}\left(z^{*}\right)=\left[\mathcal{K}_{n}(z)\right]^{*}. \tag{112}\] These expressions are derived from Eq.
(107) considering that \(\mathcal{I}_{n}(z)\) does not diverge at \(|z|\to 0\) and that \(\mathcal{K}_{n}(z)\) does not diverge at \(|z|\to\infty\) for \(\Re(z)>0\). Now we set \[1-in\omega=\rho e^{-i\chi_{n}}, \tag{113}\] where \(\rho>0\) and \(-\pi/2<\chi_{n}<\pi/2\) holds, and define \[\sqrt{1-in\omega}=\sqrt{\rho}e^{-i\chi_{n}/2}. \tag{114}\] Then, we can show \(\mathcal{K}_{n}\left(r\sqrt{1-in\omega}\right)\) does not diverge for \(r\to\infty\). Thus, we can describe \[u_{\text{in}}(r,\theta)= A_{0}\mathcal{I}_{0}(r)\] \[+2\sum_{n=1}^{\infty}\Re\left[A_{n}\mathcal{I}_{n}\left(r\sqrt{1- in\omega}\right)e^{in\theta}\right], \tag{115}\] for \(r<a\), and \[u_{\text{out}}(r,\theta)= B_{0}\mathcal{K}_{0}(r)\] \[+2\sum_{n=1}^{\infty}\Re\left[B_{n}\mathcal{K}_{n}\left(r\sqrt{1- in\omega}\right)e^{in\theta}\right], \tag{116}\] for \(r>a\). The coefficients \(A_{n}\) and \(B_{n}\) are determined by the condition that \(u\) and \(\nabla u\) are continuous at \(r=a\) and that \(u\) satisfies the inhomogeneous equation in Eq. (100). From the continuity condition for \(u\), we obtain \[A_{n}\mathcal{I}_{n}\left(a\sqrt{1-in\omega}\right)=B_{n}\mathcal{K}_{n} \left(a\sqrt{1-in\omega}\right), \tag{117}\] and thus we newly set \(C_{n}\) so that it holds \[A_{n}=C_{n}\mathcal{K}_{n}\left(a\sqrt{1-in\omega}\right), \tag{118}\] and \[B_{n}=C_{n}\mathcal{I}_{n}\left(a\sqrt{1-in\omega}\right). \tag{119}\] Then, we calculate the difference in the derivative in \(r\) direction. 
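The properties invoked above, the conjugate symmetry of \(\mathcal{I}_{n}\) and \(\mathcal{K}_{n}\) and the decay of \(\mathcal{K}_{n}\left(r\sqrt{1-in\omega}\right)\) for \(r\to\infty\) on the principal branch, can be sanity-checked with SciPy's modified Bessel functions (a quick numerical sketch, not part of the original derivation):

```python
import numpy as np
from scipy.special import iv, kv  # modified Bessel functions I_n(z) and K_n(z)

z = 1.5 - 0.8j  # arbitrary sample argument with Re(z) > 0 (off the branch cut of K_n)
# I_n(z*) = [I_n(z)]* and K_n(z*) = [K_n(z)]*
err_I = max(abs(iv(n, np.conj(z)) - np.conj(iv(n, z))) for n in range(4))
err_K = max(abs(kv(n, np.conj(z)) - np.conj(kv(n, z))) for n in range(4))

# the principal branch of sqrt(1 - i*n*omega) has positive real part,
# so K_n(r * sqrt(1 - i*n*omega)) decays as r grows
s = np.sqrt(1.0 - 0.8j)  # sample value of sqrt(1 - i*n*omega)
decays = abs(kv(1, 30.0 * s)) < abs(kv(1, 10.0 * s))
```

Both symmetry errors sit at machine-precision level, consistent with the Schwarz-reflection argument in the text.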
Note that \[\lim_{r\to a-0}\frac{\partial u_{\text{in}}}{\partial r}=\hat{C}_{0}\mathcal{K}_{0}\left(\hat{a}_{0}\right)\mathcal{I}_{1}\left(\hat{a}_{0}\right)\] \[\qquad+2\sum_{n=1}^{\infty}\Re\left[\hat{C}_{n}\mathcal{K}_{n}\left(\hat{a}_{n}\right)\frac{\mathcal{I}_{n-1}\left(\hat{a}_{n}\right)+\mathcal{I}_{n+1}\left(\hat{a}_{n}\right)}{2}e^{in\theta}\right], \tag{120}\] and \[\lim_{r\to a+0}\frac{\partial u_{\text{out}}}{\partial r}=-\hat{C}_{0}\mathcal{I}_{0}\left(\hat{a}_{0}\right)\mathcal{K}_{1}\left(\hat{a}_{0}\right)\] \[\qquad-2\sum_{n=1}^{\infty}\Re\left[\hat{C}_{n}\mathcal{I}_{n}\left(\hat{a}_{n}\right)\frac{\mathcal{K}_{n-1}\left(\hat{a}_{n}\right)+\mathcal{K}_{n+1}\left(\hat{a}_{n}\right)}{2}e^{in\theta}\right], \tag{121}\] where \(\hat{a}_{n}=a\sqrt{1-in\omega}\) and \(\hat{C}_{n}=C_{n}\sqrt{1-in\omega}\). Considering that \(\hat{C}_{0}=C_{0}\) and \(\hat{a}_{0}=a\), we obtain \[\lim_{r\to a-0}\frac{\partial u_{\text{in}}}{\partial r}-\lim_{r\to a+0}\frac{\partial u_{\text{out}}}{\partial r}\] \[=C_{0}\left[\mathcal{K}_{0}\left(a\right)\mathcal{I}_{1}\left(a\right)+\mathcal{K}_{1}\left(a\right)\mathcal{I}_{0}\left(a\right)\right]\] \[\quad+2\sum_{n=1}^{\infty}\Re\bigg{[}\frac{\hat{C}_{n}}{2}\big{[}\mathcal{K}_{n}\left(\hat{a}_{n}\right)\mathcal{I}_{n+1}\left(\hat{a}_{n}\right)+\mathcal{K}_{n+1}\left(\hat{a}_{n}\right)\mathcal{I}_{n}\left(\hat{a}_{n}\right)\] \[\quad\quad+\mathcal{K}_{n}\left(\hat{a}_{n}\right)\mathcal{I}_{n-1}\left(\hat{a}_{n}\right)+\mathcal{K}_{n-1}\left(\hat{a}_{n}\right)\mathcal{I}_{n}\left(\hat{a}_{n}\right)\big{]}\,e^{in\theta}\bigg{]}\] \[=\frac{C_{0}}{a}+2\sum_{n=1}^{\infty}\Re\left[\frac{\hat{C}_{n}}{\hat{a}_{n}}e^{in\theta}\right]\] \[=\frac{C_{0}}{a}+2\sum_{n=1}^{\infty}\Re\left[\frac{C_{n}}{a}e^{in\theta}\right].
\tag{107}\] Therefore, we set \(C_{n}=1/(2\pi)\) for all \(n\), and we obtain \[\lim_{r\to a-0}\frac{\partial u_{\text{in}}}{\partial r}-\lim_{r\to a+0} \frac{\partial u_{\text{out}}}{\partial r}= \frac{1}{2\pi a}\sum_{n=-\infty}^{\infty}e^{in\theta}\] \[= \frac{1}{a}\delta(\theta), \tag{108}\] which corresponds to the considered situation. Considering that \[\int_{0}^{2\pi}\mathcal{I}_{n}\left(r\sqrt{1-in\omega}\right)e^{in\theta}d \theta=0, \tag{109}\] \[\int_{0}^{2\pi}\mathcal{K}_{n}\left(r\sqrt{1-in\omega}\right)e^{in\theta}d \theta=0, \tag{110}\] for any non-zero integer \(n\), and that \[\int \left[\omega\frac{\partial u}{\partial\theta}+\nabla^{2}u-u+ \delta\left(\mathbf{r}-a\mathbf{e}_{x}\right)\right]d\mathbf{r}\] \[=-\int u(\mathbf{r})d\mathbf{r}+1\] \[=0, \tag{111}\] we can explicitly execute the integration of \(u(\mathbf{r})\) as \[\int_{0}^{2\pi} \int_{0}^{\infty}u(r,\theta)d\mathbf{r}\] \[=2\pi\left[\int_{0}^{a}\frac{1}{2\pi}\mathcal{K}_{0}(a)\mathcal{I }_{0}(r)rdr\right.\] \[\quad\quad+\int_{a}^{\infty}\frac{1}{2\pi}\mathcal{I}_{0}(a) \mathcal{K}_{0}(r)rdr\right]\] \[=2\pi\left[\frac{1}{2\pi}\mathcal{K}_{0}(a)a\mathcal{I}_{1}(a)+ \frac{1}{2\pi}\mathcal{I}_{0}(a)a\mathcal{K}_{1}(a)\right]\] \[=a\left[\mathcal{K}_{0}(a)\mathcal{I}_{1}(a)+\mathcal{I}_{0}(a) \mathcal{K}_{1}(a)\right]\] \[=1, \tag{112}\] which satisfies Eq. (111). 
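The two closed-form evaluations used above, the jump condition via the identity \(\mathcal{K}_{n}(z)\mathcal{I}_{n+1}(z)+\mathcal{K}_{n+1}(z)\mathcal{I}_{n}(z)=1/z\) and the total-mass integral, can be verified numerically (a sketch with an arbitrarily chosen source radius \(a\)):

```python
import numpy as np
from scipy.special import iv, kv

a = 0.37  # arbitrary sample source radius
# Wronskian-type identity behind the jump condition:
#   K_n(z) I_{n+1}(z) + K_{n+1}(z) I_n(z) = 1/z
wronskian_err = max(
    abs(kv(n, a) * iv(n + 1, a) + kv(n + 1, a) * iv(n, a) - 1.0 / a)
    for n in range(4)
)
# total released mass: a * [K_0(a) I_1(a) + I_0(a) K_1(a)] = 1
total_mass = a * (kv(0, a) * iv(1, a) + iv(0, a) * kv(1, a))
```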
Therefore, the steady-state concentration field for a single point source at \(\tilde{\mathbf{r}}=a\tilde{\mathbf{e}}_{x}\) in the co-rotating frame is written as \[u_{\text{in}}(r,\theta)\] \[=\frac{1}{2\pi}\mathcal{K}_{0}(a)\mathcal{I}_{0}(r)\] \[\quad\quad+\sum_{n=1}^{\infty}\Re\left[\frac{1}{\pi}\mathcal{K}_{ n}\left(a\sqrt{1-in\omega}\right)\mathcal{I}_{n}\left(r\sqrt{1-in\omega}\right)e^{in \theta}\right]\] \[=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}\mathcal{K}_{n}\left(a \sqrt{1-in\omega}\right)\mathcal{I}_{n}\left(r\sqrt{1-in\omega}\right)e^{in \theta}, \tag{113}\] for \(r<a\), and \[u_{\text{out}}(r,\theta)\] \[=\frac{1}{2\pi}\mathcal{I}_{0}(a)\mathcal{K}_{0}(r)\] \[\quad\quad+\sum_{n=1}^{\infty}\Re\left[\frac{1}{\pi}\mathcal{I} _{n}\left(a\sqrt{1-in\omega}\right)\mathcal{K}_{n}\left(r\sqrt{1-in\omega} \right)e^{in\theta}\right]\] \[=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}\mathcal{I}_{n}\left(a \sqrt{1-in\omega}\right)\mathcal{K}_{n}\left(r\sqrt{1-in\omega}\right)e^{in \theta}, \tag{114}\] for \(r>a\), which are Eqs. (24) and (25) in the main text. In the calculation, we used the equalities given in the reference [40]. ## Appendix B Derivation of the asymptotic form Here, we derive the asymptotic form of the concentration field far from the source (\(r\gg 1\)) for \(a\ll 1\). Considering the Maclaurin expansion of \(\mathcal{I}_{n}(z)\) is given as [40] \[\mathcal{I}_{n}(z)=\sum_{k=0}^{\infty}\frac{1}{k!\Gamma(n+k+1)}\left(\frac{z}{ 2}\right)^{n+2k}, \tag{115}\] the leading term of \(I_{n}(z)\) is \[\mathcal{I}_{n}(z)=\frac{1}{2^{n}n!}z^{n}. \tag{116}\] Therefore, we only need to consider \(n=0\) and \(n=\pm 1\) as far as we consider the first order of \(a\). Considering that the asymptotic form of \(\mathcal{K}_{n}(z)\) is \[\mathcal{K}_{n}(z)\sim\sqrt{\frac{\pi}{2z}}e^{-z}\sum_{k=0}^{\infty}\frac{ \Gamma(n+k+1/2)}{k!\Gamma(n-k+1/2)}\frac{1}{(2z)^{k}}, \tag{117}\] for \(|\mathrm{arg}\,z|<3\pi/2\)[40]. Here, \(\Gamma(\cdot)\) is the gamma function. 
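The leading-term approximation \(\mathcal{I}_{n}(z)\approx z^{n}/(2^{n}n!)\), which justifies keeping only \(n=0,\pm 1\) to first order in \(a\), can be checked against SciPy for a small complex argument of the form \(a\sqrt{1-in\omega}\) (illustrative values of our own choosing):

```python
import numpy as np
from math import factorial
from scipy.special import iv

z = 0.01 * np.exp(-0.2j)  # small complex argument, as in z = a*sqrt(1 - i*n*omega)
# relative error of the leading Maclaurin term z^n / (2^n n!)
lead_err = max(
    abs(iv(n, z) - z**n / (2**n * factorial(n))) / abs(iv(n, z))
    for n in range(4)
)
```

The relative error is of order \(|z|^{2}/4\), consistent with the next term of the series.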
For \(n=0\) and \(n=1\), we obtain \[\mathcal{K}_{0}(z)=\sqrt{\frac{\pi}{2z}}e^{-z}\left(1-\frac{1}{8z}+\mathcal{O} \left(\frac{1}{z^{2}}\right)\right), \tag{118}\] \[\mathcal{K}_{1}(z)=\mathcal{K}_{-1}(z)=\sqrt{\frac{\pi}{2z}}e^{-z}\left(1+\frac{3} {8z}+\mathcal{O}\left(\frac{1}{z^{2}}\right)\right), \tag{50}\] Then, we only need to consider the terms with \(n=0,\pm 1\). By setting \(\chi=\chi_{1}=-\chi_{-1}\), we obtain \[u_{\text{out}}(r,\theta)\] \[=\frac{1}{2\pi}\left[\mathcal{K}_{0}\left(r\right)+\frac{a}{2} \sqrt{\rho}e^{-i\chi/2}\mathcal{K}_{1}\left(r\sqrt{\rho}e^{-i\chi/2}\right)e^{ i\theta}\right.\] \[\quad\left.+\frac{a}{2}\sqrt{\rho}e^{i\chi/2}\mathcal{K}_{1} \left(r\sqrt{\rho}e^{i\chi/2}\right)e^{-i\theta}\right]+\mathcal{O}(a^{2})\] \[\simeq\frac{1}{2\pi}\left[\sqrt{\frac{\pi}{2r}}e^{-r}\right. \tag{51}\] \[\quad\left.+\frac{a}{2}\sqrt{\rho}e^{-i\chi/2}\frac{\sqrt{\pi}}{ \sqrt{2\pi}\rho^{1/4}e^{-i\chi/4}}e^{-r\sqrt{\rho}e^{-i\chi/2}}e^{i\theta}\right.\] \[\quad\left.+\frac{a}{2}\sqrt{\rho}e^{i\chi/2}\frac{\sqrt{\pi}}{ \sqrt{2\pi}\rho^{1/4}e^{i\chi/4}}e^{-r\sqrt{\rho}e^{i\chi/2}}e^{-i\theta} \right]+\mathcal{O}\left(a^{2}\right)\] \[=\frac{1}{2\sqrt{2\pi r}}e^{-r}\left[1+\frac{a}{2}\rho^{1/4}e^{r( 1-\sqrt{\rho}\cos(\chi/2))}\right.\] \[\quad\left.\times\left(e^{i(\theta-\chi/4+r\sqrt{\rho}\sin(\chi/ 2))}+e^{-i(\theta-\chi/4+r\sqrt{\rho}\sin(\chi/2))}\right)\right]\] \[\quad+\mathcal{O}\left(a^{2}\right)\] \[=\frac{1}{2\sqrt{2\pi r}}e^{-r}+\frac{a}{2\sqrt{2\pi r}}\rho^{1/4 }e^{-r\sqrt{\rho}\cos(\chi/2)}\] \[\quad\left.\times\cos\left(\theta-\frac{\chi}{4}+r\sqrt{\rho}\sin \frac{\chi}{2}\right)+\mathcal{O}\left(a^{2}\right), \tag{52}\] and thus we obtain Eq. (26) in the main text. Here, we only considered the leading terms in Eqs. (50) and (50), and used Eq. (51). In the case that the phase of the rotor is \(\phi_{1}\), we obtain Eq. (27) by replacing \(\theta\) with \(\theta-\phi_{1}\). 
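The two asymptotic expansions of \(\mathcal{K}_{0}\) and \(\mathcal{K}_{1}\) can be compared with SciPy at a moderately large complex argument; the residual is set by the neglected \(\mathcal{O}(z^{-2})\) terms (a numerical sketch):

```python
import numpy as np
from scipy.special import kv

z = 20.0 * np.exp(-0.2j)  # moderately large sample argument, |arg z| < 3*pi/2
pref = np.sqrt(np.pi / (2.0 * z)) * np.exp(-z)
rel_err_K0 = abs(kv(0, z) - pref * (1.0 - 1.0 / (8.0 * z))) / abs(kv(0, z))
rel_err_K1 = abs(kv(1, z) - pref * (1.0 + 3.0 / (8.0 * z))) / abs(kv(1, z))
```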
## Appendix C Calculation on the phase description Here, we show the detailed calculation of the time evolution using the averaging method. From Eqs. (19) and (29), we need to obtain \(\left.-\nabla u\cdot\mathbf{e}(\phi_{2}-\pi/2)\right|_{\mathbf{r}=\mathbf{r}_{2}}\). Thus, we first calculate the gradient of the concentration field in the polar coordinates, \[\nabla u=\frac{\partial u}{\partial r}\mathbf{e}(\theta)+\frac{1}{r}\frac{\partial u}{\partial\theta}\mathbf{e}\left(\theta+\frac{\pi}{2}\right). \tag{53}\] The asymptotic form in Eq. (27) is separated into two parts, \[u(r,\theta,\phi_{1})=u^{(0)}(r)+u^{(1)}(r,\theta,\phi_{1})a+\mathcal{O}(a^{2}), \tag{54}\] where \[u^{(0)}(r)=\frac{1}{2\sqrt{2\pi r}}e^{-r}, \tag{55}\] and \[u^{(1)}(r,\theta,\phi_{1})= \frac{1}{2\sqrt{2\pi r}}\rho^{1/4}e^{-r\sqrt{\rho}\cos(\chi/2)}\] \[\times\cos\left(\theta-\phi_{1}-\frac{\chi}{4}+r\sqrt{\rho}\sin\frac{\chi}{2}\right). \tag{56}\] Considering that \(u^{(0)}\) does not depend on \(\phi_{1}\) or \(\phi_{2}\), \(-\left.\nabla u^{(0)}\cdot\mathbf{e}(\phi_{2}-\pi/2)\right|_{\mathbf{r}=\mathbf{r}_{2}}\) is a function of \(\phi_{2}\) only, not of \(\phi_{1}\). As a result of the averaging, the dependence on \(\phi_{2}\) is averaged out and gives only a constant value. Hence, \(u^{(0)}\) affects the stability of the synchronization mode only secondarily, and we therefore consider the effect of \(u^{(1)}\). First, we calculate the gradient of \(u^{(1)}\) as \[\nabla u^{(1)}= \left[-\frac{1}{2r}u^{(1)}-\sqrt{\rho}u^{(1)}\cos\frac{\chi}{2}-\sqrt{\rho}\hat{u}^{(1)}\sin\frac{\chi}{2}\right]\mathbf{e}(\theta)\] \[-\frac{1}{r}\hat{u}^{(1)}\mathbf{e}\left(\theta+\frac{\pi}{2}\right), \tag{57}\] where we set \[\hat{u}^{(1)}= \frac{1}{2\sqrt{2\pi r}}\rho^{1/4}e^{-\sqrt{\rho}\cos(\chi/2)r}\] \[\times\sin\left[\theta-\phi_{1}-\frac{\chi}{4}+\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)r\right].
\tag{58}\] Hereafter, we separately calculate the two cases, i.e., the case with the same rotation directions and that with the opposite rotation directions. First, we consider the case with the same rotation. We calculate \(\nabla u_{1}\cdot\mathbf{e}(\phi_{2}-\pi/2)\) at \(\mathbf{r}=\mathbf{r}_{2}=L\mathbf{e}_{x}-a\mathbf{e}(\phi_{2})\). Considering that \(r=\sqrt{L^{2}+a^{2}-2La\cos\phi_{2}}\) and \(\tan\theta=-a\sin\phi_{2}/(L-a\cos\phi_{2})\) in the polar coordinates, we obtain \[-\nabla u^{(1)}\cdot\mathbf{e}\left(\phi_{2}-\frac{\pi}{2}\right) \Big{|}_{\mathbf{r}=\mathbf{r}_{2}}\] \[= \left.-\frac{\partial u^{(1)}}{\partial r}\mathbf{e}(\theta)\cdot\mathbf{e }\left(\phi_{2}-\frac{\pi}{2}\right)\right|_{\mathbf{r}=\mathbf{r}_{2}}\] \[-\frac{1}{r}\frac{\partial u^{(1)}}{\partial\theta}\mathbf{e}\left( \theta+\frac{\pi}{2}\right)\cdot\mathbf{e}\left(\phi_{2}-\frac{\pi}{2}\right) \Big{|}_{\mathbf{r}=\mathbf{r}_{2}}\] \[= \left(\frac{1}{2r}+\sqrt{\rho}\cos\frac{\chi}{2}\right)u^{(1)}\sin( \phi_{2}-\theta)\Big{|}_{\mathbf{r}=\mathbf{r}_{2}}\] \[-\left(\frac{1}{r}\cos(\phi_{2}-\theta)-\sqrt{\rho}\sin\frac{\chi}{2 }\sin(\phi_{2}-\theta)\right)\hat{u}^{(1)}\Big{|}_{\mathbf{r}=\mathbf{r}_{2}}. \tag{59}\] By considering the Maclaurin expansion of \(r\) and \(\theta\) with respect to \(a\), we obtain \[r= L+\mathcal{O}(a). \tag{60}\] and \[\theta=\mathcal{O}(a). \tag{61}\] Therefore, we obtain \[\sin(\phi_{2}-\theta)= \sin\phi_{2}+\mathcal{O}(a), \tag{62}\] \[\cos(\phi_{2}-\theta)= \cos\phi_{2}+\mathcal{O}(a). 
\tag{63}\] We also calculate \(u^{(1)}\) and \(\hat{u}^{(1)}\) at \(\mathbf{r}=\mathbf{r}_{2}\) as \[u^{(1)}\Big{|}_{\mathbf{r}=\mathbf{r}_{2}}\] \[=\frac{\rho^{1/4}e^{-\sqrt{\rho}(L-a\cos\phi)\cos(\chi/2)}}{2\sqrt{2 \pi(L-a\cos\phi_{2})}}\cos\left[-\frac{a}{L}\sin\phi_{2}-\phi_{1}-\frac{\chi}{ 4}\right.\] \[\left.\qquad+\sqrt{\rho}(L-a\cos\phi_{2})\sin\left(\frac{\chi}{2} \right)\right]\] \[=\frac{\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{2\sqrt{2\pi L}} \cos\left[-\phi_{1}-\frac{\chi}{4}+\sqrt{\rho}\sin\left(\frac{\chi}{2}\right) L\right]\] \[\quad+\mathcal{O}(a), \tag{101}\] and \[\hat{u}^{(1)}\Big{|}_{\mathbf{r}=\mathbf{r}_{2}}\] \[=\frac{\rho^{1/4}e^{-\sqrt{\rho}(L-a\cos\phi_{2})\cos(\chi/2)}}{ 2\sqrt{2\pi(L-a\cos\phi)}}\sin\left[-\frac{a}{L}\sin\phi_{2}-\phi_{1}-\frac{ \chi}{4}\right.\] \[\left.\qquad+\sqrt{\rho}(L-a\cos\phi_{2})\sin\left(\frac{\chi}{2 }\right)\right]\] \[=\frac{\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{2\sqrt{2\pi L}} \sin\left[-\phi_{1}-\frac{\chi}{4}+\sqrt{\rho}\sin\left(\frac{\chi}{2}\right) L\right]\] \[\quad+\mathcal{O}(a). \tag{102}\] Equation (100) with Eqs. (102)-(102) leads \[-\nabla u^{(1)}\cdot\mathbf{e}\left(\phi_{2}-\frac{\pi}{2}\right) \Big{|}_{\mathbf{r}=\mathbf{r}_{2}}\] \[=\frac{\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{2\sqrt{2\pi L}}\] \[\quad\times\left\{-\frac{1}{2L}\sin\left[-\phi_{2}-\phi_{1}-\frac {\chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\left.\qquad-\frac{1}{2L}\cos\phi_{2}\sin\left[-\phi_{1}-\frac{ \chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\left.\qquad+\sqrt{\rho}\sin\phi_{2}\cos\left[-\phi_{1}-\frac{3 \chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right\}\] \[\quad+\mathcal{O}(a), \tag{103}\] and thus Eqs. 
(29) and (101) give \[\frac{d\phi_{2}}{dt}\simeq\omega+\frac{\Gamma\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{4\eta\sqrt{2\pi L}}\] \[\times\left\{-\frac{1}{2L}\sin\left[\Delta\phi-\frac{\chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\left.\qquad+\sqrt{\rho}\sin\left[\Delta\phi-\frac{3\chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right\}. \tag{104}\] Considering geometric symmetry, we obtain \[\frac{d\phi_{1}}{dt}\simeq\omega+\frac{\Gamma\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{4\eta\sqrt{2\pi L}}\] \[\times\left\{-\frac{1}{2L}\sin\left[-\Delta\phi-\frac{\chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\left.\qquad+\sqrt{\rho}\sin\left[-\Delta\phi-\frac{3\chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right\}. \tag{105}\] From these equations, we obtain the time evolution of \(\Delta\phi=\phi_{2}-\phi_{1}\) as \[\frac{d\Delta\phi}{dt}= \frac{\Gamma\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{2\eta\sqrt{2\pi L}}\sin\Delta\phi\] \[\times\left\{-\frac{1}{2L}\cos\left[-\frac{\chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\left.\qquad+\sqrt{\rho}\cos\left[-\frac{3\chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right\}, \tag{106}\] which corresponds to Eq. (34). In the calculation, we used the identity \[\sin(\Delta\phi+\Xi)-\sin(-\Delta\phi+\Xi)=2\cos\Xi\sin\Delta\phi. \tag{107}\] Next, we consider the case with the opposite rotation directions. In this case, we have to obtain \(-\nabla u\cdot\mathbf{e}(-\phi_{2}+\pi/2)\big{|}_{\mathbf{r}=\mathbf{r}_{2}}\). The polar coordinates corresponding to \(\mathbf{r}_{2}\) change as \(r=\sqrt{L^{2}+a^{2}-2La\cos\phi_{2}}\) and \(\tan\theta=a\sin\phi_{2}/(L-a\cos\phi_{2})\).
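The subtraction of the two averaged phase equations collapses to a common factor \(\sin\Delta\phi\) multiplying a \(\Delta\phi\)-independent coefficient; this reduction can be checked numerically with arbitrary sample parameters (here \(\Xi_{1}\) and \(\Xi_{2}\) stand in for the two constant phase offsets):

```python
import numpy as np

rng = np.random.default_rng(0)
L, sqrt_rho = 3.0, 1.05   # arbitrary sample values
xi1, xi2 = 0.4, -0.9      # stand-ins for -chi/4 + L*sqrt(rho)*sin(chi/2), etc.

def bracket(dphi):
    # bracket of the averaged phase equation (same-rotation case)
    return -np.sin(dphi + xi1) / (2.0 * L) + sqrt_rho * np.sin(dphi + xi2)

dphi = rng.uniform(-np.pi, np.pi, 100)
# d(phi_2)/dt - d(phi_1)/dt, up to the common prefactor
lhs = bracket(dphi) - bracket(-dphi)
rhs = 2.0 * np.sin(dphi) * (-np.cos(xi1) / (2.0 * L) + sqrt_rho * np.cos(xi2))
max_err = np.max(np.abs(lhs - rhs))
```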
Then, we obtain \[-\nabla u^{(1)}\cdot\mathbf{e}\left(-\phi_{2}+\frac{\pi}{2}\right) \Big{|}_{\mathbf{r}=\mathbf{r}_{2}}\] \[=\left.-\frac{\partial u^{(1)}}{\partial r}\mathbf{e}(\theta)\cdot\mathbf{e} \left(-\phi_{2}+\frac{\pi}{2}\right)\right|_{\mathbf{r}=\mathbf{r}_{2}}\] \[\quad-\frac{1}{r}\frac{\partial u^{(1)}}{\partial\theta}\mathbf{e} \left(\theta+\frac{\pi}{2}\right)\cdot\mathbf{e}\left(-\phi_{2}+\frac{\pi}{2} \right)\bigg{|}_{\mathbf{r}=\mathbf{r}_{2}}\] \[=\left.\left(\frac{1}{2r}+\sqrt{\rho}\cos\frac{\chi}{2}\right)u^{( 1)}\sin(\phi_{2}+\theta)\right|_{\mathbf{r}=\mathbf{r}_{2}}\] \[\quad+\left(\frac{1}{r}\cos(\phi_{2}+\theta)+\sqrt{\rho}\sin\frac{ \chi}{2}\sin(\phi_{2}+\theta)\right)\hat{u}^{(1)}\bigg{|}_{\mathbf{r}=\mathbf{r}_{2}}. \tag{108}\] Eqs. (107)-(102) do not change irrespective to the rotation direction within the order of \(\mathcal{O}(1)\), and thus we can adopt the expression in Eqs. (101) and (102). Therefore, we obtain \[-\nabla u^{(1)}\cdot\mathbf{e}\left(-\phi_{2}+\frac{\pi}{2}\right) \Big{|}_{\mathbf{r}=\mathbf{r}_{2}}\] \[=\frac{\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{2\sqrt{2\pi L}}\] \[\quad\times\left\{-\frac{1}{2L}\sin\left[-\phi_{2}-\phi_{1}-\frac{ \chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\left.\qquad+\frac{3}{2L}\cos\phi_{2}\sin\left[-\phi_{1}-\frac{ \chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\left.\qquad+\sqrt{\rho}\sin\phi_{2}\cos\left[-\phi_{1}-\frac{3 \chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right\}\] \[\quad+\mathcal{O}(a), \tag{109}\] Eqs. (28) and (101) give \[\frac{d\phi_{2}}{dt}\simeq\omega+\frac{\Gamma\rho^{1/4}e^{-\sqrt{ \rho}L\cos(\chi/2)}}{4\eta\sqrt{2\pi L}}\] \[\times\left\{\frac{3}{2L}\sin\left[\Delta\phi-\frac{\chi}{4}+L \sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\left.\qquad+\sqrt{\rho}\sin\left[\Delta\phi-\frac{3\chi}{4}+L \sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right\}. 
\tag{110}\] In the same manner as for the same rotation direction, the time-evolution equation is obtained by considering the geometric symmetry as \[\frac{d\phi_{1}}{dt}\simeq\omega+\frac{\Gamma\rho^{1/4}e^{-\sqrt{\rho}L\cos(\chi/2)}}{4\eta\sqrt{2\pi L}}\] \[\times\left\{\frac{3}{2L}\sin\left[-\Delta\phi-\frac{\chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right.\] \[\left.+\sqrt{\rho}\sin\left[-\Delta\phi-\frac{3\chi}{4}+L\sqrt{\rho}\sin\left(\frac{\chi}{2}\right)\right]\right\}. \tag{100}\] From these equations, the time evolution equation for \(\Delta\phi\) is obtained as in Eq. (37). ## Appendix D Phase response function including the time change in angular velocity In this section, we discuss the phase response function considering the time change in the angular velocity. We calculated the dynamics of one rotor (rotor 1) taking into consideration the concentration field of chemicals generated by the other rotor (rotor 2), although the position of the particle composing rotor 2 is approximated by the center of the rotor, i.e., \(\mathbf{r}_{2}(t)=\mathbf{\ell}_{2}\), in order to neglect the dependence on \(\phi_{2}\). In this case, the angular velocity of the first rotor, \(d\phi_{1}/dt\), is no longer constant but depends on the phase of rotor 1, \(\phi_{1}\). Thus, we have to define a new phase \(\varphi_{i}\) by \(d\varphi_{i}/dt=2\pi/T\), where \(T\) is the period. \(\varphi_{i}\) is a monotonically increasing function of \(\phi_{i}\).
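One standard construction of such a uniform phase is \(\varphi(\phi)=(2\pi/T)\int_{0}^{\phi}d\phi^{\prime}/\Omega(\phi^{\prime})\), where \(\Omega(\phi)=d\phi/dt\) and \(T=\int_{0}^{2\pi}d\phi^{\prime}/\Omega(\phi^{\prime})\); a sketch with an arbitrary sample \(\Omega\) (not the one obtained in the simulation):

```python
import numpy as np
from scipy.integrate import quad

omega0, eps = 1.0, 0.3  # sample phase-dependent angular velocity
Omega = lambda phi: omega0 + eps * np.cos(phi)

T, _ = quad(lambda p: 1.0 / Omega(p), 0.0, 2.0 * np.pi)  # rotation period

def varphi(phi):
    # uniform phase: increases at the constant rate 2*pi/T in time
    val, _ = quad(lambda p: 1.0 / Omega(p), 0.0, phi)
    return 2.0 * np.pi * val / T

full_turn = varphi(2.0 * np.pi)        # one full cycle maps to 2*pi
monotone = varphi(2.0) > varphi(1.0)   # monotonically increasing in phi
```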
The functions \(\mathcal{G}_{s}(\Delta\varphi)\) and \(\mathcal{G}_{o}(\Delta\varphi)\) are defined as \[\mathcal{G}_{s}(\Delta\varphi)\] \[=\frac{1}{2\pi\eta a}\int_{0}^{2\pi}\mathbf{e}\left(\varphi_{2}-\frac{\pi}{2}\right)\cdot\mathbf{F}_{u,1,2}(\phi_{2}+\Delta\varphi,\varphi_{2})d\varphi_{2}, \tag{101}\] \[\mathcal{G}_{o}(\Delta\varphi)\] \[=\frac{1}{2\pi\eta a}\int_{0}^{2\pi}\mathbf{e}\left(-\left(\varphi_{2}-\frac{\pi}{2}\right)\right)\cdot\mathbf{F}_{u,1,2}(\phi_{2}+\Delta\varphi,\varphi_{2})d\varphi_{2}. \tag{102}\] Here, we calculated the force \(\mathbf{F}_{u,1,2}\) by taking the rotation of rotor 2 into consideration. Then, the time evolution equation of \(\Delta\varphi\) is expressed as \[\frac{d\Delta\varphi}{dt}=\mathcal{G}_{i}(-\Delta\varphi)-\mathcal{G}_{i}(\Delta\varphi)\equiv\mathcal{H}_{i}(\Delta\varphi), \tag{103}\] where \(i\) denotes \(s\) or \(o\). The parameters for the simulation were the same as in Sec. V. The plots of \(\mathcal{G}_{s}(\Delta\varphi)\), \(\mathcal{G}_{o}(\Delta\varphi)\), \(\mathcal{H}_{s}(\Delta\varphi)\) and \(\mathcal{H}_{o}(\Delta\varphi)\) obtained by the numerical simulation are shown in Fig. 11. As seen in Figs. 11(a) and (b), \(\mathcal{G}_{s}(\Delta\varphi)\) and \(\mathcal{G}_{o}(\Delta\varphi)\) are smaller than those in Figs. 9(a) and (b). This is due to the interaction through the concentration field, and shows that the effect of the concentration field from the other rotor reduces the averaged angular velocity. Despite the difference in \(\mathcal{G}_{s}(\Delta\varphi)\) and \(\mathcal{G}_{o}(\Delta\varphi)\), \(\mathcal{H}_{s}(\Delta\varphi)\) and \(\mathcal{H}_{o}(\Delta\varphi)\) in Figs. 11(c) and (d) are almost the same as those in Figs. 9(c) and (d).
In the same manner as in the previous paragraph, we consider the Fourier expansion of \(\mathcal{H}_{s}(\Delta\varphi)\) and \(\mathcal{H}_{o}(\Delta\varphi)\) as \[\mathcal{H}_{i}(\Delta\varphi)=\sum_{k=1}^{\infty}\hat{\mathcal{H}}_{i}^{(k)}\sin k\Delta\varphi, \tag{104}\] where \(i\) denotes \(s\) or \(o\). In Fig. 12, \(\hat{\mathcal{H}}_{s}^{(1)}\) and \(\hat{\mathcal{H}}_{o}^{(1)}\) are plotted against \(L\); they are almost the same as those in Figs. 7 and 10. This indicates that the decrease in the averaged angular velocity does not affect the stable synchronization mode, but that the interaction between the rotor position and the time-dependent concentration field, which is shown in Figs. 8(c) and (d), plays an important role.
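The coefficient \(\hat{\mathcal{H}}_{i}^{(1)}\) is the first sine-Fourier coefficient, \(\hat{\mathcal{H}}_{i}^{(1)}=(1/\pi)\int_{0}^{2\pi}\mathcal{H}_{i}(\Delta\varphi)\sin\Delta\varphi\,d\Delta\varphi\); a minimal extraction sketch with a synthetic odd coupling function of our own choosing:

```python
import numpy as np

# synthetic odd coupling function with known sine coefficients
h1, h2 = -0.7, 0.2
H = lambda x: h1 * np.sin(x) + h2 * np.sin(2.0 * x)

N = 4096
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
# first sine-Fourier coefficient via the (spectrally accurate) rectangle rule
H1_hat = np.sum(H(x) * np.sin(x)) * (2.0 * np.pi / N) / np.pi
```

Near \(\Delta\varphi=0\) one has \(d\Delta\varphi/dt\approx\hat{\mathcal{H}}_{i}^{(1)}\Delta\varphi\), so a negative first coefficient corresponds to stable in-phase synchronization.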
2307.16020
Global planar dynamics with a star node and contracting nonlinearity
This is a complete study of the dynamics of polynomial planar vector fields whose linear part is a multiple of the identity and whose nonlinear part is a contracting homogeneous polynomial. The contracting nonlinearity provides the existence of an invariant circle and allows us to obtain a classification through a complete invariant for the dynamics, extending previous work by other authors that was mainly concerned with the existence and number of limit cycles. The general results are also applied to some classes of examples: definite nonlinearities, $\ZZ_2\oplus\ZZ_2$ symmetric systems and nonlinearities of degree 3, for which we provide complete sets of phase portraits.
Begoña Alarcón, Sofia B. S. D. Castro, Isabel S. Labouriau
2023-07-29T16:25:09Z
http://arxiv.org/abs/2307.16020v2
# Global planar dynamics with a star node and contracting nonlinearity ###### Abstract. This is a complete study of the dynamics of polynomial planar vector fields whose linear part is a multiple of the identity and whose nonlinear part is a contracting homogeneous polynomial. The contracting nonlinearity provides the existence of an invariant circle and allows us to obtain a classification through a complete invariant for the dynamics, extending previous work by other authors that was mainly concerned with the existence and number of limit cycles. The general results are also applied to some classes of examples: definite nonlinearities, \(\mathbf{Z}_{2}\oplus\mathbf{Z}_{2}\) symmetric systems and nonlinearities of degree \(3\), for which we provide complete sets of phase portraits. The first author was partially supported by the Spanish Research Project MINECO-18-MTM2017-87697-P. The last two authors were partially supported by CMUP, member of LASI, which is financed by national funds through FCT -- Fundacao para a Ciencia e a Tecnologia, I.P. (Portugal) under the projects with reference UIDB/00144/2020 and UIDP/00144/2020. Cima and Llibre in [7] define bounded vector fields in the plane and provide a classification of their behaviour at infinity. Since vector fields with contracting nonlinearities are bounded in their sense, our results complement theirs by extending the classification globally. The classification is then used to address some classes of examples. We start with definite nonlinearities, that have been addressed by Gasull _et al_[12]. When the nonlinear part of the vector field is a contracting cubic, we are able to provide the full list of global phase portraits by making use of the results in Cima and Llibre [8]. If the vector field is additionally \(\mathbf{Z}_{2}\oplus\mathbf{Z}_{2}\)-equivariant, we provide a complete description of the global planar dynamics, including the study of stability and bifurcation of equilibria. 
### Structure of the article In the next section we establish some notation and state some results that will be used. A normal form for planar contracting vector fields and sufficient conditions for a planar vector field to be contracting are obtained in Section 3. Dynamics is discussed in Section 4 for the restriction to the invariant circle and globally in Section 5, where we also obtain a complete invariant for the dynamics and from it a complete classification of this type of vector fields. This is used in the remainder of the article to obtain a complete description of some families of examples: definite nonlinearities in Section 6; cubic nonlinearities in Section 7; \(\mathbf{Z}_{2}\oplus\mathbf{Z}_{2}\)-equivariant nonlinearities as special cases in Subsections 4.1 and 7.1. ## 2. Preliminary results and notation In this article we are concerned with the differential equation \[\left\{\begin{array}{rcl}\dot{x}&=&\lambda x+Q_{1}(x,y)\\ \dot{y}&=&\lambda y+Q_{2}(x,y)\end{array}\right.\qquad\text{with}\qquad \lambda>0 \tag{1}\] where the \(Q_{i}\), \(i=1,2\) are homogeneous non-zero polynomials of the same degree \(n>1\) and \((x,y)\in\mathbf{R}^{2}\). We define \(Q=(Q_{1},Q_{2})\) and say it is a homogeneous polynomial of degree \(n\). The origin of such a system is an _unstable star node_, a node with equal and positive eigenvalues. For \(\lambda<0\) the origin is an attracting star node and the dynamics corresponds to the equation with \(Q\) replaced by \(-Q\) and reversed time orientation. We recall some elementary notions in (equivariant) dynamical systems. The standard reference is the book [14]. We say that the dynamical system described by an ordinary differential equation \(\dot{X}=f(X)\), \(X\in\mathbf{R}^{n}\) is _equivariant_ under the action of a compact Lie group \(\Gamma\) if \[f(\gamma.X)=\gamma.f(X)\] for all \(X\in\mathbf{R}^{n}\) and \(\gamma\in\Gamma\).
An _equilibrium_ of \(\dot{X}=f(X)\) is a solution of \(f(X)=0\); the form of (1) implies that at least the origin is an equilibrium. A _limit cycle_ is an isolated periodic orbit. A _polycycle_ is the cyclic union of finitely many equilibria and trajectories connecting them. Let \(\langle,\rangle\) denote the inner product and \(||.||\) the norm in \(\mathbf{R}^{2}\), and let \(P^{d}(\mathbf{R}^{2},\mathbf{R}^{2})\) be the vector space of homogeneous polynomial maps of degree \(d\) from \(\mathbf{R}^{2}\) to itself. Denote by \(P^{d+1}(\mathbf{R}^{2},\mathbf{R})\) the vector space of homogeneous polynomial maps of degree \(d+1\) from \(\mathbf{R}^{2}\) to \(\mathbf{R}\) and let \(X\in\mathbf{R}^{2}\). Consider the linear map: \[\mathcal{M}:P^{d}(\mathbf{R}^{2},\mathbf{R}^{2})\longrightarrow P^{d+1}(\mathbf{R}^{2},\mathbf{R})\qquad\mathcal{M}Q(X)=\langle X,Q(X)\rangle. \tag{2}\] A polynomial \(Q\in P^{d}(\mathbf{R}^{2},\mathbf{R}^{2})\), \(d>1\) is said to be _contracting_ if \[\mathcal{M}Q(X)<0,\quad\text{for all}\quad||X||=1.\] It follows that polynomials of even degree are never contracting. It is also useful to recall that, since the polynomial is homogeneous, stating that the inequality in the definition of contracting holds on the unit sphere is equivalent to saying that it holds for any non-zero vector. We will also need the linear map \(\mathcal{L}:P^{2p+1}(\mathbf{R}^{2},\mathbf{R}^{2})\longrightarrow P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\), given by \[\mathcal{L}Q(X)=\langle X^{\perp},Q(X)\rangle\qquad\text{for}\qquad X=(x,y)\qquad\text{and}\qquad X^{\perp}=(-y,x). \tag{3}\] For ease of reference we state next a two-dimensional version of the Invariant Sphere Theorem [11, Theorem 5.1], which we will use extensively. **Theorem 2.1** (The Invariant Sphere Theorem).: _Let \(p\geq 1\) and suppose that \(Q\in P^{2p+1}(\mathbf{R}^{2},\mathbf{R}^{2})\) is contracting.
Then, for every \(\lambda>0\), there exists a unique topological circle \(S(\lambda)\subset\mathbf{R}^{2}\setminus\{0\}\) which is invariant by the flow of (1). Further,_ * \(S(\lambda)\) _is globally attracting in the sense that every trajectory_ \(x(t)\) _of (_1_) with nonzero initial condition is asymptotic to_ \(S(\lambda)\) _as_ \(t\to+\infty\)_._ * \(S(\lambda)\) _is embedded as a topological submanifold of_ \(\mathbf{R}^{2}\) _and the bounded component of_ \(\mathbf{R}^{2}\setminus S(\lambda)\) _contains the origin._ * _The flow of (_1_) restricted to_ \(S(\lambda)\) _is topologically equivalent to the flow of the phase equation_ \(\dot{\theta}=g(\theta)\) _where_ \(g(\theta)=\mathcal{L}Q(\cos\theta,\sin\theta)\)_._ The odd degree of the nonlinear part \(Q\) in the statement of Theorem 2.1 implies that the vector field is \(\mathbf{Z}_{2}\)-equivariant, where \(\mathbf{Z}_{2}\) is generated by \(-Id\). We will use the representation of (1) in polar coordinates \((x,y)=(r\cos\theta,r\sin\theta)\), with \((r,\theta)\in\mathbf{R}^{+}\times\mathcal{S}^{1}\). This is given by \[\left\{\begin{array}{l}\dot{r}=\lambda r+f(\theta)r^{2p+1}\\ \dot{\theta}=g(\theta)r^{2p}\end{array}\right.\qquad\text{with}\qquad\begin{array}{l}f(\theta)=\mathcal{M}Q(\cos\theta,\sin\theta)\\ g(\theta)=\mathcal{L}Q(\cos\theta,\sin\theta)\end{array} \tag{4}\] where \(\mathcal{L}\) and \(\mathcal{M}\) are the linear maps defined in (2) and (3). Let \(\mathcal{C}^{2p+1}\subset P^{2p+1}(\mathbf{R}^{2},\mathbf{R}^{2})\) denote the set of contracting polynomial vector fields. Our aim is to describe the global dynamics of (1) for \(Q\in\mathcal{C}^{2p+1}\), \(p\geq 1\), including the behaviour at infinity using the Poincare disc, a compactification of \(\mathbf{R}^{2}\) (see Chapter 5 of Dumortier _et al._ [10]). The plane \(\mathbf{R}^{2}\) is identified to a compact disc, with its boundary corresponding to infinity.
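The maps \(\mathcal{M}\) and \(\mathcal{L}\), and hence the functions \(f\) and \(g\) of (4), can be evaluated directly. The sketch below (ours, with a hypothetical cubic \(Q\) as an assumed example) tabulates both on a grid and checks the contracting condition \(f(\theta)<0\):

```python
import numpy as np

def MQ(Q, X):
    # M Q(X) = <X, Q(X)>, the map (2); f(theta) = MQ on the unit circle.
    return float(np.dot(X, Q(X)))

def LQ(Q, X):
    # L Q(X) = <X_perp, Q(X)>, the map (3); g(theta) = LQ on the unit circle.
    x, y = X
    return float(np.dot(np.array([-y, x]), Q(X)))

def Q(X):
    # Hypothetical cubic: Q(x, y) = (-x^3 - x y^2 + y^3, -y^3 - x^2 y).
    x, y = X
    return np.array([-x**3 - x * y**2 + y**3, -y**3 - x**2 * y])

theta = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
circle = [np.array([np.cos(t), np.sin(t)]) for t in theta]
f = np.array([MQ(Q, X) for X in circle])  # here f(theta) = -1 + cos(theta) sin^3(theta)
g = np.array([LQ(Q, X) for X in circle])  # here g(theta) = -sin^4(theta)

assert f.max() < 0  # Q is contracting on this grid
```

For this \(Q\) the phase function \(g(\theta)=-\sin^{4}\theta\) vanishes only at \(\theta=0,\pi\), and to even order, so the dynamics on the invariant circle has exactly two saddle-node equilibria (see Section 4).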
The disc is also identified to a hemisphere in the unit sphere \(\mathcal{S}^{2}\subset\mathbf{R}^{3}\), covered by six charts \(U_{i},V_{i}\), \(i=1,2,3\). In the coordinates \((u,v)\) on any of the charts, \(v=0\) corresponds to the equator \(\mathcal{S}^{1}\) of the sphere, the _circle at infinity_ of the Poincare disc. A point with coordinates \((u,v)\), \(u\neq 0\) in \(U_{1}\) corresponds to the point with coordinates \((\tilde{u},\tilde{v})=(1/u,v/u)\) in \(U_{2}\) and to the point with coordinates \((\hat{u},\hat{v})=(u/v,1/v)\) in \(U_{3}\). The dynamics of (1) in the charts \(U_{1}\) and \(U_{2}\) is given, respectively, by \[\left\{\begin{array}{lll}\dot{u}&=&Q_{2}(1,u)-uQ_{1}(1,u)\\ \\ \dot{v}&=&-\lambda v^{2p+1}-vQ_{1}(1,u)\end{array}\right.\quad\text{and}\qquad \left\{\begin{array}{lll}\dot{u}&=&Q_{1}(u,1)-uQ_{2}(u,1)\\ \\ \dot{v}&=&-\lambda v^{2p+1}-vQ_{2}(u,1)\end{array}\right. \tag{5}\] and the expression on the chart \(U_{3}\) is just (1) computed at \((x,y)=(u,v)\). The expressions of the Poincare compactification in the three remaining charts \(V_{j}\) are the same as in \(U_{j}\). The dynamics at infinity of (1) is thus given by the restriction of each one of the expressions in (5) to the flow-invariant line \((u,0)\), since the second equation is trivially satisfied for \(v=0\). An equilibrium at infinity of (1) is an equilibrium \((u,0)\in\mathcal{S}^{1}\) of one of the two equations. We refer to it as an _infinite equilibrium_, by opposition to _finite equilibria_ \((u,v)\), \(v\neq 0\). The dynamics of the restriction of (5) to the circle at infinity \((u,0)\) does not depend on the linear part of (1). Hence it is equivalent to the dynamics of the phase equation \(\dot{\theta}=g(\theta)\) in (4).

## 3. Contracting polynomial vector fields in dimension \(2\)

The results in this section describe the homogeneous polynomial planar vector fields and provide conditions for ensuring these are contracting.
In this way we obtain a description of vector fields (1) to which Theorem 2.1 applies. **Proposition 3.1**.: _Any homogeneous polynomial vector field \(Q(x,y)=\left(Q_{1}(x,y),Q_{2}(x,y)\right)\) in \(\mathbf{R}^{2}\) of degree \(2p+1\) may be written in the form_ \[Q(x,y)=p_{1}(x^{2},y^{2})\left(x,0\right)+p_{2}(x^{2},y^{2})\left(0,y\right)+ p_{3}(x^{2},y^{2})\left(y,0\right)+p_{4}(x^{2},y^{2})\left(0,x\right) \tag{6}\] _where \(p_{j}(u,v)\), \(j=1,\ldots,4\) are homogeneous polynomials of degree \(p\)._ Proof.: Each vector monomial occurring in \(Q\) has the form \(ax^{k}y^{\ell}e_{j}\) where \(k+\ell=2p+1\), hence in each case one of \(k,\ell\) is odd and the other is even. Then \(xp_{1}(x^{2},y^{2})\) is the sum of the vector monomials in \(Q_{1}\) with odd \(k\), and \(yp_{3}(x^{2},y^{2})\) is the sum of those with odd \(\ell\). Similarly, \(yp_{2}(x^{2},y^{2})\) is the sum of the vector monomials in \(Q_{2}\) with odd \(\ell\), and \(xp_{4}(x^{2},y^{2})\) is the sum of those with odd \(k\). We call \(p_{1}(x^{2},y^{2})\left(x,0\right)+p_{2}(x^{2},y^{2})\left(0,y\right)\) the _symmetric part_ of \(Q\) and \(p_{3}(x^{2},y^{2})\left(y,0\right)+p_{4}(x^{2},y^{2})\left(0,x\right)\) the _asymmetric part_ of \(Q\). We write \(Q_{s}(x,y)\) for the symmetric part of \(Q\) and note that it is \(\mathbf{Z}_{2}\oplus\mathbf{Z}_{2}\)-equivariant, where \(\mathbf{Z}_{2}\oplus\mathbf{Z}_{2}\) is the group generated by the maps \((x,y)\mapsto(-x,y)\) and \((x,y)\mapsto(x,-y)\). **Proposition 3.2**.: _A homogeneous polynomial vector field \(Q\) of degree \(2p+1\) in \(\mathbf{R}^{2}\) is contracting if for the polynomials in (6), we have for all \((u,v)\neq(0,0)\) with \(u\geq 0\), \(v\geq 0\), that one of the \(p_{j}(u,v)<0\), \(j=1,2\) and_ \[2\max_{j=1,2}\left\{p_{j}(u,v)\right\}<-\left|p_{3}(u,v)+p_{4}(u,v)\right|.\] Note that if \(p_{1}(u,v)<0\), then the second condition implies \(p_{2}(u,v)<0\) and vice-versa. 
Proof.: We have \(\mathcal{M}Q(x,y)=(x,y)\cdot A(x^{2},y^{2})\cdot(x,y)^{T}\) where \(A\) is the symmetric matrix \[A(u,v)=\left(\begin{array}{cc}p_{1}(u,v)&\frac{p_{3}(u,v)+p_{4}(u,v)}{2}\\ \\ \frac{p_{3}(u,v)+p_{4}(u,v)}{2}&p_{2}(u,v)\end{array}\right).\] The polynomial \(Q\) is contracting if the quadratic form \((x,y)\cdot A(u,v)\cdot(x,y)^{T}\) is negative definite for each \((u,v)\). This holds if and only if both eigenvalues of \(A(u,v)\) are negative. By Gershgorin's Theorem [9, Section 2.7.3] the eigenvalues of \(A\) lie in the union of the closed intervals with centre at \(p_{j}(u,v)\), \(j=1,2\) and radius \(\left|p_{3}(u,v)+p_{4}(u,v)\right|/2\). The inequality implies that both these intervals are contained in the negative half line. **Proposition 3.3**.: _A homogeneous polynomial vector field \(Q\) of degree \(2p+1\) in \(\mathbf{R}^{2}\) is contracting if for the polynomials in (6), for all \((u,v)\neq(0,0)\) with \(u\geq 0\), \(v\geq 0\), one of the \(p_{j}(u,v)<0\), \(j=1,2\) and_ \[4p_{1}(u,v)p_{2}(u,v)>(p_{3}(u,v)+p_{4}(u,v))^{2}. \tag{7}\] Proof.: The eigenvalues of the matrix \(A\) of the proof of Proposition 3.2 are negative if and only if \(\operatorname{Tr}A(u,v)<0\) and \(\det A(u,v)>0\). If (7) holds, then the hypothesis on the sign of one \(p_{j}(u,v)\), \(j=1,2\) implies that both \(p_{j}(u,v)<0\), \(j=1,2\) and hence that \(\operatorname{Tr}A=p_{1}(u,v)+p_{2}(u,v)<0\). The result follows from \(\det A=p_{1}(u,v)p_{2}(u,v)-\left(p_{3}(u,v)+p_{4}(u,v)\right)^{2}/4\). The conditions in Propositions 3.2 and 3.3 are not necessary. A simple example is the symmetric vector field with \(p_{1}(x,y)=y^{2}-x^{2}\), \(p_{2}(x,y)=-2x^{2}-y^{2}\), \(p_{3}(x,y)=p_{4}(x,y)=0\), for which \(\mathcal{M}Q(x,y)=-(x^{4}+y^{4}+x^{2}y^{2})<0\) for \((x,y)\neq(0,0)\), but \(p_{1}(x,y)=0\) for \(x=\pm y\).
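Both sufficient conditions can be tested by sampling the quarter circle \((u,v)=(\cos^{2}t,\sin^{2}t)\). The sketch below (our own illustration) does this for the symmetric counterexample just given, confirming that it is contracting although the hypotheses of Propositions 3.2 and 3.3 fail:

```python
import numpy as np

ts = np.linspace(0.0, np.pi / 2, 500)

def quarter_circle(ts):
    for t in ts:
        yield np.cos(t) ** 2, np.sin(t) ** 2

def check_prop_3_2(p1, p2, p3, p4):
    # Sufficient condition of Proposition 3.2 (Gershgorin bound); the condition
    # 2*max(p1, p2) < -|p3 + p4| already forces both p1 < 0 and p2 < 0.
    return all(2 * max(p1(u, v), p2(u, v)) < -abs(p3(u, v) + p4(u, v))
               for u, v in quarter_circle(ts))

def check_prop_3_3(p1, p2, p3, p4):
    # Sufficient condition of Proposition 3.3 (trace and determinant of A).
    return all(min(p1(u, v), p2(u, v)) < 0
               and 4 * p1(u, v) * p2(u, v) > (p3(u, v) + p4(u, v)) ** 2
               for u, v in quarter_circle(ts))

# The symmetric counterexample of the text: p1 = v - u, p2 = -2u - v, p3 = p4 = 0.
p1, p2 = lambda u, v: v - u, lambda u, v: -2 * u - v
p3 = p4 = lambda u, v: 0.0

assert not check_prop_3_2(p1, p2, p3, p4)  # p1 > 0 for v > u: hypothesis fails
assert not check_prop_3_3(p1, p2, p3, p4)
# Yet M Q = u*p1 + v*p2 = -(u**2 + u*v + v**2) < 0: the field is contracting.
assert max(u * p1(u, v) + v * p2(u, v) for u, v in quarter_circle(ts)) < 0
```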
**Corollary 3.4**.: _If \(Q\) is a polynomial vector field satisfying the hypothesis of either Proposition 3.2 or 3.3 then its symmetric part \(Q_{s}\) is also contracting._ Proof.: We have \(Q_{s}(x,y)=p_{1}(x^{2},y^{2})\left(x,0\right)+p_{2}(x^{2},y^{2})\left(0,y\right)\). By the hypothesis of either proposition, both \(p_{1}(x^{2},y^{2})\) and \(p_{2}(x^{2},y^{2})\) are negative. Since in this case \(\mathcal{M}Q_{s}(x,y)=(x,y)\cdot D\cdot(x,y)^{T}\) where \[D=\left(\begin{array}{cc}p_{1}(x^{2},y^{2})&0\\ 0&p_{2}(x^{2},y^{2})\end{array}\right)\] the definition of a contracting polynomial is satisfied.

## 4. Dynamics on the invariant circle

The hypothesis of contracting homogeneous nonlinearities in the vector field given by (1) allows us to apply Theorem 2.1, guaranteeing the existence of a globally attracting invariant circle. Observe that, from the expression in polar coordinates (4), the homogeneous polynomial vector field \(Q\) is contracting if and only if \(f(\theta)<0\) for all \(\theta\). The form of the phase vector field on the invariant circle \(\mathcal{S}^{1}\subset\mathbf{R}^{2}\) in Theorem 2.1 is \(\dot{\theta}=g(\theta)=\mathcal{L}Q(\cos\theta,\sin\theta)\). It determines the same dynamics as the expression (4) for \(\dot{\theta}\) in polar coordinates, since they differ by the positive factor \(r^{2p}\). It follows that the dynamics on the invariant circle coincides with the dynamics on the circle at infinity. We explore this in the following results, starting with three lemmas that are immediate. These results are strongly related to [1] and [3]. **Lemma 4.1**.: _Assume that \(Q\) in (1) is contracting and homogeneous. The invariant circle that exists for the dynamics of (1) is an attracting limit cycle if and only if \(g(\theta)\neq 0\) for all \(\theta\in[0,2\pi)\). 
Moreover, in this case the invariant circle is the curve \(\mathcal{L}Q(x,y)=0\) and another periodic orbit exists at infinity._ Proof.: There are no equilibria on the invariant circle and on the circle at infinity since the phase equation has no zeros, hence both circles are limit cycles. The form of the invariant circle follows from the invariance of this curve established in [3, Theorem 1 (a)], see also Figure 1 (a). **Lemma 4.2**.: _Assume that \(Q\) in (1) is contracting and homogeneous. The invariant circle that exists for the dynamics of (1) is an attracting polycycle if and only if \(g(\theta)=0\) for a finite, nonzero number of \(\theta\in[0,2\pi)\). Moreover, in this case another polycycle exists at infinity._ Proof.: Both the invariant circle and the circle at infinity contain equilibria, hence they must be polycycles as in Figure 1 (b). **Lemma 4.3**.: _Assume that \(Q\) in (1) is contracting and homogeneous of degree \(2p+1\). The invariant circle that exists for the dynamics of (1) is a continuum of equilibria if and only if \(g(\theta)=0\) for all \(\theta\in[0,2\pi)\). Moreover, in this case the invariant circle is the graph of the map \(r(\theta)=\sqrt[2p]{-\lambda/f(\theta)}\) and the circle at infinity is also a continuum of equilibria._ Proof.: The phase equation being identically zero, both the invariant circle and the circle at infinity consist of equilibria. In polar coordinates, finite equilibria must also satisfy \(\dot{r}=0\) and this provides the equation for the invariant circle. The phase portrait is shown in Figure 1 (c). **Proposition 4.4**.: _Consider (1) with \(Q(x,y)\in\mathcal{C}^{2p+1}\) a contracting polynomial vector field in the form given by (6). Then:_ 1. _If_ \(p_{3}(u,v)p_{4}(u,v)<0\) _and_ \(-4p_{3}(u,v)p_{4}(u,v)>(p_{2}(u,v)-p_{1}(u,v))^{2}\) _the invariant circle is a limit cycle._ 2. _If_ \(p_{3}(0,1)p_{4}(1,0)\geq 0\) _the invariant circle is either a polycycle with at most_ \(4(p+1)\) _equilibria or a continuum of equilibria. 
Moreover, if_ \(p_{3}(0,1)p_{4}(1,0)>0\) _then the invariant circle is a polycycle._ 3. _If_ \(p_{3}(u,v)\equiv p_{4}(u,v)\equiv 0\) _and_ \(p_{1}(u,v)\equiv p_{2}(u,v)\)_, the invariant circle is a continuum of equilibria._ Note that in case (c) the equations are \(\mathbf{Z}_{2}\oplus\mathbf{Z}_{2}\)-equivariant; this property will be explored further in Subsections 4.1 and 7.1. We illustrate in Figure 1 the possibilities described in Proposition 4.4. Proof.: Using (6) we can write \(\mathcal{L}Q(x,y)=(x,y)\cdot B(x^{2},y^{2})\cdot(x,y)^{T}\) where \[B(x^{2},y^{2})=\begin{pmatrix}p_{4}(x^{2},y^{2})&\left(p_{2}(x^{2},y^{2})-p_{ 1}(x^{2},y^{2})\right)/2\\ \left(p_{2}(x^{2},y^{2})-p_{1}(x^{2},y^{2})\right)/2&-p_{3}(x^{2},y^{2})\end{pmatrix}.\] If \(p_{3}(u,v)p_{4}(u,v)<0\) and \(-4p_{3}(u,v)p_{4}(u,v)>(p_{2}(u,v)-p_{1}(u,v))^{2}\) then \(\det B>0\). Hence if \(p_{4}(u,v)>0\) then \(g(\theta)=\mathcal{L}Q(\cos\theta,\sin\theta)>0\) for all \(\theta\), with \(g(\theta)<0\) provided \(p_{4}(u,v)<0\). In both cases \(g(\theta)\neq 0\) for all \(\theta\in[0,2\pi)\) and item (a) holds by Lemma 4.1. If either \(p_{3}(0,1)=0\) or \(p_{4}(1,0)=0\) then trivially \(g(\theta)\) vanishes on one of the axes and one of Lemmas 4.2 and 4.3 holds. Assume now \(p_{3}(0,1)p_{4}(1,0)>0\). Since \(g(0)=p_{4}(1,0)\) and \(g(\frac{\pi}{2})=-p_{3}(0,1)\), then \(g(\theta)\not\equiv 0\) and thus the invariant circle is not a continuum of equilibria. Moreover, in this case \(g(0)\) and \(g(\frac{\pi}{2})\) have opposite signs, so \(g(\theta)\) changes sign in the interval \((0,\pi/2)\). Therefore, since \(g\) is continuous, there must be at least one \(\theta^{*}\in(0,\pi/2)\) for which \(g(\theta^{*})=0\). Hence Lemma 4.2 applies and (1) has a polycycle, establishing (b). Item (c) is an immediate consequence of Lemma 4.3. The next example illustrates a situation not accounted for by Proposition 4.4. Figure 1. Global planar dynamics with star nodes as described in Proposition 4.4 and Lemmas 4.1 – 4.3. 
**Example 4.5**.: The family of vector fields studied by Boukoucha [5] is such that \(p_{3}(0,1)p_{4}(1,0)<0\) and a limit cycle exists. When \(n=1\) in [5], we obtain \[p_{1}(x^{2},y^{2})=-\beta ax^{2}-(\beta a+\alpha b)y^{2} \text{ and } p_{3}(x^{2},y^{2})=(\alpha a+\beta b)x^{2}+\alpha ay^{2}\] \[p_{2}(x^{2},y^{2})=(\alpha b-\beta a)x^{2}-\beta ay^{2} \text{ and } p_{4}(x^{2},y^{2})=(\beta b-\alpha a)y^{2}-\alpha ax^{2}\] for real constants \(\alpha,\beta,a,b\). Then \(p_{3}(0,1)p_{4}(1,0)=-\alpha^{2}a^{2}<0\). If \(b=0\), then \(p_{1}\equiv p_{2}\) and \(p_{3}+p_{4}\equiv 0\). Hence if \(\beta a>0\) then \(Q\) is contracting by Proposition 3.2. Moreover, \(p_{3}p_{4}(u,v)=-\alpha^{2}a^{2}(u+v)^{2}<0\) and this example satisfies the conditions in Proposition 4.4 (a). Another choice of parameters for which \(Q\) is contracting is \(\alpha=0\), \(\beta a>0\) and \(4a^{2}-b^{2}>0\), this time by Proposition 3.3. In this case \(p_{3}(x,y)=b\beta x^{2}\), \(p_{4}(x,y)=b\beta y^{2}\) and \(p_{1}(x,y)=p_{2}(x,y)=-\beta a(x^{2}+y^{2})\). Therefore with this choice of parameters and if \(b\beta\neq 0\) the example does not satisfy the conditions in Proposition 4.4 (a) and yet the invariant circle is a limit cycle. **Corollary 4.6**.: _If \(Q(x,y)\) is a contracting polynomial vector field for which (1) has a finite number of equilibria then:_ 1. _if all the equilibria of (_1_) are hyperbolic, then the number of equilibria away from the origin is a multiple of 4 and they alternate as sinks and saddles;_ 2. _all the equilibria of (_1_) away from the origin are either sinks or saddles (possibly non-hyperbolic) or saddle-nodes;_ 3. 
_the equilibria that are sinks and saddles appear at alternating positions in the polycycle._ **Example 4.7**.: The following vector field illustrates the global dynamics given in Proposition 4.4 (b) when the nonlinearity is of degree \(2p+1=5\) (see Figure 2): \[\left\{\begin{array}{rcl}\dot{x}&=&\lambda x-x(x^{4}+x^{2}y^{2}+y^{4})-y(x^{4}-x^{2}y^{2})\\ \dot{y}&=&\lambda y-y(x^{4}+x^{2}y^{2}+y^{4})-x(y^{4}-x^{2}y^{2})\end{array}\right.\] It follows by Proposition 3.3 that the nonlinear part \(Q(x,y)\) of this example is contracting because, for all \((x,y)\neq(0,0)\) \[p_{1}(x^{2},y^{2})=p_{2}(x^{2},y^{2})=-(x^{4}+x^{2}y^{2}+y^{4})<0\] and \[4p_{1}(x^{2},y^{2})^{2}-(p_{3}(x^{2},y^{2})+p_{4}(x^{2},y^{2}))^{2}=3(x^{4}+y^{4})(x^{4}+4x^{2}y^{2}+y^{4})>0.\] Then \[g(\theta)=2\cos^{2}\theta\sin^{2}\theta\cos{(2\theta)}=\frac{1}{2}\sin^{2}{(2 \theta)}\cos{(2\theta)}\] and \[g^{\prime}(\theta)=\sin{(2\theta)}[2\cos^{2}{(2\theta)}-\sin^{2}{(2\theta)}].\] Hence, the four infinite equilibria on the axes are of saddle-node type and \(\theta=\frac{\pi}{4},\frac{3\pi}{4},\frac{5\pi}{4},\frac{7\pi}{4}\) are attractor, saddle, attractor and saddle, respectively. Moreover, \(p_{3}(0,1)=p_{4}(1,0)=0\) and the system has a polycycle with the total number of equilibria away from the origin equal to \(8\).

Figure 2. Phase portrait of Example 4.7

### Special case: \(\mathbf{Z}_{2}\oplus\mathbf{Z}_{2}\)-equivariant nonlinearity

If the vector field (1) has \(\mathbf{Z}_{2}\oplus\mathbf{Z}_{2}\) symmetry then \(Q\) has the form \(Q(x,y)=p_{1}(x^{2},y^{2})\left(x,0\right)+p_{2}(x^{2},y^{2})\left(0,y\right)\) and we may say more about the dynamics on the invariant circle. 
In this case if \(Q\) has degree \(2p+1\) we may write \[p_{j}(x^{2},y^{2})=\sum_{k=0}^{p}a_{jk}(x^{2})^{p-k}(y^{2})^{k}\qquad j=1,2.\] **Lemma 4.8** (Infinitely many equilibria).: _Let \(Q\) be a \(\mathbf{Z}_{2}\oplus\mathbf{Z}_{2}\)-equivariant contracting homogeneous polynomial vector field and suppose \(\lambda>0\). Then the invariant circle of (1) consists entirely of non-hyperbolic equilibria if and only if \(p_{1}(x^{2},y^{2})\equiv p_{2}(x^{2},y^{2})\)._ Proof.: If \(p_{1}(x^{2},y^{2})\equiv p_{2}(x^{2},y^{2})\) then all points in the curve \(\lambda=-p_{1}(x^{2},y^{2})\) are equilibria. Conversely, all the points in the invariant circle are equilibria if and only if \(\mathcal{L}Q(x,y)=xy\left(p_{2}(x^{2},y^{2})-p_{1}(x^{2},y^{2})\right)\equiv 0\). The equilibria are not hyperbolic since they form a continuum. When there are finitely many equilibria we use the polar form (4) and write \[g(\theta)=\mathcal{L}Q(\cos{\theta},\sin{\theta})=\frac{1}{2}\sin(2\theta) \left[p_{2}\left(\frac{1+\zeta}{2},\frac{1-\zeta}{2}\right)-p_{1}\left(\frac{1 +\zeta}{2},\frac{1-\zeta}{2}\right)\right]=\frac{1}{2}\sin(2\theta)q(\zeta)\] where \(\zeta=\cos(2\theta)\). Denote \((\xi_{1},\xi_{2})=\left(\frac{1+\zeta}{2},\frac{1-\zeta}{2}\right)\). Then \[g^{\prime}(\theta) = \cos(2\theta)\left[p_{2}\left(\xi_{1},\xi_{2}\right)-p_{1}\left( \xi_{1},\xi_{2}\right)\right]+\frac{1}{2}\sin^{2}(2\theta)\left[\frac{dp_{2}}{d\xi_{2}}\left(\xi_{ 1},\xi_{2}\right)-\frac{dp_{1}}{d\xi_{2}}\left(\xi_{1},\xi_{2}\right)+\frac{dp _{1}}{d\xi_{1}}\left(\xi_{1},\xi_{2}\right)-\frac{dp_{2}}{d\xi_{1}}\left(\xi_ {1},\xi_{2}\right)\right].\] **Corollary 4.9** (Equilibria on the axes).: _If \(a_{10}-a_{20}\neq 0\) then \(\theta=0\) and \(\theta=\pi\) are hyperbolic equilibria; otherwise they are non-hyperbolic. 
If \(a_{1p}-a_{2p}\neq 0\) then \(\theta=\pi/2\) and \(\theta=3\pi/2\) are hyperbolic equilibria; otherwise they are non-hyperbolic._ Proof.: On the horizontal axis \(g^{\prime}(\theta)=p_{2}(1,0)-p_{1}(1,0)=a_{20}-a_{10}\). On the vertical axis \(g^{\prime}(\theta)=p_{1}(0,1)-p_{2}(0,1)=a_{1p}-a_{2p}\). **Corollary 4.10** (Equilibria outside the axes).: _Equilibria with \(\theta\neq n\pi/2\), \(n\in\mathbf{Z}\), are hyperbolic if and only if \(\frac{dp_{1}}{d\theta}\neq\frac{dp_{2}}{d\theta}\)._ Proof.: In this case \(\cos(2\theta)\neq 0\), \(\sin(2\theta)\neq 0\) and \(p_{1}=p_{2}\). Hence \(g^{\prime}(\theta)\neq 0\) if and only if \[\frac{dp_{1}}{d\theta}-\frac{dp_{2}}{d\theta}=\sin(2\theta)\left[\frac{dp_{1}}{d\xi_{2}}-\frac{dp_{2}}{d\xi_{2}}+\frac{dp_{2}}{d\xi_{1}}-\frac{dp_{1}}{d\xi_{1}}\right]\neq 0.\]

## 5. Global dynamics and classification

Next we focus on the different possibilities for the dynamics of (1) when the nonlinear part is a contracting homogeneous polynomial. We classify the possible dynamical behaviour, up to a global planar homeomorphism that maps trajectories to trajectories, preserving the time orientation in each trajectory, plus a global rescaling of time. This induces an equivalence relation on the set \(\mathcal{C}^{2p+1}\) of contracting homogeneous polynomial vector fields in \(\mathbf{R}^{2}\) of degree \(2p+1\). Given \(Q_{a},Q_{b}\in\mathcal{C}^{2p+1}\) we indicate this equivalence relation as \(Q_{a}\sim Q_{b}\). Since the set of negative definite polynomials is an open half cone in \(P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\) then its inverse image \(\mathcal{C}^{2p+1}\subset P^{2p+1}(\mathbf{R}^{2},\mathbf{R}^{2})\) under the linear map \(\mathcal{M}\) defined in (2) is also an open half cone in \(P^{2p+1}(\mathbf{R}^{2},\mathbf{R}^{2})\). The next result shows that \(\mathcal{L}\left(\mathcal{C}^{2p+1}\right)=P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\) where \(\mathcal{L}\) is the linear map defined in (3) that generates the phase vector field. 
**Theorem 5.1**.: _Given a homogeneous polynomial \(q(x,y)\in P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\) of degree \(2(p+1)\) there is a contracting homogeneous polynomial vector field \(Q(x,y)\in\mathcal{C}^{2p+1}\) for which \(\mathcal{L}Q(x,y)=q(x,y)\)._ Proof.: Write \(q(x,y)=x^{2}b_{1}(x^{2},y^{2})+xyb_{2}(x^{2},y^{2})+y^{2}b_{3}(x^{2},y^{2})\) where \(b_{j}(u,v)\) are homogeneous of degree \(p\). Let \(Q\) be the vector field of the form (6) in Proposition 3.1 where, for some \(K>0\) to be determined, the \(p_{j}\) are \[p_{1}(u,v)=-K(u^{p}+v^{p})\quad p_{2}(u,v)=b_{2}(u,v)+p_{1}(u,v)\quad p_{3}(u, v)=-b_{3}(u,v)\quad p_{4}(u,v)=b_{1}(u,v).\] Then \(\mathcal{L}Q(x,y)=x^{2}p_{4}(x^{2},y^{2})+xy\left[p_{2}(x^{2},y^{2})-p_{1}(x^ {2},y^{2})\right]-y^{2}p_{3}(x^{2},y^{2})=q(x,y)\). We want to choose \(K\) so that the \(p_{j}\) satisfy the conditions of Proposition 3.3. Since \(K>0\) then \[\max_{t\in[0,\pi/2]}p_{1}(\cos t,\sin t)=-2^{1-p/2}K<0,\] hence \(p_{1}(u,v)<0\) for \(u\geq 0\), \(v\geq 0\), \((u,v)\neq(0,0)\). It remains to find \(K>0\) such that (7) holds for all \((u,v)\) with \(u\geq 0\), \(v\geq 0\), i.e., such that for \((u,v)=(x^{2},y^{2})\) we have: \[4K^{2}(u^{p}+v^{p})^{2}-4K(u^{p}+v^{p})b_{2}(u,v)-[b_{1}(u,v)-b_{3}(u,v)]^{2}> 0\quad\forall u\geq 0,v\geq 0.\] Since \(D(u,v)=b_{2}^{2}(u,v)+(b_{1}(u,v)-b_{3}(u,v))^{2}\geq 0\), then if we find \(K\) such that \(2p_{1}\leq-b_{2}-\sqrt{D}\) for \(u\geq 0\), \(v\geq 0\), \((u,v)\neq(0,0)\) it will follow that \(Q\) is contracting. Let \(M\) satisfy \(M\leq(-b_{2}-\sqrt{D})/2\) for \((u,v)=(\cos^{2}t,\sin^{2}t)\) with \(0\leq t\leq\pi/2\). Since \(p_{1}\) and the \(b_{j}\) are homogeneous of the same degree then by taking \(-K\leq M/2^{1-p/2}\) the result is proved. We establish in this section that the global dynamics of (1) for \(Q\in\mathcal{C}^{2p+1}\) is completely determined by \(g(\theta)=\mathcal{L}Q(\cos\theta,\sin\theta)\). 
This feature allows us to have a complete classification of vector fields in \(\mathcal{C}^{2p+1}\) from the point of view of the dynamics of (1), by describing the equivalence relation induced by \(\sim\) in the set \(P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\). The natural classification in \(P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\) is to allow linear changes of coordinates and multiplication by a non-zero constant, which we will take to be always positive in order to preserve stability, as discussed below. This classification has good properties with respect to the topology induced in \(P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\) by identifying the coefficients in the polynomials to points in \(\mathbf{R}^{2p+3}\). In particular, it creates a Whitney stratification of \(P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\). It also translates well to \(\mathcal{C}^{2p+1}\), respecting the dynamics in the invariant circle, as the next simple result shows. **Lemma 5.2**.: _If \(L:\mathbf{R}^{2}\longrightarrow\mathbf{R}^{2}\) is an invertible linear map and \(\mathcal{G}(X)=\mathcal{L}Q(X)\) then the change of coordinates \(L\widetilde{X}=X\) transforms (1) into an equation with \(\mathcal{L}\widetilde{Q}\left(\widetilde{X}\right)=\dfrac{1}{\det L} \mathcal{G}\left(L\widetilde{X}\right)\)._ Proof.: The linear part \(\lambda X\) of equation (1) commutes with every linear map of \(\mathbf{R}^{2}\). Therefore, the change of coordinates transforms \(\dot{X}=\lambda X+Q(X)\) into \(\dot{\widetilde{X}}=\lambda\widetilde{X}+L^{-1}Q\left(L\widetilde{X}\right)\). 
Writing \(X^{\perp}=\left(PX^{T}\right)^{T}\) where \(P=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\) we get \[\mathcal{L}\left(L^{-1}QL\right)\left(\widetilde{X}\right) =\left\langle P\widetilde{X},L^{-1}Q\left(L\widetilde{X}\right) \right\rangle=\left\langle\left(L^{-1}\right)^{T}P\widetilde{X},Q\left(L \widetilde{X}\right)\right\rangle\] \[=\frac{1}{\det L}\left\langle PL\widetilde{X},Q\left(L \widetilde{X}\right)\right\rangle=\frac{1}{\det L}\mathcal{G}\left(L \widetilde{X}\right)\] since by Cramer's rule \(\left(L^{-1}\right)^{T}P=\frac{1}{\det L}\,PL\). Under the equivalence induced by \(\sim\), the classification in \(P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\) under linear changes of coordinates gives rise to moduli: parametrised families of polynomials that share the same geometry. For instance in Cima & Llibre's [8] classification of \(\mathcal{P}^{1}\), that we use in Section 7 below, the families (I), (II) and (III) all contain a parameter \(\mu\) that does not have a qualitative meaning for the dynamics. The moduli arise from the position of the roots of the polynomial \(\mathcal{L}Q\) in the projective space \(\mathbf{RP}^{1}\), since a linear map on the plane is determined by its value at two points, so a linear change of coordinates only controls the position of two roots. Therefore, \(\sim\) induces a coarser equivalence relation in \(P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\), since a homeomorphism would not have this restriction. This is addressed in the next definition. **Definition 5.3**.: _The symbol sequence \(\sigma(\mathcal{G})\) associated to \(\mathcal{G}\in P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\) is a cyclic oriented list of the form \(\sigma(\mathcal{G})=(j_{1}s_{1}),(j_{2}s_{2}),\ldots,(j_{\ell}s_{\ell})\) where \(j_{i}\in\{1,2\}\) and \(s_{i}=\pm\) obtained from the ordered set of zeros \(0\leq\theta_{1}<\theta_{2}<\cdots<\theta_{\ell}<\pi\) of \(g(\theta)=\mathcal{G}(\cos\theta,\sin\theta)\) as follows (see also Figure 3):_ 1. 
\(j_{i}=1\) _if the multiplicity of_ \(\theta_{i}\) _is odd and_ \(j_{i}=2\) _if it is even;_ 2. _if_ \(j_{i}=1\) _then_ \(s_{i}=+\) _if_ \(g(\theta)\) _is increasing around_ \(\theta_{i}\) _and_ \(s_{i}=-\) _if_ \(g\) _is decreasing;_ 3. _if_ \(j_{i}=2\) _then_ \(s_{i}=+\) _if_ \(\theta_{i}\) _is a local minimum of_ \(g(\theta)\) _and_ \(s_{i}=-\) _if_ \(\theta_{i}\) _is a local maximum;_ 4. _if_ \(g(\theta)\neq 0\) _for all_ \(\theta\in[0,\pi)\) _then_ \(\sigma(\mathcal{G})=\varnothing\)_;_ 5. _if_ \(\mathcal{G}(x,y)\equiv 0\) _then_ \(\sigma(\mathcal{G})=\infty\)_._ _For \(\sigma=(j_{1}s_{1}),(j_{2}s_{2}),\ldots,(j_{\ell}s_{\ell})\), the backward sequence is \(\bar{\sigma}=(j_{\ell}\tilde{s}_{\ell}),(j_{\ell-1}\tilde{s}_{\ell-1}),\ldots,( j_{2}\tilde{s}_{2}),(j_{1}\tilde{s}_{1})\), where \(\tilde{s}_{i}=-s_{i}\) if \(j_{i}=1\) and \(\tilde{s}_{i}=s_{i}\) if \(j_{i}=2\). We identify \(\sigma\) and \(\bar{\sigma}\), and indicate this by \(\sigma\equiv\bar{\sigma}\)._ For instance, the symbol sequence for \(\mathcal{G}_{1}(x,y)=x^{3}y^{2}(x-y)\) is \(\sigma(\mathcal{G}_{1})=(2+)(1-)(1+)\) corresponding to \(\theta_{1}=0\) (double), \(\theta_{2}=\pi/4\) (simple) \(\theta_{3}=\pi/2\) (triple). For \(\mathcal{G}_{2}(x,y)=-x^{2}y^{3}(-x+y)=-\mathcal{G}_{1}(y,x)\) the symbol sequence is \(\sigma(\mathcal{G}_{2})=(1-)(1+)(2+)\) corresponding to \(\theta_{1}=0\) (triple), \(\theta_{2}=\pi/4\) (simple) \(\theta_{3}=\pi/2\) (double). Since the sequences are cyclic, they coincide. Moreover, in this example \(\sigma(\mathcal{G}_{2})=\overline{\sigma(\mathcal{G}_{1})}\). The sequence \(\bar{\sigma}\) does not always coincide with \(\sigma\). 
An example is \[\sigma=(2+),(1-),(2-),(1+),(1-),(1+)\quad\text{with}\quad\bar{\sigma}=(1-),(1+),(1-),(2-),(1+),(2+)\] where \(\sigma=\sigma(\mathcal{G}_{1})\) for \[\mathcal{G}_{1}(x,y)=(a_{1}x-y)(a_{2}x-y)^{2}(a_{3}x-y)(a_{4}x-y)(a_{5}x-y)y^{2}\] with \(a_{i}=\tan(i\pi/12)\) and \(\bar{\sigma}=\sigma(\mathcal{G}_{2})\) for \(\mathcal{G}_{2}(x,y)=-\mathcal{G}_{1}(x,-y)\). **Lemma 5.4**.: _The symbol sequence \(\sigma(\mathcal{G})=(j_{1}s_{1}),(j_{2}s_{2}),\ldots,(j_{\ell}s_{\ell})\) of \(\mathcal{G}\in P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\) always satisfies the following restrictions:_ 1. \(\sum_{i=1}^{\ell}j_{i}=0\pmod{2}\) _;_ 2. \((1+)\) _and_ \((1-)\) _occur in alternating sequences of sign + and -, even when the sequence is interrupted by one or more symbols_ \((2\pm)\)_;_ 3. _If_ \(s_{i}=+\) _then_ \((j_{i+1},s_{i+1})\in\{(1-),(2+)\}\)_. If_ \(s_{i}=-\) _then_ \((j_{i+1},s_{i+1})\in\{(1+),(2-)\}\)_._ _Moreover, if \(\sigma\) satisfies these restrictions then \(\bar{\sigma}\) also satisfies the same restrictions._ Proof.: Since the degree of \(\mathcal{G}\) is even, restriction (a) follows. The other two restrictions can be seen immediately from Figure 3. The restriction (b) corresponds to assertion (c) in Corollary 4.6. Heteroclinic cycles occur for those \(Q\) such that \(\sigma(\mathcal{L}Q)\) only contains one of the symbols \((2\pm)\). **Proposition 5.5**.: _The symbol sequence \(\sigma(\mathcal{G})\), under the identification \(\equiv\), is invariant under linear changes of coordinates in \(P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\)._ Proof.: Suppose \(L:\mathbf{R}^{2}\longrightarrow\mathbf{R}^{2}\) is an invertible linear map and let \(\mathcal{G}_{1}(x,y)=\dfrac{1}{\det L}\mathcal{G}_{2}\circ L(x,y)\) with \(g_{j}(\theta)=\mathcal{G}_{j}(\cos\theta,\sin\theta)\), \(j=1,2\). Then \(L\) maps the roots of \(\mathcal{G}_{1}\) in \(\mathcal{S}^{1}\) into the roots of \(\mathcal{G}_{2}\) with the same multiplicity. 
Also there is a bijection \(\varphi:\mathcal{S}^{1}\longrightarrow\mathcal{S}^{1}\) such that \(g_{2}(\varphi(\theta))=\dfrac{1}{\det L}g_{1}(\theta)\). If \(L\) preserves orientation, i.e. \(\det L>0\), then the roots of \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) occur in the same order in \(\mathcal{S}^{1}\). The map \(\varphi\) is monotonically increasing, hence \(\sigma(\mathcal{G}_{2})=\sigma(\mathcal{G}_{1})\). If \(L\) reverses orientation, i.e. \(\det L<0\), then the roots of \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) occur in the opposite order in \(\mathcal{S}^{1}\). In this case the function \(\varphi(\theta)\) is monotonically decreasing. Hence, if \(g_{1}\) is a monotonically increasing (respectively, decreasing) function of \(\theta\in[\theta_{a},\theta_{b}]\) then \(g_{2}\) is also a monotonically increasing (respectively, decreasing) function of \(\tilde{\theta}=\varphi(\theta)\) for \(\tilde{\theta}\in[\varphi(\theta_{b}),\varphi(\theta_{a})]\). Therefore \(\sigma(\mathcal{G}_{2})=\overline{\sigma(\mathcal{G}_{1})}\equiv\sigma( \mathcal{G}_{1})\). In order to deal with the full equivalence relation \(\sim\) in \(\mathcal{C}^{2p+1}\) we use results of Neumann and O'Brien [17] for which we need to establish some terminology. Let \(\mathbb{D}\) be the Poincare disc and let \(\phi\) be the flow of (1). Identifying each trajectory of (1) to a point we obtain the _cell complex_ \(K(\phi)=\mathbb{D}/\phi\), with projection \(\pi:\mathbb{D}\longrightarrow K(\phi)\) and some additional structure, as follows: Figure 3. Dynamics around the equilibria in the invariant circle and codes for the cyclic sequence. Codes (1+) and (1-) refer to repelling and attracting equilibria, respectively, codes (2+) and (2-) correspond to the two possible orientations around a saddle-node. 1. cells of dimension 1 correspond to _canonical regions_: open sets \(A\subset\mathbb{D}\), homeomorphic to \(\mathbf{R}^{2}\) where the flow is equivalent to \(\dot{x}=1\), \(\dot{y}=0\); 2. 
cells \(c\) of dimension 0 correspond to equilibria and separatrices of the flow and are initially classified by the dimension of the fibre \(\pi^{-1}(c)\); 3. a partial order \(<\) is defined on \(K(\phi)\) as follows: separatrices in the boundary of canonical regions have the order induced by the flow; if \(p\) is an equilibrium and \(q\) is a point in a separatrix then if \(p\in\alpha(q)\) then \(\pi(p)<\pi(q)\), if \(p\in\omega(q)\) then \(\pi(q)<\pi(p)\), otherwise \(\pi(q)\) and \(\pi(p)\) are not related. Examples are shown in Figures 4 and 5. **Theorem 5.6**.: _The symbol sequence \(\sigma(\mathcal{L}Q)\), under the identification \(\equiv\), is a complete invariant for the equivalence relation \(\sim\) in \(\mathcal{C}^{2p+1}\)._ Proof.: Let \(\mathcal{G}=\mathcal{L}(Q)\) and \(g(\theta)=\mathcal{G}(\cos\theta,\sin\theta)\). First suppose \(g(\theta)\equiv 0\) or equivalently \(\sigma(\mathcal{G})=\infty\). In this case, as in Lemma 4.3, all points in the invariant circle and in the circle at infinity Figure 4. (a) Phase portrait of (1) when \(g(\theta)\equiv 0\). (b) Lattice for the partial order \(<\) on \(K(\phi)\) for this case. Arrows \(x\to y\) indicate \(\pi(x)<\pi(y)\). Cells \(\pi(A_{1})\) and \(\pi(A_{2})\) are 1-dimensional, for \(A_{1}\) (respectively \(A_{2}\)) they are the projection of trajectories starting at the origin (respectively \(c_{2}\)) and ending at \(c_{1}\). All the other cells are points. Figure 5. (a) Phase portrait of (1) on a sector. (b) Lattice for the partial order \(<\) on \(K(\phi)\) for the same sector: arrows \(x\to y\) indicate \(\pi(x)<\pi(y)\). Cells \(\pi(A_{1})\) and \(\pi(A_{2})\) are 1-dimensional, for \(A_{1}\) (respectively \(A_{2}\)) they are the projection of trajectories starting at the origin (respectively \(p_{2}\)) and ending at \(q_{1}\). All the other cells are points. are equilibria. 
Apart from the origin all other trajectories are contained in rays, as in Figure 1 (c), hence all \(Q\) for which \(\mathcal{L}(Q)\) has this symbol sequence are equivalent. The other simple case is \(g(\theta)\neq 0\) for all \(\theta\), as in Lemma 4.1, or equivalently \(\sigma(\mathcal{G})=\varnothing\). The invariant circle and the circle at infinity are closed trajectories and the invariant circle attracts all finite trajectories not starting at the origin, by Theorem 2.1. Apart from the origin all other trajectories are spirals, as in Figure 1 (a). The cell complex consists of two 1-dimensional cells, two separatrices (the closed trajectories) giving rise to 0-dimensional cells with 1-dimensional fibre, and the equilibrium at the origin yielding a 0-dimensional cell with 0-dimensional fibre, with the order shown in Figure 4. Suppose now that \(g(\theta)=0\) at a finite, nonzero number of points, as in Lemma 4.2. If \(g(\theta_{1})=0\) then, from equation (4) in polar coordinates, it follows that the ray given by \(\{(r,\theta_{1})\ :\ r\geq 0\}\) is flow-invariant. Therefore two consecutive zeros \(\theta_{1}<\theta_{2}\) of \(g\) define a flow-invariant _sector_ \[\left\{(r,\theta)\ :\ r\geq 0,\ \theta_{1}\leq\theta\leq\theta_{2}\right\}.\] If \(\theta_{1}<\theta_{2}<\theta_{3}\) are consecutive zeros of \(g(\theta)\), we say the sector determined by \(\theta_{2}\) and \(\theta_{3}\) comes _after_ the sector determined by \(\theta_{1}\) and \(\theta_{2}\). The dynamics of (1) in each sector is the same, as shown in Figure 5 (a), up to a reflection on a line through the origin, since the interior of the sector contains no equilibria and the invariant circle is globally attracting by Theorem 2.1.
Hence the part of the cell complex corresponding to the sector is always the same: two 1-dimensional cells, six separatrices giving rise to 0-dimensional cells with 1-dimensional fibre, five equilibria yielding 0-dimensional cells with 0-dimensional fibre, with the order shown in Figure 5 (b). The global cell complex is a concatenation of those obtained from the sectors, depending on the stability within the invariant circle of the points denoted \(p_{1}\) and \(q_{1}\) in Figure 5 (a). In order to construct it, we start with the sector determined by \(\theta_{1}\) and \(\theta_{2}\). The point \(q_{1}\) is an attractor if and only if it determines a \((1-)\) in \(\sigma(\mathcal{G})\). Then the dynamics, and hence the cell complex, in the sector coming after this one is a reflection of that of Figure 5 on the line containing the ray from the origin to \(q_{1}\). The other possibility is that \(q_{1}\) is a saddle-node with symbol \((2+)\) in \(\sigma(\mathcal{G})\), and hence the sector coming after and its cell complex are copies of the first sector and its cell complex. From the reasoning above it is clear that for \(\mathcal{G}_{1}=\mathcal{L}(Q_{1})\), \(\mathcal{G}_{2}=\mathcal{L}(Q_{2})\), we have \(\sigma(\mathcal{G}_{1})\equiv\sigma(\mathcal{G}_{2})\) if and only if they correspond to dynamics on \(\mathbb{D}\) with isomorphic cell complexes. From [17, Theorem 2'], two continuous flows on the plane with finitely many separatrices are topologically equivalent if and only if they have isomorphic cell complexes. It follows that \(Q_{1}\sim Q_{2}\) if and only if \(\sigma(\mathcal{G}_{1})\equiv\sigma(\mathcal{G}_{2})\). Thus, the global dynamics of (1) for \(Q\in\mathcal{C}^{2p+1}\) is completely determined by the dynamics on the invariant circle, or equivalently, by the dynamics on the circle at infinity of the Poincare disc. 
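As an aside, the cyclic coding used above can be reproduced numerically. The sketch below is our own illustration (the function name, grid size and tolerance are choices made here, not taken from the paper): it samples \(g(\theta)=\mathcal{G}(\cos\theta,\sin\theta)\) on a uniform grid and returns one representative of the cyclic symbol sequence, coding sign changes of \(g\) as \((1\pm)\) and sign-preserving tangencies as \((2\pm)\). Being grid-based, it can miss tangential roots lying strictly between samples; also, since an even-degree form satisfies \(g(\theta+\pi)=g(\theta)\), the full-circle scan lists the half-circle sequence twice.

```python
import math

def symbol_sequence(g, samples=4000, tol=1e-9):
    """One representative of the cyclic symbol sequence of dtheta/dt = g(theta).

    Roots where g changes sign - to + are coded '1+' (repelling), + to - are
    coded '1-' (attracting); sign-preserving tangencies are coded '2+'/'2-'
    (saddle-nodes, by the sign of g nearby).  Returns [] when g never
    vanishes and 'infinity' when g vanishes identically on the grid."""
    signs = []
    for i in range(samples):
        v = g(2 * math.pi * i / samples)
        signs.append(0 if abs(v) <= tol else (1 if v > 0 else -1))
    if all(s == 0 for s in signs):
        return 'infinity'
    # Start the cyclic scan inside a nonzero run.
    start = next(i for i, s in enumerate(signs) if s != 0)
    signs = signs[start:] + signs[:start]
    # Run-length compression; a zero run marks a root separating two
    # nonzero runs (roots of a nonzero polynomial g are isolated).
    runs, sep = [], False        # runs: list of (sign, preceded_by_zero_run)
    for s in signs:
        if s == 0:
            sep = True
        elif runs and runs[-1][0] == s and not sep:
            continue
        else:
            runs.append((s, sep))
            sep = False
    wrap_sep = sep               # zero run wrapping back to runs[0]
    if len(runs) == 1 and not wrap_sep:
        return []                # g has constant sign: no equilibria on S^1
    symbols = []
    for i in range(len(runs)):
        s_prev = runs[i][0]
        if i + 1 < len(runs):
            s_next, zsep = runs[i + 1]
        else:
            s_next, zsep = runs[0][0], wrap_sep
        if s_prev > 0 > s_next:
            symbols.append('1-')
        elif s_prev < 0 < s_next:
            symbols.append('1+')
        elif zsep:               # same sign on both sides of a root
            symbols.append('2+' if s_prev > 0 else '2-')
    return symbols
```

For instance, \(\mathcal{G}(x,y)=x^{4}-y^{4}\) gives \(g(\theta)=\cos 2\theta\) and an alternating sequence of simple roots, while \(\mathcal{G}(x,y)=y^{2}(6x^{2}+y^{2})\) gives only sign-preserving tangencies.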
When \(Q\) is contracting the dynamics of (1) only depends on the polynomial \(\mathcal{L}Q\), in sharp contrast with the general (not contracting) case, where the dynamics also depends on \(\mathcal{M}Q\), as described in [1]. The invariant may now be used to decompose \(\mathcal{C}^{2p+1}\) under \(\sim\) into the following sets:

* \(\Sigma_{0}\) is the set of \(Q\in\mathcal{C}^{2p+1}\) such that \(\sigma(\mathcal{L}(Q))\) does not contain the symbols \((2\pm)\) and \(\sigma(\mathcal{L}(Q))\neq\infty\);
* \(\Sigma_{j}\) for \(j=1,2,\ldots,p+1\) is the set of \(Q\in\mathcal{C}^{2p+1}\) such that \(\sigma(\mathcal{L}(Q))\) contains exactly \(j\) occurrences of the symbols \((2\pm)\);
* \(\Sigma_{p+2}\) is the set of \(Q\in\mathcal{C}^{2p+1}\) such that \(\sigma(\mathcal{L}(Q))=\infty\).

The next result describes the geometry of these sets. In particular, it follows that generically \(Q\in\Sigma_{0}\).

**Theorem 5.7**.: _The sets \(\Sigma_{j}\subset\mathcal{C}^{2p+1}\) satisfy:_

* \(\Sigma_{0}\) _is the union of an open and dense subset of_ \(\mathcal{C}^{2p+1}\) _with a set of codimension 2 in_ \(\mathcal{C}^{2p+1}\)_;_
* _each_ \(\Sigma_{j}\)_,_ \(j=1,2,\ldots,p+1\)_, is the union of a subset of codimension_ \(j\) _of_ \(\mathcal{C}^{2p+1}\) _and a set of codimension_ \(2j+2\) _in_ \(\mathcal{C}^{2p+1}\)_;_
* \(\Sigma_{p+2}=\mathcal{C}^{2p+1}\cap\ker\mathcal{L}\) _and has codimension_ \(2p+3\) _in_ \(\mathcal{C}^{2p+1}\)_._

Proof.: The main argument in the proof is that for \(A\subset P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\) we have \(\operatorname{cod}\left(\mathcal{L}^{-1}(A)\cap\mathcal{C}^{2p+1}\right)=\operatorname{cod}(A)\). This is true because \(\mathcal{C}^{2p+1}\) is an open subset of \(P^{2p+1}(\mathbf{R}^{2},\mathbf{R}^{2})\) and \(\mathcal{L}(\mathcal{C}^{2p+1})=P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\) by Theorem 5.1.
The set \(\mathcal{O}_{0}\) of polynomials that only have simple roots in \(\mathbf{RP}^{1}\) is open and dense in \(P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\). Since \(\mathcal{L}\) is a continuous and open map, \(\mathcal{L}^{-1}(\mathcal{O}_{0})\subset\Sigma_{0}\) is open and dense in \(\mathcal{C}^{2p+1}\). The complement \(\Sigma_{0}\backslash\mathcal{L}^{-1}(\mathcal{O}_{0})\) consists of those \(Q\) such that \(\mathcal{L}Q\) has at least one root of multiplicity at least 3 in \(\mathbf{RP}^{1}\), and this latter set is the union of sets of codimension \(\geq 2\). This establishes (a). Similarly, the set \(\mathcal{O}_{j}\), \(j=1,2,\ldots,p+1\), of polynomials with simple roots in \(\mathbf{RP}^{1}\), except for exactly \(j\) roots of multiplicity 2, satisfies \(\operatorname{cod}\mathcal{O}_{j}=j\) in \(P^{2p+2}(\mathbf{R}^{2},\mathbf{R})\) and \(\mathcal{L}^{-1}(\mathcal{O}_{j})\subset\Sigma_{j}\). The complement \(\Sigma_{j}\backslash\mathcal{L}^{-1}(\mathcal{O}_{j})\) consists of those \(Q\) such that either one of the roots of \(\mathcal{L}Q\) in \(\mathbf{RP}^{1}\) that corresponds to a symbol \((1\pm)\) has multiplicity at least 3, or one of the roots corresponding to a symbol \((2\pm)\) has multiplicity at least 4, establishing (b). Finally, \(\Sigma_{p+2}=\ker\mathcal{L}\cap\mathcal{C}^{2p+1}\) and hence \(\operatorname{cod}\Sigma_{p+2}=\dim P^{2p+2}(\mathbf{R}^{2},\mathbf{R})=2p+3\), so (c) holds.

The partition \(\mathcal{C}^{2p+1}=\bigcup_{j=0}^{p+2}\Sigma_{j}\) is not a stratification of \(\mathcal{C}^{2p+1}\). For instance, polynomials with \(\sigma(\mathcal{G})=(2+)(2+)\) and two different roots of multiplicity 2 may accumulate on a polynomial with a single root of multiplicity 4 for which \(\sigma=(2+)\). Therefore, the closure of \(\Sigma_{2}\), a set of codimension 2, contains points of \(\Sigma_{1}\), which has lower codimension.

## 6. A class of examples -- definite nonlinearities

We consider the family of planar vector fields given in [12]

\[X(v)=Av+\varphi(v)Bv \tag{8}\]

where \(A=\begin{pmatrix}\lambda&0\\ 0&\lambda\end{pmatrix}\), \(\lambda>0\), \(B=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\) is a \(2\times 2\) matrix and \(\varphi:\mathbf{R}^{2}\to\mathbf{R}\) is a homogeneous polynomial of even degree \(2n\geq 2\). The polynomial \(\varphi\) is said to be positive (negative) definite if \(\varphi(v)>0\) (\(\varphi(v)<0\)) for all \(v\neq(0,0)\). Hence, the polynomial \(Q(x,y)=\varphi(x,y)B\begin{pmatrix}x\\ y\end{pmatrix}\) is contracting provided that \(\varphi(v)\) is positive (negative) definite and \(B\) is a negative (positive) definite matrix, in the sense that \((x,y)B(x,y)^{T}\) is a negative (positive) definite binary form. In that case we say that \(\varphi\) and \(B\) are _of opposite sign_.

**Proposition 6.1**.: _Suppose \(\varphi\) and \(B\) in (8) are of opposite sign. Then the qualitative phase-portrait of (8) (up to orientation of the orbits) is of one of the types given in Figure 6._

Proof.: Since \(\mathcal{L}Q(x,y)=\varphi(x,y)(cx^{2}+(d-a)xy-by^{2})\) and \(\varphi\) is definite, the phase-portrait at infinity is given by the second order binary form \(\psi(x,y)=cx^{2}+(d-a)xy-by^{2}\) (up to the orientation of the orbits). According to [8, Theorem 1.3] and the proof of Theorem 7.2 below, the following vector fields give all possible dynamics of the vector field (8) (up to orientation of the orbits):

(I) \(\left\{\begin{array}{rcl}\dot{x}&=&\lambda x+\varphi(x,y)(ax-\alpha y)\\ \dot{y}&=&\lambda y+\varphi(x,y)(\alpha x+ay),\quad\alpha=\pm 1\end{array}\right.\)

(II) \(\left\{\begin{array}{rcl}\dot{x}&=&\lambda x+\varphi(x,y)(ax+y)\\ \dot{y}&=&\lambda y+\varphi(x,y)(x+ay),\quad a^{2}-1>0\end{array}\right.\)

(III) \(\left\{\begin{array}{rcl}\dot{x}&=&\lambda x+a\varphi(x,y)x\\ \dot{y}&=&\lambda y+\varphi(x,y)(\alpha x+ay),\quad\alpha=\pm 1,\quad 4a^{2}-1>0\end{array}\right.\)

(IV)
\(\left\{\begin{array}{rcl}\dot{x}&=&\lambda x+a\varphi(x,y)x\\ \dot{y}&=&\lambda y+a\varphi(x,y)y,\quad a\neq 0\end{array}\right.\)

If \(a<0\) (\(a>0\)) in the normal forms above, then \(B\) is negative (positive) definite. So, if \(a\varphi(x,y)<0\) the vector field \(Q(x,y)=\varphi(x,y)B(x,y)^{T}\) is contracting and by Theorem 2.1 there exists a globally attracting circle. The dynamics on the circle is given by \(\dot{\theta}=\mathcal{L}Q(\cos\theta,\sin\theta)=g(\theta)\) and coincides with the dynamics on the circle at infinity. The expressions for \(g(\theta)\) for (I)-(IV) are given in Table 1, hence the phase-portraits are those in Figure 6.

Observe that the family (8) realizes all possibilities given by Proposition 4.4. In case \(\lambda<0\), the polynomial \(\varphi\) and the matrix \(B\) must be of the same sign for Proposition 6.1 to hold.

\begin{table} \begin{tabular}{l|c c c c} case & (I) & (II) & (III) & (IV) \\ \hline \(g(\theta)\) & \(\alpha\varphi(\cos\theta,\sin\theta)\) & \(\cos\,(2\theta)\varphi(\cos\theta,\sin\theta)\) & \(\alpha\cos^{2}\theta\varphi(\cos\theta,\sin\theta)\) & 0 \\ \(\sigma(\mathcal{L}Q)\) & \(\varnothing\) & (1+)(1-) & (2+) & \(\infty\) \\ \end{tabular} \end{table} Table 1. Expressions of \(g(\theta)=\mathcal{L}Q(\cos\theta,\sin\theta)\) and symbol sequences \(\sigma(\mathcal{L}Q)\) for the normal forms in Proposition 6.1, where \(\alpha=\pm 1\).

Figure 6. Phase-portraits for Proposition 6.1 where \(\alpha=\pm 1\).

## 7. Another class of examples -- cubic nonlinearities

We can now describe the phase diagrams for star nodes in the plane with contracting homogeneous cubic nonlinearity. First we note that Proposition 3.3 takes a particularly simple form, stated in the next result:

**Corollary 7.1** (of Proposition 3.3).: _A homogeneous polynomial vector field \(Q\) of degree \(3\) in \(\mathbf{R}^{2}\) is contracting if, writing \(Q\) in the form (6), the following conditions hold:_ (i)
_either_ \(p_{1}(1,0)<0\) _and_ \(p_{1}(0,1)<0\)_, or_ \(p_{2}(1,0)<0\) _and_ \(p_{2}(0,1)<0\)_;_ (ii) \(4p_{1}(1,0)p_{2}(1,0)>(p_{3}(1,0)+p_{4}(1,0))^{2}\)_;_ (iii) \(4p_{1}(0,1)p_{2}(0,1)>(p_{3}(0,1)+p_{4}(0,1))^{2}\)_._

Proof.: Writing \(p_{j}(x^{2},y^{2})=a_{j0}x^{2}+a_{j1}y^{2}\) for \(j=1,\ldots,4\), the result follows from Proposition 3.3, since for all \((x,y)\neq(0,0)\):

Condition (i) implies that either \(p_{1}(x^{2},y^{2})=a_{10}x^{2}+a_{11}y^{2}<0\) or \(p_{2}(x^{2},y^{2})=a_{20}x^{2}+a_{21}y^{2}<0\). It has been observed before that, together with the second condition (here ensured by (ii) and (iii)), it is enough to verify one of the inequalities.

Conditions (ii) and (iii) imply that \(4p_{1}(x^{2},y^{2})p_{2}(x^{2},y^{2})-(p_{3}(x^{2},y^{2})+p_{4}(x^{2},y^{2}))^{2}=(4a_{10}a_{20}-(a_{30}+a_{40})^{2})x^{4}+(4a_{10}a_{21}+4a_{11}a_{20}-2(a_{30}+a_{40})(a_{31}+a_{41}))x^{2}y^{2}+(4a_{11}a_{21}-(a_{31}+a_{41})^{2})y^{4}>0\), since all the coefficients are positive. Note that \(4a_{10}a_{20}>(a_{30}+a_{40})^{2}\) and \(4a_{11}a_{21}>(a_{31}+a_{41})^{2}\) imply that \((a_{30}+a_{40})(a_{31}+a_{41})<4\sqrt{a_{10}a_{20}a_{11}a_{21}}\), which is at most \(2(a_{10}a_{21}+a_{11}a_{20})\).
\begin{table} \begin{tabular}{l l l} (I) & \(\left\{\begin{array}{rcl}\dot{x}&=&\lambda x+3\mu x(x^{2}+y^{2})-y^{3}&\mu<-\frac{1}{3}\\ \dot{y}&=&\lambda y+3\mu y(x^{2}+y^{2})+x(x^{2}+6\mu y^{2})&\mathcal{G}(x,y)=x^{4}+6\mu x^{2}y^{2}+y^{4}\end{array}\right.\) \\ (II) & \(\left\{\begin{array}{rcl}\dot{x}&=&\lambda x-Kx(x^{2}+y^{2})-\alpha y^{3}&\alpha=\pm 1,\mu>-\frac{1}{3},\mu\neq\frac{1}{3}\\ \dot{y}&=&\lambda y-Ky(x^{2}+y^{2})+\alpha x(x^{2}+6\mu y^{2})&\mathcal{G}(x,y)=\alpha(x^{4}+6\mu x^{2}y^{2}+y^{4})\\ &K=\max\{(3\mu)^{2},1/2\}\end{array}\right.\) \\ (III) & \(\left\{\begin{array}{rcl}\dot{x}&=&\lambda x-Kx(x^{2}+y^{2})+y^{3}&\mathcal{G}(x,y)=x^{4}+6\mu x^{2}y^{2}-y^{4}\\ \dot{y}&=&\lambda y-Ky(x^{2}+y^{2})+x(x^{2}+6\mu y^{2})&K=\max\{(3\mu)^{2},1/2\}\end{array}\right.\) \\ (IV) & \(\left\{\begin{array}{rcl}\dot{x}&=&\lambda x-4x(x^{2}+y^{2})-\alpha y(6x^{2}+y^{2})&\alpha=\pm 1\\ \dot{y}&=&\lambda y-4y(x^{2}+y^{2})&\mathcal{G}(x,y)=\alpha(6x^{2}y^{2}+y^{4})\end{array}\right.\) \\ (V) & \(\left\{\begin{array}{rcl}\dot{x}&=&\lambda x-x(x^{2}+y^{2})-\alpha y(x^{2}-y^{2})&\alpha=\pm 1\\ \dot{y}&=&\lambda y-y(x^{2}+y^{2})&\mathcal{G}(x,y)=\alpha(6x^{2}y^{2}-y^{4})\end{array}\right.\) \\ (VI) & \(\left\{\begin{array}{rcl}\dot{x}&=&\lambda x-x(x^{2}+y^{2})\\ \dot{y}&=&\lambda y-y(x^{2}+y^{2})+6xy^{2}\end{array}\right.\) & \(\mathcal{G}(x,y)=6x^{2}y^{2}\) \\ (VII) & \(\left\{\begin{array}{rcl}\dot{x}&=&\lambda x-x(x^{2}+y^{2})\\ \dot{y}&=&\lambda y-y(x^{2}+y^{2})\end{array}\right.\) & \(\mathcal{G}(x,y)=0\) \\ \end{tabular} \end{table} Table 2. Normal forms for contracting cubic nonlinearities in Theorem 7.2 with \(\mathcal{G}(x,y)=\mathcal{L}Q(x,y)\).

**Theorem 7.2**.: _Let \(\lambda>0\) and \(Q\) be a contracting homogeneous cubic vector field. Then (1) is equivalent to one of the 7 normal forms in Table 2.
The qualitative phase-portrait of (1) is of one of the types shown in Figure 7 and symbol sequences are given in Table 4._

\begin{table} \begin{tabular}{c c l l l l} normal & equilibria & type of & angular & symbol & \(\operatorname{cod}\Sigma_{j}\) \\ form & at infinity & roots & stability & sequence & \\ \hline (I) & 8 & simple & hyperbolic & \((1+),(1-),(1+),(1-)\) & 0 \\ \hline (II) & 0 & none & - & \(\varnothing\) & 0 \\ \hline (III) & 4 & simple & hyperbolic & \((1+),(1-)\) & 0 \\ \hline (IV) & 2 & double & saddle-nodes & \((2+)\) & 1 \\ \hline (V) & 6 & 4 simple & 4 hyperbolic & \((2+),(1-),(1+)\) & 1 \\ & & 2 double & 2 saddle-nodes & or \((2-),(1+),(1-)\) & \\ \hline (VI) & 4 & double & saddle-nodes & \((2+),(2+)\) & 2 \\ & \((0,\,\pi/2,\,\pi,\,3\pi/2)\) & & & & \\ \hline (VII) & \(\infty\) & all & - & \(\infty\) & 3 \\ \hline \hline (VIII) & 0 & none & - & \(\varnothing\) & 0 \\ \hline (IX) & 4 & 2 simple & 2 hyperbolic & \((1+),(1-)\) & 0 \\ & \((0,\,\pi/2,\,\pi,\,3\pi/2)\) & 2 triple & 2 hyperbolic-like & & \\ \hline (X) & 2 & quadruple & saddle-nodes & \((2+)\) & 1 \\ & \((\pi/2,\,3\pi/2)\) & & & & \\ \hline \end{tabular} \end{table} Table 4. Number and angular stability of finite and infinite equilibria for normal forms in Theorem 7.2. Hyperbolic-like are weak non-hyperbolic attractors or repellors. Symbol sequences refer to the coding of Section 5 and the sign \((2+)\) may be replaced by \((2-)\) depending on the value of \(\alpha=\pm 1\). The subset \(\Sigma_{j}\) is that of Theorem 5.7.

Proof.: Normal forms for binary forms of degree 4, up to a linear change of coordinates, are given in [8, Theorem 2.6]. For each binary form \(\mathcal{G}\) on this list, Theorem 5.1 ensures that there is a vector field (1) with contracting nonlinearity \(Q\) such that \(\mathcal{G}(x,y)=\mathcal{L}Q(x,y)\). Since the dynamics of (1) is totally determined by \(g(\theta)=\mathcal{G}(\cos\theta,\sin\theta)\) and since by Lemma 5.2 a linear
change of coordinates in (1) corresponds to a linear change of coordinates in \(\mathcal{G}\), this gives a list of all possible dynamical behaviour. The list of [8, Theorem 2.6] contains ten normal forms, three of which do not appear in our list because they yield dynamics that is globally equivalent to one of the forms in Table 2. They are listed in Table 3.

Table 3. Normal forms for contracting cubic nonlinearities in the list of [8, Theorem 2.6] that do not appear in Theorem 7.2, with \(\mathcal{G}(x,y)=\mathcal{L}Q(x,y)\).

The cubic nonlinearities in both lists were obtained following the construction in the proof of Theorem 5.1. The constant \(K\) such that \(Q\) is contracting was obtained from Corollary 7.1 as follows: in all cases, except for (IX), the binary form \(\mathcal{G}\) is written as \(\mathcal{G}(x,y)=x^{2}b_{1}(x^{2},y^{2})+y^{2}b_{3}(x^{2},y^{2})\).
This yields, in the notation of Proposition 3.1, the choices \(p_{1}(x^{2},y^{2})=p_{2}(x^{2},y^{2})=-K(x^{2}+y^{2})\), hence \(p_{1}(1,0)=p_{2}(1,0)=p_{1}(0,1)=p_{2}(0,1)=-K<0\). Conditions (ii) and (iii) of the corollary become

\[4K^{2}>(p_{3}(1,0)+p_{4}(1,0))^{2}\quad\text{and}\quad 4K^{2}>(p_{3}(0,1)+p_{4}(0,1))^{2}.\]

These expressions are evaluated in Table 5. For the remaining case (IX) we have \(p_{1}(x^{2},y^{2})=-K(x^{2}+y^{2})\) and \(p_{2}(x^{2},y^{2})=-K(x^{2}+y^{2})+4x^{2}\) with \(p_{3}(x^{2},y^{2})=p_{4}(x^{2},y^{2})=0\). Conditions (ii) and (iii) of the corollary become \(-4K(4-K)>0\) and \(4K^{2}>0\); the first holds only for \(K>4\), but here a direct computation gives \(\mathcal{M}Q(x,y)=-K(x^{2}+y^{2})^{2}+4x^{2}y^{2}\), which is negative for all \((x,y)\neq(0,0)\) whenever \(K>1\), since \(4x^{2}y^{2}\leq(x^{2}+y^{2})^{2}\); for instance, \(K=2\).

Since \(\lambda>0\) and the nonlinearities in Systems (I)-(VII) are contracting, it follows by Theorem 2.1 that there exists a globally attracting circle. The dynamics on the circle is given by \(\dot{\theta}=g(\theta)\), where \(g(\theta)=\mathcal{G}(\cos\theta,\sin\theta)\), and coincides with the dynamics on the circle at infinity. The number of solutions of \(g(\theta)=0\), their type and stability are given in Table 4.

Figure 7. Qualitative portraits on the Poincaré disc for (1) with a contracting cubic nonlinearity. On the six top figures only half the disc is shown; the other half is obtained by rotation of \(\pi\) around the origin. Numbering corresponds to normal forms in Theorem 7.2. In Case (VII) both the circle at infinity and the invariant circle are continua of equilibria.

From Table 4 it follows that the three normal forms in Table 3 share their symbol sequences with normal forms in Table 2. These are:

\[(VIII)\sim(II)\qquad(IX)\sim(III)\quad\text{and}\quad(X)\sim(IV).\]

Thus, these normal forms have globally equivalent dynamics.

### Cubic \(\mathbf{Z}_{2}\oplus\mathbf{Z}_{2}\) nonlinearities

As in 4.1 above, we may say more in the symmetric case. Writing

\[Q(x,y)=\left(-x(a_{10}x^{2}+a_{11}y^{2}),-y(a_{20}x^{2}+a_{21}y^{2})\right) \tag{9}\]

we start by finding when \(Q\) is contracting.
**Theorem 7.3**.: _The polynomial \(Q\) of the form (9) is contracting if and only if \(a_{10}>0\), \(a_{21}>0\) and one of the following conditions holds:_

(a) \(a_{11}+a_{20}\geq 0\)_;_
(b) \(a_{10}a_{21}-(a_{11}+a_{20})^{2}/4>0\)_._

Proof.: In this case we have \[\mathcal{M}Q(x,y)=-x^{2}(a_{10}x^{2}+a_{11}y^{2})-y^{2}(a_{20}x^{2}+a_{21}y^{2}).\] Sufficiency: If \(a_{10}>0\), \(a_{21}>0\) and (a) holds then clearly \(\mathcal{M}Q(x,y)<0\) for all \((x,y)\neq(0,0)\). The case when (b) holds follows from the proof of Proposition 3.3. Necessity: The function \(\mathcal{M}Q(x,y)\) is a quadratic form \(p(u,v)\) in \((u,v)=(x^{2},y^{2})\), represented by the symmetric matrix \[M=\begin{pmatrix}-a_{10}&-(a_{11}+a_{20})/2\\ -(a_{11}+a_{20})/2&-a_{21}\end{pmatrix}.\] If \(Q\) is contracting then \(\mathcal{M}Q(x,y)<0\) for all \((x,y)\neq(0,0)\). In particular, \(\mathcal{M}Q(x,0)=-a_{10}x^{4}<0\) for \(x\neq 0\) and \(\mathcal{M}Q(0,y)=-a_{21}y^{4}<0\) for \(y\neq 0\), hence \(a_{10}>0\) and \(a_{21}>0\) and \(\operatorname{Tr}M<0\).
\begin{table} \begin{tabular}{c c c c c} normal & modal & \((p_{3}(1,0)+p_{4}(1,0))^{2}\) & \((p_{3}(0,1)+p_{4}(0,1))^{2}\) & \(K^{2}\) \\ form & parameter & & & \\ \hline (I) & \(\mu<-1/3\) & \(1\) & \((-1+6\mu)^{2}>4(3\mu)^{2}>4\) & \((3\mu)^{2}\) \\ \hline (II) & \(1/3\neq\mu>-1/3\) & \(\alpha^{2}=1\) & \((1+6\mu)^{2}>4(3\mu)^{2}\) & \(\max\{(3\mu)^{2},1/2\}\) \\ & \(\alpha=\pm 1\) & & & \\ \hline (III) & \(\mu\in\mathbf{R}\) & \(1\) & \((-1+6\mu)^{2}>4(3\mu)^{2}\) & \(\max\{(3\mu)^{2},1/2\}\) \\ \hline (IV) & \(\alpha=\pm 1\) & \((-6\alpha)^{2}\) & \((-\alpha)^{2}=1\) & \(16\) \\ \hline (V) & \(\alpha=\pm 1\) & \((-\alpha)^{2}=1\) & \(\alpha^{2}=1\) & \(1\) \\ \hline (VI) & & \(0\) & \(1\) & \(1\) \\ \hline (VII) & & \(0\) & \(0\) & \(1\) \\ \hline (VIII) & \(\alpha=\pm 1\) & \(\alpha^{2}=1\) & \((-\alpha+2\alpha)^{2}=1\) & \(1\) \\ \hline (X) & \(\alpha=\pm 1\) & \(\alpha^{2}=1\) & \(0\) & \(1\) \\ \hline \end{tabular} \end{table} Table 5. Calculation of a value of \(K\) for which \(Q\) in the normal forms in Theorem 7.2 is contracting from Corollary 7.1. The case (IX) is different and \(K\) is computed in the text. The condition \(\mathcal{M}Q(x,y)<0\) for all \((x,y)\neq(0,0)\) implies that \(p(u,v)=-a_{10}u^{2}-(a_{11}+a_{20})uv-a_{21}v^{2}<0\) for all \(u\geq 0\), \(v\geq 0\) with \((u,v)\neq(0,0)\). Let \(\mu_{+}\geq\mu_{-}\) be the eigenvalues of \(M\). Since \(\operatorname{Tr}M<0\) then \(\mu_{-}<0\). There are three possibilities: 1. The quadratic form \(p(u,v)\) is negative definite, or equivalently, both \(\mu_{+}<0\) and \(\mu_{-}<0\). This implies \(\det M>0\), hence (b) holds. 2. The eigenvalues of \(M\) satisfy \(\mu_{-}<0\) and \(\mu_{+}=0\). In suitable coordinates \((\tilde{u},\tilde{v})\), we have \(p(\tilde{u},\tilde{v})=\mu_{-}\tilde{u}^{2}\), where \(\tilde{u}\) is the coordinate in the direction of the eigenvector of \(\mu_{-}\) and \(\tilde{v}\) is the coordinate in the direction of the eigenvector of zero. 
Thus, if \(Q\) is contracting then the eigenvector of zero does not lie in the first or third quadrants. The eigenvectors \((u,v)\) of the zero eigenvalue satisfy \(-a_{10}u-(a_{11}+a_{20})v/2=0\), so they are scalar multiples of \((u,v)=(a_{11}+a_{20},-2a_{10})\). This last vector is not in the first or the third quadrants if and only if \(a_{11}+a_{20}>0\), as in (a). 3. The eigenvalues of \(M\) satisfy \(\mu_{-}<0\) and \(\mu_{+}>0\). In suitable coordinates \((\tilde{u},\tilde{v})\), we have \(p(\tilde{u},\tilde{v})=\mu_{+}\tilde{u}^{2}+\mu_{-}\tilde{v}^{2}\), where \(\tilde{u}\) is the coordinate in the direction of the eigenvector of \(\mu_{+}\) and \(\tilde{v}\) is the coordinate in the direction of the eigenvector of \(\mu_{-}\). Therefore, if \(Q\) is contracting, then the eigenvector of \(\mu_{+}\) does not lie in the closure of the first or the third quadrant. The characteristic polynomial of \(M\) is \[p_{M}(\mu)=\mu^{2}+(a_{10}+a_{21})\mu+a_{10}a_{21}-(a_{11}+a_{20})^{2}/4\] and \(2\mu_{+}=-(a_{10}+a_{21})+\sqrt{\Delta}\) with \(\Delta=(a_{10}-a_{21})^{2}+(a_{11}+a_{20})^{2}\). The eigenvectors \((u,v)\) of \(\mu_{+}\) satisfy \(-a_{10}u-(a_{11}+a_{20})v/2=\mu_{+}u\) or, equivalently, \[(a_{11}+a_{20})v=(-2a_{10}-2\mu_{+})\,u=\left(a_{21}-a_{10}-\sqrt{\Delta}\right)u\] and are scalar multiples of \((u,v)=\left(a_{11}+a_{20},a_{21}-a_{10}-\sqrt{\Delta}\right)\). Suppose \(a_{11}+a_{20}<0\). Then the first coordinate of this vector is negative, so the second must be positive; that is, we must have \(a_{21}-a_{10}>\sqrt{\Delta}>0\), and this is equivalent to \((a_{21}-a_{10})^{2}>\Delta=(a_{10}-a_{21})^{2}+(a_{11}+a_{20})^{2}\), or equivalently \(0>(a_{11}+a_{20})^{2}\), a contradiction. Hence, \(a_{11}+a_{20}\geq 0\). From (9) we get: \[\mathcal{L}Q(x,y)=xy\left(Ax^{2}-By^{2}\right)=xyq(x,y)\qquad A=a_{10}-a_{20}\quad\text{and}\quad B=a_{21}-a_{11}. \tag{10}\] The dynamics is completely determined by the values of \(A\) and \(B\), as the next result shows.
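As an aside, the coefficient criterion of Theorem 7.3 is easy to cross-check numerically. The sketch below is our own illustration (function names and sample counts are choices made here, not taken from the paper): it compares the criterion with a direct evaluation of \(\mathcal{M}Q\) on the unit circle, which suffices by homogeneity.

```python
import math

def contracting_criterion(a10, a11, a20, a21):
    """Theorem 7.3 for Q = (-x(a10 x^2 + a11 y^2), -y(a20 x^2 + a21 y^2))."""
    return (a10 > 0 and a21 > 0
            and (a11 + a20 >= 0 or a10 * a21 - (a11 + a20) ** 2 / 4 > 0))

def contracting_numerically(a10, a11, a20, a21, samples=10000):
    """Test MQ(x,y) = -x^2(a10 x^2 + a11 y^2) - y^2(a20 x^2 + a21 y^2) < 0
    on the unit circle; by homogeneity this decides the sign everywhere."""
    for i in range(samples):
        t = 2 * math.pi * i / samples
        c2, s2 = math.cos(t) ** 2, math.sin(t) ** 2
        if -c2 * (a10 * c2 + a11 * s2) - s2 * (a20 * c2 + a21 * s2) >= 0:
            return False
    return True
```

For example, \((a_{10},a_{11},a_{20},a_{21})=(1,-0.9,-0.9,1)\) satisfies condition (b) of the theorem and is contracting, while \((1,-1.1,-1.1,1)\) fails both conditions and indeed has \(\mathcal{M}Q>0\) near \(\theta=\pi/4\).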
**Proposition 7.4**.: _If \(Q\) is a contracting cubic \(\mathbf{Z}_{2}\oplus\mathbf{Z}_{2}\) equivariant vector field then the dynamics of (1) on the invariant circle is the following:_

(I) _If_ \(A=B=0\) _then_ \(\sigma(\mathcal{L}Q)=\infty\)_._
(II) _If_ \(AB=0\) _and_ \(A+B\neq 0\) _then_ \(\sigma(\mathcal{L}Q)=(1+)(1-)\) _and one of the equilibria is not hyperbolic._
(III) _If_ \(AB<0\) _then_ \(\sigma(\mathcal{L}Q)=(1+)(1-)\)_._
(IV) _If_ \(AB>0\) _then_ \(\sigma(\mathcal{L}Q)=(1+)(1-)(1+)(1-)\)_._

_Moreover, in cases (III) and (IV) all equilibria are hyperbolic._

Proof.: First note that from (10) there are always equilibria on the axes, at the \(4\) points where they cross the invariant circle. Equilibria on the invariant circle are hyperbolic if and only if they are simple roots of \(\mathcal{L}Q\).

(I) From Lemma 4.8 the invariant circle is a continuum of equilibria if and only if \(A=B=0\), establishing (I). The invariant circle is the ellipse \(a_{10}x^{2}+a_{11}y^{2}=\lambda\); all the trajectories are contained in lines through the origin and go from the origin (or from infinity) to a point in the ellipse. Indeed, \(\dfrac{\dot{y}}{\dot{x}}=\dfrac{y}{x}\), hence \(\dfrac{dy}{dx}=\dfrac{y}{x}\) and \(y=Kx\), where \(K\) is a real constant. See Figure 8.

(II) If \(A\neq 0\) and \(B=0\) then \(q(x,y)=Ax^{2}\), so all the equilibria lie on the axes. The equilibria on the \(x=0\) axis are not hyperbolic, since they are roots of multiplicity 3 of \(\mathcal{L}Q\). The case \(A=0\) and \(B\neq 0\) is analogous. When both \(A\neq 0\) and \(B\neq 0\) then \(q(x,0)\neq 0\neq q(0,y)\). Therefore the equilibria on the axes are hyperbolic. Other equilibria satisfy \(Ax^{2}=By^{2}\). There are two cases to consider.

(III) If \(AB<0\) then \(q(x,y)=0\) has no solutions, so all the equilibria lie on the axes.
(IV) If \(AB>0\) then \(q(x,y)=0\) has solutions \(y=\pm\sqrt{Ax^{2}/B}\), corresponding to one hyperbolic equilibrium in the interior of each of the four quadrants of the plane.

**Acknowledgements:** The authors are grateful to P. Gothen, R. Prohens and A. Teruel for fruitful conversations.
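As a closing illustration of Proposition 7.4 (ours, not the authors'), the equilibrium counts in the four cases can be checked by brute force, using \(\mathcal{L}Q=xy(Ax^{2}-By^{2})\) as in (10). All roots of \(g\) here have odd multiplicity, so each equilibrium is a cyclic sign change of \(g\) on a grid; the half-step offset keeps grid points away from the roots.

```python
import math

def circle_equilibria(A, B, samples=2000):
    """Count equilibria of dθ/dt = cosθ sinθ (A cos²θ - B sin²θ) on S¹ as
    cyclic sign changes (valid here since every root has odd multiplicity).
    Returns 'infinity' when A = B = 0, i.e. g ≡ 0 (case (I))."""
    if A == 0 and B == 0:
        return 'infinity'
    vals = []
    for i in range(samples):
        t = 2 * math.pi * (i + 0.5) / samples  # offset avoids exact zeros
        c, s = math.cos(t), math.sin(t)
        vals.append(c * s * (A * c * c - B * s * s))
    return sum(vals[i] * vals[(i + 1) % samples] < 0 for i in range(samples))
```

The counts match the proposition: 8 equilibria when \(AB>0\) (4 on the axes, 4 in the open quadrants), 4 when \(AB<0\) or when exactly one of \(A,B\) vanishes, and a continuum when \(A=B=0\).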
2310.13597
Explicit orthogonal and unitary designs
We give a strongly explicit construction of $\varepsilon$-approximate $k$-designs for the orthogonal group $\mathrm{O}(N)$ and the unitary group $\mathrm{U}(N)$, for $N=2^n$. Our designs are of cardinality $\mathrm{poly}(N^k/\varepsilon)$ (equivalently, they have seed length $O(nk + \log(1/\varepsilon)))$; up to the polynomial, this matches the number of design elements used by the construction consisting of completely random matrices.
Ryan O'Donnell, Rocco A. Servedio, Pedro Paredes
2023-10-20T15:46:15Z
http://arxiv.org/abs/2310.13597v1
# Explicit orthogonal and unitary designs

###### Abstract

We give a strongly explicit construction of \(\epsilon\)-approximate \(k\)-designs for the orthogonal group \(\mathrm{O}(N)\) and the unitary group \(\mathrm{U}(N)\), for \(N=2^{n}\). Our designs are of cardinality \(\mathrm{poly}(N^{k}/\epsilon)\) (equivalently, they have seed length \(O(nk+\log(1/\epsilon))\)); up to the polynomial, this matches the number of design elements used by the construction consisting of completely random matrices.

## 1 Introduction

The main new result in our work is the following:

**Theorem 1.1**.: _Let \(N=2^{n}\) and let \(\mathrm{G}(n)\) denote either the orthogonal group \(\mathrm{O}(N)\) or the unitary group \(\mathrm{U}(N)\). Then for any \(k=k(n)\), there is an explicit \(\epsilon\)-approximate \(k\)-design for \(\mathrm{G}(n)\) of cardinality \(\mathrm{poly}(N^{k}/\epsilon)\); i.e., samplable using a seed of just \(O(nk+\log(1/\epsilon))\) truly random bits. Moreover, these designs are strongly explicit in the following sense: (i) each output matrix is given by an \(n\)-qubit circuit consisting of \(S=\mathrm{poly}(nk)\log(1/\epsilon)\) gates, each gate being either \(\mathrm{CNOT}\) or one of a few fixed and explicitly specified \(1\)-qubit gates; (ii) the algorithm that takes as input a seed and outputs the associated circuit runs in deterministic \(\mathrm{poly}(S)\) time._

In the unitary case, similar results in the literature only discuss the regime \(k\leq\mathrm{poly}(n)\) [11, 12], or have polynomially worse seed length [1, 13]. In contrast, our result holds for all \(k\) (even exponentially large as a function of \(n\), or larger), and achieves a seed length which matches, up to constant factors, that of a random construction. A significant motivation for our work was the orthogonal case, where the only prior works we know of are [11, 12], which we discuss below.
Our Theorem 1.1 provides the efficient orthogonal designs needed for Kothari and Meka's near-optimal pseudorandom generators for spherical caps [11]. Let us now discuss the general context for our result.

**Derandomization.** Let \(\mathcal{G}\) be a class of objects, and assume informally that each object has "size" \(N^{\Theta(1)}\) (think, e.g., of strings of length \(N\), or \(N\times N\) matrices). To choose an object from the uniform probability distribution on \(\mathcal{G}\) typically requires using \(\Omega(N)\) truly random bits. A broad goal in derandomization is to identify a useful notion of "pseudorandomness" for probability distributions on \(\mathcal{G}\), and then to show that one can sample from such a distribution using just \(r\ll N\) truly random bits.1 An additional goal is for the sampling algorithm to be _efficient_; i.e., the sampled object should be produced by a deterministic \(\mathrm{poly}(r)\)-time algorithm, given the truly random seed of length \(r\). In this case, since the sampler has only \(2^{r}\) possible outcomes yet the total number of objects is exponential in \(N\), it must be the case that the sampler represents the output objects in a "succinct" way. Informally, if it is possible to efficiently compute with objects represented in this succinct way, the sampler is said to be "strongly explicit".

**Exact \(k\)-wise independence.** One of the most common and useful notions of pseudorandomness is that of _bounded independence_. For random objects with \(N^{\Theta(1)}\) "entries" ("coordinates"/"dimensions"), it often suffices for applications if the objects are merely "\(k\)-wise independent" for some \(k\ll N\). This means that the object looks truly random whenever only \(k\) entries are inspected. In this case one may hope that the object can be sampled using a random seed of length just \(O(k\log N)\) bits. The paradigmatic example of this comes from \(k\)-wise independent length-\(N\) Boolean strings.
Using results from coding theory [1], it has long been known that \(O(k\log N)\) random bits suffice to efficiently sample a precisely \(k\)-wise independent string \(\mathbf{x}\in\{0,1\}^{N}\) (meaning that \((\mathbf{x}_{i_{1}},\ldots,\mathbf{x}_{i_{k}})\) is perfectly uniformly distributed on \(\{0,1\}^{k}\) for any \(i_{1},\ldots,i_{k}\)). For other kinds of random objects, obtaining _exact_ \(k\)-wise independence seems extremely difficult. Take the case of random permutations, where \(\mathbf{\pi}\in S_{N}\) is said to be \(k\)-wise independent if \((\mathbf{\pi}(i_{1}),\ldots,\mathbf{\pi}(i_{k}))\) is uniformly distributed on \(\binom{[N]}{k}\) for any distinct \(i_{1},\ldots,i_{k}\). While simple efficient methods for generating 2- and 3-wise independent permutations using \(O(\log N)\) random bits are known, for any constant \(k\geq 4\) the best known efficient construction uses \(\Theta(N)\) random bits [14]. The situation is similar for random unitary matrices, where \(\mathbf{U}\in\mathrm{U}(N)\) is said to be drawn from a \(k\)-design if \(\mathbf{E}[\mathbf{U}_{i_{1}j_{1}}\cdots\mathbf{U}_{i_{k}j_{k}}]\) is equal to what it would be if \(\mathbf{U}\) were Haar-distributed on \(\mathrm{U}(N)\) (and similarly if any subset of the entries \(\mathbf{U}_{i_{t}j_{t}}\) in the product were replaced with their complex conjugates). Here it is known how to efficiently construct exact 2-designs using \(O(\log N)\) bits [13], and exact 3-designs using \(O(\log^{2}N)\) bits [15], but good constructions of exact \(k\)-designs for \(k\geq 4\) are lacking (see, e.g., [1]).

**Approximate \(k\)-wise independence.** Given these issues, it is natural to seek \(\epsilon\)-_approximate_ \(k\)-wise independence (\(k\)-designs). Here it is important to carefully define the precise notion of "approximate", as different natural notions are often only equivalent if one is willing to change \(\epsilon\) by a factor that is exponential in \(k\).
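To make the exact notion concrete, here is the textbook polynomial-evaluation construction of an exact \(k\)-wise independent \(q\)-ary string over a prime field (a sketch with toy parameters of our choosing; the Boolean construction of [1] uses BCH-type codes instead, but the seed-length phenomenon is the same): the seed is the \(k\) coefficients of a random degree-\(<k\) polynomial, and coordinate \(i\) is its value at the field element \(i\). Any \(k\) evaluation points determine the polynomial via a Vandermonde system, so every \(k\) coordinates are exactly uniform.

```python
import itertools

p, k, n = 5, 2, 4   # toy parameters: alphabet F_p, string length n <= p

def sample(seed):
    """String x in F_p^n from seed (c_0, ..., c_{k-1}): x_i = sum_j c_j i^j mod p.
    The seed costs k*log2(p) bits instead of n*log2(p) for a truly random string."""
    return [sum(c * pow(i, j, p) for j, c in enumerate(seed)) % p for i in range(n)]

def is_exactly_k_wise_uniform():
    """Exhaustive check: over all p**k seeds, every k coordinates realise
    every value pattern equally often (here exactly once, since the number
    of seeds equals the number of patterns)."""
    for positions in itertools.combinations(range(n), k):
        counts = {}
        for seed in itertools.product(range(p), repeat=k):
            x = sample(seed)
            pattern = tuple(x[i] for i in positions)
            counts[pattern] = counts.get(pattern, 0) + 1
        if sorted(counts.values()) != [1] * p ** k:
            return False
    return True
```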
For example, in the context of Boolean strings in \(\{\pm 1\}^{N}\), a weak notion of \((\epsilon,k)\)-wise independence is that \(|\mathbf{E}[\mathbf{x}_{i_{1}}\cdots\mathbf{x}_{i_{k}}]|\leq\epsilon\) for all \(k\)-tuples of distinct values \(i_{1},\ldots,i_{k}\). Naor and Naor [16] showed that \(O(\log(nk/\epsilon))\) random bits suffice to explicitly generate such a distribution, where we write \(n=\log_{2}N\). However, to get the stronger guarantee that every \(k\) bit positions are \(\epsilon\)-close to the uniform distribution in statistical distance, one needs \((\epsilon 2^{-k},k)\)-wise independence (see, e.g., [1]), and hence the number of random bits used in known constructions is \(O(k+\log(n/\epsilon))\). In general, for \(q\)-ary rather than 2-ary (Boolean) strings, the seed-length penalty becomes \(O(k\log q)\). So if, e.g., one wants a distribution on \(\mathbb{Z}_{N}^{N}\) in which every \(k\) coordinates have statistical distance \(\epsilon\) from uniform, where \(N=2^{n}\),2 then the best known explicit constructions use \(O(kn+\log(1/\epsilon))\) random bits.

Footnote 2: Cf. achieving \(\epsilon\)-approximate \(k\)-wise independent permutations from \(S_{N}\).

In this work, we give a common framework for randomness-efficient generation of approximately \(k\)-wise independent distributions over _groups_, particularly subgroups of the unitary group. Our framework applies to, e.g., the group of \(q\)-ary strings \(\mathbb{Z}_{q}^{N}\) (realized as diagonal matrices with \(q\)th roots of unity as the diagonal entries), the permutation group \(S_{N}\) (realized as \(N\times N\) permutation matrices), the orthogonal group \(\mathrm{O}(N)\), and the unitary group \(\mathrm{U}(N)\) (with \(N=2^{n}\)). We will not discuss strings further in this work, as they are already very well studied. We first describe prior work on the other three groups, and then explain our new general method.
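As an aside, the "simple efficient methods for generating 2- and 3-wise independent permutations" mentioned earlier can be illustrated by the classical affine construction over a prime field (a standard example assumed here for illustration, not taken from this paper): the maps \(x\mapsto ax+b\bmod p\) with \(a\neq 0\) form a sharply 2-transitive family, hence an exactly 2-wise independent family of permutations sampled from \(O(\log N)\) bits.

```python
from itertools import product

p = 5
# All p(p-1) = 20 affine permutations x -> (a*x + b) mod p of Z_5, with a != 0.
perms = [tuple((a * x + b) % p for x in range(p))
         for a in range(1, p) for b in range(p)]

# Sharp 2-transitivity: for every distinct (i, j), the 20 outcomes
# (pi(i), pi(j)) are all distinct, i.e. uniform over ordered distinct pairs.
for i, j in product(range(p), repeat=2):
    if i == j:
        continue
    outcomes = [(pi[i], pi[j]) for pi in perms]
    assert len(set(outcomes)) == p * (p - 1) == len(perms)
```

Since every ordered pair of distinct outputs is hit by exactly one of the \(p(p-1)=20\) maps, \((\boldsymbol{\pi}(i),\boldsymbol{\pi}(j))\) is uniform over ordered distinct pairs, using only \(\lceil\log_{2}20\rceil=5\) random bits.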
**Permutations.** Explicit approximate \(k\)-wise independent permutations have found a wide variety of applications; e.g., in cryptography [17], hashing/dimensionality reduction [13, 17], and explicit constructions of expanders [14]. One method for creating them was initiated by Gowers [13], who showed that a random \(n\)-qubit circuit composed of \(\mathrm{poly}(n,k)\log(1/\epsilon)\) "classical" 3-qubit gates (i.e., permutations on \(\{0,1\}^{3}\)) yields an \(\epsilon\)-approximate \(k\)-wise independent permutation on \(S_{2^{n}}\). (Note that since the circuit size is polynomial rather than linear in \(nk\), the randomness-efficiency of [13] is polynomially worse than the \(O(nk+\log(1/\epsilon))\) random bits needed by a non-explicit random construction.) Gowers's technique was to lower-bound the spectral gap of the random walk on a related graph by \(1/\operatorname{poly}(n,k)\). (See [16, 17] for improvement of the spectral gap to \(1/\widetilde{O}(k^{2}n^{2})\).) Subsequently, using techniques related to space-bounded walks in graphs [14], Kaplan-Naor-Reingold [13] de-randomized this "truly random walk" to achieve efficient \(\epsilon\)-approximate \(k\)-wise independent permutations on \(S_{2^{n}}\) with seed length \(O(kn+\log(1/\epsilon))\), matching the (inexplicit) random bound. Around the same time, Kassabov [12] got the same seed length (without requiring \(N\) to be a power of \(2\)) via a sophisticated construction of a constant-size generating set for any \(S_{N}\) that makes the resulting Cayley graph an expander.

**Unitary matrices.** Introduced to the quantum computing literature in [10], explicit \(\epsilon\)-approximate \(k\)-designs for the unitary group have had a wide variety of applications, from randomized benchmarking of quantum gate sets [11], to efficient state and process tomography [15], to understanding quantum state and unitary complexity [13, 1].
Previously, works on constructing approximate unitary designs have chiefly focused on achieving "strong explicitness" rather than on randomness-efficiency. In particular, the goal has been to show that a _truly_ random \(n\)-qubit quantum circuit composed of \(S=\operatorname{poly}(n,k)\log(1/\epsilon)\) gates (i.e. each gate is a Haar random unitary operator on a constant number of uniformly randomly chosen qubits) constitutes an \(\epsilon\)-approximate \(k\)-design for \(\operatorname{U}(2^{n})\). The breakthrough in this area came from the work of Brandao, Harrow, and Horodecki [1], who showed that \(S=O(n^{2}k^{10.5}\log(1/\epsilon))\) suffices for \(k\leq 2^{\Omega(n)}\). (See also [14] for an earlier construction using \(\operatorname{poly}(n,k)\log(1/\epsilon)\) gates when \(k=O(n/\log n)\), and [10] for a construction in the \(k=\operatorname{poly}(n)\) regime.) Further work has been done on improving the circuit depth and the exponent on \(k\); see [11]. Ours is the first work to derandomize these results and achieve a seed length that is _linear_ rather than polynomial in \(n\) and \(k\), and that works for all \(k\), thus matching the non-explicit random construction. As an example application of our result for unitary matrices, by applying [1] we get an efficient deterministic procedure for outputting \(2^{O(nk)}\) many \(n\)-qubit unitary circuits of \(\operatorname{poly}(nk)\) gates such that at least \(2^{\Omega(nk)}\) of them (a polynomially large fraction) have strong quantum circuit complexity \(\Omega(\frac{n}{\log n}k)\) (provided \(k\leq 2^{\Omega(n)}\)).

**Orthogonal matrices.** It is natural to think that designs for \(\operatorname{O}(N)\) and \(\operatorname{U}(N)\) should be related (and indeed orthogonal designs have played a role in randomized benchmarking for quantum circuits [13]). However there is no obvious reduction between the tasks of constructing \(\epsilon\)-approximate \(k\)-designs for the two groups.
The first paper we are aware of that attempts to explicitly construct approximate orthogonal designs is [13]. That work used explicit orthogonal designs with \(O(kn+\log(1/\epsilon))\) seed length as the core pseudorandom object underlying its state-of-the-art pseudorandom generator for linear threshold functions on \(\mathbb{S}^{n-1}\). Unfortunately, there was an error in their construction of these designs.3 Fixing this error was a key motivation for the present work, and indeed our Theorem 1.1 provides the crucial ingredient needed for the pseudorandom generators of [13].

Footnote 3: The error is in the interpretation of the main result of [1] that is used to establish Corollary 6.1 of [13]. Corollary 6.1 claims that the spectral gap established by [1] for \(\operatorname{SU}(N)\) is independent of \(N\), but this is in error [13]; indeed, as noted in [1] after their Corollary 7, "the proof [in 1] does not give any estimate of the dependency of the spectral gap on \(N\)."

Some of our technical ideas for handling the orthogonal group are drawn from the work of Haferkamp and Hunter-Jones, who showed (Theorem 9 of [14]) that truly random local orthogonal \(n\)-qudit circuits of size \(\operatorname{poly}(n,k)\log(q/\epsilon)\) constitute \(\epsilon\)-approximate \(k\)-designs for \(\operatorname{O}(q^{n})\), provided \(q\geq 8k^{2}\). This result has suboptimal randomness complexity because of the polynomial rather than linear dependence on \(n\) and \(k\), and only gives approximate \(k\)-designs for small values of \(k\).

### Our framework

As stated earlier, we are interested in \(k\)-wise independent distributions over groups, particularly the symmetric, orthogonal, and unitary groups. For each such group G, the notion of "\(k\)-wise independence" is defined through a certain _representation_ \(\rho^{k}\) of the group.
Informally, we say a distribution \(\mathcal{P}\) on \(\mathrm{G}\) is approximately \(k\)-wise independent if \[\operatorname*{\mathbf{E}}_{\boldsymbol{g}\sim\mathcal{P}}[\rho^{k}(\boldsymbol{ g})]\approx\operatorname*{\mathbf{E}}_{\boldsymbol{g}\sim\mathrm{G}}[\rho^{k}( \boldsymbol{g})], \tag{1}\] where on the right-hand side \(\boldsymbol{g}\) is drawn from the Haar distribution on \(\mathrm{G}\).4 Footnote 4: Here and throughout, whenever \(\mathrm{G}\) is a compact Lie group we write \(\boldsymbol{g}\sim\mathrm{G}\) to denote that \(\boldsymbol{g}\) is drawn according to the Haar distribution; in particular, this is the uniform distribution if \(\mathrm{G}\) is finite. Let us consider our three example groups \(\mathrm{G}\), starting with the orthogonal group \(\mathrm{O}(N)\). In this case, the associated representation \(\rho^{k}\) is on \((\mathbb{C}^{N})^{\otimes k}\), and it maps \(R\in\mathrm{O}(N)\) to \(R^{\otimes k}\). In other words, specialized to the orthogonal group, Equation (1) asserts that \(\mathcal{P}\) is an approximate \(k\)-design on \(\mathrm{O}(N)\) provided \[\operatorname*{\mathbf{E}}_{\boldsymbol{R}\sim\mathcal{P}}[\boldsymbol{R}^{ \otimes k}]\approx\operatorname*{\mathbf{E}}_{\boldsymbol{R}\sim\mathrm{O}(N )}[\boldsymbol{R}^{\otimes k}]. \tag{2}\] As matrices, the entries of \(\rho^{k}(\boldsymbol{R})=\boldsymbol{R}^{\otimes k}\) are degree-\(k\) monomials in the entries of \(\boldsymbol{R}\), and thus Equation (1) (qualitatively) implies that any degree-\(k\) polynomial in the entries of \(\boldsymbol{R}\) has approximately the same expectation under \(\mathcal{P}\) as it has under the Haar distribution. This is the usual meaning of approximate \(k\)-wise independence in theoretical computer science, and is often how the notion is used in applications. 
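As a quick numerical sanity check of Equation (2), consider \(\rho^{k}\) with \(k=2\). The exact second Haar moment of the orthogonal group, \(\mathbf{E}[\mathbf{R}_{ij}\mathbf{R}_{kl}]=\delta_{ik}\delta_{jl}/N\), is a standard fact (assumed here, not derived in this paper), and a Monte Carlo estimate of \(\mathbf{E}[\mathbf{R}\otimes\mathbf{R}]\) should approach it. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 2, 20000

def haar_orthogonal(n):
    # QR of a Gaussian matrix, with column signs fixed by sign(diag(R)),
    # samples from the Haar distribution on O(n).
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

# Empirical estimate of E[R (x) R] over Haar O(N).
M = sum(np.kron((R := haar_orthogonal(N)), R) for _ in range(samples)) / samples

# Exact value: E[R_ij R_kl] = delta_ik delta_jl / N, i.e. E[R (x) R] is
# |phi><phi| / N with phi = sum_i e_i (x) e_i (the unnormalized "maximally
# entangled" vector); here phi is the flattened identity matrix.
phi = np.eye(N).reshape(-1)
exact = np.outer(phi, phi) / N
print(np.abs(M - exact).max())  # small, on the order of 1/sqrt(samples)
```

The kron index convention matches the text: row \((i,k)\), column \((j,l)\) of \(\mathbf{R}\otimes\mathbf{R}\) holds the degree-2 monomial \(\mathbf{R}_{ij}\mathbf{R}_{kl}\).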
For the unitary matrices \(U\) we wish to consider polynomials in both the entries of \(U\) and their complex conjugates; thus the appropriate representation of \(\mathrm{U}(N)\) is \(\rho^{k,k}\) on \((\mathbb{C}^{N})^{\otimes 2k}\) defined by \[\rho^{k,k}(U)=U^{\otimes k}\otimes\overline{U}^{\otimes k}. \tag{3}\] Actually, to unify notation we will work with \(\rho^{k,k}\) even when studying the orthogonal group \(\mathrm{O}(N)\leq\mathrm{U}(N)\); in this case of course \(\rho^{k,k}\) is equivalent to \(\rho^{2k}\), and we won't be concerned with the difference between \(k\) and \(2k\). (Note that if \(k\) is odd then the expectation of any degree-\(k\) monomial in the entries of \(\boldsymbol{R}\), \(\boldsymbol{R}\sim\mathrm{O}(N)\), is trivially \(0\).) Finally, for the symmetric group \(S_{N}\leq\mathrm{U}(N)\) we could again use \(\rho^{k,k}\), but previous work has (implicitly) used an alternative representation, which we'll call \(\mathcal{W}^{k}\). To define it, let \([N]_{(k)}\) denote the set of sequences of distinct indices \(i_{1},\ldots,i_{k}\in[N]\) and let \(\mathbb{C}^{[N]_{(k)}}\) denote the (complex) vector space with orthonormal basis vectors \(|i_{1}\cdots i_{k}\rangle\). Then the representation \(\mathcal{W}^{k}\) is defined on \(\pi\in S_{N}\) via \(\mathcal{W}^{k}(\pi)\,|i_{1}\cdots i_{k}\rangle=|\pi(i_{1})\cdots\pi(i_{k})\rangle\). This representation \(\mathcal{W}^{k}\) is the one usually associated to \(k\)-wise independence on \(S_{N}\), with the analogue of Equation (1) asserting that the distribution of \((\boldsymbol{\pi}(i_{1}),\ldots,\boldsymbol{\pi}(i_{k}))\) for \(\boldsymbol{\pi}\sim\mathcal{P}\) is close to uniform on \([N]_{(k)}\), for each \((i_{1},\ldots,i_{k})\in[N]_{(k)}\).

A first way to try to achieve approximate \(k\)-wise independence on \(\mathrm{G}\in\{S_{N},\mathrm{O}(N),\mathrm{U}(N)\}\) is through a Markov chain.
Suppose \(P\subset\mathrm{G}\) is a set (closed under inverses) of size \(\mathrm{poly}(n)\), where \(n=\log_{2}N\). Consider the random walk on \(\mathrm{G}\) that starts at \(\mathbb{1}\) and multiplies by a uniformly random element of \(P\) at each step. We may hope that after, say, \(\mathrm{poly}(nk)\log(1/\epsilon)\) steps, the resulting distribution \(\mathcal{P}\) on \(\mathrm{G}\) is close enough to the Haar distribution on \(\mathrm{G}\) that Equation (1) holds. As alluded to earlier, results of this form were previously shown for \(\mathrm{G}=S_{2^{n}}\) (starting with [10]) and for \(\mathrm{G}=\mathrm{U}(2^{n})\) (starting with [1]). One significant contribution of the present work is to generalize the latter to apply also to \(\mathrm{O}(2^{n})\) (or, more precisely and essentially equivalently, its connected subgroup \(\mathrm{SO}(2^{n})\)). Specifically, in Sections 3 to 5, our goal will essentially be to show the following:

**Theorem 1.2**.: _Fix \(n\geq 4\) and let \(P_{n}\subset\mathrm{SO}(2^{n})\) denote the \(O(n^{2})\)-sized multiset of all \(n\)-qubit, \(1\)-gate circuits consisting of either \(\mathrm{CNOT}\) (on some \(2\) qubits) or \(\mathrm{Q}=\begin{bmatrix}3/5&-4/5\\ 4/5&3/5\end{bmatrix}\) on some \(1\) qubit, and then closed under negation and inverses. Then for any \(k\geq 1\),_ \[\left\|\operatorname*{\mathbf{E}}_{\boldsymbol{g}\sim\mathcal{P}_{n}}[\rho^{k,k}(\boldsymbol{g})]-\operatorname*{\mathbf{E}}_{\boldsymbol{g}\sim\mathrm{SO}(2^{n})}[\rho^{k,k}(\boldsymbol{g})]\right\|_{\mathrm{op}}\leq 1-\frac{1}{n\cdot\mathrm{poly}(k)}. \tag{4}\] _A similar statement holds for \(\mathrm{SU}(2^{n})\) with the \(1\)-qubit \(\mathrm{H}\), \(\mathrm{S}\), and \(\mathrm{T}\) gates replacing \(\mathrm{Q}\)._

(See Theorem 3.1 for more details.
In Section 2 we will pass from \(\mathrm{SO}(2^{n})\) and \(\mathrm{SU}(2^{n})\) to \(\mathrm{O}(2^{n})\) and \(\mathrm{U}(2^{n})\); our analysis in Sections 3 to 5 is carried out in the "special" versions of these groups for technical reasons that will become clear in Section 3, specifically Section 3.1.) As we discuss in Section 3.1, the high-level approach we take to establish Theorem 1.2 extends an approach from [11]. Given the above theorem, we could improve its right-hand side to \(\epsilon/2^{nk}\) by forming \(\boldsymbol{g}\) as a product of \(n\cdot\mathrm{poly}(k)\cdot\log(2^{nk}/\epsilon)=\mathrm{poly}(nk)\cdot\log(1/\epsilon)\) uniformly random elements from \(P_{n}\). (See Definition 2.2, where we give a precise definition of "\(\epsilon\)-approximate \(k\)-design", for a discussion of why \(\epsilon/2^{nk}\) is the correct bound for the right-hand side.) The resulting distribution on \(\mathrm{O}(2^{n})\) would be an \(\epsilon\)-approximate \(k\)-design, but unfortunately, drawing from this distribution would require a seed of \(\mathrm{poly}(nk)\cdot\log(1/\epsilon)\) truly random bits, which leaves something to be desired from the standpoint of randomness-efficiency. To improve this and match the randomness-efficiency of the random construction, one may attempt to apply the method of "pseudorandom walks on consistently labeled graphs" from [10, 11], or "derandomized squaring" from [10]. This is the approach taken in [10] for the symmetric group, where the evolving value of \(\mathcal{W}^{k}(\boldsymbol{\pi})\left|i_{1}\cdots i_{k}\right\rangle\) can be thought of as a random walk on a graph with vertex set \([N]_{(k)}\). In the setting of Theorem 1.2 there is no graph. Nevertheless, in Section 6 we will show how derandomized squaring can be slightly generalized to obtain the following result (a similar generalization appeared recently in [13]):

**Theorem 1.3**.: _(Abbreviated version of Theorem 6.21.)
Given \(c,\delta,\epsilon\), there is a strongly explicit deterministic algorithm that outputs a sequence \(\mathcal{P}\) of \(O(c/\operatorname{poly}(\delta\epsilon))\) "monomials" over the symbols \(u_{1},\ldots,u_{c},u_{1}^{\dagger},\ldots,u_{c}^{\dagger}\), each of length \(O(\log(1/\epsilon)/\operatorname{poly}(\delta))\), such that \(\left\|\mathrm{avg}_{\boldsymbol{m}\in\mathcal{P}}\{\boldsymbol{m}(\mathcal{U})\}\right\|_{\mathrm{op}}\leq\epsilon\) whenever \(\mathcal{U}=(U_{1},\ldots,U_{c})\) is a sequence of unitaries with \(\left\|\mathrm{avg}_{i\in[c]}\{U_{i}\}\right\|_{\mathrm{op}}\leq 1-\delta\). (Here \(m(\mathcal{U})\) denotes the product of \(U_{i}\)'s and \(U_{i}^{\dagger}\)'s obtained by substituting \(u_{i}=U_{i}\) in \(m\).)_

Taking the \(\delta\) of Theorem 1.3 to be the \(1/\operatorname{poly}(nk)\) of Theorem 1.2, and the unitaries \(\mathcal{U}=(U_{1},\ldots,U_{c})\) to correspond to the \(1\)-gate circuits of Theorem 1.2, we obtain strongly explicit \(\epsilon\)-approximate \(k\)-designs as described in Theorem 1.1 for the special orthogonal and special unitary groups. A simple modification gives corresponding designs for the unitary and orthogonal groups, thus yielding Theorem 1.1.

### Organization of this paper

In Section 2 we give the detailed argument explaining how an initial spectral gap of the sort given by Theorem 1.2 and the generalized "derandomized squaring" result given by Theorem 1.3 together yield efficient explicit approximate designs for the orthogonal and unitary groups. The rest of the paper is devoted to establishing the two necessary ingredients Theorem 1.2 and Theorem 1.3. Section 3 gives our general framework for establishing the initial spectral gap for the special unitary and special orthogonal groups; as we explain there, a crucial step in this framework is establishing a spectral gap for a certain "auxiliary" \(m\)-qubit random walk which was inspired by the analysis of [11].
Similar to [11], it turns out that to analyze this auxiliary random walk, two quite different technical arguments are required depending on whether the tensor power \(k\) is "large" or "small" compared to the number of qubits \(m\); we give these two arguments in Section 4 and Section 5 respectively. Finally, we provide the necessary analysis of the generalized "derandomized pseudorandom walks" in Section 6.

### Notation and preliminaries

To give our constructions it is convenient to use the language of quantum computing, even when the group involved is the orthogonal or symmetric group. We will generally consider operators on \(\mathbb{C}^{N}\), where \(N=2^{n}\) for some \(n\in\mathbb{N}^{+}\). We identify \(\mathbb{C}^{N}=(\mathbb{C}^{2})^{\otimes n}\) and think of the tensor factors as corresponding to \(n\) qubits.

**Notation 1.4**.: Let \(g\in\mathrm{U}(2^{\ell})\), thought of as an \(\ell\)-qubit "gate", and let \(e=(i_{1},\ldots,i_{\ell})\) be a sequence of \(\ell\) distinct elements of \([n]\), i.e. \(e\in[n]_{\ell}\). (Here \(\ell\) should be thought of as "much less than \(n\)"; in particular we will be interested in constant \(\ell\).) We use the notation \(g_{e}\) for the operator in \(\mathrm{U}(N)\) defined by applying \(g\) on qubits (i.e., tensor factors) \(i_{1},\ldots,i_{\ell}\) (in that order) and applying the identity operator on the remaining \(n-\ell\) qubits. When \(e\in\binom{[n]}{\ell}\) is a set rather than a sequence, we assume the increasing order on its elements. We write \(A^{\dagger}\) to denote the conjugate transpose of a complex matrix \(A\), \(\left\|A\right\|_{\mathrm{op}}\) to denote the operator norm, and \(\left\|A\right\|_{1}\) to denote its Schatten 1-norm. We use bold font to denote random variables.
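To illustrate Notation 1.4 together with the gate set of Theorem 1.2, here is a minimal numpy sketch (restricted, for simplicity, to gates acting on the first \(\ell\) qubits; a general sequence \(e\) would additionally permute tensor factors). It also checks the determinant bookkeeping: \(\mathrm{CNOT}\) has determinant \(-1\) as a \(4\times 4\) matrix, yet its embedding into \(n=4\) qubits lies in \(\mathrm{SO}(2^{n})\).

```python
import numpy as np

n = 4  # total qubits, so operators act on C^(2^n)

def embed_leading(g, n):
    # g_e for e = (1, ..., l): apply the l-qubit gate g on the first l
    # tensor factors and the identity on the remaining n - l qubits.
    l = g.shape[0].bit_length() - 1
    return np.kron(g, np.eye(2 ** (n - l)))

Q = np.array([[3/5, -4/5],
              [4/5,  3/5]])            # the 1-qubit gate from Theorem 1.2
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])        # 2-qubit controlled-NOT

for g in (Q, CNOT):
    G = embed_leading(g, n)
    assert np.allclose(G @ G.T, np.eye(2 ** n))   # orthogonal
    assert np.isclose(np.linalg.det(G), 1.0)      # det +1, so G is in SO(2^n)
```

Since \(\det(g\otimes\mathbb{1}_{m})=\det(g)^{m}\), the embedded \(\mathrm{CNOT}\) picks up determinant \((-1)^{2^{n-2}}=+1\) once \(n\geq 3\), consistent with \(P_{n}\subset\mathrm{SO}(2^{n})\) in Theorem 1.2.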
## 2 A general framework: Explicit \(k\)-wise independent permutations, orthogonal designs, and unitary designs

Let \(\mathrm{G}(n)\) be a subgroup of \(\mathrm{U}(2^{n})\) (the key examples to keep in mind are the group of permutations on \(2^{n}\) elements, the \(2^{n}\)-dimensional orthogonal group, the \(2^{n}\)-dimensional unitary group itself, and the "special" versions of the latter two). In light of Theorem 1.3, given a probability distribution on a subset of \(\mathrm{G}(n)\), we would like to understand how fast the associated random walk mixes vis-a-vis a particular representation, namely the \(k\)-wise tensor product representation (since that representation corresponds to \(k\)-wise independence). Let \(\mathcal{P}\) be a probability distribution on \(\mathrm{G}(n)\) that is symmetric (meaning that \(\mathbf{g}^{-1}=\mathbf{g}^{\dagger}\) is distributed as \(\mathcal{P}\) when \(\mathbf{g}\) is), and let \(\rho\) be a unitary representation of \(\mathrm{G}(n)\). Note that since \(\mathcal{P}\) is symmetric and \(\rho\) is unitary, \(\mathbf{E}_{\mathbf{g}\sim\mathcal{P}}[\rho(\mathbf{g})]\) is a Hermitian operator with real eigenvalues lying in \([-1,1]\). Since our goal is \(k\)-wise independence, the representations that are of interest to us are \(k\)-wise tensor product representations:

**Notation 2.1** (\(k\)-wise tensor product representations).: For any \(k\in\mathbb{N}^{+}\), we will write \(\rho_{2^{n}}^{k,k}\) for the (complex) representation of \(\mathrm{G}(n)\) defined by \[\rho_{2^{n}}^{k,k}(g)=g^{\otimes k,k}\coloneqq g^{\otimes k}\otimes\overline{g}^{\otimes k}, \tag{5}\] where \(\overline{g}\) denotes the complex conjugate of the matrix \(g\).

There are several different definitions of \(\epsilon\)-approximate \(k\)-designs in the literature, all of which are equivalent if one is willing to lose factors of \(2^{nk}\) on \(\epsilon\). For definiteness, we choose the 1-norm definition from [10].
(One could also equivalently use the notion from Kothari-Meka [11], again up to \(2^{nk}\) factors.) **Definition 2.2**.: A distribution \(\mathcal{P}\) on a finite subset of matrices from \(\mathrm{G}(n)\) is an \(\epsilon\)_-approximate \(k\)-design for \(\mathrm{G}(n)\)_ if \[\left\|\underset{\mathbf{g}\sim\mathcal{P}}{\mathbf{E}}[\rho_{2^{n}}^{k,k}(\mathbf{g} )]-\underset{\mathbf{g}\sim\mathrm{G}(n)}{\mathbf{E}}[\rho_{2^{n}}^{k,k}(\mathbf{g})] \right\|_{1}\leq\epsilon \tag{6}\] (where \(\left\|\cdot\right\|_{1}\) denotes the Schatten 1-norm). We remark that the above condition is implied by the following operator-norm bound: \[\left\|\underset{\mathbf{g}\sim\mathcal{P}}{\mathbf{E}}[\rho_{2^{n}}^{k,k}(\mathbf{g} )]-\underset{\mathbf{g}\sim\mathrm{G}(n)}{\mathbf{E}}[\rho_{2^{n}}^{k,k}(\mathbf{g})] \right\|_{\mathrm{op}}\leq\epsilon/2^{nk}, \tag{7}\] and indeed we will establish our approximate design results by going through the operator norm. Often we will study the operator \(\mathbf{E}_{\mathbf{g}\sim\mathcal{P}}[\rho^{k}(\mathbf{g})]\) through its "Laplacian", which we define as follows: **Definition 2.3**.: We define the "Laplacian" \[L_{\mathcal{P}}(\rho)=\mathbb{1}-\underset{\mathbf{g}\sim\mathcal{P}}{\mathbf{E}} [\rho(\mathbf{g})], \tag{8}\] a self-adjoint (since \(\mathcal{P}\) is symmetric) operator satisfying the following inequalities (in the PSD order): \[0\leq L_{\mathcal{P}}(\rho)\leq 2\cdot\mathbb{1}. \tag{9}\] **Notation 2.4**.: In the preceding definition, we abuse notation as follows: In place of \(\mathcal{P}\) we may write a finite (multi)set \(P\subset\mathrm{G}(n)\), in which case the uniform distribution on \(P\) is understood. We may also write "\(\mathrm{G}(n)\)" in place of \(\mathcal{P}\), in which case the uniform (Haar) distribution is understood. 
Finally, if \(\mathcal{P}\) now denotes a distribution on \(\mathrm{G}(\ell)\), and \(E\subseteq[n]_{\ell}\), we write \(\mathcal{P}\times E\) for the distribution on \(\mathrm{G}(n)\) given by choosing \(\mathbf{g}\sim\mathcal{P}\), independently choosing \(\mathbf{e}\sim E\) (uniformly), and finally forming \(\mathbf{g}_{\mathbf{e}}\). **Definition 2.5**.: Given a symmetric probability distribution \(\mathcal{P}\) as in Definition 2.3, we define its "lazy" version, \(\widetilde{\mathcal{P}}\), to be the distribution which is an equal mixture of \(\mathcal{P}\) and the point distribution supported on the identity element \(\mathbbm{1}\) (note that \(\widetilde{\mathcal{P}}\) is also a symmetric distribution). Similar to Definition 2.3, we have that \(\mathbf{E}_{\mathbf{g}\sim\widetilde{\mathcal{P}}}[\rho(\mathbf{g})]\) is a Hermitian operator but now with real eigenvalues lying in \([0,1]\), and we have the PSD inequalities \[0\leq L_{\widetilde{\mathcal{P}}}(\rho)\leq\mathbbm{1}. \tag{10}\] **Fact 2.6**.: _In the setting of Definitions 2.3 and 2.5, \(L_{\mathrm{G}(n)}(\rho)\) is an orthogonal projection operator, and for any symmetric \(\mathcal{P}\) we have that_ \[\ker L_{\mathrm{G}(n)}(\rho)\subseteq\ker L_{\mathcal{P}}(\rho) \tag{11}\] _always holds (because for every \(g_{0}\) in the support of \(\mathcal{P}\) we have \(\rho(g_{0})\Pi=\Pi\), where \(\Pi=\mathbf{E}_{\mathbf{g}\sim\mathrm{G}(n)}[\rho(\mathbf{g})]\)). From this, and Inequalities (9) and (10), we also get_ \[L_{\mathrm{G}(n)}(\rho)\geq\tfrac{1}{2}\cdot L_{\mathcal{P}}(\rho), \tag{12}\] \[L_{\mathrm{G}(n)}(\rho)\geq L_{\widetilde{\mathcal{P}}}(\rho). 
\tag{13}\] As Inequalities (12) and (13) contain a surfeit of symbols, one may wish to read them respectively as \[\text{``(randomizing $n$ qubits)}\geq\tfrac{1}{2}\cdot(\mathcal{P}\text{-pseudorandomizing $n$ qubits)}\quad\text{[vis-a-vis $\rho$]"}, \tag{14}\] \[\text{``(randomizing $n$ qubits)}\geq(\widetilde{\mathcal{P}}\text{-pseudorandomizing $n$ qubits)}\quad\text{[vis-a-vis $\rho$]"}, \tag{15}\] with the "\(\geq\tfrac{1}{2}\cdot\)" part pronounced "is at least \(\tfrac{1}{2}\) as good as". It will be convenient to use the Laplacian operator in some of the steps in the following sections, even though we ultimately want statements about the expectation operator. To convert between the two we will use the following:

**Fact 2.7**.: _For any unitary representation \(\rho\), \(L_{\mathcal{P}}(\rho)\leq\epsilon\cdot L_{\mathrm{G}(n)}(\rho)\) is equivalent to_ \[\left\|\underset{\mathbf{g}\sim\mathcal{P}}{\mathrm{E}}[\rho(\mathbf{g})]-\underset{\mathbf{g}\sim\mathrm{G}(n)}{\mathrm{E}}[\rho(\mathbf{g})]\right\|_{\mathrm{op}}\leq\epsilon. \tag{16}\]

### Initial spectral gaps for \(S_{N}\), \(\mathrm{SO}(N)\) and \(\mathrm{SU}(N)\)

Here we summarize all of the non-trivial spectral gaps that we will amplify using Theorem 1.3.

**Theorem 2.8** ([1]).: _For any \(k\geq 1\), there is a (multi)set \(P_{S_{2^{n}}}\) of cardinality \(O(n^{3})\) such that_ \[\left\|\underset{\mathbf{g}\sim P_{S_{2^{n}}}}{\mathrm{E}}[\mathcal{W}_{2^{n}}^{k}(\mathbf{g})]-\underset{\mathbf{g}\sim S_{2^{n}}}{\mathrm{E}}[\mathcal{W}_{2^{n}}^{k}(\mathbf{g})]\right\|_{\mathrm{op}}\leq 1-\frac{1}{\widetilde{O}(k^{2}n^{2})}. \tag{17}\]

Recall that the representation \(\mathcal{W}_{2^{n}}^{k}\) is defined on \(g\in S_{2^{n}}\) via \(\mathcal{W}_{2^{n}}^{k}(g)\left|i_{1}\cdots i_{k}\right\rangle=\left|g(i_{1})\cdots g(i_{k})\right\rangle\). The set \(P_{S_{2^{n}}}\) mentioned above is the set of "simple 3-bit permutations".
This is the set of permutations \(f_{i,j_{1},j_{2},h}\), where \(i,j_{1},j_{2}\in[n]\) are all distinct, and \(h\) is a Boolean function on \(\{0,1\}^{2}\), which maps \((x_{1},\ldots,x_{n})\in\{0,1\}^{n}\) to \((x_{1},\ldots,x_{i-1},x_{i}\oplus h(x_{j_{1}},x_{j_{2}}),x_{i+1},\ldots,x_{n})\). We establish the following in Sections 3 to 5.

**Theorem 2.9** (Theorem 3.1 restated).: _For \(\mathrm{G}(n)\in\{\mathrm{SO}(2^{n}),\mathrm{SU}(2^{n})\}\), and any \(k\geq 1\), there is a (multi)set \(P_{G}\) of cardinality \(O(n^{2})\) such that_ \[\left\|\underset{\mathbf{g}\sim P_{G}}{\operatorname{\mathbf{E}}}[\rho_{2^{n}}^{k,k}(\mathbf{g})]-\underset{\mathbf{g}\sim\mathrm{G}(n)}{\operatorname{\mathbf{E}}}[\rho_{2^{n}}^{k,k}(\mathbf{g})]\right\|_{\mathrm{op}}\leq 1-\frac{1}{n\cdot\operatorname{poly}(k)}. \tag{18}\]

The sets \(P_{G}\) for \(\mathrm{SO}(2^{n})\) and \(\mathrm{SU}(2^{n})\) are described in Section 3.4.1. Our proof of Theorem 2.9 is itself a general framework that could potentially be used to obtain similar results for other subgroups of the unitary group (for example, the symplectic group), even though we only carry out the calculations for \(\mathrm{SO}(2^{n})\) and \(\mathrm{SU}(2^{n})\).

### Explicit \(k\)-wise independent permutations, orthogonal designs, and unitary designs

We can finally apply Theorem 1.3, so let's write our above results in the notation of this theorem. Fix \(k\geq 1\) and consider any of the \(P\) (multi)sets described in Theorems 2.8 and 2.9. Let \(\mathcal{U}=(\rho(g)-\underset{\mathbf{g}\sim\mathrm{G}(n)}{\operatorname{\mathbf{E}}}[\rho(\mathbf{g})]:g\in P)\) be a sequence of unitaries, where \(\rho\) is the appropriate unitary representation (\(\mathcal{W}_{2^{n}}^{k}\) for \(S_{2^{n}}\) and \(\rho_{2^{n}}^{k,k}\) for \(\mathrm{SO}(2^{n})\) and \(\mathrm{SU}(2^{n})\)). For this choice of \(\mathcal{U}\) we have \(c=|P|=\operatorname{poly}(n)\).
Notice that \(\left\|\mathrm{avg}_{i\in[c]}\{U_{i}\}\right\|_{\mathrm{op}}\) is exactly the left-hand side of the equations in Theorems 2.8 and 2.9, so we know that this average is at most \(1-\delta\) for \(\delta=1/\operatorname{poly}(n,k)\) (as observed, this is actually \(1/\widetilde{O}(k^{2}n^{2})\) for \(S_{2^{n}}\) and \(1/(n\operatorname{poly}(k))\) for \(\mathrm{SO}(2^{n})\) and \(\mathrm{SU}(2^{n})\)). Given \(\epsilon>0\), applying Theorem 1.3 (with its "\(\epsilon\)" parameter set to \(\epsilon/2^{nk}\)) we obtain a sequence \(\mathcal{P}\) of cardinality \(\operatorname{poly}(2^{nk}/\epsilon)\) that satisfies \(\left\|\mathrm{avg}_{U\in\mathcal{P}}\{U\}\right\|_{\mathrm{op}}\leq\epsilon\). Additionally, \(U\in\mathcal{P}\) is a product of at most \(\operatorname{poly}(nk)\log(1/\epsilon)\) elements of \(\mathcal{U}\), and so it can be written as \(\rho(g)-\underset{\mathbf{g}\sim\mathrm{G}(n)}{\operatorname{\mathbf{E}}}[\rho(\mathbf{g})]\), where \(g\) is a product of at most \(\operatorname{poly}(nk)\log(1/\epsilon)\) elements of \(P\). This follows since for any \(g,g^{\prime}\in P\), \[\bigg{(}\rho(g)-\underset{\mathbf{g}\sim\mathrm{G}(n)}{\operatorname{\mathbf{E}}}[\rho(\mathbf{g})]\bigg{)}\bigg{(}\rho(g^{\prime})-\underset{\mathbf{g}\sim\mathrm{G}(n)}{\operatorname{\mathbf{E}}}[\rho(\mathbf{g})]\bigg{)}=\bigg{(}\rho(g\cdot g^{\prime})-\underset{\mathbf{g}\sim\mathrm{G}(n)}{\operatorname{\mathbf{E}}}[\rho(\mathbf{g})]\bigg{)}, \tag{19}\] where we use the fact that \(\underset{\mathbf{g}\sim\mathrm{G}(n)}{\operatorname{\mathbf{E}}}[\rho(\mathbf{g})]\) is an orthogonal projection operator, and that \(\rho\) is a representation. We combine all of this in the following theorems:

**Theorem 2.10**.: _Let \(\epsilon>0\)._
Then for any \(k=k(n)\), there is a set \(\mathcal{P}_{S_{2^{n}}}\) that satisfies:_ \[\left\|\underset{\mathbf{g}\sim\mathcal{P}_{S_{2^{n}}}}{\operatorname{\mathbf{E}}}[\mathcal{W}^{k}(\mathbf{g})]-\underset{\mathbf{g}\sim S_{2^{n}}}{\operatorname{\mathbf{E}}}[\mathcal{W}^{k}(\mathbf{g})]\right\|_{\mathrm{op}}\leq\epsilon/2^{nk}. \tag{20}\] _Additionally, this set satisfies the following properties:_

* _Its cardinality is_ \(\operatorname{poly}(2^{nk}/\epsilon)\)_._
* _Each element of_ \(\mathcal{P}_{S_{2^{n}}}\) _is given by an_ \(n\)_-qubit circuit consisting of_ \(S=\operatorname{poly}(nk)\log(1/\epsilon)\) _gates, which are elements of_ \(P_{S_{2^{n}}}\)_._
* _The algorithm that takes as input a seed and outputs the associated circuit runs in deterministic_ \(\operatorname{poly}(S)\) _time._

**Theorem 2.11**.: _Let \(\mathrm{G}(n)\in\{\mathrm{SO}(2^{n}),\mathrm{SU}(2^{n})\}\) and \(\epsilon>0\). Then for any \(k=k(n)\), there is a set \(\mathcal{P}_{G}\) that satisfies:_ \[\left\|\underset{\mathbf{g}\sim\mathcal{P}_{G}}{\operatorname{\mathbf{E}}}[\rho_{2^{n}}^{k,k}(\mathbf{g})]-\underset{\mathbf{g}\sim\mathrm{G}(n)}{\operatorname{\mathbf{E}}}[\rho_{2^{n}}^{k,k}(\mathbf{g})]\right\|_{\mathrm{op}}\leq\epsilon/2^{nk}. \tag{21}\] _Additionally, this set satisfies the following properties:_

* _Its cardinality is_ \(\operatorname{poly}(2^{nk}/\epsilon)\)_._
* _Each element of_ \(\mathcal{P}_{G}\) _is given by an_ \(n\)_-qubit circuit consisting of_ \(S=\operatorname{poly}(nk)\log(1/\epsilon)\) _gates, which are elements of_ \(P_{G}\)_._
* _The algorithm that takes as input a seed and outputs the associated circuit runs in deterministic_ \(\operatorname{poly}(S)\) _time._

Ultimately we want designs for \(\operatorname{O}(2^{n})\) and \(\operatorname{U}(2^{n})\); we obtain them from the above via the following simple corollary.
**Corollary 2.12** (Theorem 1.1 restated).: _Let \(\operatorname{G}(n)\in\{\operatorname{O}(2^{n}),\operatorname{U}(2^{n})\}\) and \(\epsilon>0\). Then for any \(k=k(n)\), there is a set \(\mathcal{P}_{G}\) that satisfies the conditions of Theorem 2.11._

Proof.: To obtain the result for \(\operatorname{O}(2^{n})\), after sampling from \(\mathcal{P}_{\operatorname{SO}}\) one samples \(\boldsymbol{b}\) as a uniformly random \(\pm 1\) and multiplies the first column of the sampled matrix by \(\boldsymbol{b}\) (which only changes the cardinality of the resulting \(\mathcal{P}_{\operatorname{O}}\) by a factor of \(2\)). For \(\operatorname{U}(2^{n})\), no augmentation of \(\mathcal{P}_{\operatorname{SU}}\) is required; i.e., we can simply take \(\mathcal{P}_{\operatorname{U}}=\mathcal{P}_{\operatorname{SU}}\).5 To see this, first recall that the representations \(\rho_{2^{n}}^{k,k}\) of \(\operatorname{U}(2^{n})\) and of \(\operatorname{SU}(2^{n})\) respectively have kernels \(K_{\operatorname{U}}=\{e^{i\alpha}\mathbb{1}\ :\ \alpha\in[0,2\pi)\}\) and \(K_{\operatorname{SU}}=\{-\mathbb{1},\mathbb{1}\}\). Next, recall that \(\operatorname{U}(2^{n})/K_{\operatorname{U}}\cong\operatorname{SU}(2^{n})/K_{\operatorname{SU}}\), since both of these are isomorphic to the projective unitary group \(\operatorname{PU}(2^{n})\) (see e.g. [20]). Now, since the push-forward of the Haar measure of a compact group \(G\) to a factor group \(G/H\) is exactly the Haar measure on \(G/H\), it follows that the set \(\mathcal{P}_{\operatorname{SU}}\) of unitaries in \(\operatorname{SU}(2^{n})\) that forms an \(\epsilon\)-approximate \(k\)-design of \(\operatorname{SU}(2^{n})\) is also an \(\epsilon\)-approximate \(k\)-design of \(\operatorname{U}(2^{n})\).

Footnote 5: We thank an anonymous reviewer for this observation.
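The column-flip augmentation in the proof above can be sketched numerically (a minimal illustration, not the paper's code): multiplying the first column of \(R\in\mathrm{SO}(N)\) by \(-1\) preserves orthogonality and flips the determinant, so the two outcomes of the random sign \(\boldsymbol{b}\) reach both connected components of \(\mathrm{O}(N)\).

```python
import numpy as np

def flip_first_column(R):
    # The augmentation from the proof of Corollary 2.12 with b = -1:
    # multiply the first column of R by -1, i.e. R @ diag(-1, 1, ..., 1).
    F = R.copy()
    F[:, 0] *= -1
    return F

R = np.array([[3/5, -4/5],
              [4/5,  3/5]])            # an element of SO(2)
F = flip_first_column(R)

assert np.allclose(F @ F.T, np.eye(2))       # still orthogonal
assert np.isclose(np.linalg.det(R), 1.0)
assert np.isclose(np.linalg.det(F), -1.0)    # now in the other component of O(2)
```

Since \(\det(R\cdot\operatorname{diag}(b,1,\ldots,1))=b\cdot\det(R)\), a uniform \(\boldsymbol{b}=\pm 1\) splits the output evenly between the \(\det=+1\) and \(\det=-1\) components.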
It is of note that we can apply the framework of this section, through Theorem 1.3, to obtain explicit designs of any subgroup of the unitary group using any unitary representation, as long as one establishes an initial gap first, like in Section 2.1.

## 3 Establishing an initial spectral gap for special orthogonal and unitary groups

In the rest of the paper we consider a sequence of groups \((\operatorname{G}(n))_{n\geq 1}\) which is either \((\operatorname{SO}(2^{n}))_{n\geq 1}\) or \((\operatorname{SU}(2^{n}))_{n\geq 1}\). We recall (see e.g. [14, Section 1.3]) that these groups have associated Lie algebras \(\mathfrak{g}_{n}\), where \[\text{for }\operatorname{G}(n)=\operatorname{SO}(2^{n}),\ \mathfrak{g}_{n}=\{H\in\mathbb{R}^{2^{n}\times 2^{n}}:H\text{ skew-symmetric}\}, \tag{22}\] \[\text{for }\operatorname{G}(n)=\operatorname{SU}(2^{n}),\ \mathfrak{g}_{n}=\{H\in\mathbb{C}^{2^{n}\times 2^{n}}:H\text{ skew-Hermitian},\operatorname{tr}(H)=0\}. \tag{23}\] When we need to specialize our discussion to a particular one of these two cases, we will do so explicitly; most of our arguments go through for both settings (and many go through for the more general setting in which \(\operatorname{G}(n)\) is any compact connected Lie group). As discussed in Section 2, given Theorem 6.21, in order to construct an explicit \(k\)-design for \(\operatorname{G}(n)\) it suffices to construct an explicit sequence \(\mathcal{U}=(U_{1},\dots,U_{c})\) of \(2^{n}\times 2^{n}\) matrices from \(\operatorname{G}(n)\) satisfying \(\left\|\rho_{2^{n}}^{k,k}(U_{i})\right\|_{\operatorname{op}}\leq 1\) for all \(i\) and \(\left\|\operatorname{avg}(\rho_{2^{n}}^{k,k}(\mathcal{U}))\right\|_{\operatorname{op}}\leq 1-\frac{1}{n\cdot\operatorname{poly}(k)}\) (in fact, a spectral gap of \(\frac{1}{\operatorname{poly}(n,k)}\) would also be sufficient).
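As a quick sanity check on Equations (22) and (23): exponentiating a skew-symmetric generator lands in the special orthogonal group, and exponentiating a traceless skew-Hermitian generator lands in the special unitary group. The sketch below verifies this numerically for \(4\times 4\) matrices; the truncated-series `expm` is our own stand-in (not from the paper), adequate here because the generator norms are small.

```python
import numpy as np

def expm(H, terms=40):
    # Truncated Taylor series sum_j H^j / j!; fine for the modest norms used here.
    out = np.eye(H.shape[0], dtype=H.dtype)
    term = np.eye(H.shape[0], dtype=H.dtype)
    for j in range(1, terms):
        term = term @ H / j
        out = out + term
    return out

rng = np.random.default_rng(1)

# Skew-symmetric generator -> special orthogonal matrix, as in Equation (22).
A = rng.standard_normal((4, 4))
H_so = (A - A.T) / 2
g = expm(H_so)
assert np.allclose(g.T @ g, np.eye(4))
assert np.isclose(np.linalg.det(g), 1.0)   # det(exp(H)) = exp(tr H) = 1

# Traceless skew-Hermitian generator -> special unitary matrix, as in Equation (23).
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H_su = (B - B.conj().T) / 2
H_su = H_su - np.trace(H_su) / 4 * np.eye(4)
u = expm(H_su)
assert np.allclose(u.conj().T @ u, np.eye(4))
assert np.isclose(np.linalg.det(u), 1.0)
```

The determinant checks use the identity \(\det(e^{H})=e^{\operatorname{tr}H}\), which is why tracelessness of the generator forces determinant one.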
Constructing such a sequence for \(\operatorname{G}(n)\) as described above is the main goal of this section and is accomplished in the following theorem:

**Theorem 3.1**.: _Let \((\operatorname{G}(n))_{n\geq 1}\in\{(\operatorname{SO}(2^{n}))_{n\geq 1},(\operatorname{SU}(2^{n}))_{n\geq 1}\}\). There is a fixed positive integer \(n_{0}=4\) and a finite multiset \(P_{n_{0}}\subset\operatorname{G}(n_{0})\) such that for all sufficiently large \(n\), we have_ \[\forall k\in\mathbb{N}^{+},\quad L_{\widetilde{P}_{n_{0}}\times\binom{[n]}{n_{0}}}(\rho_{2^{n}}^{k,k})\geq\frac{1}{n\cdot\operatorname{poly}(k)}\cdot L_{\operatorname{G}(n)}(\rho_{2^{n}}^{k,k}). \tag{24}\]

(We note that even without using the "pseudorandom walks" machinery of Section 6, as discussed in Section 1.1, since Theorem 3.1 establishes an initial spectral gap of \(\frac{1}{n\cdot\operatorname{poly}(k)}\), simply taking a product of \(n\cdot\operatorname{poly}(k)\cdot\log(2^{nk}/\epsilon)\) uniform random draws from \(\widetilde{P}_{n_{0}}\) would yield an \(\epsilon\)-approximate \(k\)-design for \(\operatorname{G}(n)\) with seed length \(\operatorname{poly}(n,k)\cdot\log(1/\epsilon)\). By combining Theorem 3.1 with Theorem 6.21 (i.e. using pseudorandom walks) we are able to improve this to seed length \(O(nk+\log(1/\epsilon))\), thus matching the random construction.)

### Overview of the proof of Theorem 3.1

Our proof of Theorem 3.1 refines and extends an approach from [10], and combines it with arguments from [10]. In this subsection we give a high-level overview of the structure of the proof, and in the next subsection we give (a modular version of) the actual proof. Establishing the various modular pieces will comprise the rest of the paper after Section 3.2. In Theorem 4 of [10], Haferkamp and Hunter-Jones establish a spectral gap for non-local random quantum circuits with truly (Haar) random two-qudit unitary gates over the unitary group.
This is done by analyzing Haar random unitary gates over \(m-1\) randomly chosen qubits from an \(m\)-qubit system; this enables them to establish a recurrence relation which lets them bound the spectral gap of circuits with \(\ell\)-qudit Haar random unitary gates in terms of the spectral gap of circuits with \((\ell+1)\)-qudit Haar random unitary gates. Our Lemma 3.2 below is a generalization and rephrasing of their recurrence relation for the special6 unitary and special orthogonal groups; it essentially says that if truly randomizing (a randomly chosen) \(m-1\) out of \(m\) qubits is "not too much worse" than truly randomizing all \(m\) qubits, then truly randomizing only a constant number of (randomly chosen) qubits out of \(m\) qubits is also not too much worse than truly randomizing all \(m\) qubits. Given this, the remaining tasks are (1) to show that indeed truly randomizing (a randomly chosen) \(m-1\) out of \(m\) qubits is "not too much worse" than truly randomizing all \(m\) qubits; and (2) to show that at the bottom level of the argument, it suffices to _pseudorandomize_ a constant number of (randomly chosen) qubits out of \(m\) qubits. Footnote 6: At the end of this subsection we explain why, even though our ultimate goal is to obtain results for the orthogonal and unitary groups, we need to work with the special versions of these groups at this point in the argument. Task (1) requires a significant amount of technical work and is the subject of Section 4 and Section 5. We follow the high-level approach of [10] by breaking the analysis into two sub-cases (Theorem 3.3 and Theorem 3.4) depending on the relative sizes of \(k\) and \(m\). In each of these sub-cases we adapt and generalize the analysis of [10] (we note that the "small-\(m\)" case of [10], for the unitary group, was based in turn on [10]) in a way which permits a unified treatment of both the special orthogonal group and the special unitary group.
Task (2) is necessary because our ultimate goal statement, Theorem 3.1, requires the randomly chosen non-local gates to be drawn from a _finite_ ensemble of gates rather than being Haar random ("truly random") gates over \(n_{0}\) qubits. For this step (made formal in Lemma 3.6), following [10] we use a deep result of Bourgain and Gamburd (subsequently generalized by Benoist and de Saxce [1]) to pass from the Haar distribution over \(\operatorname{G}(n_{0})\) to a uniform distribution over an explicit finite ensemble of \(n_{0}\)-qubit gates; see Section 3.4. The [1] results require that the Lie groups in question be compact and simple; this requirement is why we need to work with the special versions of the unitary and orthogonal groups (indeed, in the special orthogonal case we need to further pass to the projective special orthogonal group; see the proof of Corollary 3.10).

### Proof of Theorem 3.1

In order to establish the lower bound of Inequality (24) we will need to chain together some statements that go in the opposite direction from Inequality (14) and Inequality (15). We do this via the following lemma, which we prove in Section 3.3.

**Lemma 3.2**.: _Fix a positive integer constant \(n_{0}\geq 4\). Suppose that for \(n_{0}<m\leq n\) we have_ \[\forall k\in\mathbb{N}^{+},\quad L_{\operatorname{G}(m-1)\times\binom{[m]}{m-1}}(\rho_{2^{m}}^{k,k})\geq\tau_{k,m}\cdot L_{\operatorname{G}(m)}(\rho_{2^{m}}^{k,k}). \tag{25}\] _Then_ \[\forall k\in\mathbb{N}^{+},\quad L_{\mathrm{G}(n_{0})\times\binom{[n]}{n_{0}}}(\rho_{2^{n}}^{k,k})\geq\left(\prod_{n_{0}<m\leq n}\tau_{k,m}\right)\cdot L_{\mathrm{G}(n)}(\rho_{2^{n}}^{k,k}). \tag{26}\]

We remark that Lemma 3.2 only deals with "truly" (Haar) random gates; later we will move from \(n_{0}\)-arity "truly random" gates to "pseudorandom" gates, which are drawn uniformly at random from a finite multiset.
It may be helpful to think of the lemma's conclusion (Inequality (26)) as intuitively saying that truly randomizing only constantly many (randomly chosen) qubits is "not too much worse" than truly randomizing all \(n\) qubits, vis-a-vis the \(k\)-wise tensor product representation. With the above lemma in hand, proving Theorem 3.1 breaks down naturally into two steps.

**First step: Passing from truly random \(m\)-qubit gates to truly random \((m-1)\)-qubit gates.** In other words, lower-bounding \(\tau_{k,m}\) for \(m=n_{0}+1,\ldots,n\). This is the main technical task where the bulk of our work is required. The analysis is done separately for "large \(m\)" and "small \(m\)" cases, similar to Lemmas 6 and 7 of [11], respectively. Section 4 lower bounds \(\tau_{k,m}\) for "large \(m\)":

**Theorem 3.3**.: _For all \(k\leq\frac{1}{\sqrt{10}m^{2}}2^{m/2}\) we have that Inequality (25) holds with \(\tau_{k,m}\geq 1-\frac{1}{m}-\frac{\sqrt{10}km}{2^{m/2}}\)._

Section 5 gives a lower bound on \(\tau_{k,m}\) which will be useful for "small \(m\)":

**Theorem 3.4**.: _For all \(m\geq 4\) and all \(k\in\mathbb{N}^{+}\), we have that Inequality (25) holds with \(\tau_{k,m}\geq.04\)._

(We note that Theorem 3.4's requirement that \(m\geq 4\) is why we take \(n_{0}=4\) in Theorem 3.1.) Given Theorem 3.3 and Theorem 3.4, we get the desired lower bound on \(\tau_{k,n_{0}+1}\cdots\tau_{k,n}\) from a routine computation:

**Lemma 3.5**.: _For any constant \(n_{0}\geq 4\), for all \(n\) and all \(k\in\mathbb{N}^{+}\) we have \(\tau_{k,n_{0}+1}\cdots\tau_{k,n}\geq\frac{1}{n\cdot\mathrm{poly}(k)}\)._ Proof.: Fix \(n_{0}\geq 4\) and take any \(n,k\geq 1\). Defining \(\ell=\lfloor 4\log_{2}(60k)\rfloor\geq 20\), by Theorem 3.4 we have \[\tau_{k,n_{0}+1}\cdots\tau_{k,\ell}\geq(.04)^{\ell}=(.04)^{O(\log k)}\geq\frac{1}{\mathrm{poly}(k)}. \tag{27}\] This proves the result if \(n\leq\ell\). Otherwise, it remains to show that \[\tau_{k,\ell+1}\cdots\tau_{k,n}\geq 1/n.
\tag{28}\] For \(m\geq\ell+1\) we have \(k\leq\frac{1}{60}2^{m/4}\leq\frac{1}{\sqrt{10}m^{2}}2^{m/2}\), so we are eligible to use the bound from Theorem 3.3. Then using \[\frac{\sqrt{10}km}{2^{m/2}}\leq\frac{\sqrt{10}m}{60\cdot 2^{m/4}}\leq 2^{-m/5},\quad 1-\frac{1}{m}-2^{-m/5}\geq\left(1-\frac{1}{m}\right)\exp(-2^{1-m/5}) \tag{29}\] (the last inequality using \(m\geq\ell\geq 20\)), we conclude \[\tau_{k,\ell+1}\cdots\tau_{k,n}\geq\prod_{m=\ell+1}^{n}\left(1-\frac{1}{m}\right)\exp(-2^{1-m/5})=\frac{\ell}{n}\exp\left(-\sum_{m=\ell+1}^{n}2^{1-m/5}\right)\geq\frac{1}{n} \tag{30}\] (using \(\ell\geq 20\)), confirming Inequality (28).

**Second step: From "truly random" non-local \(n_{0}\)-qubit gates to "pseudorandom" non-local \(n_{0}\)-qubit gates.** The next lemma, proved in Section 3.4, may be viewed as saying that (suitably) _pseudo_-randomizing constantly many randomly chosen qubits is "not much worse" than _truly_ randomizing those qubits.

**Lemma 3.6**.: _There is an absolute constant \(n_{0}=4\) such that for \(n\geq n_{0}+1\), we have_ \[\forall k\in\mathbb{N}^{+},\quad L_{\widetilde{P}_{n_{0}}\times\binom{[n]}{n_{0}}}(\rho_{2^{n}}^{k,k})\geq\kappa_{n_{0}}\cdot L_{\mathrm{G}(n_{0})\times\binom{[n]}{n_{0}}}(\rho_{2^{n}}^{k,k}),\] _where \(\kappa_{n_{0}}\) is an absolute constant (depending only on \(n_{0}\))._

Theorem 3.1 follows from Lemma 3.2, Lemma 3.5 and Lemma 3.6.

### Proof of Lemma 3.2

**Lemma 3.7** (Restatement of Lemma 3.2).: _Fix a positive integer \(n_{0}\geq 4\). Suppose that for \(n_{0}<m\leq n\) we have_ \[\forall k\in\mathbb{N}^{+},\quad L_{\mathrm{G}(m-1)\times\binom{[m]}{m-1}}(\rho_{2^{m}}^{k,k})\geq\tau_{k,m}\cdot L_{\mathrm{G}(m)}(\rho_{2^{m}}^{k,k}). \tag{31}\] _Then_ \[\forall k\in\mathbb{N}^{+},\quad L_{\mathrm{G}(n_{0})\times\binom{[n]}{n_{0}}}(\rho_{2^{n}}^{k,k})\geq\left(\prod_{n_{0}<m\leq n}\tau_{k,m}\right)\cdot L_{\mathrm{G}(n)}(\rho_{2^{n}}^{k,k}).
\tag{32}\] Proof.: For readability we simply write \(\tau_{i}\) in this proof to stand for \(\tau_{k,i}\). Also for readability we express the lemma as \[\text{(randomizing $m-1$ out of $m$ qubits)}\geq\tau_{m}\cdot\text{(randomizing all $m$ qubits)}\quad\forall\ n_{0}<m\leq n \tag{33}\] \[\implies\quad\text{(randomizing $n_{0}$ out of $n$ qubits)}\geq\tau_{n_{0}+1}\cdots\tau_{n}\cdot\text{(randomizing all $n$ qubits)},\] with the modifier "vis-a-vis all \(\rho_{2^{m}}^{k,k}\)" being implied. The \(m=n_{0}+1\) case of Inequality (33) is \[\text{(randomizing $n_{0}$ out of $n_{0}+1$ qubits)}\geq\tau_{n_{0}+1}\cdot\text{(randomizing all $n_{0}+1$ qubits)}. \tag{34}\] From this, by adding an ignored \((n_{0}+2)\)th qubit, we are able to conclude \[\text{(randomizing $n_{0}$ out of the first $n_{0}+1$ of $n_{0}+2$ qubits)}\] \[\geq\tau_{n_{0}+1}\cdot\text{(randomizing the first $n_{0}+1$ of $n_{0}+2$ qubits)}. \tag{35}\] To derive this implication more formally, start with Inequality (34), which says that for all \(k\in\mathbb{N}^{+}\), \[\underset{\boldsymbol{g}\sim\mathrm{G}(n_{0}),\,\boldsymbol{e}\sim\binom{[n_{0}+1]}{n_{0}}}{\mathbf{E}}\left[\mathbb{1}-\boldsymbol{g}_{\boldsymbol{e}}^{\otimes k,k}\right]\geq\tau_{n_{0}+1}\cdot\underset{\boldsymbol{h}\sim\mathrm{G}(n_{0}+1)}{\mathbf{E}}[\mathbb{1}-\boldsymbol{h}^{\otimes k,k}]. \tag{36}\] We now consider tacking on an \((n_{0}+2)\)th tensor factor that is ignored by both \(\boldsymbol{g}_{\boldsymbol{e}}\) and by \(\boldsymbol{h}\).
Since \(A\geq B\implies A\otimes\mathbb{1}\geq B\otimes\mathbb{1}\), we can tensor-product both sides of Inequality (36) by \(\mathbb{1}^{\otimes k,k}\) (where \(\mathbb{1}\) denotes the \(2\times 2\) identity matrix) to conclude \[\underset{\boldsymbol{g}\sim\mathrm{G}(n_{0}),\,\boldsymbol{e}\sim\binom{[n_{0}+1]}{n_{0}}}{\mathbf{E}}\left[\mathbb{1}-\boldsymbol{g}_{\boldsymbol{e}}^{\otimes k,k}\otimes\mathbb{1}^{\otimes k,k}\right]\geq\tau_{n_{0}+1}\cdot\underset{\boldsymbol{h}\sim\mathrm{G}(n_{0}+1)}{\mathbf{E}}[\mathbb{1}-\boldsymbol{h}^{\otimes k,k}\otimes\mathbb{1}^{\otimes k,k}], \tag{37}\] and this is the meaning of Inequality (35). Indeed, we can insert the ignored \((n_{0}+2)\)th qubit at any position \(j\), not just the last one; i.e., for any \(j\in[n_{0}+2]\), \[\underset{\boldsymbol{g}\sim\mathrm{G}(n_{0}),\,\boldsymbol{e}\sim\binom{[n_{0}+2]\setminus\{j\}}{n_{0}}}{\mathbf{E}}\left[\mathbb{1}-\boldsymbol{g}_{\boldsymbol{e}}^{\otimes k,k}\right]\geq\tau_{n_{0}+1}\cdot\underset{\boldsymbol{h}\sim\mathrm{G}(n_{0}+1)}{\mathbf{E}}[\mathbb{1}-\boldsymbol{h}_{[n_{0}+2]\setminus\{j\}}^{\otimes k,k}]. \tag{38}\] If we now average the above (PSD-order) inequality over \(\boldsymbol{j}\sim[n_{0}+2]\) we get \[\underset{\boldsymbol{g}\sim\mathrm{G}(n_{0}),\,\boldsymbol{e}\sim\binom{[n_{0}+2]}{n_{0}}}{\mathbf{E}}\left[\mathbb{1}-\boldsymbol{g}_{\boldsymbol{e}}^{\otimes k,k}\right]\geq\tau_{n_{0}+1}\cdot\underset{\boldsymbol{h}\sim\mathrm{G}(n_{0}+1),\,\boldsymbol{f}\sim\binom{[n_{0}+2]}{n_{0}+1}}{\mathbf{E}}[\mathbb{1}-\boldsymbol{h}_{\boldsymbol{f}}^{\otimes k,k}], \tag{39}\] which we would express as \[\text{(randomizing $n_{0}$ out of $n_{0}+2$ qubits)}\geq\tau_{n_{0}+1}\cdot\text{(randomizing $n_{0}+1$ out of $n_{0}+2$ qubits)}. \tag{40}\] But the \(m=n_{0}+2\) case of our hypothesis Inequality (33) is \[\text{(randomizing $n_{0}+1$ out of $n_{0}+2$ qubits)}\geq\tau_{n_{0}+2}\cdot\text{(randomizing all $n_{0}+2$ qubits)}, \tag{41}\] so chaining this together with Inequality (40) (using the PSD-ordering fact \(A\geq B\), \(B\geq C\implies A\geq C\)) gives \[\text{(randomizing $n_{0}$ out of $n_{0}+2$ qubits)}\geq\tau_{n_{0}+1}\cdot\tau_{n_{0}+2}\cdot\text{(randomizing all $n_{0}+2$ qubits)}.
\tag{42}\] Iterating this argument completes the proof of the lemma. ### Proof of Lemma 3.6 An ingredient we need for Lemma 3.6 is the existence of a suitable finite "gate set" with useful properties. This is provided by the following lemma, which follows from known universality results in quantum computing (see Section 3.4.1): **Lemma 3.8**.: _There is an absolute constant \(n_{0}=4\) for which there is a finite multiset \(P_{n_{0}}\subset\mathrm{SO}(2^{n_{0}})\), closed under negations and inverses, with two properties:_ 1. _(There is a basis in which) every matrix in_ \(P_{n_{0}}\) _has algebraic entries._ 2. _Finite products of elements of_ \(P_{n_{0}}\) _are dense in_ \(\mathrm{SO}(2^{n_{0}})\)_._ _The same statement is true for \(\mathrm{SU}(2^{n_{0}})\) (also with \(n_{0}=4\))._ Lemma 3.8 allows us to use a deep result of Benoist and de Saxce [1], which extended earlier work of Bourgain-Gamburd [1] (for the case of the special unitary group) to a broader range of groups. The main result of [1] is as follows: **Theorem 3.9**.: _([1, Consequence of Theorem 1.2].) For \(n\geq 1\) let \(\mathrm{G}(n)\subseteq\mathrm{SU}(2^{n})\) be a connected compact simple Lie group. Fix a positive integer \(n_{0}\) and suppose that \(P_{n_{0}}\subset\mathrm{G}(n_{0})\) satisfies properties (A) and (B) of Lemma 3.8. Then there exists a constant \(\kappa>0\) such that_ \[\left\|\underset{\boldsymbol{g}\sim\widetilde{P_{n_{0}}}}{\mathrm{E}}[ \mathrm{reg}(\boldsymbol{g})]-\underset{\boldsymbol{g}\sim\mathrm{G}(n_{0})}{ \mathrm{E}}[\mathrm{reg}(\boldsymbol{g})]\right\|_{\mathrm{op}}\leq 1-\kappa, \tag{43}\] _where \(\mathrm{reg}\) denotes the regular representation of \(\mathrm{G}(n_{0})\). Equivalently, \(L_{\widetilde{P_{n_{0}}}}(\mathrm{reg})\geq\kappa\cdot L_{\mathrm{G}(n_{0})}( \mathrm{reg}),\) or_ \[(\widetilde{P_{n_{0}}}\text{-pseudorandomizing in $2^{n_{0}}$ dimensions})\geq\kappa\cdot(\text{ randomizing $2^{n_{0}}$ dimensions})\quad\text{[vis-a-vis reg]}. 
\tag{44}\] We remark that (as noted by [1]) a weaker form of Equation (43), with the \(k\)-wise tensor product representation in place of the regular representation and \(\kappa\) depending on \(k\), has been known at least since [1]; however, the stronger quantitative bound of Equation (43) is essential for our purposes. Theorem 3.9 yields the following useful corollary:

**Corollary 3.10**.: _For \(n_{0}=4\), \(\mathrm{G}(n_{0})=\mathrm{SO}(2^{n_{0}})\), and \(P_{n_{0}}\subset\mathrm{G}(n_{0})\) satisfying properties (A) and (B) of Lemma 3.8, there is a constant \(\kappa>0\) such that for all \(k\in\mathbb{N}^{+}\) we have \(L_{\widetilde{P_{n_{0}}}}(\rho_{2^{n_{0}}}^{k,k})\geq\kappa\cdot L_{\mathrm{G}(n_{0})}(\rho_{2^{n_{0}}}^{k,k})\). That is, vis-a-vis any \(\rho_{2^{n_{0}}}^{k,k}\), we have_ \[(\widetilde{P_{n_{0}}}\text{-pseudorandomizing $n_{0}$ qubits})\geq\kappa\cdot(\mathrm{G}(n_{0})\text{-randomizing $n_{0}$ qubits}). \tag{45}\] _The same is true for \(n_{0}=4\), \(\mathrm{G}(n_{0})=\mathrm{SU}(2^{n_{0}})\)._ Proof.: We first note that since all irreducible representations appear in the regular representation7, the conclusion of Theorem 3.9 also holds for any \(\rho_{2^{n_{0}}}^{k,k}\) representation. Since the special unitary group is connected, compact, and simple8, this immediately gives Corollary 3.10 in the case \(\mathrm{G}(n_{0})=\mathrm{SU}(2^{n_{0}})\). Footnote 7: For a concrete proof in the case of \(\mathrm{G}(\ell)=\mathrm{SO}(2^{\ell})\), see e.g. [12, Lem. 6.1].
Footnote 8: Recall that the Lie algebra of the special unitary group is simple, and that Benoist and de Saxce remark, following their Theorem 1.2 in [1], that "_For us, a compact simple Lie group will be a compact real Lie group whose Lie algebra is simple._" For the special orthogonal case, while \(\mathrm{G}(n_{0})=\mathrm{SO}(2^{n_{0}})\) is not simple, the projective special orthogonal group \(\mathrm{PSO}(2^{n_{0}})=\mathrm{SO}(2^{n_{0}})/\{\pm 1\}\) is a connected compact simple Lie group. Writing \(P_{n_{0}}^{\prime}\) to denote the multiset of elements of \(\mathrm{PSO}(2^{n_{0}})\) corresponding to \(P_{n_{0}}\), Theorem 3.9 gives us that \[\left\|\underset{\boldsymbol{g}\sim\widetilde{P_{n_{0}}^{\prime}}}{\mathbf{E}}[\rho_{2^{n_{0}}}^{k,k}(\boldsymbol{g})]-\underset{\boldsymbol{g}\sim\mathrm{PSO}(2^{n_{0}})}{\mathbf{E}}[\rho_{2^{n_{0}}}^{k,k}(\boldsymbol{g})]\right\|_{\mathrm{op}}\leq 1-\kappa. \tag{46}\] Now recalling that \(\rho_{2^{n_{0}}}^{k,k}(g)=g^{\otimes k}\otimes g^{\otimes k}\), since \(P_{n_{0}}\) is closed under negation there is no need to distinguish between \(\mathrm{PSO}(2^{n_{0}})\) and \(\mathrm{SO}(2^{n_{0}})\) in either of the expectations appearing in Equation (46), i.e. we have \[\underset{\boldsymbol{g}\sim\widetilde{P_{n_{0}}^{\prime}}}{\mathbf{E}}[\rho_{2^{n_{0}}}^{k,k}(\boldsymbol{g})]=\underset{\boldsymbol{g}\sim\widetilde{P_{n_{0}}}}{\mathbf{E}}[\rho_{2^{n_{0}}}^{k,k}(\boldsymbol{g})],\qquad\quad\underset{\boldsymbol{g}\sim\mathrm{PSO}(2^{n_{0}})}{\mathbf{E}}[\rho_{2^{n_{0}}}^{k,k}(\boldsymbol{g})]=\underset{\boldsymbol{g}\sim\mathrm{SO}(2^{n_{0}})}{\mathbf{E}}[\rho_{2^{n_{0}}}^{k,k}(\boldsymbol{g})], \tag{47}\] which gives Corollary 3.10 for the case \(\mathrm{G}(n_{0})=\mathrm{SO}(2^{n_{0}})\).
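Equation (47) rests on the observation that \(\rho^{k,k}\) cannot distinguish \(g\) from \(-g\): negating \(g\) introduces \((-1)^{2k}=1\) across the \(2k\) tensor factors. A minimal numerical check of this sign cancellation (NumPy; the helper `rho_kk` is our own name for the \(k,k\)-fold tensor representation):

```python
import numpy as np

def rho_kk(g, k):
    # rho_D^{k,k}(g) = g^{(x)k} (x) conj(g)^{(x)k}; for real g the conjugate is g itself.
    out = np.eye(1, dtype=complex)
    for _ in range(k):
        out = np.kron(out, g)
    for _ in range(k):
        out = np.kron(out, np.conj(g))
    return out

rng = np.random.default_rng(2)
# A random rotation in SO(2); any group element would do for this identity.
t = rng.uniform(0, 2 * np.pi)
g = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

# Since (-g)^{(x)k} (x) conj(-g)^{(x)k} picks up (-1)^{2k} = 1, the images coincide.
for k in (1, 2, 3):
    assert np.allclose(rho_kk(-g, k), rho_kk(g, k))
```

This is exactly why a set closed under negation induces the same averaged operator whether viewed in \(\mathrm{SO}\) or in \(\mathrm{PSO}\).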
With Corollary 3.10 in hand, now we are ready to prove Lemma 3.6: Proof of Lemma 3.6.: By Corollary 3.10, we have \(L_{\widetilde{P_{n_{0}}}}(\rho_{2^{n_{0}}}^{k,k})\geq\kappa_{n_{0}}\cdot L_{\mathrm{G}(n_{0})}(\rho_{2^{n_{0}}}^{k,k})\), i.e. \[\mathbb{1}-\underset{\boldsymbol{h}\sim\widetilde{P_{n_{0}}}}{\mathbf{E}}[\rho(\boldsymbol{h})]\geq\kappa_{n_{0}}\left(\mathbb{1}-\underset{\boldsymbol{g}\sim\mathrm{G}(n_{0})}{\mathbf{E}}[\rho(\boldsymbol{g})]\right). \tag{48}\] We consider tacking on \(n-n_{0}\) tensor factors that are ignored by both \(\boldsymbol{g}\) and by \(\boldsymbol{h}\). Since \(A\geq B\implies A\otimes\mathbb{1}\geq B\otimes\mathbb{1}\), we can tensor-product both sides of Equation (48) by the identity to conclude \[\mathbb{1}-\underset{\boldsymbol{h}\sim\widetilde{P_{n_{0}}}}{\mathbf{E}}[\rho(\boldsymbol{h}_{[n_{0}]})]\geq\kappa_{n_{0}}\left(\mathbb{1}-\underset{\boldsymbol{g}\sim\mathrm{G}(n_{0})}{\mathbf{E}}[\rho(\boldsymbol{g}_{[n_{0}]})]\right). \tag{49}\] We can insert the ignored \(n-n_{0}\) qubits at any positions, not just the last ones; averaging the resulting inequalities, we get \[\frac{1}{\binom{n}{n_{0}}}\sum_{1\leq i_{1}<\cdots<i_{n_{0}}\leq n}\left(\mathbb{1}-\underset{\boldsymbol{h}\sim\widetilde{P_{n_{0}}}}{\mathbf{E}}[\rho(\boldsymbol{h}_{(i_{1},\ldots,i_{n_{0}})})]\right)\geq\kappa_{n_{0}}\cdot\frac{1}{\binom{n}{n_{0}}}\sum_{1\leq i_{1}<\cdots<i_{n_{0}}\leq n}\left(\mathbb{1}-\underset{\boldsymbol{g}\sim\mathrm{G}(n_{0})}{\mathbf{E}}[\rho(\boldsymbol{g}_{(i_{1},\ldots,i_{n_{0}})})]\right), \tag{50}\] which is what Lemma 3.6 asserts.

#### 3.4.1 Proof of Lemma 3.8

We first consider \(\mathrm{SO}(2^{4})\); so we must show that there is a finite multiset \(P_{4}\subset\mathrm{SO}(2^{4})\), closed under negations and inverses, that satisfies conditions (A) and (B) of Lemma 3.8.
Define the \(1\)- and \(2\)-qubit gates \[\mathrm{Q}\coloneqq\begin{bmatrix}3/5&-4/5\\ 4/5&3/5\end{bmatrix}\quad\text{and}\quad\mathrm{CNOT}\coloneqq\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\otimes\begin{bmatrix}1&0\\ 0&1\end{bmatrix}+\begin{bmatrix}0&0\\ 0&1\end{bmatrix}\otimes\begin{bmatrix}0&1\\ 1&0\end{bmatrix}, \tag{51}\] and let \(P_{4}\) be the following finite subset9 of \(\mathrm{SO}(2^{4})\): Footnote 9: Recall that \(\mathrm{CNOT}\not\in\mathrm{SO}(2^{2})\), but \(\mathrm{CNOT}\otimes\mathbb{1}_{4\times 4}\in\mathrm{SO}(16)\). \[P_{4}:=\text{the closure of }\{\mathrm{Q}_{(j)}:j\in[4]\}\cup\{\mathrm{CNOT}_{(i,j)}:i,j\in[4],i\neq j\}\text{ under inverses and negations.} \tag{52}\] Clearly \(P_{4}\) satisfies (A), and (B) follows from the following result from [22, Thm. 3.1]:

**Fact 3.11**.: _The \(1\)- and \(2\)-qubit gates_ \[\mathrm{Q}\coloneqq\begin{bmatrix}3/5&-4/5\\ 4/5&3/5\end{bmatrix}\quad\text{and}\quad\mathrm{CNOT}\coloneqq\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\otimes\begin{bmatrix}1&0\\ 0&1\end{bmatrix}+\begin{bmatrix}0&0\\ 0&1\end{bmatrix}\otimes\begin{bmatrix}0&1\\ 1&0\end{bmatrix} \tag{53}\] _are together universal for quantum computing with real amplitudes. More precisely, recalling Equation (52), we have that finite products of elements of \(P_{4}\) are dense in \(\mathrm{SO}(2^{4})\)._

Next we turn to \(\mathrm{SU}(2^{4})\). Define the \(1\)-qubit Hadamard gate (denoted H), phase gate (denoted S), and "\(\pi/8\) gate" (denoted T) respectively as \[\mathrm{H}\coloneqq\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix},\quad\mathrm{S}\coloneqq\begin{bmatrix}1&0\\ 0&i\end{bmatrix},\quad\text{and}\quad\mathrm{T}\coloneqq\begin{bmatrix}1&0\\ 0&e^{i\pi/4}\end{bmatrix}, \tag{54}\] and recall the definition of \(\mathrm{CNOT}\) from Equation (51).
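The basic properties of these gates are easy to confirm numerically. The sketch below (NumPy, our own check, not from the paper) verifies that \(\mathrm{Q}\in\mathrm{SO}(2)\), that \(\mathrm{CNOT}\) alone has determinant \(-1\) while \(\mathrm{CNOT}\otimes\mathbb{1}_{4\times 4}\) lies in \(\mathrm{SO}(16)\) as footnote 9 states, and the standard identities \(\mathrm{H}^{2}=\mathbb{1}\) and \(\mathrm{T}^{2}=\mathrm{S}\) (with S the standard phase gate \(\operatorname{diag}(1,i)\)).

```python
import numpy as np

# Gates as in Equations (51) and (54); S is the standard phase gate diag(1, i).
Q = np.array([[3/5, -4/5],
              [4/5,  3/5]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

# Q is special orthogonal; CNOT has det -1, but
# det(CNOT (x) 1_{4x4}) = det(CNOT)^4 = (-1)^4 = 1, so it lies in SO(16).
assert np.allclose(Q.T @ Q, np.eye(2)) and np.isclose(np.linalg.det(Q), 1)
assert np.isclose(np.linalg.det(CNOT), -1)
big = np.kron(CNOT, np.eye(4))
assert np.allclose(big.T @ big, np.eye(16)) and np.isclose(np.linalg.det(big), 1)

# Standard 1-qubit identities: H^2 = 1 and T^2 = S.
assert np.allclose(H @ H, np.eye(2))
assert np.allclose(T @ T, S)
```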
Now let \(P_{4}^{\prime}\) be the closure of \(\{\mathrm{H}_{(j)}:j\in[4]\}\cup\{\mathrm{S}_{(j)}:j\in[4]\}\cup\{\mathrm{T}_{(j)}:j\in[4]\}\cup\{\mathrm{CNOT}_{(i,j)}:i,j\in[4],i\neq j\}\) under inverses and negations. It is clear that \(P_{4}^{\prime}\) is a finite set of elements of \(\mathrm{U}(2^{4})\), closed under inverses, satisfying (A). The fact that \(P_{4}^{\prime}\) satisfies (B) follows from the well-known fact (see, e.g., [20, Sec. 4.5.3]) that H, S, T and \(\mathrm{CNOT}\) together are universal for quantum computing. Finally, we obtain the desired set of elements \(P_{4}\subset\mathrm{SU}(2^{4})\) by multiplying elements of \(P_{4}^{\prime}\) by suitable complex values of unit norm to have determinant one.

## 4 Lower bounding \(\tau_{m}\) for large \(m\)

In this section we prove Theorem 3.3, restated below, using simplifications of techniques introduced in [10]:

**Theorem 4.1** (Restatement of Theorem 3.3).: _Let the sequence of groups \((\mathrm{G}(n))_{n\geq 1}\) be either \((\mathrm{SO}(2^{n}))_{n\geq 1}\) or \((\mathrm{SU}(2^{n}))_{n\geq 1}\). Define the following operators on \((\mathbb{C}^{2^{m}})^{\otimes 2k}\):_ \[\Pi^{(m)}=\mathop{\mathbf{E}}_{\boldsymbol{g}\sim\mathrm{G}(m)}[\rho_{2^{m}}^{k,k}(\boldsymbol{g})],\quad\Pi_{[m]\setminus i}\otimes\mathbb{1}_{i}=(\mathbb{1}_{2\times 2}^{\otimes 2k}\text{ on the $i$th qubit of each tensor factor, $\Pi^{(m-1)}$ on the remainder}).
\tag{55}\] _Then for all \(k\leq\frac{1}{\sqrt{10}m^{2}}2^{m/2}\) we have_ \[\left\|\mathop{\mathrm{avg}}_{i=1}^{m}\{\Pi_{[m]\setminus i}\otimes\mathbb{1 }_{i}\}-\Pi^{(m)}\right\|_{\mathrm{op}}\leq\frac{1}{m}+\frac{\sqrt{10}km}{2^{m /2}}; \tag{56}\] _equivalently, in the notation of Theorem 3.3, \(\tau_{m}\geq 1-(\frac{1}{m}+\frac{\sqrt{10}km}{2^{m/2}})\)._ We observe: **Fact 4.2**.: \(\mathrm{Im}\,\Pi^{(m)}\) _is a subspace of \(\mathrm{Im}(\Pi_{[m]\setminus i}\otimes\mathbb{1}_{i})\) for all \(i\)._ ### Identifying the projectors To prove Theorem 4.1, we will need to have a description of the projection operator \(\Pi^{(m)}\); luckily, this is provided by known representation theory. To state the results we need some notation. **Notation 4.3**.: If \(X\in\,\mathbb{C}^{r\times r}\) is a matrix, we write \(\mathrm{vec}(X)\in\,\mathbb{C}^{r}\otimes\,\mathbb{C}^{r}\) for its vectorization; here \(\mathrm{vec}\) is the linear map that takes \(|i\rangle\langle j|\) to \(|ij\rangle\). **Fact 4.4**.: _For matrices \(R_{0},R_{1},S\in\,\mathbb{C}^{r\times r}\) it holds that \((R_{0}\otimes R_{1})\mathrm{vec}(S)=\mathrm{vec}(R_{0}SR_{1}^{\top})\)._ **Notation 4.5**.: Having fixed some \(D=2^{m}\in\mathbb{N}^{+}\), we write \[|\Phi\rangle=D^{-1/2}\sum_{a=1}^{D}|a\rangle\otimes|a\rangle=D^{-1/2}\mathrm{ vec}(\mathbb{1}_{D\times D}) \tag{57}\] for the maximally entangled state on \(\,\mathbb{C}^{D}\otimes\,\mathbb{C}^{D}\). **Notation 4.6**.: For \(k\in\mathbb{N}^{+}\), let \(\mathcal{M}_{2k}\) denote the set of all perfect matchings on \([2k]\), and let \(\mathcal{M}_{2k}^{\mathrm{bip}}\) denote the subset of all "bipartite" perfect matchings, meaning that each pair in the matching can be written as \(\{i,j\}\) with \(i\leq k\) and \(j>k\). 
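Fact 4.4 and the normalization in Notation 4.5 are quick to verify numerically. A minimal NumPy sketch, using row-major flattening to match the \(|i\rangle\langle j|\mapsto|ij\rangle\) convention of Notation 4.3:

```python
import numpy as np

def vec(X):
    # Notation 4.3: vec maps |i><j| to |ij>, i.e. row-major flattening.
    return X.reshape(-1)

rng = np.random.default_rng(3)
r = 3
R0 = rng.standard_normal((r, r))
R1 = rng.standard_normal((r, r))
Smat = rng.standard_normal((r, r))

# Fact 4.4: (R0 (x) R1) vec(S) = vec(R0 S R1^T).
assert np.allclose(np.kron(R0, R1) @ vec(Smat), vec(R0 @ Smat @ R1.T))

# Notation 4.5: |Phi> = D^{-1/2} vec(1_{DxD}) is a unit vector.
D = 4
Phi = vec(np.eye(D)) / np.sqrt(D)
assert np.isclose(np.linalg.norm(Phi), 1.0)
```

The row-major choice is what makes `np.kron` agree with the stated identity; a column-major `vec` would instead satisfy \((R_{1}^{\top}\otimes R_{0})\operatorname{vec}(S)=\operatorname{vec}(R_{0}SR_{1})\).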
**Notation 4.7**.: For \(M\in\mathcal{M}_{2k}\), we introduce the unit vector \[\left|\Phi_{M}\right\rangle=\bigotimes_{\{i,j\}\in M}\left|\Phi\right\rangle_{ij}\in(\mathbb{C}^{D})^{\otimes 2k}, \tag{58}\] where we abuse notation slightly by writing \(\left|\Phi\right\rangle_{ij}\) for the maximally entangled state on the \(i\)th and \(j\)th tensor components. Let us give two examples. First, with \(k=3\): \[M=\{\{1,2\},\{3,6\},\{4,5\}\}\implies\left|\Phi_{M}\right\rangle=D^{-k/2}\sum_{a,b,c=1}^{D}\left|aabccb\right\rangle=D^{-k/2}\cdot\sum_{\begin{subarray}{c}\chi:[2k]\rightarrow[D]\\ \text{all edges of $M$ monochromatic}\\ \text{for vertex-coloring $\chi$}\end{subarray}}\left|\chi\right\rangle. \tag{59}\] As a second example, with general \(k\): \[M_{0}=\{\{1,k+1\},\{2,k+2\},\ldots,\{k,2k\}\}\implies\left|\Phi_{M_{0}}\right\rangle=D^{-k/2}\text{vec}(\mathbb{1}_{D^{k}\times D^{k}}). \tag{60}\] It is not hard to show that every \(\left|\Phi_{M}\right\rangle\) with \(M\in\mathcal{M}_{2k}\) (respectively, \(M\in\mathcal{M}_{2k}^{\text{bip}}\)) is fixed by every \(\rho_{D}^{k,k}(g)\) for \(g\in\operatorname{SO}(D)\) (respectively, \(g\in\operatorname{SU}(D)\)). To illustrate this for the particular \(M_{0}\in\mathcal{M}_{2k}^{\text{bip}}\subseteq\mathcal{M}_{2k}\) from Equation (60), we have that for \(g\in\operatorname{SO}(D)\leq\operatorname{SU}(D)\), \[g^{\otimes k}\otimes\overline{g}^{\otimes k}\left|\Phi_{M_{0}}\right\rangle=\frac{g^{\otimes k}\otimes\overline{g}^{\otimes k}\,\text{vec}(\mathbb{1}_{D^{k}\times D^{k}})}{D^{k/2}}=\frac{\text{vec}(g^{\otimes k}\,\mathbb{1}_{D^{k}\times D^{k}}\,(\overline{g}^{\otimes k})^{\top})}{D^{k/2}}=\frac{\text{vec}(\mathbb{1}_{D^{k}\times D^{k}})}{D^{k/2}}=\left|\Phi_{M_{0}}\right\rangle, \tag{61}\] where we used Fact 4.4 and \(\overline{g}^{\top}=g^{\dagger}=g^{-1}\).
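The chain of equalities just derived is also easy to verify numerically: a random unitary \(g\) fixes \(|\Phi_{M_{0}}\rangle\) under \(g^{\otimes k}\otimes\overline{g}^{\otimes k}\). A small NumPy sketch for \(D=2\), \(k=2\) (working directly in the block ordering \((1,\ldots,k,k+1,\ldots,2k)\) so that \(|\Phi_{M_{0}}\rangle\) is the normalized `vec` of the identity):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(4)
D, k = 2, 2

# A random unitary g via QR of a complex Gaussian matrix.
Z = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
g, _ = np.linalg.qr(Z)

# |Phi_{M0}> = D^{-k/2} vec(1_{D^k x D^k}), as in Equation (60).
Phi_M0 = np.eye(D ** k).reshape(-1) / D ** (k / 2)

gk = reduce(np.kron, [g] * k)            # g^{(x)k}
op = np.kron(gk, gk.conj())              # rho_D^{k,k}(g) = g^{(x)k} (x) conj(g)^{(x)k}
assert np.allclose(op @ Phi_M0, Phi_M0)  # |Phi_{M0}> is fixed, matching Equation (61)
```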
Given this fact, each \(\left|\Phi_{M}\right\rangle\) must be fixed by the average representation \(\Pi^{(m)}\), and thus be in \(\operatorname{Im}\Pi^{(m)}\). On the other hand, it is elementary to show (e.g., [2, Prop. 1]) that \(\operatorname{Im}\Pi^{(m)}\) is _precisely_ the set of vectors fixed by every operator in \(\{\rho_{D}^{k,k}(g):g\in\operatorname{G}(m)\}\) (recall that \(D=2^{m}\)). In turn, these are precisely the vectorizations of all matrices in the _commutant_ (centralizer) of \(\mathcal{A}=\{g^{\otimes k}:g\in\operatorname{G}(m)\}.\) Finally, the commutants of tensor product representations of our groups have been identified under the umbrella of _Schur-Weyl duality_.

**Theorem 4.8**.: _By Schur-Weyl duality for \(\operatorname{U}(D)\)[22, 23, 24], \(D=2^{m}\), when \(\operatorname{G}(m)=\operatorname{U}(2^{m})\) the projector \(\Pi^{(m)}\) has image equal to the span of \(\left|\Phi_{M}\right\rangle\) for \(M\in\mathcal{M}_{2k}^{\text{bip}}\). The same is true when \(\operatorname{G}(m)=\operatorname{SU}(2^{m})\), since \(\Pi^{(m)}\) is unchanged in this case.10_ Footnote 10: Observe that because of the conjugation in the definition of \(\rho_{2^{m}}^{k,k}\), the expectation \(\Pi^{(m)}\) is the same whether the expectation is taken over \(\mathbf{g}\sim\operatorname{G}(m)=\operatorname{SU}(2^{m})\) or \(\mathbf{g}\sim\operatorname{G}(m)=\operatorname{U}(2^{m})\). _By Schur-Weyl duality for \(\operatorname{SO}(D)\)[10, 23, 24], \(D=2^{m}\), when \(\operatorname{G}(m)=\operatorname{SO}(2^{m})\) and \(k<2^{m-1}\) the projector \(\Pi^{(m)}\) has image equal to the span of \(\left|\Phi_{M}\right\rangle\) for \(M\in\mathcal{M}_{2k}\)._

**Remark 4.9**.: The condition \(k<2^{m-1}\) in the previous theorem cannot be dropped.
For example, \[\operatorname*{\mathbf{E}}_{\boldsymbol{g}\sim\operatorname{O}(2)}[\rho_{2}^{1,1}(\boldsymbol{g})]=\text{projection onto }\tfrac{1}{\sqrt{2}}(\left|00\right\rangle+\left|11\right\rangle), \tag{62}\] but \[\operatorname*{\mathbf{E}}_{\boldsymbol{g}\sim\operatorname{SO}(2)}[\rho_{2}^{1,1}(\boldsymbol{g})]=\text{projection onto }\operatorname{span}\{\tfrac{1}{\sqrt{2}}(\left|00\right\rangle+\left|11\right\rangle),\tfrac{1}{\sqrt{2}}(\left|01\right\rangle-\left|10\right\rangle)\}. \tag{63}\] We have now identified a spanning set for \(\operatorname{Im}\Pi^{(m)}\), but working with it is complicated by the fact that it is not an orthonormal basis. It is, however, relatively "close" to being so, as we now show (following and simplifying some arguments from [1, Lem. 17] and [13, Lem. 9]). First, an elementary lemma in linear algebra:

**Lemma 4.10**.: _Let \(W\in\mathbb{C}^{d\times t}\) have unit vector columns \(\left|w_{1}\right\rangle,\ldots,\left|w_{t}\right\rangle\), and suppose their Gram matrix \(W^{\dagger}W\in\mathbb{C}^{t\times t}\) is close to the identity, in the sense that \(E=W^{\dagger}W-\mathbb{1}\) has \(\left\|E\right\|_{\mathrm{op}}\leq\kappa<1\). (For example, this would hold if_ \[\left\|E\right\|_{1\mapsto 1}=\max_{j\in[t]}\sum_{i\neq j}\lvert\langle w_{i}|w_{j}\rangle\rvert\leq\kappa, \tag{64}\] _since generally \(\left\|E\right\|_{1\mapsto 1}\geq\rho(E)=\left\|E\right\|_{\mathrm{op}}\), as \(E\) is Hermitian.) Then \(WW^{\dagger}=\sum_{i}\left|w_{i}\right\rangle\langle w_{i}\rvert\) satisfies_ \[WW^{\dagger}\overset{\kappa}{\approx}\Pi_{T}, \tag{65}\] _where \(\Pi_{T}\) is the projector onto \(T=\operatorname{span}\{\left|w_{1}\right\rangle,\ldots,\left|w_{t}\right\rangle\}\), and \(X\overset{\kappa}{\approx}Y\) denotes \(\left\|X-Y\right\|_{\mathrm{op}}\leq\kappa\)._ Proof.: By hypothesis, all eigenvalues \(\lambda\) of \(W^{\dagger}W\) satisfy \(\left|\lambda-1\right|\leq\kappa<1\).
Hence \(WW^{\dagger}\) also has these \(t\) (nonzero) \(\lambda\)'s within \(\kappa\) of \(1\) as eigenvalues (associated to eigenvectors in \(T\)), plus possibly additional eigenvalues of \(0\) (outside \(T\)). This confirms Inequality (65). **Theorem 4.11**.: _In the setting of \(\mathrm{G}(m)=\mathrm{SO}(D)\), \(D=2^{m}\) and provided \(k^{2}\leq\frac{1}{9}D\), we have_ \[\sum_{M\in\mathcal{M}_{2k}}\left|\Phi_{M}\right\rangle\langle\Phi_{M}| \operatorname*{\approx}^{\kappa_{m}}\Pi^{(m)}, \tag{66}\] _where \(\kappa_{m}\coloneqq\frac{10}{9}\frac{k^{2}}{D}\). In the setting of \(\mathrm{G}(m)=\mathrm{SU}(2^{m})\), the same is true with \(\mathcal{M}_{2k}\) replaced by \(\mathcal{M}_{2k}^{\mathrm{bip}}\) (and one could replace \(\kappa_{m}\) by \(\frac{5}{9}\frac{k^{2}}{D}\), but we won't)._ Proof.: The result for \(\mathrm{U}(2^{m})\) (hence \(\mathrm{SU}(2^{m})\)) appears in [1], and for \(\mathrm{O}(2^{m})\) in [1], but we present here a representation theory-free proof, focusing on the \(\mathrm{SO}(2^{m})\) case. We will employ Lemma 4.10, with the \(\left|w_{i}\right\rangle\)'s being the \(\left|\Phi_{M}\right\rangle\)'s, \(M\in\mathcal{M}_{2k}\). In particular, we will establish the premise in Inequality (64) with \(\kappa=\kappa_{m}\). By symmetry of all matchings in \(\mathcal{M}\), the quantity inside the maximum is the same for every "\(\left|w_{j}\right\rangle\)"; thus, we need only bound it for one particular choice, say the \(M_{0}\) from Equation (60). Thus we need to establish \[\sum_{M\in\mathcal{M}_{2k}}\lvert\langle\Phi_{M}|\Phi_{M_{0}}\rangle\rvert=1+ \sum_{M\neq M_{0}}\lvert\langle\Phi_{M}|\Phi_{M_{0}}\rangle\rvert\leq 1+ \kappa_{m}. \tag{67}\] In computing \(\langle\Phi_{M}|\Phi_{M_{0}}\rangle\), it is easy to see (e.g., from Equation (59)) we get a contribution of \(D^{-k}\) from every vertex-coloring \(\chi:[2k]\to[D]\) that makes all edges of \(M\) and \(M_{0}\) monochromatic. 
Since \(M\cup M_{0}\) is a union of cycles, the total contribution is \(D^{\mathrm{cc}(M\cup M_{0})}\cdot D^{-k}\), where \(\mathrm{cc}(\cdot)\) denotes the number of connected components. Thus (cf. [1, 1]) \[D^{k}\cdot\sum_{\text{matchings }M}\lvert\langle\Phi_{M}|\Phi_{M_{0}}\rangle\rvert=D^{k}\cdot\sum_{\text{matchings }M}\langle\Phi_{M}|\Phi_{M_{0}}\rangle=\sum_{M}D^{\mathrm{cc}(M\cup M_{0})}. \tag{68}\] The summation on the right is just the generating function (with "indeterminate" \(D\)) for the number of connected components obtained when placing a matching (initially: \(M\)) onto the endpoints of \(k\) labeled paths (initially: \(M_{0}\)). But this is a very simple exercise. Take the first labeled path, with endpoints \(x,y\), and consider the vertex \(z\) to which \(x\) is matched. There are \(2k-1\) possibilities for \(z\), with one of them (\(z=y\)) increasing the component count by \(1\), and the other \(2k-2\) increasing the count by \(0\). In either case the number of paths drops by one, so the generating function picks up a factor of \((D^{1}+(2k-2)\cdot D^{0})\), and we reduce \(k\) to \(k-1\). We conclude that (cf. [1, 1]) \[\sum_{M}D^{\mathrm{cc}(M\cup M_{0})}=(D+(2k-2))(D+(2k-4))\cdots(D+2)D \tag{69}\] and hence \[\sum_{M}\lvert\langle\Phi_{M}|\Phi_{M_{0}}\rangle\rvert=(1)\bigl(1+\tfrac{2}{D}\bigr)\bigl(1+\tfrac{4}{D}\bigr)\cdots\bigl(1+\tfrac{2k-2}{D}\bigr)\leq\exp(\tfrac{k(k-1)}{D})\leq 1+\tfrac{10}{9}\tfrac{k^{2}}{D}=1+\kappa_{m}, \tag{70}\] the last inequality holding because we have assumed \(k^{2}\leq\frac{1}{9}D\). Thus we have indeed verified Inequality (67). The case of \(\mathrm{G}(m)=\mathrm{U}(2^{m})\) is similar; we just need to compute the generating function for bipartite matchings, meaning \(\mathcal{M}_{2k}^{\mathrm{bip}}\) replaces \(\mathcal{M}\). The bound for \(\kappa_{m}\) becomes \((1)(1+\frac{1}{D})(1+\frac{2}{D})\cdots(1+\frac{k-1}{D})-1\), which is only smaller (by a factor of about \(\frac{1}{2}\)).
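The product formula (69) is a polynomial identity in \(D\) (so \(D\) need not be a power of \(2\)), and it is easy to confirm by brute force for small parameters; a self-contained sketch (the helper names are ours):

```python
def perfect_matchings(pts):
    """Yield all perfect matchings of the list pts as lists of pairs."""
    if not pts:
        yield []
        return
    x, rest = pts[0], pts[1:]
    for i, y in enumerate(rest):
        for sub in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(x, y)] + sub

def cc(edges, n):
    """Number of connected components of the multigraph ([n], edges), via union-find."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(n)})

D, k = 7, 4
M0 = [(2 * i, 2 * i + 1) for i in range(k)]           # the k "labeled paths" M_0
total = sum(D ** cc(M + M0, 2 * k) for M in perfect_matchings(list(range(2 * k))))
product = 1
for j in range(k):
    product *= D + 2 * j                              # D(D+2)(D+4)...(D+2k-2)
assert total == product                               # Equation (69)
```

For \(k=3,D=7\) the same check gives \(7\cdot 9\cdot 11=693\), matching a hand count over the \(15\) matchings of \([6]\).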
### Proof of Theorem 4.1 In this section we establish Theorem 4.1. We begin by proving some general facts about projectors that are nearly orthogonal to each other. **Lemma 4.12**.: _Let \(P_{1},\ldots,P_{m}\) be orthogonal projections, and write \(A=\mathrm{avg}_{i=1}^{m}\{P_{i}\}\). Then_ \[\left\|P_{i}P_{j}\right\|_{\mathrm{op}}\leq\epsilon\ \ \forall\ i\neq j\quad \implies\quad\left\|A\right\|_{\mathrm{op}}\leq\tfrac{1}{m}+\min\{\sqrt{ \epsilon},m\epsilon\}. \tag{71}\] Proof.: We have \[A^{2}=\frac{1}{m}A+\frac{1}{m^{2}}\sum_{i\neq j}P_{i}P_{j};\quad \implies\quad\left\|A\right\|_{\mathrm{op}}^{2}\leq\frac{1}{m}\left\|A\right\| _{\mathrm{op}}+\frac{m(m-1)}{m^{2}}\epsilon\leq\frac{1}{m}\left\|A\right\|_{ \mathrm{op}}+\epsilon. \tag{72}\] Solving the quadratic inequality yields \(\left\|A\right\|_{\mathrm{op}}\leq\frac{1}{2m}+\sqrt{\frac{1}{4m^{2}}+\epsilon}\), from which the result follows. **Corollary 4.13**.: _In the setting of Lemma 4.12, let \(P\) be an orthogonal projection with \(\mathrm{Im}\,P\leq\mathrm{Im}\,P_{i}\) for all \(i\). Then Inequality (71) holds with each instance of \(P_{i}\) replaced by \(\widetilde{P}_{i}=P_{i}-P\)._ Proof.: It suffices to note that \(\widetilde{P}_{i}^{2}=\widetilde{P}_{i}\), since \(P_{i}\cdot P=P\cdot P_{i}=P\). **Remark 4.14**.: The identity used in the proof easily extends to \(\widetilde{P}_{i_{1}}\widetilde{P}_{i_{2}}\cdots\widetilde{P}_{i_{k}}=P_{i_{1} }P_{i_{2}}\cdots P_{i_{k}}-P\). Also, this identity remains true if any set of tildes is removed from the LHS (except for the set of all \(k\)). Let us now study the particular orthogonal projectors involved in Theorem 4.1. We wish to employ Corollary 4.13 with \[P_{i}\coloneqq\Pi_{[m]\setminus i}\otimes\mathbb{1}_{i},\quad i=1\ldots m, \qquad P\coloneqq\Pi^{(m)}. \tag{73}\] Fact 4.2 tells us Corollary 4.13's hypothesis is satisfied. 
We thus obtain \[\left\|\mathop{\mathrm{avg}}_{i=1}^{m}\{P_{i}\}-\Pi^{(m)}\right\|_{\mathrm{op}}\leq\frac{1}{m}+\min\{\sqrt{\epsilon},m\epsilon\},\quad\text{for }\epsilon=\max_{i\neq j}\Bigl\{\left\|\widetilde{P}_{i}\widetilde{P}_{j}\right\|_{\mathrm{op}}\Bigr\}. \tag{74}\] By symmetry of the \(m\) tensor factors, we have \(\epsilon=\left\|\widetilde{P}_{1}\widetilde{P}_{m}\right\|_{\mathrm{op}}\), and hence \[\epsilon^{2}=\left\|(\widetilde{P}_{1}\widetilde{P}_{m})^{\dagger}(\widetilde{P}_{1}\widetilde{P}_{m})\right\|_{\mathrm{op}}=\left\|P_{m}\widetilde{P}_{1}P_{m}\right\|_{\mathrm{op}}, \tag{75}\] where we used Remark 4.14 to get \(\widetilde{P}_{m}\widetilde{P}_{1}\widetilde{P}_{m}=P_{m}\widetilde{P}_{1}P_{m}\). Our goal will be to use Theorem 4.11 (recall its \(\kappa_{m}\) notation) to establish the following: **Claim:** \[\epsilon^{2}=\left\|P_{m}\widetilde{P}_{1}P_{m}\right\|_{\mathrm{op}}\leq\kappa_{m-2}+2\kappa_{m-1}+\kappa_{m}\] (76) \[=\tfrac{10}{9}k^{2}(2^{2-m}+2\cdot 2^{1-m}+2^{-m})=10k^{2}2^{-m}\eqqcolon\delta.\] (77) We will apply Theorem 4.11 for \(m-2,m-1,m\); its hypothesis will be satisfied even for \(m-2\), since we have \(k^{2}\leq\frac{1}{9}2^{m-2}\) by virtue of the assumption \(k^{2}\leq\frac{1}{10m^{4}}2^{m}\) in the theorem we're proving. Moreover, this assumption implies that \(\delta^{1/4}\leq 1/m\), meaning that Inequality (74) gives us the bound \[\left\|\mathop{\mathrm{avg}}_{i=1}^{m}\{P_{i}\}-\Pi^{(m)}\right\|_{\mathrm{op}}\leq\frac{1}{m}+\min\{\delta^{1/4},m\delta^{1/2}\}=\frac{1}{m}+m\delta^{1/2}=\frac{1}{m}+\frac{\sqrt{10}km}{2^{m/2}}, \tag{78}\] verifying Inequality (56) and completing the proof of Theorem 4.1. Thus it remains to establish Inequality (76).
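Before doing so, we remark that Lemma 4.12 — which drove the bound (74) — is easy to sanity-check numerically; a minimal sketch with random low-dimensional projections (the dimensions here are our own arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(d, r):
    # Orthogonal projection onto a random r-dimensional subspace of R^d.
    Q, _ = np.linalg.qr(rng.standard_normal((d, r)))
    return Q @ Q.T

d, m, r = 400, 4, 3
P = [random_projection(d, r) for _ in range(m)]
A = sum(P) / m
# eps = max_{i != j} ||P_i P_j||_op, as in the hypothesis of Lemma 4.12
eps = max(np.linalg.norm(P[i] @ P[j], 2) for i in range(m) for j in range(m) if i != j)
# Conclusion of Lemma 4.12: ||avg_i P_i||_op <= 1/m + min(sqrt(eps), m*eps)
assert np.linalg.norm(A, 2) <= 1 / m + min(np.sqrt(eps), m * eps) + 1e-9
```

Random low-dimensional subspaces of \(\mathbb{R}^{400}\) are typically nearly orthogonal, so \(\epsilon\) is small and the bound is comfortably non-trivial.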
To establish the claim, let us write \(\mathcal{M}\) for either \(\mathcal{M}_{2k}\) or \(\mathcal{M}_{2k}^{\text{bip}}\) (depending on \(\mathrm{G}(m)\)); and, for \(M\in\mathcal{M}\) let us write \[J_{M}=|\phi_{M}\rangle\langle\phi_{M}|\,,\quad\text{where }|\phi_{M}\rangle\text{ is the }D=2\text{ case of }|\Phi_{M}\rangle\text{ defined earlier.} \tag{79}\] Then (up to tensor factor reordering) we have \(J_{M}^{\otimes m}=\left|\Phi_{M}\right\rangle\!\langle\Phi_{M}|\), and hence Theorem 4.11 tells us \[\sum_{M\in\mathcal{M}}J_{M}^{\otimes m}\overset{\kappa_{m}}{\approx}\Pi^{(m)}. \tag{80}\] We will also use this to derive \[\sum_{M\in\mathcal{M}}J_{M}^{\otimes(m-1)}\overset{\kappa_{m-1}}{\approx}\Pi^{(m-1)}\quad\implies\quad\sum_{M\in\mathcal{M}}\mathbbm{1}_{1}\otimes J_{M}^{\otimes(m-1)}\overset{\kappa_{m-1}}{\approx}P_{1}, \tag{81}\] where the implication is by tensoring with \(\mathbbm{1}_{1}\) (which doesn't change operator norm differences). Using Inequality (80) again, and the triangle inequality, we reach \[\widetilde{P}_{1}=P_{1}-P\overset{\kappa_{m-1}+\kappa_{m}}{\approx}\sum_{M\in\mathcal{M}}\mathbbm{1}_{1}\otimes J_{M}^{\otimes(m-1)}-\sum_{M\in\mathcal{M}}J_{M}^{\otimes m}=\sum_{M\in\mathcal{M}}\overline{J}_{M}\otimes J_{M}^{\otimes(m-1)}, \tag{82}\] where \(\overline{J}_{M}\coloneqq\mathbbm{1}-J_{M}\). Since \(\left\|P_{m}\right\|_{\text{op}}\leq 1\), we can further conclude \[P_{m}\widetilde{P}_{1}P_{m}\overset{\kappa_{m-1}+\kappa_{m}}{\approx}P_{m}\Biggl(\sum_{M\in\mathcal{M}}\overline{J}_{M}\otimes J_{M}^{\otimes(m-1)}\Biggr)P_{m} \tag{83}\] \[=(\Pi^{(m-1)}\otimes\mathbbm{1}_{m})\Biggl(\sum_{M\in\mathcal{M}}\overline{J}_{M}\otimes J_{M}^{\otimes(m-2)}\otimes J_{M}\Biggr)(\Pi^{(m-1)}\otimes\mathbbm{1}_{m})\] (84) \[=\sum_{M\in\mathcal{M}}\Bigl(\Pi^{(m-1)}(\overline{J}_{M}\otimes J_{M}^{\otimes(m-2)})\Pi^{(m-1)}\Bigr)\otimes J_{M}.
\tag{85}\] Writing \[Z_{M}\coloneqq\Pi^{(m-1)}(\overline{J}_{M}\otimes J_{M}^{\otimes(m-2)})\Pi^{( m-1)}, \tag{86}\] we can put Inequality (85) into Equation (75) to obtain \[\epsilon^{2}\leq\kappa_{m-1}+\kappa_{m}+\left\|\sum_{M\in\mathcal{M}}Z_{M} \otimes J_{M}\right\|_{\text{op}}. \tag{87}\] Now \(Z_{M}\) is PSD, being a conjugation (by \(\Pi^{(m-1)}\)) of a PSD matrix: the tensor product of projections \(J_{M}\) and \(\overline{J}_{M}\). Since \(0\leq J_{M}\leq\mathbbm{1}\), we therefore conclude \(0\leq Z_{M}\otimes J_{M}\leq Z_{M}\otimes\mathbbm{1}_{m}\). Summing this over \(M\) yields \[0\leq\sum_{M\in\mathcal{M}}Z_{M}\otimes J_{M}\leq\sum_{M\in\mathcal{M}}Z_{M} \otimes\mathbbm{1}_{m}=\left(\sum_{M\in\mathcal{M}}Z_{M}\right)\otimes \mathbbm{1}_{m}, \tag{88}\] and hence (from Inequality (87)) \[\epsilon^{2}\leq\kappa_{m-1}+\kappa_{m}+\left\|\sum_{M\in\mathcal{M}}Z_{M} \right\|_{\text{op}}=\kappa_{m-1}+\kappa_{m}+\left\|\Pi^{(m-1)}\Bigg{(}\sum_{M \in\mathcal{M}}\overline{J}_{M}\otimes J_{M}^{\otimes(m-2)}\Bigg{)}\Pi^{(m-1 )}\right\|_{\text{op}}. \tag{89}\] We have effectively now reduced from \(m\) tensor components to \(m-1\). Indeed, suppose we had defined the "\(m-1\)" analogues of \(P_{1},P_{2},\dots\) and \(P\), calling them \(P_{1}^{(m-1)},P_{2}^{(m-1)},\dots\) and \(P^{(m-1)}=\Pi^{(m-1)}\). Then Inequality (82) would tell us \[\widetilde{P}_{1}^{(m-1)}=P_{1}^{(m-1)}-P^{(m-1)}\mathop{\approx}\limits^{ \kappa_{m-2}+\kappa_{m-1}}_{M\in\mathcal{M}}\sum_{M\in\mathcal{M}}\overline{J} _{M}\otimes J_{M}^{\otimes(m-2)}, \tag{90}\] and putting this into Inequality (89) (using \(\left\|P^{(m-1)}\right\|_{\mathrm{op}}\leq 1\)) yields \[\epsilon^{2}\leq\kappa_{m-2}+2\kappa_{m-1}+\kappa_{m}+\left\|P^{(m-1)} \widetilde{P}_{1}^{(m-1)}P^{(m-1)}\right\|_{\mathrm{op}}. \tag{91}\] But \(P^{(m-1)}\widetilde{P}_{1}^{(m-1)}P^{(m-1)}\) is in fact \(0!\) (In the notation of Corollary 4.13 this would be "\(P\cdot\widetilde{P}_{1}\cdot P=0\)".) 
Thus we have established the claim, Inequality (76). ## 5 Lower bounding \(\tau_{m}\) for small \(m\) In this section we prove Theorem 3.4, restated below: **Theorem 5.1** (Restatement of Theorem 3.4).: _Let the sequence of groups \((\mathrm{G}(n))_{n\geq 1}\) be either \((\mathrm{SO}(2^{n}))_{n\geq 1}\) or \((\mathrm{SU}(2^{n}))_{n\geq 1}\). For any \(m\geq 4\) we have that_ \[\forall k\in\mathbb{N}^{+},\hskip 28.452756pt\left\|\underset{\mathbf{g}\sim \mathrm{G}(m-1)\times\binom{[m]}{m-1}}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g}) ]-\underset{\mathbf{g}\sim\mathrm{G}(m)}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g})] \right\|_{\mathrm{op}}\leq\left(1-(1-\tfrac{1}{m})\tfrac{1-2^{2-m}}{4-2^{3-m} }\right)^{1/4}\leq.96; \tag{92}\] _equivalently, in the notation of Theorem 3.4, \(\tau_{m}\geq.04\)._ ### Metrics As discussed in Section 3, for \(\mathrm{G}(m)=\mathrm{SO}(2^{m})\) or \(\mathrm{G}(m)=\mathrm{SU}(2^{m})\) we have that \(\mathrm{G}(m)\subseteq\mathrm{U}(2^{m})\) is a compact connected Lie group with associated Lie algebra \(\mathfrak{g}_{m}\), where \[\text{for }\mathrm{G}(m)=\mathrm{SO}(2^{m}),\ \mathfrak{g}_{m}=\{H \in\mathbb{R}^{2^{m}\times 2^{m}}:H\text{ skew-symmetric}\}, \tag{93}\] \[\text{for }\mathrm{G}(m)=\mathrm{SU}(2^{m}),\ \mathfrak{g}_{m}=\{H \in\mathbb{C}^{2^{m}\times 2^{m}}:H\text{ skew-Hermitian},\ \operatorname{tr}H=0\}. \tag{94}\] As per [14, Prop. 2.11.1], \(\mathrm{G}(m)\) can be given the structure of a Riemannian manifold with a bi-invariant metric. Moreover, \(\mathrm{G}(m)\) is totally geodesic within \(\mathrm{U}(2^{m})\), hence the exponential map \(\exp:\mathfrak{g}_{m}\to\mathrm{G}(m)\) is surjective and Riemannian distance \(d_{\mathrm{Rie}}\) within \(\mathrm{G}(m)\) coincides with Riemannian distance within \(\mathrm{U}(2^{m})\). This distance can be computed straightforwardly (see, e.g., [14, within Lem. 
1.3]), as follows: * The Riemannian distance is bi-invariant, so \(d_{\mathrm{Rie}}(X,Y)=d_{\mathrm{Rie}}(1,Z)\) for \(Z=YX^{-1}\). * Given \(Z\in\mathrm{G}(m)\), we can choose a unique \(H\in\mathfrak{g}_{m}\) with \(\exp(H)=Z\) such that the eigenvalues of \(H\) are of the form \(\mathrm{i}\theta_{j}\) for \(\theta_{j}\in(-\pi,\pi]\). We write \(H=\log Z\) for this choice of \(H\). * Then \(d_{\mathrm{Rie}}(1,Z)=\|H\|_{\mathrm{Fro}}=(\sum_{j}\theta_{j}^{2})^{1/2}\). In other words, \[d_{\mathrm{Rie}}(X,Y)=\|\log(YX^{-1})\|_{\mathrm{Fro}}. \tag{95}\] For the sake of computation it will be convenient to work not just with the Riemannian distance \(d_{\mathrm{Rie}}\) on \(\mathrm{G}(m)\), but also the (very similar) Frobenius distance \(d_{\mathrm{Fro}}\), where \(d_{\mathrm{Fro}}(X,Y)\) denotes \(\|X-Y\|_{\mathrm{Fro}}\). In the above setup, now using bi-invariance of \(d_{\mathrm{Fro}}\), we evidently have \[d_{\mathrm{Fro}}(X,Y)=\|1-Z\|_{\mathrm{Fro}}=\left(\sum_{j}|1-\exp(\mathrm{i} \theta_{j})|^{2}\right)^{1/2}=\left(\sum_{j}(2\sin(\theta_{j}/2))^{2}\right)^{ 1/2}. \tag{96}\] For some constant \(c<.4\leq 1\) we have the following numerical inequality (for \(|\theta|\leq\pi\)): \[(2\sin(\theta/2))^{2}\leq\theta^{2}\leq(2\sin(\theta/2))^{2}+c(2\sin(\theta/2))^ {4}. \tag{97}\] Using just \(c\leq 1\), we may conclude11 Footnote 11: Here we are clarifying slightly the deduction of [1, eq. (112a)]. \[d_{\mathrm{Fro}}(X,Y)^{2}\leq d_{\mathrm{Rie}}(X,Y)^{2}\leq d_{\mathrm{Fro}}(X,Y)^{2}+d_{\mathrm{Fro}}(X,Y)^{4}. \tag{98}\] Finally, we will also use the operator-norm distance, \(d_{\mathrm{op}}(X,Y)=\left\|X-Y\right\|_{\mathrm{op}}\), which satisfies \(d_{\mathrm{op}}(X,Y)\leq d_{\mathrm{Fro}}(X,Y)\). We now move on to considering (Borel) probability measures on metric spaces (always assumed to be complete and separable). 
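Both the scalar inequality (97) (with \(c=0.4\)) and its matrix consequence (98) can be spot-checked numerically — for (98) we use \(2\times 2\) rotations, whose eigenangles are \(\pm\theta\), so that \(d_{\mathrm{Rie}}(\mathbb{1},R(\theta))^{2}=2\theta^{2}\). A minimal numpy sketch:

```python
import numpy as np

# Inequality (97): (2 sin(t/2))^2 <= t^2 <= (2 sin(t/2))^2 + c*(2 sin(t/2))^4
# over |t| <= pi, with c = 0.4.
theta = np.linspace(-np.pi, np.pi, 100001)
s2 = (2 * np.sin(theta / 2)) ** 2
assert np.all(s2 <= theta ** 2 + 1e-12)
assert np.all(theta ** 2 <= s2 + 0.4 * s2 ** 2 + 1e-12)

# Consequence (98) for Z = R(t) in SO(2): d_Rie(1,Z)^2 = 2 t^2, d_Fro(1,Z) = ||1 - Z||_Fro.
for t in np.linspace(-np.pi, np.pi, 501):
    Z = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    d_fro2 = np.linalg.norm(np.eye(2) - Z) ** 2
    d_rie2 = 2 * t ** 2
    assert d_fro2 <= d_rie2 + 1e-9
    assert d_rie2 <= d_fro2 + d_fro2 ** 2 + 1e-9
```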
First we recall some basic definitions: **Definition 5.2**.: A pair of jointly distributed random variables \((\mathbf{X},\mathbf{Y})\) is a _coupling_ of probability distributions \(\nu_{1},\nu_{2}\) if \(\mathbf{X}\) (respectively, \(\mathbf{Y}\)) has marginal distribution \(\nu_{1}\) (respectively, \(\nu_{2}\)). **Definition 5.3**.: On the metric space \((M,d)\), the _\(L^{p}\)-Wasserstein distance_ between two measures \(\nu_{1}\) and \(\nu_{2}\) is \[W_{d,p}(\nu_{1},\nu_{2})=\inf\Bigl{\{}\mathbf{E}[d(\mathbf{X},\mathbf{Y})^{p}]^{1/p}\ :\ (\mathbf{X},\mathbf{Y})\text{ is a coupling of }(\nu_{1},\nu_{2}) \Bigr{\}}. \tag{99}\] **Notation 5.4**.: If \(\nu\) is a probability measure on metric space \(M\) and \(K\) is a Markov transition kernel on \(M\), we write \(K^{\ell}\nu\) for the probability measure on \(M\) resulting from starting with probability measure \(\nu\) and taking \(\ell\in\mathds{N}\) steps according to \(K\). ### Oliveira's theorem and its consequences We now state a key result of Oliveira [10] that says that on any _length space_ (see e.g. [1]), \(L^{2}\)-Wasserstein local contraction implies global contraction. As we only need the result in the particular case of compact, connected Lie groups (which are finite-diameter complete Riemannian manifolds), we state it only in this simpler context: **Theorem 5.5**.: _(Implied by [10, Thm. 3].) Let \((M,d)\) be a finite-diameter complete Riemannian manifold, and let \(K\) be a Markov transition kernel on \(M\) satisfying the following:_ \[W_{d,2}(K\delta_{X},K\delta_{Y})\leq(\eta+o(1))d(X,Y),\quad\text{with respect to }d(X,Y)\to 0. \tag{100}\] _(Here \(\delta_{Z}\) denotes the measure that puts all of its probability mass on \(Z\in M\).) Then for all probability measures \(\nu_{1},\nu_{2}\) on \(M\) it holds that_ \[W_{d,2}(K\nu_{1},K\nu_{2})\leq\eta\cdot W_{d,2}(\nu_{1},\nu_{2}). 
\tag{101}\] Iterating this yields the following: **Corollary 5.6**.: _In the setting of Theorem 5.5, for any \(\ell\in\mathds{N}^{+}\) we have_ \[W_{d,2}(K^{\ell}\nu_{1},K^{\ell}\nu_{2})\leq\eta^{\ell}\cdot W_{d,2}(\nu_{1}, \nu_{2})\leq D\eta^{\ell}, \tag{102}\] _where \(D\) is an upper bound on the diameter of \(M\)._ We now specialize this corollary to the case where \((M,d)\) is \((\mathrm{G}(m),d_{\mathrm{Rie}})\); combining it with Definition 5.3 and using also \(W_{d_{\mathrm{op}},1}\leq W_{d_{\mathrm{Fro}},1}\leq W_{d_{\mathrm{Fro}},2} \leq W_{d_{\mathrm{Rie}},2}\), we may conclude: **Corollary 5.7**.: _Let \(\mathrm{G}(m)\) be a compact connected Lie group, and let \(K\) be a Markov transition kernel on \(\mathrm{G}(m)\) such that Inequality (100) holds for \(d_{\mathrm{Rie}}\) with constant \(\eta\). Then for any probability measures \(\nu_{1},\nu_{2}\) on \(\mathrm{G}(m)\), and any \(\ell\in\mathds{N}^{+}\), there is a coupling \((\mathbf{X},\mathbf{Y})\) of the measures \(K^{\ell}\nu_{1},K^{\ell}\nu_{2}\) under which_ \[\left.\mathbf{E}[\left\|\mathbf{X}-\mathbf{Y}\right\|_{\mathrm{op}}]\leq 2D\eta^{\ell}\right. \tag{103}\] _(where \(D\) is a bound on the \(d_{\mathrm{Rie}}\)-diameter of \(\mathrm{G}(m)\), and the factor \(2\) accounts for the \(\inf\))._ Our next step is to get rid of the coupling in Corollary 5.7. To do this, we first observe that the representation \(\rho_{2^{m}}^{k,k}\) is uniformly continuous on \(\mathrm{G}(m)\) with respect to the operator-norm distance. 
Concretely, from the identity \[g_{1}\otimes\cdots\otimes g_{K}-h_{1}\otimes\cdots\otimes h_{K}=\sum_{i=1}^{K}g _{1}\otimes\cdots\otimes g_{i-1}\otimes(g_{i}-h_{i})\otimes h_{i+1}\otimes \cdots\otimes h_{K} \tag{104}\] and \(\left\lVert X\right\rVert_{\mathrm{op}},\left\lVert\overline{X}\right\rVert_ {\mathrm{op}}=1\) for \(X\in\mathrm{G}(m)\), as well as multiplicativity of \(d_{\mathrm{op}}\) with respect to tensor products, we may conclude that \[\left\lVert\rho_{2^{m}}^{k,k}(X)-\rho_{2^{m}}^{k,k}(Y)\right\rVert_{\mathrm{ op}}\leq 2k\left\lVert X-Y\right\rVert_{\mathrm{op}} \tag{105}\] for any \(X,Y\in\mathrm{G}(m)\). Using this, as well as the triangle inequality for \(d_{\mathrm{op}}\), in Corollary 5.7 yields: **Corollary 5.8**.: _In the setting of Corollary 5.7,_ \[\left\lVert\underset{\mathbf{X}\sim K^{\ell}\nu_{1}}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{X})]-\underset{\mathbf{Y}\sim K^{\ell}\nu_{2}}{\mathbf{E}}[\rho_{2^{m}}^{ k,k}(\mathbf{Y})]\right\rVert_{\mathrm{op}}\leq 4kD\eta^{\ell}. \tag{106}\] (Note that in contrast with Corollary 5.7, here Corollary 5.8 does not feature any coupling between \(K^{\ell}\nu_{1}\) and \(K^{\ell}\nu_{2}\).) Now we further specialize by taking \(\nu_{1}=\delta_{1}\) (the measure with all probability on the identity element \(1\in\mathrm{G}(m)\)), taking \(\nu_{2}\) to be Haar measure, and specifying that \[K\text{ arises from left-multiplying by a random }\mathbf{g}\sim\mathcal{P}, \tag{107}\] where \(\mathcal{P}\) is some symmetric probability distribution on \(\mathrm{G}(m)\) as in Definition 2.3. 
Note that, whatever \(\mathcal{P}\) is, we have \(K^{\ell}\nu_{2}=\nu_{2}\) (Haar measure), and \[\underset{\mathbf{X}\sim K^{\ell}\nu_{1}}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{X})]=\underset{\begin{subarray}{c}\mathbf{g}_{1},\dots,\mathbf{g}_{\ell}\sim\mathcal{P}\\ \text{independent}\end{subarray}}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g}_{\ell}\cdots\mathbf{g}_{1})]=\underset{\begin{subarray}{c}\mathbf{g}_{1},\dots,\mathbf{g}_{\ell}\sim\mathcal{P}\\ \text{independent}\end{subarray}}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g}_{\ell})\cdots\rho_{2^{m}}^{k,k}(\mathbf{g}_{1})]=\underset{\mathbf{g}\sim\mathcal{P}}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g})]^{\ell}. \tag{108}\] From this and Corollary 5.8 we conclude the following:

**Corollary 5.9**.: _Let \(\mathcal{P}\) be a symmetric probability distribution on \(\mathrm{G}(m)\). Given \(X,Y\in\mathrm{G}(m)\), write \(\mathcal{P}^{(X)}\) (respectively, \(\mathcal{P}^{(Y)}\)) for the distribution of \(\mathbf{g}X\) (respectively, \(\mathbf{g}Y\)) when \(\mathbf{g}\sim\mathcal{P}\). Then supposing_ \[W_{d_{\mathrm{Rie}},2}(\mathcal{P}^{(X)},\mathcal{P}^{(Y)})\leq(\eta+o(1))d_{\mathrm{Rie}}(X,Y)\quad\text{with respect to $d_{\mathrm{Rie}}(X,Y)\to 0$,} \tag{109}\] _it follows that for any \(\ell,k\in\mathbb{N}^{+}\) we have_ \[\left\lVert\underset{\mathbf{g}\sim\mathcal{P}}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g})]^{\ell}-\underset{\mathbf{g}\sim\mathrm{G}(m)}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g})]\right\rVert_{\mathrm{op}}\leq 4kD\cdot\eta^{\ell}. \tag{110}\] Our goal for the next section will be to establish the following:

**Theorem 5.10**.: _Let \(\nu_{m}\) denote the distribution \(\mathrm{G}(m-1)\times\binom{[m]}{m-1}\) on \(\mathrm{G}(m)\), thought of as inducing a Markov chain on \(\mathrm{G}(m)\) via left-multiplication.
Fix any \(X,Y\in\mathrm{G}(m)\) with \(d_{\mathrm{Rie}}(X,Y)=\epsilon\leq 1\), and let \(\mathbf{X}^{\prime\prime}\) (respectively, \(\mathbf{Y}^{\prime\prime}\)) denote the result of taking two independent steps from \(X\) (respectively, \(Y\)) according to \(\nu_{m}\). Then there is a coupling of \(\mathbf{X}^{\prime\prime},\mathbf{Y}^{\prime\prime}\) under which_ \[\mathbf{E}[d_{\mathrm{Rie}}(\mathbf{X}^{\prime\prime},\mathbf{Y}^{\prime\prime})^{2}]\leq(1-\gamma_{m})\epsilon^{2}+O_{m}(\epsilon^{3}), \tag{111}\] _where \(\gamma_{m}=(1-\frac{1}{m})\gamma_{m}^{\prime}\) with \(\gamma_{m}^{\prime}=\frac{1-2^{2-m}}{4-2^{3-m}}\), and the \(O_{m}(\cdot)\) hides a constant depending only on \(m\)._ This theorem establishes the hypothesis of Corollary 5.9 with \(\mathcal{P}=\nu_{m}*\nu_{m}\) and \(\eta=\sqrt{1-\gamma_{m}}\). We can therefore easily derive the following (where the equality uses the fact that \(\underset{\mathbf{g}\sim\mathrm{G}(m)}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g})]\) is a projection operator): \[\left\lVert\underset{\mathbf{g}\sim\nu_{m}}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g})]^{2\ell}-\underset{\mathbf{g}\sim\mathrm{G}(m)}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g})]\right\rVert_{\mathrm{op}}=\left\lVert\underset{\mathbf{g}\sim\nu_{m}}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g})]-\underset{\mathbf{g}\sim\mathrm{G}(m)}{\mathbf{E}}[\rho_{2^{m}}^{k,k}(\mathbf{g})]\right\rVert_{\mathrm{op}}^{2\ell}\leq 4kD\cdot(1-\gamma_{m})^{\ell/2}. \tag{112}\] Taking \((2\ell)\)th roots and then \(\ell\to\infty\) thus yields Theorem 5.1.

### Proof of Theorem 5.10

We begin by describing the needed coupling.
First, we use the same randomness to take one step from each of \(X,Y\); that is, we define \[\mathbf{X}^{\prime}=\mathbf{g}_{[m]\setminus\mathbf{i}}\cdot X,\qquad\mathbf{Y}^{\prime}=\mathbf{g}_ {[m]\setminus\mathbf{i}}\cdot Y, \tag{113}\] where \(\mathbf{i}\sim[m]\), \(\mathbf{g}\sim\mathrm{G}(m-1)\) are uniformly random and independent. To take the second steps, we first draw \(\mathbf{j}\sim[m]\). Then, based on the outcomes \(\mathbf{i},\mathbf{j},\mathbf{g}\), we will deterministically define some \[\mathbf{h}=h(\mathbf{i},\mathbf{j},\mathbf{g})\in\mathrm{G}(m-1) \tag{114}\] and then take \[\mathbf{X}^{\prime\prime}=(\widetilde{\mathbf{g}}\mathbf{h})_{[m]\setminus\mathbf{j}}\cdot \mathbf{X}^{\prime},\qquad\mathbf{Y}^{\prime\prime}=\widetilde{\mathbf{g}}_{[m]\setminus \mathbf{j}}\cdot\mathbf{Y}^{\prime}, \tag{115}\] where \(\widetilde{\mathbf{g}}\sim\mathrm{G}(m-1)\) is drawn uniformly and independently of all other random variables. This is a valid coupling, since for every outcome of \(\mathbf{i},\mathbf{j},\mathbf{g}\) the distributions of \(\widetilde{\mathbf{g}}\mathbf{h}\) and \(\widetilde{\mathbf{g}}\) are identical. Then \[d_{\mathrm{Rie}}(\mathbf{X}^{\prime\prime},\mathbf{Y}^{\prime\prime})=d_{\mathrm{Rie} }(\mathbf{h}_{[m]\setminus\mathbf{j}}\cdot\mathbf{X}^{\prime},\mathbf{Y}^{\prime}), \tag{116}\] since \(d_{\mathrm{Rie}}(\cdot,\cdot)\) is unitarily invariant. In case \(\mathbf{i}=\mathbf{j}\), we will "give up" and simply define \(\mathbf{h}=\mathbb{1}\), in which case we get \(d_{\mathrm{Rie}}(\mathbf{X}^{\prime\prime},\mathbf{Y}^{\prime\prime})=d_{\mathrm{Rie} }(\mathbf{X}^{\prime},\mathbf{Y}^{\prime})=d_{\mathrm{Rie}}(X,Y)=\epsilon\). 
Thus we have \[\mathbf{E}[d_{\mathrm{Rie}}(\mathbf{X}^{\prime\prime},\mathbf{Y}^{\prime\prime})^{2}]= \frac{1}{m}\epsilon^{2}+\left(1-\frac{1}{m}\right)\underset{\mathbf{i}\neq\mathbf{j}} {\mathrm{avg}}\bigg{\{}\underset{\mathbf{g}\sim\mathrm{G}(m-1)}{\mathrm{E}}\Big{[} d_{\mathrm{Rie}}\big{(}\mathbf{h}_{[m]\setminus\mathbf{j}}\cdot\mathbf{X}^{\prime},\mathbf{Y}^{ \prime}\big{)}^{2}\Big{]}\bigg{\}}. \tag{117}\] To complete the definition of \(\mathbf{h}\), we specify the function \(h\): \[\text{for }i\neq j\text{, we define }h=h(i,j,g)\text{ to minimize }d_{\mathrm{Fro}}\big{(}h_{[m]\setminus\mathbf{j}}\cdot g_{[m] \setminus i}\cdot X,g_{[m]\setminus i}\cdot Y\big{)}^{2}; \tag{118}\] in other words, \(\mathbf{h}=h(\mathbf{i},\mathbf{j},\mathbf{g})\) minimizes \(d_{\mathrm{Fro}}\big{(}\mathbf{h}_{[m]\setminus\mathbf{j}}\cdot\mathbf{X}^{\prime},\mathbf{Y} ^{\prime}\big{)}\). With this choice of \(h\), note that we have \(d_{\mathrm{Fro}}\big{(}\mathbf{h}_{[m]\setminus\mathbf{j}}\cdot\mathbf{X}^{\prime},\mathbf{Y} ^{\prime}\big{)}\leq\epsilon\leq 1\) for every outcome of \(\mathbf{i},\mathbf{j},\mathbf{g}\), since \(\mathbf{h}=\mathbb{1}\) is always an option (and \(d_{\mathrm{Fro}}(\mathbf{X}^{\prime},\mathbf{Y}^{\prime})=d_{\mathrm{Fro}}(X,Y)\leq d_ {\mathrm{Rie}}(X,Y)=\epsilon\)). Thus employing Inequality (98) we may conclude \[\mathbf{E}[d_{\mathrm{Rie}}(\mathbf{X}^{\prime\prime},\mathbf{Y}^{\prime\prime})^{2}] \leq\frac{1}{m}\epsilon^{2}+\left(1-\frac{1}{m}\right)\underset{\mathbf{i}\neq\mathbf{ j}}{\mathrm{avg}}\bigg{\{}\underset{\mathbf{g}\sim\mathrm{G}(m-1)}{\mathrm{E}}\Big{[}d_{ \mathrm{Fro}}\big{(}\mathbf{h}_{[m]\setminus\mathbf{j}}\cdot\mathbf{X}^{\prime},\mathbf{Y}^{ \prime}\big{)}^{2}\Big{]}\bigg{\}}+\epsilon^{4}. 
\tag{119}\] Thus to complete the proof of Theorem 5.10, it suffices to establish the following: \[\forall i\neq j,\qquad\underset{\mathbf{g}\sim\mathrm{G}(m-1)}{\mathrm{E}}\bigg{[} \underset{h\in\mathrm{G}(m-1)}{\mathrm{min}}\Big{\{}d_{\mathrm{Fro}}\big{(}h_{ [m]\setminus j},\mathbf{Y}^{\prime}\cdot(\mathbf{X}^{\prime})^{-1}\big{)}^{2}\Big{\}} \bigg{]}\leq(1-\gamma_{m}^{\prime})\epsilon^{2}+O_{m}(\epsilon^{3}). \tag{120}\] (Here we used \(d_{\mathrm{Fro}}\big{(}h_{[m]\setminus j}\cdot\mathbf{X}^{\prime},\mathbf{Y}^{\prime} \big{)}=d_{\mathrm{Fro}}\big{(}h_{[m]\setminus j},\mathbf{Y}^{\prime}\cdot(\mathbf{X} ^{\prime})^{-1}\big{)}\).) Our proof of this will not have any particular dependence on \(i,j\), so without loss of generality let us fix \(i=1\) and \(j=m\). We establish Inequality (120) via the below two lemmas. (Here and subsequently the notation "\(\mathrm{tr}_{i}\,X\)" below denotes the partial trace corresponding to tracing out the \(i\)th qubit of \(X\).) **Lemma 5.11**.: _Fix any \(Z\in\mathrm{G}(m)\) with \(d_{\mathrm{Rie}}(\mathbb{1},Z)=\epsilon\). Then_ \[\underset{h\in\mathrm{G}(m-1)}{\min}\Big{\{}d_{\mathrm{Fro}}(h\otimes\mathbb{1},Z)^{2}\Big{\}}\leq(1-\tfrac{1}{2}\|\mathrm{tr}_{m}\,B\|_{\mathrm{Fro}}^{2}) \epsilon^{2}+O_{m}(\epsilon^{3}), \tag{121}\] _where \(B=\frac{1}{\epsilon}\log Z\in\mathfrak{g}_{m}\) satisfies \(\|B\|_{\mathrm{Fro}}=1\)._ **Lemma 5.12**.: _For \(m\geq 2\) and any \(A\in\mathfrak{g}_{m}\), writing \(\delta=2^{2-m}\), we have_ \[\underset{\mathbf{g}\sim\mathrm{G}(m-1)}{\mathrm{E}}[\|\mathrm{tr}_{m}((\mathbb{1} \otimes\mathbf{g}_{[m]\setminus 1})A(\mathbb{1}\otimes\mathbf{g}_{[m]\setminus 1}^{\dagger}))\|_{\mathrm{Fro}}^{2}]\geq\frac{1-\delta}{2- \delta}\|A\|_{\mathrm{Fro}}^{2}. \tag{122}\] To see how the above two lemmas imply Inequality (120) (in the case \(i=1\), \(j=m\)), we first apply Lemma 5.11 with \(Z\) being the outcome of \(\mathbf{Y}^{\prime}\cdot(\mathbf{X}^{\prime})^{-1}\). 
Writing \(\mathbf{B}=\frac{1}{\epsilon}\log(\mathbf{Y}^{\prime}(\mathbf{X}^{\prime})^{-1})\) (recall that from Equation (113) this is a random matrix depending on \(\mathbf{g}\)), Lemma 5.11 tells us that \[\operatorname*{\mathbf{E}}_{\mathbf{g}\sim\operatorname{G}(m-1)}\biggl[\min_{h\in\operatorname{G}(m-1)}\Bigl\{d_{\operatorname{Fro}}\bigl(h_{[m]\setminus j}\otimes\mathbb{1}_{j},\mathbf{Y}^{\prime}\cdot(\mathbf{X}^{\prime})^{-1}\bigr)^{2}\Bigr\}\biggr]\leq(1-\tfrac{1}{2}\operatorname*{\mathbf{E}}[\|\operatorname{tr}_{m}\mathbf{B}\|_{\operatorname{Fro}}^{2}])\epsilon^{2}+O_{m}(\epsilon^{3}). \tag{123}\] But, for \(\mathbf{g}\sim\operatorname{G}(m-1)\), we have \[\mathbf{B}=\tfrac{1}{\epsilon}\log\Bigl((\mathbb{1}\otimes\mathbf{g}_{[m]\setminus 1})YX^{-1}(\mathbb{1}\otimes\mathbf{g}_{[m]\setminus 1}^{\dagger})\Bigr)=(\mathbb{1}\otimes\mathbf{g}_{[m]\setminus 1})\bigl(\tfrac{1}{\epsilon}\log(YX^{-1})\bigr)(\mathbb{1}\otimes\mathbf{g}_{[m]\setminus 1}^{\dagger}). \tag{124}\] The result now follows by applying Lemma 5.12 with \(A=\frac{1}{\epsilon}\log(YX^{-1})\), which has \(\|A\|_{\operatorname{Fro}}=1\) since \(d_{\operatorname{Rie}}(X,Y)=\epsilon\).

#### 5.3.1 Proof of Lemma 5.11

To prove Lemma 5.11, it suffices to show that the particular choice \[h\coloneqq\exp(\tfrac{1}{2}\epsilon\operatorname{tr}_{m}B) \tag{125}\] satisfies Inequality (121). We observe that since \(\operatorname{G}(m)\) is either \(\operatorname{SO}(2^{m})\) or \(\operatorname{SU}(2^{m})\), recalling Equations (93) and (94) we have that \(\operatorname{tr}_{m}B\in\mathfrak{g}_{m-1}\) since \(B\in\mathfrak{g}_{m}\) (note that \(\operatorname{tr}B=0\) implies \(\operatorname{tr}(\operatorname{tr}_{m}B)=0\)), and hence indeed \(h\in\operatorname{G}(m-1)\) as required. Now we must bound \[d_{\operatorname{Fro}}(h\otimes\mathbb{1},Z)^{2}=\langle h\otimes\mathbb{1}-Z,h\otimes\mathbb{1}-Z\rangle=2\operatorname{tr}\mathbb{1}-\langle h\otimes\mathbb{1},Z\rangle-\langle Z,h\otimes\mathbb{1}\rangle=2\operatorname{tr}\mathbb{1}-2\Re\langle h\otimes\mathbb{1},Z\rangle. \tag{126}\] Recalling \(Z=\exp(\epsilon B)\) where \(\|B\|_{\operatorname{Fro}}=1\), we abuse notation slightly by writing \[Z=1+\epsilon B+\epsilon^{2}B^{2}/2+O_{m}(\epsilon^{3}), \tag{127}\] where "\(O_{m}(\epsilon^{3})\)" stands for some matrix \(E\) satisfying \(\|E\|_{\operatorname{Fro}}\leq C\epsilon^{3}\), with \(C\) a constant depending only on \(m\) that may change from line to line. We may similarly expand \(h\otimes\mathbb{1}=\exp(\tfrac{1}{2}\epsilon\operatorname{tr}_{m}B)\otimes\mathbb{1}\), and upon substituting into Equation (126) and simplifying (the order-\(\epsilon\) terms vanish since \(\operatorname{tr}B=\operatorname{tr}(\operatorname{tr}_{m}B)=0\)), we obtain \[d_{\operatorname{Fro}}(h\otimes\mathbb{1},Z)^{2}=\bigl(\|B\|_{\operatorname{Fro}}^{2}-\tfrac{1}{2}\|\operatorname{tr}_{m}B\|_{\operatorname{Fro}}^{2}\bigr)\epsilon^{2}+O_{m}(\epsilon^{3})=(1-\tfrac{1}{2}\|\operatorname{tr}_{m}B\|_{\operatorname{Fro}}^{2})\epsilon^{2}+O_{m}(\epsilon^{3}), \tag{128}\] which is Inequality (121), completing the proof of Lemma 5.11.

#### 5.3.2 Proof of Lemma 5.12

Throughout this proof we work with two copies of the \(m\) qubits, the second copy labeled \(1^{\prime},\ldots,m^{\prime}\); for \(S\subseteq[m]\) we write \(\mathrm{SWAP}_{S,S^{\prime}}\) for the operator that swaps the qubits in \(S\) with the corresponding subset of qubits \(1^{\prime},\ldots,m^{\prime}.\) We also write \(L=\{1,\ldots,m-1\}\), and for an \((m-1)\)-qubit operator \(C\) we let \(C_{L}\) (respectively, \(C_{L^{\prime}}\)) denote \(C\) acting on the qubits in \(L\) (respectively, \(L^{\prime}\)). Now for any \((m-1)\)-qubit operator \(C\) we have \[\|C\|_{\mathrm{Fro}}^{2}=\mathrm{tr}(C^{\dagger}C)=\mathrm{tr}((C_{L}^{\dagger}\otimes C_{L^{\prime}})\cdot\mathrm{SWAP}_{L,L^{\prime}}). \tag{131}\] In turn, if \(C=\mathrm{tr}_{m}\,B\) for some operator \(B\) on \(m\) qubits, we conclude \[\|\mathrm{tr}_{m}\,B\|_{\mathrm{Fro}}^{2}=\mathrm{tr}(\mathrm{tr}_{m,m^{\prime}}(B^{\dagger}\otimes B)\cdot\mathrm{SWAP}_{L,L^{\prime}})=\mathrm{tr}((B^{\dagger}\otimes B)\cdot(\mathrm{SWAP}_{L,L^{\prime}}\otimes\mathbb{1}_{m,m^{\prime}})). \tag{132}\] Next, if \(B=HAH^{\dagger}\) for unitary \(H\), we may use the cyclic property of trace to conclude \[\|\mathrm{tr}_{m}\,B\|_{\mathrm{Fro}}^{2}=\mathrm{tr}((A^{\dagger}\otimes A)\cdot W)=\langle A\otimes A^{\dagger},W\rangle,\qquad W\coloneqq(H\otimes H)(\mathrm{SWAP}_{L,L^{\prime}}\otimes\mathbb{1}_{m,m^{\prime}})(H^{\dagger}\otimes H^{\dagger}).
\tag{133}\] (The above formula, specialized to \(m=3,\) essentially appears as [1, Eqn. (103)].) Finally, suppose \(H=\mathbb{1}\otimes g\) for some \((m-1)\)-qubit unitary \(g\). For notational clarity we break up the system \(L\) into subsystems "\(1\)" and \(K=\{2,\ldots,m-1\}\), writing \(H=\mathbb{1}_{1}\otimes g_{K,m}\) and \[H\otimes H=\mathbb{1}_{1,1^{\prime}}\otimes(g_{K,m}\otimes g_{K^{\prime},m^{ \prime}}). \tag{134}\] Putting this into the definition of \(W\), we see that the two qubits labeled \(1\) and \(1^{\prime}\) are simply swapped by \(W\), and we have \[W=\mathrm{SWAP}_{1,1^{\prime}}\cdot\widehat{W},\qquad\widehat{W}\coloneqq(g_{K,m}\otimes g_{K^{\prime},m^{\prime}})S(g_{K,m}^{\dagger}\otimes g_{K^{\prime}, m^{\prime}}^{\dagger}),\qquad S\coloneqq(\mathrm{SWAP}_{K,K^{\prime}}\otimes \mathbb{1}_{m,m^{\prime}}). \tag{135}\] Recalling Fact 4.4, we see that \[\mathrm{vec}(\widehat{W})=\rho_{2^{m-1}}^{2,2}(g)\cdot\mathrm{vec}(S). \tag{136}\] In other words, \(\widehat{W}\) is the action of \(g\) on \(S\) under representation \(\rho_{2^{m-1}}^{2,2}\), when we suitably use the "matricized" interpretation of this representation. Finally, we are interested in the case that \(\mathbf{g}\sim\mathrm{G}(m-1)\) is chosen "uniformly" (Haar measure on \(\mathrm{G}(m-1)\)); then we conclude from the above equations that \[\underset{\mathbf{g}\sim\mathrm{G}(m-1)}{\mathbf{E}}[\|\mathrm{tr}_{m}((\mathbb{1 }\otimes\mathbf{g})A(\mathbb{1}\otimes\mathbf{g}^{\dagger}))\|_{\mathrm{Fro}}^{2}]= \langle A\otimes A^{\dagger},\mathrm{SWAP}_{1,1^{\prime}}\cdot S_{0}\rangle, \qquad\mathrm{vec}(S_{0})\coloneqq\underset{\mathbf{g}\sim\mathrm{G}(m-1)}{\mathbf{ E}}[\rho_{2^{m-1}}^{2,2}(\mathbf{g})]\cdot\mathrm{vec}(S). \tag{137}\] We now compute \(S_{0}\) (we note that a similar calculation for \(\mathrm{G}(m-1)=\mathrm{U}(2^{m-1})\) is given in [1, Eqn. 
(61)]): **Proposition 5.13**.: _Let \(D=2^{m-1}\), and define the following operators acting across systems \(K\cup\{m\}\), \(K^{\prime}\cup\{m^{\prime}\}\):_ \[Q_{2}=D\cdot|\Phi\rangle\langle\Phi|\,,\quad Q_{3}=\mathbb{1},\quad Q_{4}= \mathrm{SWAP} \tag{138}\] _(where \(|\Phi\rangle=D^{-1/2}\sum_{a\in\{0,1\}^{m-1}}|a\rangle\otimes|a\rangle\) is the maximally entangled state). Then_ \[\mathrm{G}(m-1) =\mathrm{SO}(D) \Longrightarrow S_{0} =c_{2}\cdot Q_{2}+c_{3}\cdot Q_{3}+c_{4}\cdot Q_{4}, \tag{139}\] \[\mathrm{G}(m-1) =\mathrm{SU}(D) \Longrightarrow S_{0} =c_{3}^{\prime}\cdot Q_{3}+c_{4}^{\prime}\cdot Q_{4}, \tag{140}\] _where the non-negative constants \(c_{2},c_{3},c_{4},c_{3}^{\prime},c_{4}^{\prime}\) are given by_ \[c_{2}=\frac{D/2-1}{(D-1)(D+2)},\quad c_{3}=\frac{3D/2+1}{(D-1)( D+2)},\quad c_{4}=\frac{(D/2-1)(D+3)}{(D-1)(D+2)}, \tag{141}\] \[c_{3}^{\prime}=\frac{3D/2}{(D-1)(D+1)},\quad c_{4}^{\prime}= \frac{D^{2}/2-2}{(D-1)(D+1)}\geq c_{4}. \tag{142}\] Proof.: We recall from Theorem 4.8 that12 Footnote 12: Note that here we are using \(m\geq 4\). \[\underset{\mathbf{g}\sim\mathrm{G}(m-1)}{\mathbf{E}}[\rho_{2^{m-1}}^{2,2}(\mathbf{g})] =\text{projection onto the span of }\{|\varphi_{M}\rangle:M\in\mathcal{M}\}, \tag{143}\] where we use the following notation: \[M_{12}=\{\{1,2\},\{3,4\}\},\quad M_{13}=\{\{1,3\},\{2,4\}\},\quad M_{14}=\{\{1,4 \},\{2,3\}\}; \tag{144}\] \[|\varphi_{M_{12}}\rangle=\operatorname{vec}(Q_{2})=\sum_{x,y\in\{0,1\}^{m-1}} |x,x,y,y\rangle\,, \tag{145}\] \[|\varphi_{M_{13}}\rangle=\operatorname{vec}(Q_{3})=\sum_{x,y}|x,y,x,y\rangle\,, \quad|\varphi_{M_{14}}\rangle=\operatorname{vec}(Q_{4})=\sum_{x,y}|x,y,y,x \rangle\,; \tag{146}\] \[\operatorname{G}(m-1)=\operatorname{SO}(2^{m-1})\implies\mathcal{M}=\{M_{12}, M_{13},M_{14}\},\qquad\operatorname{G}(m-1)=\operatorname{SU}(2^{m-1}) \implies\mathcal{M}=\{M_{13},M_{14}\}. 
\tag{147}\] Let us further define \[|\psi_{10}\rangle=\sum_{x\in\{0,1\}^{m-1}}|x,x,x,x\rangle\quad\text{and} \quad|\psi_{1j}\rangle=|\varphi_{M_{1j}}\rangle-|\psi_{10}\rangle\,, \tag{148}\] so that the \(|\psi_{1j}\rangle\)'s are pairwise orthogonal, with \(\langle\psi_{10}|\psi_{10}\rangle=D\) and \(\langle\psi_{1j}|\psi_{1j}\rangle=D(D-1)\) for \(j>1\). Then, since from Equation (135) we have \[\operatorname{vec}(S)=\sum_{\begin{subarray}{c}x=(x^{\prime},a)\in\{0,1\}^{ m-2}\times\{0,1\}\\ y=(y^{\prime},b)\in\{0,1\}^{m-2}\times\{0,1\}\end{subarray}}|(x^{\prime},a),(y ^{\prime},b),(y^{\prime},a),(x^{\prime},b)\rangle\,, \tag{149}\] we can easily compute \[\langle\psi_{10}|\operatorname{vec}(S)=D,\quad\langle\psi_{12}|\operatorname{ vec}(S)=0,\quad\langle\psi_{13}|\operatorname{vec}(S)=D,\quad\langle\psi_{14}| \operatorname{vec}(S)=D(D/2-1). \tag{150}\] From this we conclude that the projection of \(\operatorname{vec}(S)\) onto the span of the four \(|\psi_{1j}\rangle\)'s (which is also the span of \(|\psi_{10}\rangle\) and the three \(|\varphi_{M_{1j}}\rangle\)'s) is \[|\sigma\rangle\coloneqq|\psi_{10}\rangle+\frac{1}{D-1}\,|\psi_{13}\rangle+ \frac{D/2-1}{D-1}\,|\psi_{14}\rangle\,. \tag{151}\] Now one may easily verify that the following vector \(|\tau\rangle\) is orthogonal to each \(|\varphi_{M_{1j}}\rangle=|\psi_{10}\rangle+|\psi_{1j}\rangle\): \[|\tau\rangle=-(D-1)\,|\psi_{10}\rangle+|\psi_{12}\rangle+|\psi_{13}\rangle+| \psi_{14}\rangle\,. 
\tag{152}\] Thus we can bring \(|\sigma\rangle\) into the span of the three \(|\varphi_{M_{1j}}\rangle\)'s by adding a suitable multiple of \(|\tau\rangle\) to zero out the \(|\psi_{10}\rangle\) component as follows: \[|\sigma\rangle+c\,|\tau\rangle =(1-(D-1)c)\,|\psi_{10}\rangle+c\,|\psi_{12}\rangle+\left(\frac{1} {D-1}+c\right)|\psi_{13}\rangle+\left(\frac{D/2-1}{D-1}+c\right)|\psi_{14}\rangle \tag{153}\] \[=\left(\frac{D/2-1}{D-1}-(D+2)c\right)|\psi_{10}\rangle+c\,|\varphi _{M_{12}}\rangle+\left(\frac{1}{D-1}+c\right)|\varphi_{M_{13}}\rangle+\left( \frac{D/2-1}{D-1}+c\right)|\varphi_{M_{14}}\rangle\,, \tag{154}\] and taking \(c=\frac{D/2-1}{(D-1)(D+2)}\) we finally get that \[\text{the projection of }\operatorname{vec}(S)\text{ onto the span of }\,|\varphi_{M_{12}}\rangle\,,|\varphi_{M_{13}}\rangle\,,| \varphi_{M_{14}}\rangle\,\text{ is }c_{2}\,|\varphi_{M_{12}}\rangle+c_{3}\,|\varphi_{M_{13}}\rangle+c_{4}\,| \varphi_{M_{14}}\rangle\,. \tag{155}\] One can repeat the above using \(|\tau^{\prime}\rangle=-(D-1)\,|\psi_{10}\rangle+|\psi_{13}\rangle+|\psi_{14}\rangle\) in place of \(|\tau\rangle\) to similarly deduce \[\text{the projection of }\operatorname{vec}(S)\text{ onto the span of }\,|\varphi_{M_{13}}\rangle\,,|\varphi_{M_{14}}\rangle\,\text{ is }c_{3}^{\prime}\,|\varphi_{M_{13}}\rangle+c_{4}^{\prime}\,|\varphi_{M_{14}}\rangle\,. \tag{156}\] The proof is complete. Now we compute: \[\langle A\otimes A^{\dagger},\mathrm{SWAP}_{1,1^{\prime}}\cdot Q_{4}\rangle= \langle A\otimes A^{\dagger},\mathrm{SWAP}_{[m],[m]^{\prime}}\rangle=\|A\|_{ \mathrm{Fro}}^{2} \tag{157}\] (similar to Equation (131)), and \[\langle A\otimes A^{\dagger},\mathrm{SWAP}_{1,1^{\prime}}\cdot Q_{3}\rangle= \langle A\otimes A^{\dagger},\mathrm{SWAP}_{1,1^{\prime}}\cdot\mathbb{1}_{[m] \setminus 1,[m]^{\prime}\setminus 1^{\prime}}\rangle=\|\operatorname{tr}_{[m] \setminus 1}A\|_{\mathrm{Fro}}^{2}\geq 0 \tag{158}\] (similar to Equation (132)). 
Finally, since (as can be easily verified) \[\mathrm{SWAP}_{1,1^{\prime}}\cdot Q_{2}=\sum_{\begin{subarray}{c}a,b\in\{0,1 \}\\ x,y\in\{0,1\}^{m-1}\end{subarray}}\left|(a,x),(b,x)\right\rangle\langle(b,y),(a, y)|\,, \tag{159}\] we may conclude that \[\langle A\otimes A^{\dagger},\mathrm{SWAP}_{1,1^{\prime}}\cdot Q_{2}\rangle= \sum_{a,b,x,y}\langle(b,y)|A^{\dagger}|(a,x)\rangle\,\langle(a,y)|A|(b,x) \rangle\geq-\|A\|_{\mathrm{Fro}}^{2} \tag{160}\] by Cauchy-Schwarz. Putting these conclusions together with Equation (137) and Proposition 5.13, we get that for both \(\mathrm{G}(m-1)=\mathrm{SO}(2^{m-1})\) and \(\mathrm{G}(m-1)=\mathrm{SU}(2^{m-1})\) it holds that \[\operatorname*{\mathbf{E}}_{\boldsymbol{g}\sim\mathrm{G}(m-1)}[\| \operatorname{tr}_{m}((\mathbb{1}\otimes\boldsymbol{g})A(\mathbb{1}\otimes \boldsymbol{g}^{-1}))\|_{\mathrm{Fro}}^{2}]\geq(c_{4}-c_{2})\|A\|_{\mathrm{Fro} }^{2}=\frac{D/2-1}{D-1}\|A\|_{\mathrm{Fro}}^{2}=\frac{1-2^{2-m}}{2-2^{2-m}}\|A \|_{\mathrm{Fro}}^{2}, \tag{161}\] completing the proof of Lemma 5.12. ## 6 Pseudorandom products of operators In this section we generalize the "derandomized squaring" technique of Rozenman and Vadhan [14] so that it may be applied to random walks on groups, where the goal is to show rapid mixing of a particular representation. We remark that the proofs are not really different from those in [14], and that a similar generalization appeared recently in [16]. **Notation 6.1**.: Throughout we will be considering noncommutative polynomials, with real coefficients, over symbols \(u_{1},\ldots,u_{c},u_{1}^{\dagger},\ldots,u_{c}^{\dagger}\). (These symbols will eventually be substituted by square matrices.) If \(p\) is such a polynomial, its _adjoint_\(p^{\dagger}\) is formed in the natural way (i.e., \((u_{i}^{\dagger})^{\dagger}=u_{i}\) and \((u_{i}u_{j})^{\dagger}=u_{j}^{\dagger}u_{i}^{\dagger}\), etc.), and we call \(p\)_self-adjoint_ if \(p^{\dagger}=p\). 
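Before developing the machinery of this section, the lower bound of Lemma 5.12 established above lends itself to a quick numerical sanity check. The sketch below (NumPy; the qubit count \(m=4\), the Haar sample size, and the random test operator \(A\) are illustrative choices, not taken from the text) estimates the left-hand side of Equation (161) by Haar sampling and compares it with the bound:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4                     # number of qubits (the argument assumes m >= 4)
D = 2 ** (m - 1)          # dimension on which g acts

def haar_unitary(n):
    """Haar-random unitary from the QR decomposition of a Ginibre matrix.
    (Haar over the full unitary group U(D) for simplicity; the text uses SO/SU.)"""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

def ptr_last(M):
    """Standard (unnormalized) partial trace over the last qubit."""
    T = M.reshape(2 ** (m - 1), 2, 2 ** (m - 1), 2)
    return np.einsum("iaja->ij", T)

# a random m-qubit operator with unit Frobenius norm
A = rng.standard_normal((2 ** m, 2 ** m)) + 1j * rng.standard_normal((2 ** m, 2 ** m))
A /= np.linalg.norm(A)

est, trials = 0.0, 1000
for _ in range(trials):
    U = np.kron(np.eye(2), haar_unitary(D))   # identity on qubit 1, g on qubits 2..m
    est += np.linalg.norm(ptr_last(U @ A @ U.conj().T)) ** 2
est /= trials

bound = (D / 2 - 1) / (D - 1)                 # = (1 - 2**(2-m)) / (2 - 2**(2-m))
```

With the fixed seed, the Monte Carlo estimate `est` sits comfortably above `bound`, as the lemma predicts for every unit-Frobenius-norm \(A\).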
**Notation 6.2**.: We will also consider _polynomial sequences_\(S=(s_{1},\ldots,s_{m})\), where each \(s_{i}\) is a polynomial in the \(u_{j}\)'s. (Usually \(s_{i}\) will in fact be a _monomial_.) **Notation 6.3**.: If \(\mathcal{U}=(U_{1},\ldots,U_{c})\) is a sequence of matrices, we write \(S(\mathcal{U})=(s_{1}(\mathcal{U}),\ldots,s_{m}(\mathcal{U}))\), where \(s_{j}(\mathcal{U})\) is the matrix resulting from substituting \(u_{i}=U_{i}\) for each \(i\in[c]\). **Notation 6.4**.: Given a polynomial sequence \(S\) we write \(\operatorname{avg}(S)\), or \(\operatorname{avg}\circ S\), for the polynomial \(\frac{1}{m}\sum_{j=1}^{m}s_{j}\). **Definition 6.5**.: If \(p\) is a polynomial over \(u_{1},\ldots,u_{c}\), we define \[\|p\|=\sup_{r}\{\|p(\mathcal{U})\|_{\mathrm{op}}:\mathcal{U}=(U_{1},\ldots,U_{ c}),\ U_{j}\in\mathbb{C}^{r\times r},\ \|U_{j}\|_{\mathrm{op}}\leq 1\ \forall j\}, \tag{162}\] the largest operator norm that \(p\) can achieve when \(u_{1},\ldots,u_{c}\) are substituted with square matrices of bounded operator norm. More generally, if \(S=(s_{1},\ldots,s_{m})\) is a sequence of polynomials we write \(\|S\|=\max(\|s_{1}\|,\ldots,\|s_{m}\|)\). **Definition 6.6**.: A _directed graph_\(G=(V,E)\) will consist of a finite _sequence_ of vertices \(V\), and a finite _sequence_ of edges \(E\) from \(V\times V\) (so parallel edges and self-loops are allowed). Such a graph is _undirected_ if \(E\) can be partitioned into pairs of the form \(\{(i,j),(j,i)\}\). We say \(G\) is _\(d\)-out-regular_ if for each \(i\in V\) we have exactly \(d\) elements of the form \((i,j)\) in \(E\); one can analogously define in-regularity, and the two concepts are the same for undirected graphs. Note that if \(G\) is an undirected \(d\)-regular graph, then \(|E|=d|V|\) (contrary to usual convention, as \(E\) is still composed of directed edges). 
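To make Notation 6.1 and Definition 6.5 concrete, here is one possible encoding of monomials over \(u_{1},\ldots,u_{c},u_{1}^{\dagger},\ldots,u_{c}^{\dagger}\) and their evaluation on a matrix sequence (the tuple encoding and the 0.9-scaled random unitaries are ad-hoc illustrative choices). Since operator norm is submultiplicative, a monomial evaluated on contractions is again a contraction, consistent with \(\|p\|\leq 1\) for monomials in Definition 6.5:

```python
import numpy as np

rng = np.random.default_rng(2)

def evaluate(monomial, U):
    """Evaluate a monomial, encoded as a tuple of (symbol index, is_adjoint)
    pairs read left to right, on the matrix sequence U."""
    out = np.eye(U[0].shape[0], dtype=complex)
    for idx, dag in monomial:
        out = out @ (U[idx].conj().T if dag else U[idx])
    return out

def contraction(r, scale=0.9):
    # a random unitary scaled down to operator norm `scale`
    q, _ = np.linalg.qr(rng.standard_normal((r, r)) + 1j * rng.standard_normal((r, r)))
    return scale * q

U = [contraction(5) for _ in range(3)]
p = ((2, True), (0, False), (1, False))   # the monomial u_3^dagger u_1 u_2
val = evaluate(p, U)
norm = np.linalg.norm(val, ord=2)         # exactly 0.9**3 here, in particular <= 1
```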
**Definition 6.7**.: Given a graph \(G=(V,E)\), where \(V=(1,2,\ldots,m)\), and given a polynomial sequence \(S=(s_{1},\ldots,s_{m})\), we define \(q_{G}\circ S\) to be the polynomial sequence13 Footnote 13: One would have hoped for the more natural-looking ordering \(s_{i}^{\dagger}s_{j}\), but alas we are forced to follow standard conventions: the length-2 path \((i,j)\) means “first \(i\), then \(j\)”; but, with operators acting on the left, \(s_{i}^{\dagger}s_{j}\) means “first do \(s_{j}\), then do \(s_{i}^{\dagger}\)”. \[(s_{j}^{\dagger}s_{i})_{(i,j)\in E}. \tag{163}\] **Remark 6.8**.: If \(G\) is undirected, the polynomial \(\mathrm{avg}(q_{G}\circ S)\) is self-adjoint. **Fact 6.9**.: _We always have \(\|q_{G}\circ S\|\leq\|S\|^{2}\), and hence \(\|S\|\leq 1\implies\|q_{G}\circ S\|\leq 1\)._ **Definition 6.10**.: If \(G=(V,E)\) is a \(d\)-out-regular directed graph with \(V=(1,2,\ldots,c)\), then the normalized adjacency matrix of \(G\) is \[A_{G}\coloneqq\frac{1}{d}\sum_{(i,j)\in E}|j\rangle\langle i|=c\cdot\mathrm{avg}(q_{G}\circ\mathcal{U}),\quad\text{where }\mathcal{U}=(\langle 1|\,,\ldots,\langle c|). \tag{164}\] **Fact 6.11**.: _Let \(G=(V,E)\) be an out-regular directed graph with \(V=(1,2,\ldots,m)\) and let \(\mathcal{W}=(W_{1},\ldots,W_{m})\) be a sequence of matrices from \(\mathbb{C}^{r\times r^{\prime}}\).14 Then_ Footnote 14: We only really care about \(r^{\prime}=r\), but we allow \(r^{\prime}\neq r\) for the sake of comparison with Equation (164), where \(r=1\) and \(r^{\prime}=c\). \[\mathrm{avg}(q_{G}(\mathcal{W}))=\frac{1}{m}\mathcal{W}^{\dagger}(A_{G}\otimes\mathbb{1}_{r\times r})\mathcal{W}, \tag{165}\] _where we identify \(\mathcal{W}\) with \(\sum_{j=1}^{m}|j\rangle\otimes W_{j}\), the \(mr\times r^{\prime}\) matrix formed by stacking the \(W_{j}\)’s into a column.
(For \(r^{\prime}=r\), this identity is essentially the formula \(w^{\dagger}Aw=\sum_{ij}w_{i}^{\dagger}A_{ij}w_{j}\), but with entries from the ring \(\mathbb{C}^{r\times r}\).)_ **Notation 6.12**.: We let \(\mathrm{K}_{m}\) denote the complete (regular) undirected graph with self-loops on \(m\) vertices, which has \(V=(1,2,\ldots,m)\) and \(E=((1,1),(1,2),\ldots,(1,m),(2,1),\ldots,(m,m))\). We may write \(\mathrm{K}\) in place of \(\mathrm{K}_{m}\) if the context is clear. **Fact 6.13**.: _If \(S=(s_{1},\ldots,s_{m})\) is a polynomial sequence,_ \[\mathrm{avg}(q_{\mathrm{K}_{m}}\circ S)=\mathrm{avg}(S)^{\dagger}\mathrm{avg} (S), \tag{166}\] _the Hermitian-square of \(\mathrm{avg}(S)\). Hence if \(\mathcal{U}\) is a sequence of matrices, \(\left\|\mathrm{avg}(q_{\mathrm{K}}\circ S(\mathcal{U}))\right\|_{\mathrm{op}}= \left\|\mathrm{avg}(S(\mathcal{U}))\right\|_{\mathrm{op}}^{2}\)._ **Definition 6.14**.: Recall that a regular undirected graph \(G\) is said to be a _(2-sided) \(\mu\)-expander_ if \(\left\|A_{G}-A_{\mathrm{K}}\right\|_{\mathrm{op}}\leq\mu\). **Fact 6.15**.: _Since \(A_{\mathrm{K}}\) is the projection onto the \(1\)-dimensional subspace spanned by \(\sum_{j}|j\rangle\), and since \(A_{G}\) also fixes this subspace, \(G\) being a \(\mu\)-expander is equivalent to \(\left\|A_{G}-(1-\mu)A_{\mathrm{K}}\right\|_{\mathrm{op}}\leq\mu\)._ The following result is essentially the same as [12, Thm. 4.4]: **Proposition 6.16**.: _Let \(G\) be a \(\mu\)-expander on vertex set \(V=(1,2,\ldots,m)\), let \(S=(s_{1},\ldots,s_{m})\) be a polynomial sequence with \(\|S\|\leq 1\), and let \(\mathcal{U}=(U_{1},\ldots,U_{c})\) be a sequence of matrices in \(\mathbb{C}^{r\times r}\) with \(\left\|U_{j}\right\|_{\mathrm{op}}\leq 1\) for all \(j\). Then_ \[\left\|\mathrm{avg}(q_{G}\circ S(\mathcal{U}))\right\|_{\mathrm{op}}\leq(1-\mu) \left\|\mathrm{avg}(S(\mathcal{U}))\right\|_{\mathrm{op}}^{2}+\mu. 
\tag{167}\] Proof.: Write \(\mathcal{W}=S(\mathcal{U})=(W_{1},\ldots,W_{m})\), so \(\left\|W_{j}\right\|_{\mathrm{op}}\leq 1\) for all \(j\). Using Fact 6.11 twice, we derive \[\left\|\mathrm{avg}(q_{G}(\mathcal{W}))-(1-\mu)\mathrm{avg}(q_{\mathrm{K}}(\mathcal{W}))\right\|_{\mathrm{op}}=\frac{1}{m}\big{\|}\mathcal{W}^{\dagger}(\Delta\otimes\mathbb{1}_{r\times r})\mathcal{W}\big{\|}_{\mathrm{op}},\quad\text{where }\Delta=A_{G}-(1-\mu)A_{K}. \tag{168}\] We have \(\left\|\Delta\right\|_{\mathrm{op}}\leq\mu\) by Fact 6.15, and \(\left\|\mathcal{W}\right\|_{\mathrm{op}}\leq\sqrt{\sum_{j}\left\|W_{j}\right\|_{\mathrm{op}}^{2}}\leq\sqrt{m}\). So by submultiplicativity of operator norm, the right-hand side above is at most \(\mu\), and the proof is complete from Fact 6.13 and the triangle inequality. Iterating this, and using Fact 6.9 to conclude that \(\left\|q_{G_{t}}\circ\cdots\circ q_{G_{1}}\circ S\right\|\leq 1\) whenever \(\left\|S\right\|\leq 1\), yields: **Proposition 6.17**.: _Let \(S=(s_{1},\ldots,s_{m})\) be a polynomial sequence with \(\left\|S\right\|\leq 1\) and let \(\mathcal{U}=(U_{1},\ldots,U_{c})\) be a matrix sequence with \(\left\|U_{j}\right\|_{\mathrm{op}}\leq 1\) for all \(j\). Moreover, let \(G_{1},G_{2},\ldots,G_{t}\) be a sequence of regular graphs, where \(G_{i}=(V_{i},E_{i})\) is a \(\mu_{i}\)-expander with \(V_{i+1}=E_{i}\) (and \(V_{1}=(1,2,\ldots,m)\)). Then_ \[\left\|\mathrm{avg}(q_{G_{t}}\circ q_{G_{t-1}}\circ\cdots\circ q_{G_{1}}\circ S(\mathcal{U}))\right\|_{\mathrm{op}}\leq f_{\mu_{t}}\circ f_{\mu_{t-1}}\circ\cdots\circ f_{\mu_{1}}(\left\|\mathrm{avg}(S(\mathcal{U}))\right\|_{\mathrm{op}}), \tag{169}\] _where \(f_{\mu}(\lambda)=(1-\mu)\lambda^{2}+\mu\).
In particular, if \(m=c\), \(S=(u_{1},\ldots,u_{c})\), and we write \(Q=q_{G_{t}}\circ\cdots\circ q_{G_{1}}\) and \(F_{(\mu_{1},\ldots,\mu_{t})}=f_{\mu_{t}}\circ\cdots\circ f_{\mu_{1}}\), then_ \[\left\|\mathrm{avg}(Q\circ\mathcal{U})\right\|_{\mathrm{op}}\leq F_{(\mu_{1},\ldots,\mu_{t})}(\left\|\mathrm{avg}(\mathcal{U})\right\|_{\mathrm{op}}). \tag{170}\] The work [10] also contains calculations very similar to the following (wherein the special number .11 is chosen due to certain explicit expander constructions): **Proposition 6.18**.: _For \(0<\delta,\epsilon\leq 1\), we have \(F_{\vec{\mu}}(1-\delta)\leq\epsilon\) for any sequence \(\vec{\mu}\) that entrywise satisfies_ \[(0,\ldots,0)\leq\vec{\mu}\leq(\vec{\mu}^{(1)},\vec{\mu}^{(2)}),\qquad\vec{\mu}^{(1)}\coloneqq(\underbrace{.11,\ldots,.11}_{\ell_{1}\text{ times}}),\qquad\vec{\mu}^{(2)}\coloneqq\tfrac{1}{4}(2^{-2},2^{-4},2^{-8},\ldots,2^{-2^{\ell_{2}}}), \tag{171}\] _where \(\ell_{1}\geq\log_{2^{.8}}(1/\delta)+3\) (note: \(2^{.8}\approx 1.74\)) and \(\ell_{2}\geq\log_{2}\log_{2}(1/\epsilon)\)._ Proof.: Since \(f_{\mu}(\lambda)\) is nondecreasing on \([0,1]\) for both \(\mu\) and \(\lambda\), it suffices to analyze all upper bounds as if they were equalities. It is easy to check that \(f_{.11}(1-\delta)\leq 1-2^{.8}\delta\) for all \(0\leq\delta\leq.03\), and hence \[\ell\geq\log_{2^{.8}}(1/\delta)-6\quad\implies\quad f_{.11}^{\circ\ell}(1-\delta)\leq 1-.03/1.75\leq.985. \tag{172}\] Also, \(f_{.11}^{\circ 9}(.985)\leq 1/4\), and hence \(F_{\vec{\mu}^{(1)}}(1-\delta)\leq 1/4\). The proof is now complete by observing that \(F_{\vec{\mu}^{(2)}}(1/4)\leq\tfrac{1}{2}2^{-2^{\ell_{2}}}\). Regarding explicit construction of expander graphs, taking \(p=29\) and \(509\) in [1, Thm.
1.2] and adding a self-loop to every vertex yields: **Theorem 6.19**.: _For \((d,\mu)=(32,.45)\) and also \((d,\mu)=(512,.11)\), there is a strongly explicit algorithm for constructing \(n\)-vertex, \(d\)-regular, \(\mu\)-expander graphs (for all sufficiently large \(n\))._ By repeatedly squaring the \(32\)-regular graphs above, one can also conclude the following (in which it is possible that \(d=d(n)>n\)): **Corollary 6.20**.: _For any easy-to-compute \(j=j(n)\in\mathbb{N}\), there is a strongly explicit (\(\mathrm{polylog}(n,d)\) time) algorithm for constructing \(n\)-vertex, \(d\)-regular, \(\mu\)-expander graphs (for all sufficiently large \(n\)) where, for \(k=2^{j}\), we have \(d=32^{k}\) and \(\mu=\mu(n)=.45^{k}\leq\tfrac{1}{4}2^{-k}=\tfrac{1}{4}d^{-1/5}\) (the inequality holding provided \(j\geq 4\))._ Putting together Corollary 6.20, Proposition 6.18, and Proposition 6.17 yields the following: **Theorem 6.21**.: _There is a strongly explicit, space-minimal algorithm with the following behavior on inputs \(c\) and \(0<\delta,\epsilon<1\) (where we assume \(c=2^{i_{1}}\), \(\delta=16^{-i_{2}}\), and \(\epsilon=2^{-2^{i_{3}}}\) for some \(i_{1},i_{2},i_{3}\in\mathbb{N}\) sufficiently large). 
The algorithm outputs a sequence \(Q\) of \(N=O(c/(\delta^{11.25}\epsilon^{10}))\) monomials over symbols \(u_{1},\ldots,u_{c}\) and \(u_{1}^{\dagger},\ldots,u_{c}^{\dagger}\), each of length \(L=8\log_{2}(1/\epsilon)/\delta^{1.25}\), with the following property:_ _For any sequence \(\mathcal{U}=(U_{1},\ldots,U_{c})\) of matrices in \(\mathbb{C}^{r\times r}\) satisfying \(\left\|U_{i}\right\|_{\mathrm{op}}\leq 1\) for all \(i\) and \(\left\|\mathrm{avg}(\mathcal{U})\right\|_{\mathrm{op}}\leq 1-\delta\), it holds that \(\left\|\mathrm{avg}(Q\circ\mathcal{U})\right\|_{\mathrm{op}}\leq\epsilon\)._ _Here “strongly explicit and space-minimal” means that, given a monomial index \(i\in[N]\) and a monomial position index \(j\in[L]\), the algorithm runs in deterministic \(\mathrm{polylog}(c/\delta\epsilon)\) time and \(O(\log(c/\delta\epsilon))\) space and outputs the \(j\)th symbol of the \(i\)th monomial in \(Q\)._ Proof.: Given \(c,\delta,\epsilon\), the desired \(Q\) is \(q_{G_{t}}\circ\cdots\circ q_{G_{1}}\circ(u_{1},\ldots,u_{c})\), where \(G_{1},\ldots,G_{t}\) is a sequence as in Proposition 6.17, with: * \(\ell_{1}=\log_{2^{.8}}(1/\delta)+3=\frac{5}{4}\log_{2}(1/\delta)+3\), \(\ell_{2}=\log_{2}\log_{2}(1/\epsilon)\), and \(t=\ell_{1}+\ell_{2}\); * \(G_{1},\ldots,G_{\ell_{1}}\) are 512-regular .11-expanders, with \(G_{j}\) on \(512^{j-1}c\) vertices, as in Theorem 6.19; * \(G_{\ell_{1}+1},\ldots,G_{\ell_{1}+\ell_{2}}\) are as in Corollary 6.20, with \(G_{\ell_{1}+j}\) being a \(32^{k}\)-regular, \(\frac{1}{4}2^{-k}\)-expander (for \(k=2^{\max(j,4)}\)) on \(32^{k+32}N_{0}\) vertices (once \(j\geq 4\)), where \(N_{0}=512^{\ell_{1}}c\) is \(|E(G_{\ell_{1}})|\). The length of \(Q\) is \[N=|E(G_{t})|=32^{2^{\ell_{2}+1}+32}N_{0}=2^{160}\cdot 2^{5\cdot\log_{2}(1/\epsilon)\cdot 2}\cdot 2^{9(\log_{2^{4/5}}(1/\delta)+3)}\cdot c=2^{187}\cdot c/\delta^{11.25}\epsilon^{10}, \tag{173}\] and each monomial in \(Q\) has length \(2^{t}=8\log_{2}(1/\epsilon)/\delta^{1.25}\).
The desired bound \(\left\|\mathrm{avg}(Q(\mathcal{U}))\right\|_{\mathrm{op}}\leq\epsilon\) follows from Propositions 6.17 and 6.18. Finally, the time and space bounds are easy to verify, as computation of the \(j\)th symbol of the \(i\)th monomial of \(Q\) simply amounts to determining the \(i\)th edge of \(G_{t}\), and then following a path down a binary tree of height \(t\), where at each node one has to compute the \(a\)th edge of a particular \(G_{b}\). **Remark 6.22**.: As in [12, Thm. 5.8], if \(\delta\) is not small but is rather already of the form \(\delta=1-\lambda\) for small \(\lambda\), one can retain only the last \(\ell_{2}-\log_{2}\log_{2}(1/\lambda)\) or so expanders and obtain \(L=O(\log(1/\epsilon)/\log(1/\lambda))\); we omit details. When using Theorem 6.21, we will often want to disregard a certain “trivial” subspace; we will then employ the following simple observation: **Fact 6.23**.: _In the setting of Theorem 6.21, say each \(U_{j}\) may be written as \(U_{j}=R_{j}\oplus U_{j}^{\prime}\), where \(R_{j}\) acts on subspace \(T\) and \(U_{j}^{\prime}\) acts on its orthogonal complement \(T^{\perp}\) in \(\mathbb{C}^{r}\). Then \(\mathrm{avg}(Q(\mathcal{U}))=\mathrm{avg}(Q(\mathcal{R}))\oplus\mathrm{avg}(Q(\mathcal{U}^{\prime}))\), where \(\mathcal{R}=(R_{1},\ldots,R_{c})\) and \(\mathcal{U}^{\prime}=(U_{1}^{\prime},\ldots,U_{c}^{\prime})\)._ For example, suppose \(G=(V,E)\) is a \(d\)-regular undirected graph on \(V=(1,2,\ldots,n)\) with normalized adjacency matrix expressed as \[A_{G}=\mathrm{avg}(P_{1},\ldots,P_{d}), \tag{174}\] where \(P_{1},\ldots,P_{d}\) are \(n\times n\) permutation matrices. Each \(P_{i}\) and \(P_{i}^{\dagger}\) has operator norm 1 and fixes the one-dimensional space \(T=\mathrm{span}\{|1\rangle+\cdots+|n\rangle\}\).
If we write \(P_{i}=\mathrm{proj}_{T}\oplus U_{i}^{\prime}\) where \(U_{i}^{\prime}\) is the action of \(P_{i}\) on \(T^{\perp}\), then \[A_{G}=\mathrm{proj}_{T}\oplus\mathrm{avg}(U_{1}^{\prime},\ldots,U_{d}^{\prime}) \tag{175}\] and we are in a position to apply Fact 6.23 and Theorem 6.21 together. The result is a sequence \(Q\) of "walks", each of the form \(P_{i_{L}}^{\dagger}P_{i_{L-1}}\cdots P_{i_{2}}^{\dagger}P_{i_{1}}\). Applying one such walk to any starting vertex \(|v\rangle\) leads to a valid walk of length \(L\) in \(G\) (with the steps \(P_{i}^{\dagger}\) being valid since \(G\) is undirected). If we write \(\widetilde{G}\) for the \(|Q|\)-regular undirected graph on \(V\) wherein each \(v\in V\) has an edge to all its walk outcomes, the result is that \[A_{\widetilde{G}}=\mathrm{avg}(Q\circ(P_{1},\ldots,P_{d}))=\mathrm{proj}_{T} \oplus\mathrm{avg}(Q(U_{1}^{\prime},\ldots,U_{d}^{\prime})). \tag{176}\] Hence if \(G\) is a \((1-\delta)\)-expander, we obtain that \(\widetilde{G}\) is an \(\epsilon\)-expander with \(|Q|=O(d/(\delta\epsilon)^{O(1)})\) and walks of length \(O(\log(1/\epsilon)/\delta^{O(1)})\). As shown in [10, 11], given any simple, connected, \(n\)-vertex, undirected graph, there is a very simple transformation preserving connectivity that produces a \(4\)-regular undirected graph (together with the associated \(P_{1},\ldots,P_{4}\) as in Equation (174)) that has \(\delta\geq 1/\operatorname{poly}(n)\); by taking \(\epsilon=1/\operatorname{poly}(n)\), one can use these pseudorandom walks to establish Reingold's Theorem \(\mathsf{SL}=\mathsf{L}\)[10].
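The ingredients of this section can be exercised numerically. The sketch below (NumPy; the 8-vertex 3-regular test graph is an ad-hoc choice, not one of the explicit expanders of Theorem 6.19) checks Fact 6.13 and the derandomized-squaring inequality of Proposition 6.16 on random unitaries:

```python
import numpy as np

rng = np.random.default_rng(3)

def avg(mats):
    return sum(mats) / len(mats)

def q_graph(edges, W):
    # Definition 6.7: the sequence (W_j^dagger W_i) indexed by directed edges (i, j)
    return [W[j].conj().T @ W[i] for (i, j) in edges]

m, r = 8, 4
W = []
for _ in range(m):
    q, _ = np.linalg.qr(rng.standard_normal((r, r)) + 1j * rng.standard_normal((r, r)))
    W.append(q)                       # random unitaries, so each has operator norm 1

# Fact 6.13: averaging over the complete graph with self-loops gives the
# Hermitian square of avg(W)
K_edges = [(i, j) for i in range(m) for j in range(m)]
fact_613_holds = np.allclose(avg(q_graph(K_edges, W)), avg(W).conj().T @ avg(W))

# an ad-hoc 3-regular undirected graph on 8 vertices: a cycle plus diagonals
edges = []
for i in range(m):
    edges += [(i, (i + 1) % m), (i, (i - 1) % m), (i, (i + 4) % m)]
A_G = np.zeros((m, m))
for (i, j) in edges:
    A_G[j, i] += 1 / 3                # Definition 6.10: (1/d) sum of |j><i| over edges
mu = np.linalg.norm(A_G - np.full((m, m), 1 / m), ord=2)

# Proposition 6.16: one derandomized-squaring step with a mu-expander
lhs = np.linalg.norm(avg(q_graph(edges, W)), ord=2)
rhs = (1 - mu) * np.linalg.norm(avg(W), ord=2) ** 2 + mu
```

Here `mu` is computed exactly from the graph, and the inequality `lhs <= rhs` is guaranteed by Proposition 6.16 for any choice of contractions.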
2307.06233
On the Importance of Denoising when Learning to Compress Images
Image noise is ubiquitous in photography. However, image noise is not compressible nor desirable, thus attempting to convey the noise in compressed image bitstreams yields sub-par results in both rate and distortion. We propose to explicitly learn the image denoising task when training a codec. Therefore, we leverage the Natural Image Noise Dataset, which offers a wide variety of scenes captured with various ISO numbers, leading to different noise levels, including insignificant ones. Given this training set, we supervise the codec with noisy-clean image pairs, and show that a single model trained based on a mixture of images with variable noise levels appears to yield best-in-class results with both noisy and clean images, achieving better rate-distortion than a compression-only model or even than a pair of denoising-then-compression models with almost one order of magnitude fewer GMac operations.
Benoit Brummer, Christophe De Vleeschouwer
2023-07-12T15:26:04Z
http://arxiv.org/abs/2307.06233v1
# On the Importance of Denoising when Learning to Compress Images ###### Abstract Image noise is ubiquitous in photography. However, image noise is not compressible nor desirable, thus attempting to convey the noise in compressed image bitstreams yields sub-par results in both rate and distortion. We propose to explicitly learn the image denoising task when training a codec. Therefore, we leverage the Natural Image Noise Dataset, which offers a wide variety of scenes captured with various ISO numbers, leading to different noise levels, including insignificant ones. Given this training set, we supervise the codec with noisy-clean image pairs, and show that a single model trained based on a mixture of images with variable noise levels appears to yield best-in-class results with both noisy and clean images, achieving better rate-distortion than a compression-only model or even than a pair of denoising-then-compression models with almost one order of magnitude fewer GMac operations. ## 1 Introduction Image sensors capture noise along with useful image information. This noise increases with the camera's ISO sensitivity setting, but noise is virtually always present to some extent and it is both incompressible and undesirable. Lossy image compressors inherently perform some image denoising because removing random noise is often the most effective way to reduce entropy in a signal, but without proper training (or algorithm design) the resulting image size is still inflated and the results look sub-par, as shown in Figure 8 (and also attested by Figure S1 in Supplementary Material, and by our experiments in Figure 3a). This increase in bitrate is readily observable in both conventional and learned codecs. A learned lossy image compression scheme is trained by forwarding the image through an autoencoder (AE) and backpropagating from the loss, whose components are the bitrate and the distortion [5]. The bitrate is computed from an optimized, i.e.
trained, cumulative distribution function, and the distortion quantifies the difference between the output of the autoencoder and the input image, typically by computing the mean square error (MSE). Any image compression scheme can attain better rate-distortion by having the noise removed first. An image denoiser can typically be trained to reconstruct clean images from noisy inputs, using a dataset of paired images where static scenes are captured using a progressively faster shutter speed [8]. In this work, we consider joint compression and denoising. Adding a denoising functionality essentially comes down to feeding the network with a potentially noisy image and comparing its output with a clean image that may have a better quality than what was initially input to the network. The goal is to generate an image that is of higher quality than the one used as input, while decreasing the necessary bitrate close to that of a clean image. Meanwhile, there is no added complexity because the inference process and the network architecture remain unchanged. The network is trained with both noisy images and some clean images as input such that it removes noise while retaining the ability to compress clean input images efficiently. Our experiments analyze the impact of image noise on the rate-distortion of different standard and learned compression methods, the benefit of performing denoising prior to compression, and denoising while compressing. Our original supervision strategy, introduced to promote the reconstruction of clean images when the learned codec is fed with noisy ones, appears to be effective. The resulting joint denoising and compression models perform properly on clean images as well as noisy ones, effectively replacing a standard compression model for general-purpose image compression and substantially improving the rate-distortion on noisy images.
As illustrated in the second line of Figure 8 (comparison between second and third columns), it is shown to significantly improve rate-distortion performance (using non-noisy images as ground-truth) compared to relying on the implicit noise removal induced by the conventional adoption of a perception-based loss function during training. It also reaches slightly better rate-distortion than a computationally heavy two-step procedure, involving one AE for denoising, followed by another AE-based network for compression. This paper is organized as follows: Section 2 summarizes the work on which this paper builds. The main concepts behind our joint denoising and compression supervision are introduced in Section 3. The implementation details are given in Section 4 followed by the results, and Section 5 summarizes the impact of the present work. ## 2 Background **Learned Lossy Image Compression** is typically based on the seminal work of Johannes Ballé et al. [5]; a convolutional autoencoder [17] with generalized divisive normalization (GDN) [4], and an entropy model which is jointly optimized to capture the latent distribution. This model has been extended with a parametrized hyperprior [6] or with a competition between multiple priors [9], which allows for manipulating an image-dependent latent distribution. Our experiments build onto the initial architecture from [5] completed with multiple sets of latent distributions learned in [9]. **Image Noise** occurs as the sensitivity of an image sensor is increased to make up for non-ideal lighting conditions. When insufficient light is provided or the dynamic range is too wide, the ISO and/or shutter speed settings are increased accordingly. Pixels take on random, less accurate values, and less detail is visible as a result.
Figure 1: Visualization of a clean (top) and noisy (bottom) test image from NIND [8]. From left to right: (i) ground-truth/noisy input, (ii) compression autoencoder trained with standard supervision [9], relying on the adoption of a perception-based loss function to mitigate the impact of noise, (iii) our proposed joint denoising and compression model trained with Natural Image Noise Removal supervision using both clean and low noise images (“JDC-Cn.8”).

Different image restoration techniques have been developed to tackle image denoising, including Wavelet [22, 13] and non-local means based methods [11], BM3D [15], and recent deep-learning based methods [8, 14, 18, 12]. Image noise does not reflect the physical components of the observed scene. It is a consequence of the imperfect acquisition process, and thereby should be ignored when possible. Hence, **targeting the reconstruction of a denoised image is the proper way to proceed to get a faithful representation of reality** (even if it implies not perfectly rendering the captured signal). The **Natural Image Noise Dataset** (NIND) [8] and Smartphone Image Denoising Dataset (SIDD) [3] provide sets of clean-noisy image pairs which are appropriate to train a denoising neural network. NIND is made of multiple pictures of many static scenes captured on a tripod to ensure spatial consistency; the clean ground-truth images are taken in ideal conditions with a camera's base ISO sensitivity to capture as much light as is necessary and to obtain the best possible quality, and matching versions of the same scene are captured with a variety of increasing shutter speed and ISO settings, which result in increased noise and lower image quality. These noisy images are typically fed as the input of the denoising neural network while training it to reconstruct the scene as if it were taken in ideal conditions.
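This noisy-input/clean-target supervision drops straight into the rate-distortion objective outlined in Section 1: the distortion term is simply measured against the clean reference rather than the network input. A minimal NumPy sketch of the objective (the `x_hat`/`likelihoods` codec outputs mirror a Ballé-style interface [5], and `lmbda` is the usual rate-distortion trade-off weight; all names here are illustrative assumptions, not the authors' code):

```python
import numpy as np

def jdc_loss(x_hat, likelihoods, clean, lmbda):
    """Joint denoising-and-compression objective: x_hat and likelihoods come
    from running the codec on the (possibly noisy) input image, while the
    distortion is measured against the clean reference."""
    num_pixels = clean.shape[0] * clean.shape[1]
    bpp = -np.log2(likelihoods).sum() / num_pixels   # estimated rate (bits per pixel)
    mse = np.mean((x_hat - clean) ** 2)              # distortion vs. clean target
    return bpp + lmbda * mse

# toy usage: perfect reconstruction of the clean image, uniform likelihood 0.5
# for each of the 8x8 latent values -> rate of 1 bpp, zero distortion
loss = jdc_loss(np.zeros((8, 8)), np.full((8, 8), 0.5), np.zeros((8, 8)), lmbda=0.01)
```

With clean-clean pairs the same objective doubles as a plain compression loss, which is how the variants trained on mixed data can retain performance on noise-free inputs.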
**Denoising and Compression** have been studied as a combined problem in the wavelet domain [13, 10], and more recently the idea of learning denoising and compression jointly with neural networks was approached by Testolina et al. [23] in the context of the JPEG AI codec. The decoder in [23] is extended such that it consists of twice as many layers, and Poissonian-Gaussian noise is applied to the input training data. This approach delegates the denoising task to the decoder, in line with the JPEG AI requirement of using a universal encoder and specialized decoders. However, as shown in our experimental section, this architecture results in no appreciable bitrate reduction, because only the encoder can ensure that incompressible noise does not reach the bitstream. Moreover, training a denoiser with synthetic noise tends to produce a poor model on real data [8, 20]. Testolina et al. introduce a promising joint denoising and compression (JDC) scheme, but the resulting rate-distortion of their model falls short of that obtained using our proposed supervision based on pairwise naturally noisy / clean images.

## 3 Jointly Learned Denoising and Compression

An effective denoising network can be trained to reconstruct clean images given noisy images as input, using a dataset of paired noisy-clean images such as NIND [8] and SIDD [3]. We propose to adopt a similar principle to train an autoencoder originally designed for image compression. A joint denoising and compression autoencoder [9] is trained to generate a clean image from either a matching noisy image in a paired dataset or from the same clean image. The aim is to obtain a decoded image whose quality is potentially higher than that of the input image, while saving the space that would otherwise be wasted in encoding noise. Different methods are proposed to train such a joint denoising and compression model using Natural Image Noise Removal (NINR) supervision.
They are described in Section 3.1, and Figure 2 illustrates the general training process.

### Our Proposed NIN Supervision Strategies

Four different strategies are envisioned and compared to implement our novel Natural Image Noise Removal (NINR) supervision paradigm. They are listed in Table 1 and described below.

**Noisy Pairs (JDC-N)** The simplest joint denoising and compression implementation consists of training with all noisy-clean image pairs available in the dataset(s).

**Clean and Noisy Pairs (JDC-CN)** This method considers some clean-clean image pairs, in addition to the noisy-clean image pairs, to ensure that the network's performance does not degrade when the input images contain no unwanted noise. The dataset of clean images can be selected as a set of images which have been assessed and promoted by human reviewers, such as the Wikimedia Commons Featured Pictures [1], then further refined by eliminating images whose metadata indicates a high ISO value in order to ensure the absence of noise.

**Clean and Low-noise Pairs (JDC-Cn)** To specialize the model to the most frequent input image noise levels, we have also considered placing a threshold on the training data noise level. Our experiments reveal that it is beneficial to filter out the noisiest input training images, because the overall rate-distortion degrades when the network is trained to perform more extreme denoising. Such extreme denoising would require added complexity on the encoder and, although possible, it is outside the scope of a combined denoiser whose aim is to improve rate-distortion by removing the noise that is inherently present in most photographs, rather than learning to see in the dark, as proposed in [14].
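The pairing strategies above can be condensed into a single sampling routine. This is a hypothetical sketch, not code from the paper: the optional MS-SSIM threshold corresponds to JDC-Cn, the clean-image branch to JDC-CN, and dropping both reduces to JDC-N.

```python
import random

def make_training_pair(noisy_pairs, clean_images, p_clean=0.2, min_msssim=None):
    """Sample one (input, target) training pair for NINR-style supervision.

    noisy_pairs : list of (noisy_crop, clean_crop, msssim_vs_clean) tuples
    clean_images: list of clean crops (input and target coincide for these)
    p_clean     : probability of drawing a clean-clean pair (JDC-CN branch)
    min_msssim  : optional JDC-Cn noise threshold (e.g. 0.8); noisier pairs
                  are excluded from training.
    """
    if min_msssim is not None:
        noisy_pairs = [p for p in noisy_pairs if p[2] >= min_msssim]
    if clean_images and random.random() < p_clean:
        img = random.choice(clean_images)
        return img, img               # clean input, clean expected output
    noisy, clean, _ = random.choice(noisy_pairs)
    return noisy, clean               # noisy input, clean ground-truth
```

With `p_clean=0.2`, roughly one training sample in five is a clean-clean pair, mirroring the four-noisy-plus-one-clean batch composition used in the experiments.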
The paired image noise dataset is analyzed prior to training such that the multi-scale structural similarity (MS-SSIM) [24] score between each noisy crop and its clean ground-truth is stored in a file, and the training dataset can be initialized such that all training crops exceed a set quality threshold. The effect of different noise thresholds is analyzed in the ablation study.

**Building Pairs from a Universal Denoiser (JDC-UD)** A fourth training method consists of running a pre-trained blind denoising model [21, 8] on all the training data to generate the ground-truth images, and computing the training loss between the input images and the denoised images. This method effectively performs knowledge distillation [16] from a powerful universal denoising network to the joint denoising and compression network. All input images are considered noisy, and the training dataset is virtually limitless because the ground-truth images are generated (in advance); thus entire image datasets are used without filtering.

## 4 Experiments

### Practical Implementation Details

These experiments are based on the PyTorch implementation of the autoencoder base codec introduced in [9]. Source code is provided as Supplementary Material and available on [https://github.com/trougnouf/compression](https://github.com/trougnouf/compression). The training loss of the compression autoencoder is computed as \(\text{Loss}=\text{bitrate}(\hat{x})+\lambda\times\text{MSE}(\hat{x},x)\), where \(\hat{x}\) is the decoded image and \(x\) is the clean ground-truth which, as explained in Section 3, may differ from the input image, and \(\lambda\) balances the rate/distortion trade-off of the model.
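A minimal sketch of this objective in plain Python (image tensors are reduced to flat pixel lists, and `rate_bits` stands in for the rate estimate that the learned entropy model produces in the real codec):

```python
def rate_distortion_loss(x_hat, x, rate_bits, num_pixels, lam):
    """Loss = bitrate(x_hat) + lambda * MSE(x_hat, x), where x is the clean
    ground-truth, which may differ from the (possibly noisy) input image."""
    mse = sum((a - b) ** 2 for a, b in zip(x_hat, x)) / len(x)
    bpp = rate_bits / num_pixels   # rate term expressed in bits per pixel
    return bpp + lam * mse
```

Halving `lam` shifts the trained model toward lower bitrates, which is how the different rate points of the curves are obtained.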
The combined denoising and compression autoencoder [9] is trained with batches of four noisy images from NIND [8] and one clean image from the Wikimedia Commons Featured Pictures [1] dataset whose ISO value does not exceed 200, with a crop size of 256 as is typically done in the learned compression literature [9, 5, 6]. The pre-trained “**universal denoiser**” used to train the JDC-UD model is the U-Net-like blind denoising model published with NIND, which was updated such that its training includes clean-clean image pairs. The CLIC professional test set [2] is used to assess the models on other clean images. The JDC model defined in Testolina et al. (2021) [23] is trained entirely as described, with Poissonian-Gaussian artificial noise [19] (with noise parameters \(a=0.2^{2},b=0.04^{2}\)), the encoder from Balle et al. (2018) [6], and their proposed decoder which has twice as many layers as the one recommended by Balle et al. An additional JDC-Cn model is trained with the larger decoder proposed by Testolina et al. in order to assess their proposed network architecture separately from the training method. Most models are trained for six million steps with \(\lambda=4096\) to yield the highest bitrate, then the \(\lambda\) value is halved and training continues for three million steps for each \(\lambda\) value all the way down to \(\lambda=256\), like in [9]. Both the JDC-Cn.8-Tdec method that is matched with the decoder defined by Testolina et al. and the JDC-N model trained with only noisy-clean image pairs have had an additional model trained with \(\lambda=8192\) in order to reach the other methods’ highest bitrate. Likewise, the whole method defined by Testolina et al. [23] was trained with up to \(\lambda=16384\). The standard codec comparisons are made by encoding images using GraphicsMagick 1.3.37 (JPEG), the JPEG XL encoder v0.6.1, and the BPG Image Encoder version 0.9.8. The “**standard autoencoder**” is that defined in [9].

\begin{table} \begin{tabular}{|l|l|l|} \hline **Method** & **Training input** & **Expected output** \\ \hline JDC-N & Noisy image from clean–noisy paired dataset [8] & Clean ground-truth \\ \hline \multirow{2}{*}{JDC-CN} & Noisy image from clean–noisy paired dataset, & Clean ground-truth, \\ & clean image from high quality dataset [1] & clean input \\ \hline \multirow{2}{*}{JDC-Cn} & Weakly noisy image from clean–noisy paired dataset, & Clean ground-truth, \\ & clean image from a high quality dataset & clean input \\ \hline JDC-UD & Arbitrary input image [1, 8] & Provided by a universal denoiser \\ \hline Testolina [23] & Clean image from high quality dataset + artificial noise & Clean input \\ \hline \end{tabular} \end{table} Table 1: Data pairs considered in this paper to train a joint denoising and compression model. JDC-Cn is also referred to with its training noise threshold (e.g. JDC-Cn.8 is trained with MS-SSIM \(\geq 0.8\), see the text for details).

Figure 2: Denoising and compression joint training: the distortion loss is computed between the reconstructed image \(\hat{x}\) and a clean image \(x\). The input image \(y\) may be noisy. The network [9] is made of four (transposed) convolutions with stride of 2 and kernel size of 5, each followed by a GDN activation [4] except for the last (transposed) convolution. (Best viewed in color.)

### Results

#### 4.2.1 On the Importance of Denoising

The first experiment measures the impact of denoising prior to compression with different compression methods. Figure 3(a) plots the rate-distortion curves obtained without specific denoising of the input images, for a variety of codecs. We observe that for all codecs compression is an effective denoising method at the lowest bitrate. However, at reasonable bitrates, all conventional codecs (learned or not) tend to reproduce the noise.
This is in contrast with our proposed joint denoising and compression paradigm, which continuously increases quality as the bitrate increases. Denoising before compression might be considered to solve the quality issue when using conventional codecs. As shown in Figure 3(b), this bridges (most of) the quality gap compared to our proposed JDC method, but at the cost of a significantly increased complexity (see Section 4.2.3).

#### 4.2.2 Our Joint Denoising and Compression Scheme

Figure 3 also introduces the proposed joint denoising and compression models, JDC-CN (no noise threshold) and JDC-Cn.8 (training noise limited to MS-SSIM \(\geq 0.8\)). These models are trained like the dedicated denoising model in that the input training batch is made of four noisy images and one clean image, and the model is tasked with reconstructing the clean ground-truths. This single JDC model generally achieves better rate-distortion than a duo of denoising then compression neural networks (except at high bitrate), while using significantly less computational overhead.

Figure 3: Lossy compression of noisy (MS-SSIM \(\in[0.7,1.0)\)) test images from NIND [8] with respect to their matching clean ground-truth. **(a)** Original (noisy) images are provided as input. Standard methods (JPEG, JPEG XL, BPG, and a standard compression autoencoder [9]) perform some implicit denoising at low bitrates, but image quality degrades as the bitrate increases since the noisy signal is reconstructed. **(b)** A universal denoiser is applied before presenting the image to the standard method. This greatly improves rate-distortion but adds an order of magnitude of complexity. Our joint denoising and compression (JDC) autoencoder with Natural Image Noise Removal supervision allows noise-free reconstruction without prior denoising.
The best results shown are obtained with the “JDC-Cn.8” model, which is trained with paired images whose input noise is limited to MS-SSIM \(\geq 0.8\) as well as with unpaired clean images to promote generalization. \(T\) is the method trained with artificial noise described by Testolina et al. [23], which performs worse than a duo of models, as is also shown in their results.

Figure 4: MS-SSIM rate-distortion curve of different compression methods on (a) high quality images from the CLIC professional test set [2] and (b) nearly noiseless test images from NIND [8].

Figure 5: Lossy compression of noisy test images from NIND [8] with different joint denoising and compression methods, including JDC-Cn trained with different MS-SSIM thresholds, JDC-CN trained with no such threshold, JDC-N trained with no clean images, JDC-UD trained with knowledge distillation from a universal denoiser, and JDC-Cn.8 trained with the larger decoder defined by Testolina et al. [23]. **(a)** The JDC-Cn methods tend to perform well, especially when the quality threshold is set between 0.6 and 0.8, but the rate-distortion worsens when the training quality threshold increases to 0.9 (i.e. little noise is seen by the model during training). The JDC-UD model yields similarly lower performance, and so does the larger decoder despite using the same training scheme as JDC-Cn.8. **(b)** The noise level is more extreme than what is likely to occur in photographs. This further shows that compressing with more noise than is ever seen during training (e.g. JDC-Cn.9) yields worse rate-distortion. The model trained without clean image pairs (JDC-N) does not perform as well even when the test images are noisier.

#### 4.2.3 Computational Complexity

Computational cost is measured in terms of billion multiply-accumulate operations (GMac) and runtime on an AMD Threadripper 3960X CPU.
The dedicated denoising U-Net performs 812 billion multiply-accumulate operations (GMac) per megapixel (MP) in 65.8 sec./MP, whereas the JDC model's compression encoder performs 92.8 GMac/MP [9] in 2.9 sec./MP. The dual model approach (denoising then compression) thus performs a total of 904.8 GMac/MP, whereas a single joint denoising and compression model operates with 10.3% of that computational complexity.

#### 4.2.4 Handling Clean Images

JDC-C models are trained with both noisy-clean and clean-clean paired images in order to better generalize and maintain good rate-distortion on clean images. Figure 4a and Figure 7 show the behavior of different JDC training strategies when compressing clean images. JDC models trained with some clean input images or with the JDC-UD knowledge distillation technique yield a rate-distortion similar to the model trained for compression only, even when no minimum training noise threshold is set: incorporating clean images in the training data (JDC-CN) reinstates a good rate-distortion. Limiting the input noise to MS-SSIM \(\geq 0.8\) (JDC-Cn.8) further improves rate-distortion such that it is slightly better with a JDC model than with a standard model at low bitrates, and slightly worse at high bitrates, where only reconstruction fidelity matters due to the perception-distortion tradeoff [7]. The JDC-N model trained with only noisy-clean image pairs performs significantly worse on clean images, and the model trained with artificial noise (“\(T\)”) performs worst. Figure 4b shows a common use-case where the amount of noise is low (MS-SSIM \(\in[0.95,1)\)). Prior denoising still improves rate-distortion, and joint denoising and compression methods yield the most significant rate-distortion benefits. All compression methods benefit from prior or joint denoising even when the level of noise is minor.
Traditional compression schemes benefit the most from prior denoising, and joint denoising outperforms prior denoising in learned methods.

Figure 6: Visualization of a clean (top) and noisy (bottom) test image from NIND [8] encoded with a target bitrate of 0.13 bpp using different trained compression autoencoders. From left to right: ground-truth/noisy input, standard autoencoder [9], autoencoder from Testolina et al. (2021) [23] (trained on artificial noise; increasing bitrate did not yield quality improvements on the ground-truth), joint model trained with knowledge distillation from a universal denoiser (JDC-UD), and joint model trained with both clean and low noise images (JDC-Cn.8). Images are best visualized after zooming in on a screen.

#### 4.2.5 Ablation Study

The effect of different training noise thresholds is analyzed when compressing noisy images. In Figure 5(a) the test noise is limited to MS-SSIM \(\in[0.7,1)\), which is qualitatively fairly noisy, as shown in Figure 8. None of the methods perform significantly better or worse than the denoise-and-compress duo of models. It is worth noting that the three worst JDC training schemes are the knowledge distillation JDC-UD model, the model trained with a quality threshold of MS-SSIM \(\geq 0.9\), and the decoder defined by Testolina et al. which contains twice as many layers. The JDC-Cn models trained with an MS-SSIM threshold of 0.6 and 0.8 yield the best rate-distortion. A visualization of the different denoising and compression methods at low bitrates is shown as Figure 6. In Figure 5(b), the testing noise is increased to MS-SSIM \(\in[0.5,1)\), showing how the models behave under extreme input noise.
The results are largely the same; the model trained with MS-SSIM \(\geq 0.9\) struggles even more due to the increased noise and its performance is close to that of the JDC-UD method, the model trained with MS-SSIM \(\geq 0.8\) does not perform as well, whereas the model trained with MS-SSIM \(\geq 0.6\) is still competitive, and it remains beneficial to train with clean image pairs as well.

## 5 Conclusion

Denoising images improves rate-distortion whenever there is noise present, regardless of the compression method. Denoising can be performed prior to compression using a dedicated denoiser (as is typically done in professional image development workflows) with no adaptation to the compression scheme. A joint model that is trained to perform denoising and compression simultaneously yields further improvements in rate-distortion. As a result, a joint denoising and compression model performs 8.9 times fewer GMAC operations than a U-Net denoiser followed by a compression encoder. Since the JDC model only differs from standard learned compression models by the adopted supervision strategy, it can be implemented using any of the compression architectures available in the literature (such as [5, 6, 9]). Our proposed Natural Image Noise Removal supervision strategy thus provides a fundamental and generic contribution that is expected to become popular in future works related to learned compression. In practice, joint denoising and compression models may be trained using a dataset of noisy-clean image pairs with natural noise, such as NIND [8] and SIDD [3]. Performance is improved by setting a quality threshold on the training images, such as MS-SSIM \(\geq 0.8\) or MS-SSIM \(\geq 0.6\) depending on the maximum expected noise. The rate-distortion curve is preserved on clean images, and improved in any case, by incorporating clean-clean image pairs in the training data. An alternative method consists of performing knowledge distillation [16] by using the output of a dedicated denoiser as ground-truth images during training. This has the benefit of allowing a virtually limitless training dataset because a paired dataset is no longer required, but it requires pre-processing of the training images and results in a slightly worse rate-distortion.

Figure 7: Visualization of a clean test image compressed using different methods with a target bitrate of 0.23 bpp. Standard methods (JPEG, JPEG XL, BPG) tend to produce block artifacts. The method defined by Testolina et al. [23] produces oversmoothed results on clean images. Training with only noisy images can produce the same level of quality at the cost of increased bitrate. Other methods (training with only clean images, training with both clean and noisy images, and knowledge distillation from a powerful denoiser) perform well on clean images. The input image has been processed in the darktable software with (among other methods) non-local means [11] profiled denoising.

## 6 Acknowledgements

This research has been funded by the Walloon Region. Computational resources have been provided by the supercomputing facilities of the Université catholique de Louvain (CISM/UCL) and the Consortium des Équipements de Calcul Intensif en Fédération Wallonie Bruxelles (CÉCI) funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under convention 2.5020.11 and by the Walloon Region.

Figure 8: Noise is both incompressible and undesirable, as shown in this clean–noisy image pair from the Natural Image Noise Dataset (NIND) [8], where the same scene has been captured with increasingly faster shutter speed. The JPEG compressed (Q=97) image size increases from 3.7 MB for the ground-truth to 9.9 MB when encoding the noisy image. Even the ground-truth image (center) contains some background noise and artifacts such as chromatic aberrations, and JPEG compresses it down to 2.6 MB when it is denoised with a trained denoiser from [8] prior to compression (left).
Figure 9: Lossy compression of noisy (MS-SSIM \(\in[0.7,1.0)\)) test images from NIND [8] with respect to their matching clean ground-truth. This figure combines Figure 3a and Figure 3b from the text.

Figure 10: Lossy compression of noisy (MS-SSIM \(\in[0.7,1.0)\)) test images from NIND [8] with standard methods (JPEG, JPEG XL, BPG, and a standard compression autoencoder [9] abbreviated as “std AE”): rate-distortion with respect to the clean ground-truth images when encoding noisy images (\(\blacksquare\)), and the same compression schemes applied after the test images were denoised with a trained “universal denoiser” prior to compression (\(D\)). Compression alone performs some implicit denoising, and the image quality is higher than that of the noisy input given sufficient bitrate, but image quality degrades as the bitrate increases and the noisy signal is eventually reconstructed. Applying a universal denoiser before compression (dashed lines) greatly improves rate-distortion at the cost of added complexity.
2306.11466
Comprehensive Training and Evaluation on Deep Reinforcement Learning for Automated Driving in Various Simulated Driving Maneuvers
Developing and testing automated driving models in the real world might be challenging and even dangerous, while simulation can help with this, especially for challenging maneuvers. Deep reinforcement learning (DRL) has the potential to tackle complex decision-making and controlling tasks through learning and interacting with the environment, thus it is suitable for developing automated driving while not being explored in detail yet. This study carried out a comprehensive study by implementing, evaluating, and comparing the two DRL algorithms, Deep Q-networks (DQN) and Trust Region Policy Optimization (TRPO), for training automated driving on the highway-env simulation platform. Effective and customized reward functions were developed and the implemented algorithms were evaluated in terms of onlane accuracy (how well the car drives on the road within the lane), efficiency (how fast the car drives), safety (how likely the car is to crash into obstacles), and comfort (how much the car makes jerks, e.g., suddenly accelerates or brakes). Results show that the TRPO-based models with modified reward functions delivered the best performance in most cases. Furthermore, to train a uniform driving model that can tackle various driving maneuvers besides the specific ones, this study expanded the highway-env and developed an extra customized training environment, namely, ComplexRoads, integrating various driving maneuvers and multiple road scenarios together. Models trained on the designed ComplexRoads environment can adapt well to other driving maneuvers with promising overall performance. Lastly, several functionalities were added to the highway-env to implement this work. The codes are open on GitHub at https://github.com/alaineman/drlcarsim-paper.
Yongqi Dong, Tobias Datema, Vincent Wassenaar, Joris van de Weg, Cahit Tolga Kopar, Harim Suleman
2023-06-20T11:41:01Z
http://arxiv.org/abs/2306.11466v2
Comprehensive Training and Evaluation on Deep Reinforcement Learning for Automated Driving in Various Simulated Driving Maneuvers

###### Abstract

Developing and testing automated driving models in the real world might be challenging and even dangerous, while simulation can help with this, especially for challenging maneuvers. Deep reinforcement learning (DRL) has the potential to tackle complex decision-making and controlling tasks through learning and interacting with the environment, thus it is suitable for developing automated driving while not being explored in detail yet. This study carried out a comprehensive study by implementing, evaluating, and comparing the two DRL algorithms, Deep Q-networks (DQN) and Trust Region Policy Optimization (TRPO), for training automated driving on the _highway-env_ simulation platform. Effective and customized reward functions were developed and the implemented algorithms were evaluated in terms of on-lane accuracy (how well the car drives on the road within the lane), efficiency (how fast the car drives), safety (how likely the car is to crash into obstacles), and comfort (how much the car makes jerks, e.g., suddenly accelerates or brakes). Results show that the TRPO-based models with modified reward functions delivered the best performance in most cases. Furthermore, to train a uniform driving model that can tackle various driving maneuvers besides the specific ones, this study expanded the _highway-env_ and developed an extra customized training environment, namely, _ComplexRoads_, integrating various driving maneuvers and multiple road scenarios together. Models trained on the designed _ComplexRoads_ environment can adapt well to other driving maneuvers with promising overall performance. Lastly, several functionalities were added to the _highway-env_ to implement this work. The codes are open on GitHub at [https://github.com/alaineman/drlcarsim-paper](https://github.com/alaineman/drlcarsim-paper).
## I Introduction

Artificial intelligence (AI) is making huge strides in various fields, one of which is automated driving [1]. One typical type of AI that is well-suited for developing automated driving models is Deep Reinforcement Learning (DRL) [2]. DRL combines the feature-extraction capability of deep neural networks with reinforcement learning's ability to learn from interacting with the environment. DRL exhibits excellent performance in various decision-making tasks, e.g., _Go_ [3] and playing video games [4], and it has been employed in various automated driving tasks [5, 6, 7], e.g., lane-keeping, lane-changing, overtaking, ramp merging, and driving through intersections. For the lane-keeping task, Sallab et al. [8, 9] developed DRL-based methods delivering both discrete policies using a Deep Q-network (DQN) and continuous policies using the Deep Deterministic Actor-Critic (DDAC) algorithm to follow the lane and to maximize the average velocity when driving on a curved race track in the Open Racing Car Simulator (TORCS). Similarly, for the lane-changing task, Wang et al. [10] trained a DQN-based model to perform decision-making of lane-keeping, lane changing to the left/right, and acceleration/deceleration, so that the trained agent can intelligently make a lane change under diverse and even unforeseen scenarios. Furthermore, Zhang et al. [11] proposed a bi-level lane-change behavior planning strategy using a DRL-based lane-change decision-making model and a negotiation-based right-of-way assignment model to deliver multi-agent lane-change maneuvers. For the overtaking task, Kaushik et al. [12] adopted Deep Deterministic Policy Gradients (DDPG) to learn overtaking maneuvers for an automated vehicle in the presence of multiple surrounding cars in a simulated highway scenario.
They verified that their curriculum-learning-inspired approach can learn smooth overtaking maneuvers that are largely collision-free and independent of the track and the number of cars in the scene. For the ramp merging task, Wang and Chan [13] employed a Long Short-Term Memory (LSTM) neural network to model the interactive environment, conveying internal states containing historical driving information to a DQN which then generated Q-values for action selection regarding on-ramp merging. Additionally, for negotiating and driving through intersections, Isele et al. [14] explored the effectiveness of the DQN-based DRL method in handling the task of navigating through unsignalized intersections. Finally, Guo and Ma [15] developed a real-time learning and control framework for signalized intersection management, which integrated both vehicle trajectory control and signal optimization using DDPG-based DRL, learning directly from the dynamic interactions between vehicles, traffic signal control, and the traffic environment in the mixed connected and automated vehicle (CAV) environment. It is observed that although many studies have utilized DRL for various driving tasks, most of them focus only on one specific driving maneuver. Seldom do they evaluate the DRL model performance across different maneuvers, and neither do they explore the adaptability of DRL models trained in one specific environment but tested on other various maneuvers. This study tries to fill this research gap by implementing, evaluating, and comprehensively comparing the performance of two DRL algorithms, i.e., DQN and TRPO, in various driving scenarios. Customized effective reward functions were developed, and the implemented DRL algorithms were evaluated in terms of various aspects considering driving safety, efficiency, and comfort level. This study also constructed a new simulation environment, named _'ComplexRoads'_ (shown in Fig 1), integrating various driving maneuvers and multiple road scenarios.
_ComplexRoads_ served to train a uniform driving model that can tackle various driving tasks. To verify this, the models trained only on _ComplexRoads_ were tested and evaluated in the specific driving maneuvers. Intensive experimental results demonstrated the effectiveness of this customized training environment. To advance the learning capability of the developed DRL-based AI models, i.e., to encourage relational insight, several built-in functions of the _highway-env_ package were also upgraded besides designing _ComplexRoads_. Notable modifications are summarized as follows: the tracking of the 'current' lane with respect to the car (training agent) was upgraded to take the lane heading into account, eliminating confusing transitions when driving off-road. Furthermore, the distance between the car and its current lane was upgraded to a signed value to allow for orientation distinction. Similarly, the lane heading difference (LHD for short) was adjusted to also be a signed value. These improvements yield increased learning abilities for on-road driving, for returning to on-road driving when off-road, and for a general sense of 'awareness' in an arbitrary environment.

## II Methodology

### _System Framework_

The general DRL learning cycle is an iterative learning process based on the agent's performance in the environment, which is influenced by the agent's actions. In mathematical terms, automated driving can be modeled as a Markov Decision Process (MDP) [16]. An MDP captures the features of sequential decision-making. The components of an MDP include environments, agents, actions, rewards, and states. In this study, the system framework which illustrates the corresponding MDP is depicted in Fig 2. The system generally consists of five main elements, i.e., environment, agent, action, state, and reward, which will be elaborated in detail in this section.
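The iterative learning cycle described above can be sketched as a generic rollout loop. _highway-env_ environments follow a Gym-style `reset`/`step` interface; the toy `CountdownEnv` below is only a stand-in so the sketch runs on its own:

```python
class CountdownEnv:
    """Toy stand-in environment (not highway-env): reward 1 per step,
    episode terminates after n steps."""
    def __init__(self, n):
        self.n = n

    def reset(self):
        self.t = 0
        return self.t                     # initial state

    def step(self, action):
        self.t += 1
        done = self.t >= self.n
        return self.t, 1.0, done          # next state, reward, terminal flag

def run_episode(env, policy, max_steps=1000):
    """Generic MDP rollout: observe a state, pick an action with the
    policy, and accumulate the reward returned by the environment."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```

`run_episode(CountdownEnv(5), lambda s: 0)` returns 5.0; with _highway-env_, the policy would instead map the observed state to a steering/acceleration action.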
### _DRL MDP Elements_

_Environment_: To simulate the MDP, this study adopted the _highway-env_ platform [17], a Python-based package that offers a variety of driving environments. As a widely used platform, ample research has been conducted using _highway-env_, such as [18, 19]. In _highway-env_, six dedicated driving scenarios are available, i.e., Highway, Merge, Roundabout, Intersection, Racetrack, and Parking. Users can also customize environments by specifying the number of lanes, the size of a given roundabout, and other parameters. In this study, all the driving scenarios, except for Highway and Parking, are covered. For training and evaluating a uniform driving model, this study designed a new simulation environment, named _'ComplexRoads'_ (shown in Fig 1). _'ComplexRoads'_ integrates two highway merging scenarios, two four-way intersections, two roundabouts, and several segments of multi-straight lanes. The DRL models trained only on _ComplexRoads_ were tested and evaluated in the specific driving maneuvers originally available in _highway-env_.

_Agent_: A kinematic bicycle model is used to represent the vehicle as the agent of the MDP. Despite its simplicity, a kinematic bicycle model is able to represent actual vehicle dynamics [20].

_Action_: An action taken by the agent in the proposed MDP is an element of the constructed _Action Space_. In this study, the two dimensions of the Action Space \(\mathcal{A}\) are acceleration (throttle) and the steering angle of the front wheels. Depending on the DRL algorithm, \(\mathcal{A}\) is either of the form \(\left[-\frac{\pi}{2},\frac{\pi}{2}\right]\times\left[-5,5\right]\) for algorithms requiring a continuous action space, or \(\{\delta_{1},\ldots,\delta_{n}\}\times\{\alpha_{1},\ldots,\alpha_{m}\}\) in the \(n\times m\) discrete case. Hence, \((\delta,\alpha)\in\mathcal{A}\), where steering is denoted by \(\delta\) and acceleration is denoted by \(\alpha\).
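For the discrete case, one possible construction is an evenly spaced \(n\times m\) grid over the two continuous ranges. This is only an illustrative sketch; the paper does not specify how the \(\delta_{i}\) and \(\alpha_{j}\) are chosen:

```python
import itertools
import math

def discrete_action_space(n_steer=5, m_accel=5,
                          steer_range=(-math.pi / 2, math.pi / 2),
                          accel_range=(-5.0, 5.0)):
    """Build an n x m discretization of the continuous action space
    [-pi/2, pi/2] x [-5, 5] as (steering, acceleration) pairs."""
    def grid(lo, hi, k):
        # k evenly spaced values from lo to hi (inclusive)
        step = (hi - lo) / (k - 1)
        return [lo + i * step for i in range(k)]
    return list(itertools.product(grid(*steer_range, n_steer),
                                  grid(*accel_range, m_accel)))
```

For example, `discrete_action_space(3, 3)` yields nine `(steering, acceleration)` pairs, including `(0.0, 0.0)`, i.e., drive straight at constant speed.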
_State_: As illustrated in Fig 2, the state in the proposed MDP includes the ego AV's state, e.g., location \((x,y)\), velocity \((v_{x},v_{y})\), and heading direction, together with the surrounding vehicles' states and road conditions, and is directly accessible at each time frame to the ego car, either in absolute terms or relative to itself. _Reward_: The customized _Reward_ function is elaborated in detail in the following subsection \(C\). Fig. 1: The layout of the _ComplexRoads_ environment Fig. 2: The system framework-illustration of the DRL MDP. ### _Reward Function_ For training the models, this study used the reward function already present in the _highway-env_ package (referred to as the baseline reward, illustrated in the middle of Fig 2) and the modified and upgraded reward function developed in this study. The model performances were compared to demonstrate that the upgraded reward is better than the baseline reward. During training, it was observed that in the early stages the trained agent car would sometimes drive off the road. To make the training more efficient in handling off-road driving and to stimulate the agent to return to on-road driving, one specific contribution of this study is to adjust the distance measure between the agent and the lane, along with constructing the lane heading difference measure illustrated in the following paragraphs. Let \(c\) denote the ego car agent and \(\mathcal{L}\) the corresponding lane. A lane is a collection of lane points \(l\in\mathcal{L}\). 
Now define \(l^{\prime}\) as the lane point with the shortest Euclidean distance to the car, meaning \[l^{\prime}:=\operatorname*{arg\,min}_{l\in\mathcal{L}}d(c,l) \tag{1}\] and define the orientation \(\omega\) of the car \(c\) with respect to a lane point \(l\) as follows \[\omega(c,l)=\begin{cases}1&\text{if the car is located left of $l$}\\ -1&\text{otherwise}\end{cases} \tag{2}\] Then, this study defines the signed distance between the ego car and the lane as the orientation times the shortest distance from the ego car \(c\) to any point \(l\) on lane \(\mathcal{L}\), meaning \[d(c,\mathcal{L})=\omega(c,l^{\prime})d(c,l^{\prime}) \tag{3}\] The car heading and lane point heading are denoted by \(c_{\varphi}\) and \(l_{\varphi}\) respectively; both values lie within the angle range \((-\pi,\pi]\). Now, the lane heading difference (LHD) is defined as \[\mathrm{LHD}=\begin{cases}l_{\varphi}-c_{\varphi}+2\pi&\text{if $l_{\varphi}-c_{\varphi}<-\pi$}\\ l_{\varphi}-c_{\varphi}-2\pi&\text{if $l_{\varphi}-c_{\varphi}>\pi$}\\ l_{\varphi}-c_{\varphi}&\text{otherwise}\end{cases} \tag{4}\] An important remark on this setup: if \(\text{sgn}(\mathrm{LHD})\cdot\text{sgn}(d(c,\mathcal{L}))<0\), then the car is heading back toward the lane. Similarly, if \(\text{sgn}(\mathrm{LHD})\cdot\text{sgn}(d(c,\mathcal{L}))>0\), the car is deviating (further) from the lane. Finally, denoting the velocity of the ego car \(c\) by \(c_{v}\), the reward function \(R:\mathbb{R}^{3}\rightarrow\mathbb{R}\), with regard to _state_ \(S\), is defined as \[R_{S}(c,\mathcal{L})=\begin{cases}\frac{\cos(\mathrm{LHD})\cdot c_{v}}{20\cdot\max(1,|d(c,\mathcal{L})|)}&\text{if $c_{v}\geq 0$}\\ 0&\text{otherwise}\end{cases} \tag{5}\] where \(\mathrm{LHD}\) is the lane heading difference between the ego car and the closest lane point. However, if the car crashes during the simulation, the reward is automatically set to -10, regardless of the _state_. 
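A minimal Python sketch of Equations (4) and (5), taking the signed lane distance \(d(c,\mathcal{L})\) as a precomputed input (the function names are illustrative, not from the highway-env API):

```python
import math

def lane_heading_difference(l_phi, c_phi):
    """Wrap l_phi - c_phi into (-pi, pi], following Eq. (4)."""
    d = l_phi - c_phi
    if d < -math.pi:
        return d + 2 * math.pi
    if d > math.pi:
        return d - 2 * math.pi
    return d

def reward(c_v, lhd, signed_lane_dist, crashed=False):
    """State reward of Eq. (5); a crash overrides everything with -10."""
    if crashed:
        return -10.0
    if c_v < 0:
        return 0.0
    return math.cos(lhd) * c_v / (20 * max(1, abs(signed_lane_dist)))
```

Driving on the lane centre at speed 20 with zero heading difference then yields a reward of \(\cos(0)\cdot 20/20=1\), matching the stated scaling toward 1 under optimal circumstances.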
The reward function, as defined in Equation 5, rewards the car for its 'effective' speed on the road, defined by the cosine of the angular difference between the direction the car is driving in and the direction in which the road goes, multiplied by the speed of the car. With this design, both an increase in the driving speed and driving in line with the road heading will result in high rewards. Moreover, the value is divided by the lane offset to punish the car for driving off-road, and also divided by 20 to scale the reward function to remain close to 1 under optimal circumstances. ### _DRL Algorithms_ Regarding DRL algorithms, TRPO [21] and DQN [22] were customized and implemented. Details of the DRLs, including hyperparameter settings, are elaborated in the supplementary at [https://shorturl.at/oLP57](https://shorturl.at/oLP57), while Section IV presents the results comparing the trained DRLs' performances. ### _Evaluation of the Models_ To evaluate and compare the model performance, one needs a set of indicators and metrics. To this end, this study implemented a performance logger that measures and stores various indicators when testing a model in a given environment. These indicators are measured for a set number of runs, and the logger then prints the average values over all the runs. The measured indicators are: 1) Speed, 2) Peak jerk, 3) Total jerk, 4) Total distance, 5) Total steering, 6) Running time, 7) Lane time (rate of time the car is running within the road), and 8) Rate of collision. The jerk is defined as the difference between the current and the previous action of a vehicle, consisting of both the steering angle and the acceleration. The magnitude of the total jerk reflects the degree to which the vehicle's motion changes abruptly and frequently, where a higher value of the total jerk implies a less comfortable driving experience. The jerk is defined by the equations in (6): \[\begin{split} J_{\text{acceleration}}=\frac{a_{t}-a_{t-1}}{a_{\text{max}}-a_{\text{min}}}\\ J_{\text{steering}}=\frac{w_{t}-w_{t-1}}{w_{\text{max}}-w_{\text{min}}}\\ J_{\text{total}}=\frac{J_{\text{acceleration}}+J_{\text{steering}}}{2}\end{split} \tag{6}\] The total steering is defined as the total sum of steering the car performs in the course of an evaluation, measured in angles. A higher amount of steering could, to a certain extent, imply less efficient driving with unnecessary steering. The online rate is defined as the amount of time the evaluated car spends driving on the lane, divided by the total amount of time the car spends driving. The collision rate is defined as the total number of collisions the car makes, divided by the total number of evaluation trials. Fig. 3: Four different off-road scenarios showcasing available environment observations of the ego car. Both lane heading and car heading are portrayed by vectors. The lane distance and LHD are shown for the ego car \(c\) with respect to the lane point \(l^{\prime}\). The sign is orientation based: if the car is located left of the road, the Euclidean distance is perceived as positive, and negative if located right of the road. ## III Experiments This study conducted intensive experiments to train and evaluate DRL models using the TRPO and DQN algorithms on four environments provided by _highway-env_, and also on the newly self-designed _ComplexRoads_. The models were trained using both the original standard reward function provided by _highway-env_ (which served as the baseline) and the customized reward function. The hyperparameters used for training can be found in the appendix at [https://shorturl.at/oLP57](https://shorturl.at/oLP57). The models were trained on the supercomputer Delft Blue [23]. For every environment, ten models were trained and saved for 10,000 and 100,000 iterations. After training, the model performance was tested over 10 runs. 
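The jerk indicator of Equation (6) and the two rate indicators used in this evaluation can be sketched as follows (the default action ranges are assumptions for illustration; highway-env's actual bounds may differ):

```python
def jerk(a_prev, a_t, w_prev, w_t,
         a_min=-5.0, a_max=5.0, w_min=-1.57, w_max=1.57):
    """Normalized total jerk of Eq. (6): per-component action change
    scaled by the action range, then averaged over the two components."""
    j_acc = (a_t - a_prev) / (a_max - a_min)
    j_steer = (w_t - w_prev) / (w_max - w_min)
    return (j_acc + j_steer) / 2

def online_rate(lane_time, total_time):
    """Fraction of driving time spent on the lane."""
    return lane_time / total_time

def collision_rate(collisions, trials):
    """Collisions per evaluation trial."""
    return collisions / trials
```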
During the performance testing, constraints such as a maximum running time, a minimum speed, and whether a crash had occurred were adopted. To obtain an overall assessment, the average of all these 10 testing results was calculated. To assess how well the models perform as a uniform driving model, they were not only tested in their trained environments, but also cross-evaluated in other, different environments. Through this cross-evaluation, the effectiveness of the newly designed environment _ComplexRoads_ can be verified. The experiment testing results are summarized and discussed in Section IV. ## IV Results and discussion Tables I, II, III, IV, and V present the average performances of the DRL models trained on five environments and evaluated on the same respective environment. For every model variant in one specific environment, this study trained it 10 times and also evaluated it 10 times to obtain the average performance indicators. This paper writes "1*" when the number is rounded to 1, but not quite equal to 1. With the letters "B" and "M", this paper refers to whether the baseline reward function or the modified reward function was used in training the model. Meanwhile, Tables VI, VII, VIII, IX, X, and XI present the average performances of the implemented DRL models trained in their own environment, but evaluated in other, different environments. This evaluates how adaptive these models are. In order to save space, these tables leave out some of the 'less important' indicators, which can be found in the appendix at [https://shorturl.at/oLP57](https://shorturl.at/oLP57). One needs to note that for the Merge environment and the self-designed _ComplexRoads_, no baseline reward functions are available, so only the models trained with the modified and upgraded reward (indicated with "-M") were evaluated. Also, for cross-environment evaluation, only models with the modified reward were evaluated. 
While there might be various ways to express that one model outperforms another, it is important to prioritize safety as the main concern. Therefore, the measured values that this study considers the most important are the online rate and the collision rate, which reflect driving safety. Other values, such as speed or jerk, are less important but can be compared in cases where the online and collision rates are similar. From Tables I, II, III, IV and V, one can see that in most cases the DQN with modified reward function (DQN-M) and the TRPO with modified reward function (TRPO-M) outperform the DQN and TRPO models with the baseline reward functions, especially with regard to the online rate. Between the DQN and TRPO models, the models trained by TRPO tend to perform somewhat better in most cases. Furthermore, looking at Tables VI, VII, VIII, IX, X and XI, it is observed that the models trained on _ComplexRoads_ indeed tend to perform better than the other models in the cross-evaluation, especially in keeping a high online rate. This is due to the various traffic situations represented in the _ComplexRoads_ environment, as well as the fact that the starting location of the car during training on _ComplexRoads_ was randomized, meaning that the car can experience various driving situations. This also prevents the model from merely 'memorizing' the environment, instead encouraging it to master the maneuvers needed to interact with randomly generated environments. Due to the size of _ComplexRoads_, training on it was very computationally intensive, especially with a large number of simulated surrounding cars. Non-ego cars get destinations assigned randomly and drive around scripted, meaning they follow deterministic driving rules to drive 'perfectly' and receive a new destination upon reaching the previous one. 
Thus, this study opted to train the model with relatively few surrounding cars, meaning that the model does not get to interact with other cars as often as in the other environments. This resulted in a higher collision rate when the model was evaluated in the other environments with more surrounding cars. When computational resources are abundant, adding more surrounding cars into the _ComplexRoads_ environment can mitigate this reduced awareness of the ego car. All in all, it is verified that the designed _ComplexRoads_ indeed contributes to the training of a more flexible and adaptive driving model. All the testing scenarios and results are better demonstrated in the appendix, with the demo videos also provided at [https://shorturl.at/oLP57](https://shorturl.at/oLP57). ## V Conclusion This study first summarized the utilization of DRL in every specific automated driving task, e.g., lane-keeping, lane-changing, overtaking, and ramp merging, then customized and implemented two widely used DRLs, i.e., DQN and TRPO, to tackle various driving maneuvers, and carried out a comprehensive evaluation and comparison of the model performance. Based on highway-env, a modified and upgraded reward function was designed for training the DRL models. Furthermore, a new integrated training environment, _ComplexRoads_, was constructed, and several built-in functions were upgraded. Through various experiments, it is verified that the models trained using the modified reward generally outperformed those with the original baseline reward, and the newly constructed _ComplexRoads_ demonstrated effective performance in training a uniform model that can tackle various driving tasks rather than one specific maneuver. As a preliminary study, the findings will provide meaningful and instructive insights for future studies towards developing automated driving with DRL and simulation. 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Indicator** & **Intersection** & **Racetrack** & **Roundabout** \\ \hline speed & 9.87 & 9.85 & 9.38 \\ \hline tot. distance & 14.2 & 437 & 349 \\ \hline runtime & 22.2 & 632 & 486 \\ \hline online rate & 0.886 & 0.399 & 0.159 \\ \hline col. rate & 0.06 & 0.16 & 0.38 \\ \hline \end{tabular} \end{table} TABLE X: TRPO-M trained on Merge evaluated in other various environments \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Indicator** & **Intersection** & **Merge** & **Roundabout** \\ \hline speed & 9.69 & 29.7 & 7.38 \\ \hline tot. distance & 50.9 & 304 & 113 \\ \hline runtime & 79.3 & 154 & 239 \\ \hline online rate & 0.996 & 0.971 & 0.849 \\ \hline col. rate & 0.67 & 0.6 & 0.76 \\ \hline \end{tabular} \end{table} TABLE XI: TRPO-M trained on Racetrack evaluated in other various environments \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Indicator** & **Racetrack** & **Merge** & **Intersection** \\ \hline speed & 10.7 & 30.6 & 10.1 \\ \hline tot. distance & 156 & 335 & 22.3 \\ \hline runtime & 224 & 164 & 32.6 \\ \hline online rate & 0.954 & 0.955 & 0.968 \\ \hline col. rate & 0.97 & 0.2 & 0.05 \\ \hline \end{tabular} \end{table} TABLE VIII: DQN-M trained on Roundabout evaluated in other various environments \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Indicator** & **Racetrack** & **Merge** & **Roundabout** \\ \hline speed & 9 & 30.9 & 8.91 \\ \hline tot. distance & 137 & 477 & 236 \\ \hline runtime & 253 & 228 & 345 \\ \hline online rate & 0.999 & 0.97 & 0.527 \\ \hline col. rate & 0.57 & 0.1 & 0.68 \\ \hline \end{tabular} \end{table} TABLE IX: TRPO-M trained on Intersection evaluated in other various environments \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Indicator** & **Racetrack** & **Roundabout** & **Merge** & **Intersection** \\ \hline speed & 10.2 & 8.3 & 30.6 & 10 \\ \hline tot. distance & 180 & 200 & 377 & 59.3 \\ \hline runtime & 275 & 349 & 185 & 89.5 \\ \hline online rate & 0.998 & 0.602 & 0.935 & 0.998 \\ \hline col. rate & 0.92 & 0.79 & 0.3 & 0.52 \\ \hline \end{tabular} \end{table} TABLE VI: DQN-M trained on _ComplexRoads_ evaluated in other various environments \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Indicator** & **Racetrack** & **Roundabout** & **Merge** & **Intersection** \\ \hline speed & 9.99 & 8.95 & 29.8 & 10.3 \\ \hline tot. distance & 130 & 195 & 339 & 59.7 \\ \hline runtime & 222 & 289 & 172 & 87.3 \\ \hline online rate & 1* & 0.647 & 0.996 & 0.999 \\ \hline col. rate & 0.82 & 0.76 & 0.1 & 0.51 \\ \hline \end{tabular} \end{table} TABLE VII: TRPO-M trained on _ComplexRoads_ evaluated in other various environments One feature that was implemented by this study, but removed due to time constraints and a lack of computational resources, was training the cars to reach a specific destination in the designed _ComplexRoads_ environment, which requires more training interactions and perhaps the implementation of a path-finding and optimization algorithm. In particular, providing a metric distance from the ego car to the destination incentivized the car to take road options that seem to directly reduce the distance, which means the car often chose poorly and got punished by the distance increasing before decreasing. While this approach might work for grid-like city structures, it confuses the learning process in general. 
Nevertheless, alternative direct navigation reward designs are a very interesting direction for further research.
2302.03595
Exploring quantum mechanical advantage for reservoir computing
Quantum reservoir computing is an emerging field in machine learning with quantum systems. While classical reservoir computing has proven to be a capable concept of enabling machine learning on real, complex dynamical systems with many degrees of freedom, the advantage of its quantum analogue is yet to be fully explored. Here, we establish a link between quantum properties of a quantum reservoir, namely entanglement and its occupied phase space dimension, and its linear short-term memory performance. We find that a high degree of entanglement in the reservoir is a prerequisite for a more complex reservoir dynamics that is key to unlocking the exponential phase space and higher short-term memory capacity. We quantify these relations and discuss the effect of dephasing in the performance of physical quantum reservoirs.
Niclas Götting, Frederik Lohof, Christopher Gies
2023-02-07T17:07:28Z
http://arxiv.org/abs/2302.03595v2
# Exploring quantum mechanical advantage for reservoir computing ###### Abstract Quantum reservoir computing is an emerging field in machine learning with quantum systems. While classical reservoir computing has proven to be a capable concept of enabling machine learning on real, complex dynamical systems with many degrees of freedom, the advantage of its quantum analogue is yet to be fully explored. Here, we establish a link between quantum properties of a quantum reservoir, namely entanglement and its occupied phase space dimension, and its linear short-term memory performance. We find that a high degree of entanglement in the reservoir is a prerequisite for a more complex reservoir dynamics that is key to unlocking the exponential phase space and higher short-term memory capacity. We quantify these relations and discuss the effect of dephasing in the performance of physical quantum reservoirs. Quantum reservoir computing, quantum machine learning, quantum entanglement, dynamical systems ## Introduction Machine learning models based on artificial neural networks (ANNs) are well established and have transformative potential on a global scale. These models typically rely on the optimization of thousands or even billions of parameters [1]. Such huge networks are optimized with very large sets of training data using excessive amounts of energy in the process. An alternative approach lies in the implementation of ANNs as physical systems [2]. Classical reservoir computing (RC) is a field that has emerged from neuromorphic computing and aims at using the natural dynamics of complex dynamical system for information processing tasks [3]. It uses complex dynamics of the internal degrees of freedom of the system, termed the reservoir, that can take on different forms [4] such as mechanical systems [5] or strongly scattering media in optical setups [6]. 
In contrast to conventional machine learning, the internal degrees of freedom within the reservoir need not be optimized, but are random. The training process is only a linear regression on the readout weights that are applied to measurements of a small number of degrees of freedom of the reservoir. The capability of physical reservoir computing has been proven in several key experiments [7; 8]. RC with quantum mechanical properties has only recently become a research objective [9; 10; 11]. The advantages that quantum mechanics brings to the table are two-fold: Quantum input can be processed natively by a computing platform that is governed by the laws of quantum mechanics itself. As such, the quantum reservoir computing (QRC) concept has been shown to be capable of classification of entangled input states [12; 13]. The second advantage lies in the dimensionality of the Hilbert space that grows exponentially with system size. In principle, the exponential scaling is able to outgrow the parameter space of any classical system, wherein lies the promise for applications in machine learning. Only now, different aspects of QRC are beginning to be investigated [14; 15], such as the role of the Hilbert space dimension [16; 17], its robustness to noise [18], and the origin of non-linearity in QRC from the underlying linear quantum dynamics [16; 19; 20]. The degree to which dissipation, normally considered a disadvantage in quantum computing and quantum machine learning, can aid the information processing capabilities of QRC has been investigated [9; 21]. Superposition and entanglement [22] are at the heart of quantum mechanics, and from our physical intuition of the role of entanglement in quantum computing, we derive the hypothesis that the existence of non-classical correlations is key to the performance of QRC. Furthermore, while the exponential scaling of the Hilbert space with system size holds many prospects, it is _a priori_ not clear how, and even if, a given QRC system leverages the full available phase space for computation. Figure 1: Quantum dynamics of a 3-qubit reservoir system. (a) Readout nodes \(\langle\sigma_{\mathrm{z}}^{(i)}\rangle\). The input map \(S_{k}\) sets the input qubit to a well defined pure state \(|\psi_{s_{k}}\rangle\) at time intervals \(h\Delta_{t}=5\). Magenta lines indicate the average over \(N=50\) randomly sampled Hamiltonians (cyan). The weight vector \(\mathbf{W}\) combines the node trajectories into the output signal \(y\). (b) Logarithmic negativity \(E_{\mathrm{N}}\) for all possible bipartitions. After each input the input qubit is completely disentangled from the other qubits. The unitary dynamics (re)-entangles the whole system on a time scale proportional to \(J_{0}\). (c) \(E_{\mathrm{N}}\) of partition \(1|23\) after input shows a larger entangling rate for increasing \(J_{0}\). In this letter we approach these questions and quantify the relation of entanglement between different parts of the QRC system and its utilization of the available quantum phase space for computation. To that end we introduce a measure of the effective phase space dimension of the quantum reservoir dynamics and investigate the effect on the linear short-term memory capacity as a measure of the QRC performance. We find that the degree of entanglement in the system is directly linked to the dimension of the used phase space, which here consistently remains below the theoretical maximum. We furthermore discuss the role of dephasing mechanisms for the complexity of the reservoir dynamics and its effect on the memory capacity of the QRC. _Quantum systems as reservoir. --_ Following the approach of Ref. 
[9], the physical system that we consider as a quantum reservoir is given by a _transverse-field Ising model_ [23; 24] of \(N\) qubits with the Hamiltonian (\(\hbar=1\)) \[H=h\sum_{i=1}^{N}\sigma_{\mathrm{z}}^{(i)}+\sum_{i\neq j}J_{ij}\,\sigma_{ \mathrm{x}}^{(i)}\sigma_{\mathrm{x}}^{(j)}, \tag{1}\] with \(2h\) the single-qubit energy, which we choose to be equal for all qubits, and \((J_{ij})\) the symmetric qubit coupling matrix. The values \(J_{ij}\) are sampled randomly from the real interval \([-1,1]\) and are then normalized in such a way that the maximal absolute-value eigenvalue of the matrix \((J_{ij})\) is given by the parameter \(J_{0}\), which we refer to as the _spectral radius of the coupling_ or the _coupling strength_. In this way, \(J_{0}\) provides the ability to consistently tune the time scale on which the system evolves, even with randomly selected couplings. While the physical implementation of the reservoir is a system of \(N\) qubits, the number of independent internal properties exploitable as reservoir nodes is much larger. Each spin degree of freedom and correlations thereof are affected non-trivially by the system dynamics and act as reservoir nodes. These correlations are key to unlocking the exponential (\(4^{N}-1\)) scaling of the phase space dimension of the quantum reservoir and, being sufficiently pronounced, have no classical counterpart [25]. The combined properties of exponential scaling and non-classicality are key prerequisites for a possible quantum advantage. To operate the QRC, a method to input information into the physical system is needed. Here, the discrete-time input signal \(s_{k}\in[0,1]\) is injected into the reservoir system via state initialization. 
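A NumPy sketch of Eq. (1) with the spectral-radius normalization of the coupling matrix described above (the seed, field strength, and system size below are arbitrary choices for illustration):

```python
import numpy as np

SX = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli x
SZ = np.diag([1.0, -1.0])                 # Pauli z

def op_on(site_op, i, n):
    """Embed a single-qubit operator on site i of an n-qubit register."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, site_op if k == i else np.eye(2))
    return out

def ising_hamiltonian(n, h, j0, seed=0):
    """Transverse-field Ising Hamiltonian of Eq. (1); couplings J_ij are
    sampled from [-1, 1], symmetrized, and rescaled so that the spectral
    radius of (J_ij) equals j0."""
    rng = np.random.default_rng(seed)
    J = rng.uniform(-1.0, 1.0, size=(n, n))
    J = (J + J.T) / 2
    np.fill_diagonal(J, 0.0)
    J *= j0 / np.max(np.abs(np.linalg.eigvalsh(J)))
    H = sum(h * op_on(SZ, i, n) for i in range(n))
    for i in range(n):
        for j in range(n):
            if i != j:
                H = H + J[i, j] * op_on(SX, i, n) @ op_on(SX, j, n)
    return H
```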
In our model this is realized mathematically by the completely positive trace preserving (CPTP) map [9] \[\rho\mapsto|\psi_{s_{k}}\rangle\langle\psi_{s_{k}}|\otimes\mathrm{Tr}_{1}[ \rho], \tag{2}\] where \(\mathrm{Tr}_{1}\) denotes the partial trace over the input qubit, taken to be qubit \(1\), and the input-encoding pure state is given by \(|\psi_{s_{k}}\rangle=\sqrt{1-s_{k}}|0\rangle+\sqrt{s_{k}}|1\rangle\). This operation corresponds to a projective measurement of qubit \(1\) and discarding the measurement outcome, which is modelled by taking the partial trace, and subsequently preparing the input qubit in the state \(|\psi_{s_{k}}\rangle\). The resulting time evolution in the time interval \(\Delta_{t}\) between two successive inputs is \(\rho(t+\Delta_{t})=U_{\Delta_{t}}S_{k}(\rho(t))U_{\Delta_{t}}^{\dagger}\), where \(S_{k}\) is the superoperator encoding the input operation, and \(U_{\Delta_{t}}=\exp(-\mathrm{i}H\Delta_{t})\) is the unitary time evolution determined by the system Hamiltonian. As the reservoir's readout signal we consider the expectation values of the spin components \(\langle\sigma_{\mathrm{z}}^{(i)}\rangle\). Their exemplary temporal behavior is shown in Fig. 1(a). Marked by the grey dashed lines are the times at which the input is injected into the first qubit. It is evident how this directly affects its state as the input qubit is set to \(|\psi_{s_{k}}\rangle\) and the value of \(\langle\sigma_{\mathrm{z}}^{(1)}\rangle\) changes abruptly to the encoded input \(s_{k}\). The measurement process of the \(\langle\sigma_{\mathrm{z}}^{(i)}\rangle\) is interpreted in an ensemble picture neglecting backaction. Protocols taking the backaction into account, either by rewinding or spatial multiplexing, as well as the influence of finite ensembles, or schemes involving weak measurements, have been put forward in the literature [26; 27; 28]. 
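The input map of Eq. (2) and one reservoir update step might be implemented as follows (NumPy only; the matrix exponential is evaluated via an eigendecomposition since \(H\) is Hermitian):

```python
import numpy as np

def inject_input(rho, s, n):
    """CPTP map of Eq. (2): trace out qubit 1 and re-prepare it in
    |psi_s> = sqrt(1-s)|0> + sqrt(s)|1>."""
    d_rest = 2 ** (n - 1)
    r = rho.reshape(2, d_rest, 2, d_rest)
    rho_rest = np.einsum('iaib->ab', r)        # partial trace over qubit 1
    psi = np.array([np.sqrt(1 - s), np.sqrt(s)])
    return np.kron(np.outer(psi, psi.conj()), rho_rest)

def unitary(H, dt):
    """U = exp(-i H dt) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def step(rho, s, U, n):
    """One reservoir update: input injection followed by unitary evolution."""
    return U @ inject_input(rho, s, n) @ U.conj().T
```

Both operations preserve the trace of \(\rho\), so iterating `step` over an input sequence yields a valid density-matrix trajectory.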
We employ a \(V\)-fold temporal multiplexing of the \(N\) readout signals by dividing the time interval between successive inputs and sampling the readout nodes at time intervals \(\Delta_{t}/V\). This method allows us to train on \(NV\) _virtual_ readout nodes and has been shown to improve reservoir performance significantly [9]. In our experiments we choose \(V=10\). More detailed information on the technical implementation is provided in the _Supplementary Material_. Furthermore, we only use the spin-z components \(\langle\sigma_{\mathrm{z}}^{(i)}\rangle\) of the \(N\) qubits in the network as the readout nodes for simplicity. We refrain from additionally recording two- and multi-qubit correlations of the form \(\langle\sigma_{\mathrm{z}}^{(i)}\dots\sigma_{\mathrm{z}}^{(j)}\rangle\), even though these would come for free in a measurement of the single-qubit expectation values in a physical implementation [29]. In general a variety of different state properties are conceivable, the feasibility of which will depend on the concrete physical implementation of the reservoir [19]. The training process of the QRC in this setup is equivalent to that of a classical reservoir computer [30] in that the multiplexed readout signals are multiplied by the weight vector \(\mathbf{W}=(w_{0},w_{1},\dots)^{\intercal}\) to receive the output signal as illustrated in Fig. 1(a) (see _Supplementary Material_ for more details). The components of \(\mathbf{W}\) are the only parameters in our QRC approach that are being trained. _Entanglement in QRC. --_ In this letter we investigate how the presence of entanglement correlates with the quantum reservoir's memory capacity as a measure of its performance. 
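Training the weight vector \(\mathbf{W}\) on the \(NV\) multiplexed readout signals is plain linear regression; a sketch is given below (the small ridge term is an implementation choice for numerical stability, not specified in the text):

```python
import numpy as np

def train_readout(X, y, reg=1e-8):
    """Fit W for y ~ [1, X] @ W by (ridge-regularized) least squares.
    X: (T, N*V) matrix of multiplexed readout signals, y: target values."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # bias weight w_0
    A = Xb.T @ Xb + reg * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def readout(W, X):
    """Output signal y = [1, X] @ W."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return Xb @ W
```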
In order to quantify entanglement we consider the logarithmic negativity \[E_{\mathrm{N}}(\rho)=\log_{2}\left\|\rho^{\Gamma_{\mathrm{A}}}\right\|_{1}, \tag{3}\] \(\left\|\cdot\right\|_{1}\) denoting the trace norm, while \(\rho^{\Gamma_{\mathrm{A}}}\) is the partial transpose of \(\rho\) with respect to subsystem A [25; 31; 32]. As an entanglement measure it is easy to compute and provides a sufficient condition to rule out separability between two subsystems, as \(E_{\rm N}(\rho)=0\) for all separable states [33; 34]. This implies that, while an entangled state can exhibit a logarithmic negativity of zero, a finite logarithmic negativity is a definite sign of entanglement. There are other possible ways to quantify entanglement, but they are either computationally hard to obtain, or do not generalize easily to mixed states of more than two qubits, such as entropy of entanglement [35] and concurrence [36]. We furthermore refrain from discussing bound entangled states, as we are interested in the geometric properties of the occupied phase space instead of information-theoretical properties like distillable entanglement [37]. Entanglement negativity is routinely employed in the investigation of many-body systems [38; 35], quantum information applications [39; 40], and quantum field theories [41]. Furthermore, it has been shown that genuine multipartite entanglement is detected by the simultaneous non-separability of all bipartitions of the system [42]. The negativity time evolution for a QRC with input at intervals of \(\hbar\Delta_{t}=5\) is shown in Fig. 1(b), averaged over 50 Hamiltonians with differing coupling matrices, for all possible bipartitions of the three-qubit system. At every input injection, the drop of \(E_{\rm N}^{1|23}\) to 0 is clearly visible, whereas the two other bipartitions show a finite negativity, as qubit 2 and 3 remain entangled. 
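Equation (3) can be evaluated directly with a partial transpose and a trace norm; a NumPy sketch for a register of qubits:

```python
import numpy as np

def log_negativity(rho, n_qubits, part_A):
    """E_N = log2 || rho^{T_A} ||_1 of Eq. (3). part_A lists the qubit
    indices belonging to subsystem A of the bipartition."""
    dims = (2,) * n_qubits
    r = rho.reshape(dims + dims)
    perm = list(range(2 * n_qubits))
    for a in part_A:                      # transpose the A indices
        perm[a], perm[n_qubits + a] = perm[n_qubits + a], perm[a]
    rTA = r.transpose(perm).reshape(rho.shape)
    trace_norm = np.sum(np.linalg.svd(rTA, compute_uv=False))
    return float(np.log2(trace_norm))
```

For the two-qubit Bell state \((|00\rangle+|11\rangle)/\sqrt{2}\) this gives \(E_{\mathrm{N}}=1\), while any separable state gives 0, consistent with the discussion above.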
In systems with more qubits, we find the remaining negativity at the input injection steps for bipartitions other than \((1|\ldots)\) to be even larger, as the impact of the first qubit on the rest of the system becomes less pronounced. Here, we use the average negativity over all bipartitions of the QRC qubits as a measure of the entanglement present in the system at any point in time. As an averaged quantity, it allows us to deal with the statistical fluctuations that come with the randomly sampled system Hamiltonians used here. To obtain a single value for the negativity during the whole process of performing a memory task, the negativity is also averaged over all times, including the input and build-up stages. We use this procedure to define a measure of entanglement \(\bar{E}_{\rm N}\), which is a good indicator for the mean entanglement in the system during task execution. By tuning the coupling strength \(J_{0}\), we control the build-up rate of entanglement, shown in Fig. 1(c), which, at a constant input rate, leads to a direct control of \(\bar{E}_{\rm N}\). In quadrant I of Fig. 2 this connection is clearly visible, as with \(J_{0}\) increasing from 0.1 to 0.5, we observe a monotone increase of the mean entanglement \(\bar{E}_{\rm N}\). _Phase space dimension. --_ QRC aims at exploiting the exponential scaling of the phase space with the system size to leverage the opportunities of noisy intermediate-scale quantum (NISQ) computers for real-world tasks. Here, we investigate if a given Hamiltonian system actually utilizes all of that phase space efficiently, or if the quantum dynamics is confined to a lower-dimensional manifold [43; 44; 45]. To access this information we employ a measure called the _covariance dimension_, for which we view the system's state given by the density matrix \(\rho\) as a point in the real vector space \(\mathbb{R}^{d}\) with \(d=4^{N}\) (see _Supplementary Material_). 
Accordingly, the quantum dynamics of the system corresponds to a trajectory in that space. The covariance dimension is determined by the following procedure: Let the signal \(\mathbf{X}=(\mathbf{x}_{0},\mathbf{x}_{1},\ldots)\) be the matrix with columns \(\mathbf{x}_{i}\) representing points in \(\mathbb{R}^{d}\) along the trajectory of the system sampled at time intervals \(\Delta_{t}/V\). We choose an index point \(\mathbf{x}_{i_{0}}\) randomly and determine a cluster of at least \(d+1\) nearest neighbors that are combined into a matrix \(\mathbf{X}_{i_{0}}\). The covariance matrix of the cluster is then given by \[C_{\mathbf{X}_{i_{0}}}=\frac{1}{N_{d}-1}(\mathbf{X}_{i_{0}}-\bar{\mathbf{X}}_{i_{0}})(\mathbf{X}_{i_{0}}-\bar{\mathbf{X}}_{i_{0}})^{\intercal}, \tag{4}\] where \(N_{d}\) is the number of points in the cluster and \(\bar{\mathbf{X}}_{i_{0}}\) indicates the mean of the cluster. The covariance dimension \(d_{\rm c}(i_{0})\) of each particular cluster is found by performing a principal components analysis (PCA) and determining the number of principal components of \(C_{\mathbf{X}_{i_{0}}}\) that are larger than a cutoff value \(\varepsilon_{\rm c}\). In general, the cutoff value is related to the amount of noise in the reservoir dynamics and has an influence on the detectable covariance dimension. Here, we choose a value of \(\varepsilon_{\rm c}=10^{-6}\). The covariance dimension of the whole signal is found by averaging over many random index points \(i_{0}\), i.e. \[D_{\rm c}=\frac{1}{N_{I}}\sum_{i_{0}\in I}d_{\rm c}(i_{0}), \tag{5}\] where \(I\) is the set of random indices and \(N_{I}\) is the number of elements in this set. A sketch of this concept is given in Fig. 3 for a reservoir signal confined to a Möbius strip. In this example the signal is embedded in a three-dimensional space while the PCA of each cluster would reveal the two dimensions of the submanifold the signal is confined to.

Figure 2: All relations between coupling strength \(J_{0}\), mean logarithmic negativity \(\bar{E}_{\rm N}\), covariance dimension \(D_{\rm c}\) and linear short-term memory capacity \(C_{\rm STM}\). In general, we find that increasing the coupling strength results in an increase of all other shown quantities.

In quadrant II of Fig. 2 the relation of the covariance dimension \(D_{\rm c}\) to \(\bar{E}_{\rm N}\) is shown. We infer that systems with weak coupling strengths - and corresponding weak mean entanglement - only explore a small fraction of the theoretically available \(4^{3}-1=63\) phase space dimensions. In the investigated regime of coupling strengths we find a monotone relation between the mean negativity and the occupied phase space dimensions. We explain our observation by the fact that stronger coupling enhances the rate of change of the system's state vector, thus allowing it to explore a higher dimensional submanifold of the state space before collapsing again due to the input injection. The _Supplementary Material_ provides more information on the statistical distributions of the clusters' dimensions and of the covariance eigenvalues leading to the results shown in Fig. 2. _QRC performance._ -- In order to test our initial hypothesis of a positive correlation between reservoir entanglement and QRC performance, we investigate the linear short-term memory \(C_{\rm STM}\) as a simple but fundamental task in reservoir computing [3; 46]. Given an input sequence \((s_{k},s_{k-1},s_{k-2},\dots)\), the reservoir is tasked to produce the target sequence \(\hat{y}^{\tau}=(s_{k-\tau},s_{k-1-\tau},s_{k-2-\tau},\dots)\) with \(k,\tau\in\mathbb{N}\). 
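The covariance-dimension procedure of Eqs. (4) and (5) can be illustrated on the Möbius-strip example: sampling points on the strip, forming nearest-neighbor clusters, and counting significant principal components recovers the intrinsic dimension 2. The sketch below is our own; unlike the paper, it uses a cutoff relative to the largest eigenvalue rather than the absolute \(\varepsilon_{\rm c}=10^{-6}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points on a Möbius strip embedded in R^3
u = rng.uniform(0, 2 * np.pi, 5000)
v = rng.uniform(-0.3, 0.3, 5000)
X = np.stack([(1 + v * np.cos(u / 2)) * np.cos(u),
              (1 + v * np.cos(u / 2)) * np.sin(u),
              v * np.sin(u / 2)], axis=1)

def cluster_dimension(X, i0, n_neighbors=30, rel_cutoff=0.05):
    """Covariance dimension of one cluster, cf. Eq. (4): count principal
    components above a cutoff (here relative to the largest eigenvalue)."""
    d2 = np.sum((X - X[i0])**2, axis=1)
    cluster = X[np.argsort(d2)[:n_neighbors]]       # nearest-neighbor cluster
    C = np.cov(cluster.T)                           # (N_d - 1)-normalised covariance
    eig = np.sort(np.linalg.eigvalsh(C))[::-1]      # PCA = eigenvalues of C
    return int(np.sum(eig > rel_cutoff * eig[0]))

# Average over random index points, cf. Eq. (5)
dims = [cluster_dimension(X, i0) for i0 in rng.integers(0, len(X), 20)]
print(np.mean(dims))   # close to 2: the strip is a 2D submanifold of R^3
```

Locally each cluster is nearly flat, so two covariance eigenvalues dominate and the third (set by curvature) falls below the cutoff.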
The linear short-term memory for the time delay \(\tau\) is then given by the squared Pearson correlation coefficient \[C_{\rm STM}^{\tau}=\frac{\text{cov}^{2}(y,\hat{y}^{\tau})}{\sigma_{y}^{2} \sigma_{\hat{y}^{\tau}}^{2}}, \tag{6}\] where \(y\) is the reservoir output signal obtained after the QRC was trained on this particular task. Furthermore, \(\sigma_{y}\) is the standard deviation of \(y\), and \(\text{cov}(y,\hat{y}^{\tau})\) is the covariance between \(y\) and \(\hat{y}^{\tau}\). By definition, \(C_{\rm STM}^{\tau}\) lies in the interval \([0,1]\), with \(0\) indicating no memory capacity at all, and \(1\) a perfect reconstruction of the delayed input signal. As any reservoir computer has to fulfill the fading memory property [47], we can expect \(C_{\rm STM}^{\tau}\) to vanish for larger \(\tau\), enabling us to define the total memory capacity \[C_{\rm STM}=\sum_{\tau=0}^{\infty}C_{\rm STM}^{\tau}. \tag{7}\] For the perfect, noise-free system we are investigating so far, \(C_{\rm STM}\) has to be at least \(1\), as the capacity \(C_{\rm STM}^{0}\) for \(\tau=0\) is always \(1\). In the bottom part of Fig. 2, we show the memory capacity of the reservoir in relation to all aforementioned properties, while details on parameters such as training and test set sizes used for these results are given in the _Supplementary Material_. One can see that the weakly coupled 3-qubit QRC already has a \(C_{\rm STM}\) larger than one, implying an intrinsic memory capacity of the quantum network. Upon increasing the coupling strength, the memory capacity grows with the negativity and covariance dimension until it saturates around \(C_{\rm STM}=8\). We see this as a strong indicator that the memory capacity of QRCs benefits from the mean entanglement of their quantum states and the dimension of the submanifold they evolve on. Why the plateau in memory capacity emerges at larger \(J_{0}\) is, however, an open question that will be addressed in future research. 
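Eq. (6) is straightforward to evaluate from sampled sequences. In the sketch below (our own illustration, not the paper's code), an output that exactly reproduces the one-step-delayed input yields \(C_{\rm STM}^{1}=1\), while it is essentially uncorrelated with the undelayed input:

```python
import numpy as np

def stm_capacity(y, s, tau):
    """Squared Pearson correlation between output y and the tau-delayed
    input s, cf. Eq. (6)."""
    target = s[:-tau] if tau > 0 else s
    out = y[tau:] if tau > 0 else y
    c = np.cov(out, target)                       # 2x2 covariance matrix
    return c[0, 1]**2 / (c[0, 0] * c[1, 1])

rng = np.random.default_rng(1)
s = rng.uniform(0, 1, 1000)                       # random input sequence

y = np.roll(s, 1)                                 # perfect 1-step memory: y_k = s_{k-1}
c1 = stm_capacity(y, s, 1)                        # = 1: perfect recall at tau = 1
c0 = stm_capacity(y, s, 0)                        # near 0: iid input, no correlation
print(c1, c0)
```

Summing such terms over \(\tau\) as in Eq. (7) yields the total capacity \(C_{\rm STM}\).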
_Effect of dephasing on QRC performance._ -- As any physical system is subject to various degrees of dephasing by interaction with the environment, we investigate its effect on QRC entanglement and the corresponding change in performance. Already the ideal QRC scenario discussed so far possesses an intrinsic dephasing mechanism introduced by the input injection stage, which erases three quarters of the reservoir's state vector, pairing the injected information with the remaining quarter. To investigate the effect of dephasing more systematically, we introduce an additional dephasing to our system by subjecting all qubits in the QRC sequentially to the single-qubit dephasing map \[\rho\mapsto\left(\frac{1+e^{-2\gamma\Delta_{t}/V}}{2}\right)\rho+\left(\frac{1-e^{-2\gamma\Delta_{t}/V}}{2}\right)\sigma_{\rm z}^{(i)}\rho\sigma_{\rm z}^{(i)} \tag{8}\] with the dephasing rate \(\gamma\). In contrast to the dephasing induced by the input injection, this form of pure qubit dephasing is applied at each step of the time evolution, emulating a continuous interaction with the environment. The difference between both dephasing effects is illustrated in Fig. 4(a). Here, we consider a situation in which the input is injected at time intervals \(\hbar\Delta_{t}=5\). While the input-injection induced dephasing is applied discretely corresponding to a fixed strength, the rate of the continuous dephasing can be tuned via the parameter \(\gamma\). We choose an interval from no additional dephasing (\(\gamma=0\)) to strong dephasing (\(\gamma=0.25\)) and observe, as one may expect, that stronger dephasing hinders entanglement build-up, leading to smaller values of the mean entanglement \(\bar{E}_{\rm N}\). For a quantitative analysis, Fig. 4(b) shows the relation of covariance dimension \(D_{\rm c}\) and mean entanglement \(\bar{E}_{\rm N}\) for different dephasing rates \(\gamma\). 
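The map of Eq. (8) acts on a single qubit by shrinking the off-diagonal elements of \(\rho\) by \(e^{-2\gamma\Delta_{t}/V}\) per step while leaving the populations untouched. A short check (our own sketch, with `dt` standing for the elapsed interval \(\Delta_{t}/V\)):

```python
import numpy as np

def dephase(rho, gamma, dt):
    """Single-qubit dephasing map of Eq. (8) for one time step dt."""
    p = (1 + np.exp(-2 * gamma * dt)) / 2
    sz = np.diag([1.0, -1.0])
    return p * rho + (1 - p) * sz @ rho @ sz

rho = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|, maximal coherence
gamma, dt = 0.25, 0.5
rho1 = dephase(rho, gamma, dt)
# Coherences shrink by exp(-2*gamma*dt); populations are unchanged
print(rho1[0, 1], np.exp(-2 * gamma * dt) * 0.5)
print(rho1[0, 0])
```

Since \(\sigma_{z}\rho\sigma_{z}\) flips the sign of the off-diagonal elements, the weighted sum multiplies them by \(2p-1=e^{-2\gamma\Delta_{t}/V}\), which is the continuous analogue of the coherence erasure caused by the input injection.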
Figure 3: Visualization of the covariance dimension using the example of a Möbius strip (cyan). While embedded in three-dimensional space, the strip itself is two dimensional, which is revealed by PCA of individual clusters (magenta).

We find that the relation between \(D_{\rm c}\) and \(\bar{E}_{\rm N}\) persists also in the presence of additional dephasing, i.e. dephasing decreases the mean entanglement _and_ the covariance dimension to the same degree. As a result, the general shape of \(D_{\rm c}\) as a function of \(\bar{E}_{\rm N}\) changes only marginally, from which we infer a general functional dependence, the origin of which poses an open question for future work. When investigating the linear short-term memory capacity for varying dephasing strengths, we observe an interesting effect: for most coupling strengths up to about \(J_{0}=0.4\), a weak, but non-zero dephasing rate is found to increase \(C_{\rm STM}\). While the effect for strong coupling is only marginal, it gets more pronounced for weaker coupling strengths, leading to a more than \(20\,\%\) increase of the memory capacity at \(J_{0}=0.2\), as can be seen in Fig. 4(c). For such weak couplings, the memory performance of the QRC benefits even from higher dephasing rates. In any case, we conclude that stronger coupling and, with it, stronger mean entanglement and more occupied phase space dimensions lead to better memory performance in the analyzed coupling strength interval for a fixed value of the dephasing rate. _Conclusion and outlook. --_ We provide first results that relate the "quantumness" of a physical system to its performance as a QRC. We show that stronger mean entanglement and more occupied phase space dimensions are beneficial to its performance, measured in terms of the memory capacity of the QRC, and can be tailored via the coupling strength within the quantum network. 
Especially in the weak coupling regime, we find that subjecting the QRC to a small, but non-zero, dephasing can even yield a performance increase, contrasting the common perception from gate-based quantum computing and quantum machine learning. The connection between strictly quantum properties of the reservoir and QRC performance stirs hope for using quantum mechanical systems in analog machine learning. For QRC to become a relevant near-term technology, we must develop a clear understanding of its potential and limitations. Future investigations will have to go beyond idealized systems and focus on actual NISQ implementations, such as ANNs based on photonic lattices. ## Acknowledgements This project has been supported by the Deutsche Forschungsgemeinschaft (DFG) and the Agence nationale de la recherche (ANR) via the project _PhotonicQRC_ (Gi1121/6-1). F. Lohof acknowledges funding by the central research development fund (CRDF) of the University of Bremen.
2304.13408
Quantum-circuit algorithms for many-body topological invariant and Majorana zero mode
The topological state of matter is a potential resource to realize long-term fault-tolerant quantum computers beyond the near-term noisy intermediate-scale quantum devices. To achieve the realization, we need a deep understanding of topological behaviors in real quantum computers. However, quantum-circuit algorithms to analyze topological properties have still been insufficient. Here we propose three quantum-circuit algorithms, (i) to find the ground state in the selected parity subspace, (ii) to determine the many-body topological invariant, and (iii) to visualize the zero-energy edge mode. To demonstrate these algorithms, we adopt the interacting Kitaev chain as a typical model of many-body topological superconductors in one dimension. The algorithms are applicable to not only one-dimensional topological superconductors but other topological states including higher-dimensional systems.
Takanori Sugimoto
2023-04-26T09:41:58Z
http://arxiv.org/abs/2304.13408v1
# Quantum-circuit algorithms for many-body topological invariant and Majorana zero mode ###### Abstract The topological state of matter is a potential resource to realize long-term fault-tolerant quantum computers beyond the near-term noisy intermediate-scale quantum devices. To achieve the realization, we need a deep understanding of topological behaviors in real quantum computers. However, quantum-circuit algorithms to analyze topological properties have still been insufficient. Here we propose three quantum-circuit algorithms, (i) to find the ground state in the selected parity subspace, (ii) to determine the many-body topological invariant, and (iii) to visualize the zero-energy edge mode. To demonstrate these algorithms, we adopt the interacting Kitaev chain as a typical model of many-body topological superconductors in one dimension. The algorithms are applicable not only to one-dimensional topological superconductors but also to other topological states including higher-dimensional systems. ## I Introduction The recent development of quantum computers makes us expect quantum supremacy or at least quantum advantage in the near future [1; 2; 3; 4]. Particularly, noisy intermediate-scale quantum (NISQ) devices based on the gate-type unitary operations are on the point of entering an unexplored region beyond the limit of numerical calculation that classical computers can approach in a feasible amount of time [5; 6; 7; 3]. In parallel, various quantum-circuit (QC) algorithms consisting of the quantum gates and supported by auxiliary calculations on classical computers have rapidly appeared for general use, e.g., quantum approximate optimization algorithm [8; 9; 10; 11; 12; 13; 14; 15], quantum Fourier transformation [16], quantum singular-value decomposition [17; 18; 19], and quantum machine learning [20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. 
On the other hand, QC algorithms for condensed-matter research are still insufficient, except for the recently-proposed essential algorithms for the eigensystem of the model Hamiltonian, called the variational quantum eigensolver (VQE) [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41], and for the dynamics (temperature-dependence) using the real (imaginary) time evolution [42; 43; 44; 45; 46; 47; 48]. In fact, for analyzing topological properties, only a few algorithms have been proposed so far [49; 50; 26; 51]. To overcome the limitation of coherence time and to realize a fault-tolerant quantum computer (FTQC), a deep understanding of the topological state of matter in real quantum computers is crucial. Therefore, in this paper, we propose QC algorithms composed of three steps, (i) to find the ground state in the selected parity subspace, (ii) to determine the many-body topological invariant, and (iii) to visualize the zero-energy edge mode. For the demonstration of the algorithms, we adopt the interacting Kitaev chain as a one-dimensional topological superconductor with many-body interaction [52; 53; 54; 55; 56; 57; 58]. The Kitaev chain is a typical model of a topological state belonging to the BDI class, i.e., conserving time-reversal, particle-hole, and chiral (sublattice) symmetries [59; 60]. In addition, introducing the many-body interaction induces the topological transition from non-trivial to trivial phases in terms of the topological invariant. Besides, the topological superconducting state has a zero-energy edge mode composed of Majorana fermions (MF), called the Majorana zero mode (MZM). Since the MZM is a potential resource for the braiding type of topological quantum computing toward the long-term FTQC [61; 3; 62], visualization of its behavior is also important from both fundamental and engineering points of view. The rest of this paper is organized as follows. In Sec. 
II, we define the model Hamiltonian of the interacting Kitaev chain in the spinless-fermion representation, and present the Majorana and spin counterparts of it. As the first step of the QC algorithms, we adapt the VQE technique to find the ground state in the selected parity subspace in Sec. III, and show the numerical results of the algorithm. In Sec. IV, we briefly explain the topological invariant in the tight-binding (TB) and many-body (MB) models, i.e., without and with the interaction term, respectively. After the explanation, a QC algorithm to determine the MB topological invariant is proposed, together with numerical results for various points in the model-parameter space, including topologically trivial and non-trivial states. Additionally, the MZM is numerically visualized by using the ground states in two different parity subspaces at several model-parameter points located in the topological phases. The numerical calculations in this paper have been done with the QC simulator qulacs [63] on a classical computer. Finally, we summarize the present study and discuss the advantages and disadvantages of our algorithms, with some caveats for executing the algorithms in real NISQ devices. ## II Model In this section, we introduce the model Hamiltonian of a one-dimensional topological superconductor, the so-called Kitaev chain with the attractive interaction on neighboring bonds [52; 55; 56; 57; 58]. 
The model Hamiltonian with the open boundary condition (OBC) for the \(N\)-site system is defined by, \[\mathcal{H}_{\mathrm{K}}= -t\sum_{j=1}^{N-1}\left(c_{j}^{\dagger}c_{j+1}+\mathrm{H.c.} \right)-\Delta\sum_{j=1}^{N-1}\left(c_{j}^{\dagger}c_{j+1}^{\dagger}+\mathrm{H.c.}\right)\] \[-V\sum_{j=1}^{N-1}\left(n_{j}-\frac{1}{2}\right)\left(n_{j+1}- \frac{1}{2}\right)-\mu\sum_{j=1}^{N}\left(n_{j}-\frac{1}{2}\right), \tag{1}\] where \(c_{j}\), \(c_{j}^{\dagger}\), and \(n_{j}\) denote annihilation, creation, and number operators of spinless fermion at \(j\)th site, respectively. In addition, the hopping integral, the superconducting pairing potential, and the Coulomb potential between neighboring sites are given by \(t\), \(\Delta\), and \(V\), respectively, with the chemical potential \(\mu\). In this paper, we focus on the attractive region for the Coulomb potential \(V>0\), to avoid the trivial phase of the repulsive region, where the translational symmetry is spontaneously broken [56; 57; 58]. Without the interaction, we can easily understand the topological invariant and the MZM, based on the one-particle picture (see Section IV for the topological invariant and Section V for the MZM). The Kitaev chain mathematically corresponds to the \(S=1/2\) XYZ spin chain, \[\mathcal{H}_{\mathrm{S}}=-\sum_{\alpha=x,y,z}J_{\alpha}\sum_{j=1}^{N-1}S_{j}^ {\alpha}S_{j+1}^{\alpha}-h_{z}\sum_{j=1}^{N}S_{j}^{z}, \tag{2}\] via the Jordan-Wigner (JW) transformation, \[c_{j}=S_{j}^{-}e^{\imath\varphi_{j}},\ c_{j}^{\dagger}=S_{j}^{+}e^{\imath \varphi_{j}},\ n_{j}=S_{j}^{z}+\frac{1}{2}. \tag{3}\] The JW phase is defined by \(\varphi_{j}=\pi\sum_{i=1}^{j-1}(S_{i}^{z}+\frac{1}{2})\) with the imaginary unit \(\imath=\sqrt{-1}\), and \(S_{j}^{\alpha}(S_{j}^{\pm})\) represents the \(\alpha=x,y,z\) component of \(S=1/2\) spin operator (the ladder operator of spin) at \(j\)th site with the natural unit \(\hbar=1\). 
The anisotropic exchange interaction is denoted by \(J_{\alpha}\), and \(h_{z}\) represents the magnetic field along the \(z\) axis. The coupling terms in the Kitaev chain and the XYZ spin chain have the following relations: \[t=\frac{J_{x}+J_{y}}{4},\ \Delta=\frac{J_{x}-J_{y}}{4},\ V=J_{z},\ \mu=h_{z}. \tag{4}\] Since the QC is compatible with the spin representation, we mainly use the spin representation of the Hamiltonian in this paper. To understand topological properties in the Kitaev chain, the MF representation is important. The MF representation of the Kitaev chain (1) is given by, \[\mathcal{H}_{\mathrm{M}}= -\imath g_{-}\sum_{j=1}^{N-1}\gamma_{j}^{s}\gamma_{j+1}^{a}+ \imath g_{+}\sum_{j=1}^{N-1}\gamma_{j}^{a}\gamma_{j+1}^{s}+\zeta\sum_{j=1}^{N-1}\gamma_{j}^{s}\gamma_{j}^{a}\gamma_{j+1}^{s}\gamma_{j+1}^{a}-\imath\eta\sum_{j=1}^{N}\gamma_{j}^{s}\gamma_{j}^{a}, \tag{5}\] with the coupling constants \(g_{\pm}=(t\pm\Delta)/2\), \(\zeta=V/4\), and \(\eta=\mu/2\). The symmetric \((\gamma_{j}^{s})\) and antisymmetric \((\gamma_{j}^{a})\) modes of MF at the \(j\)th site are defined by, \[\gamma_{j}^{s}=c_{j}^{\dagger}+c_{j},\quad\gamma_{j}^{a}=\imath\left(c_{j}^{\dagger}-c_{j}\right). \tag{6}\] Note that the MF operators obey the fermionic anti-commutation relation \(\{\gamma_{j}^{\tau},\gamma_{j^{\prime}}^{\tau^{\prime}}\}=2\delta_{j,j^{\prime}}\delta_{\tau,\tau^{\prime}}\) with the Hermiticity \((\gamma_{j}^{\tau})^{\dagger}=\gamma_{j}^{\tau}\) for \(\tau(\tau^{\prime})=s,a\). Figure 1(a) shows the schematic of the MF Hamiltonian (5). A pair of symmetric (red ball) and antisymmetric (blue ball) MFs corresponds to a spinless fermion. When \(g_{-}=\eta=\zeta=0\), only the \(g_{+}\) diagonal coupling remains, so that the two MFs \(\gamma_{1}^{s}\) and \(\gamma_{N}^{a}\) at the edges are decoupled from the system. This pair of MFs is regarded as a spinless fermion with zero energy, i.e., the MZM, causing a two-fold degeneracy at every energy level. 
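This decoupled limit is easy to verify numerically. For \(t=\Delta\) and \(\mu=V=0\), the relations (4) give \(J_{x}=2(t+\Delta)\) and \(J_{y}=J_{z}=h_{z}=0\); the \(N-1\) decoupled Majorana pairs in Eq. (5) give the ground-state energy \(-(N-1)(t+\Delta)/2\), and every level, in particular the ground state, is two-fold degenerate due to the MZM. A small exact-diagonalization sketch (ours, not the paper's code):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2)

def xx_chain(n, jx):
    """H_S = -Jx * sum_j S^x_j S^x_{j+1} with S^x = sigma^x / 2 (open chain),
    the spin counterpart of the decoupled Kitaev limit via Eq. (4)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for j in range(n - 1):
        ops = [I2] * n
        ops[j] = ops[j + 1] = sx / 2
        H -= jx * reduce(np.kron, ops)
    return H

t = delta = 1.0                      # g_- = 0 limit (t = Delta), mu = V = 0
n = 4
H = xx_chain(n, 2 * (t + delta))     # Jx = 2(t + Delta) from Eq. (4)
E = np.sort(np.linalg.eigvalsh(H))
print(E[:3])   # two degenerate ground states at E = -(N-1)(t+Delta)/2 = -3
```

The two-fold ground-state degeneracy is the spin-side fingerprint of the free edge Majoranas \(\gamma_{1}^{s}\) and \(\gamma_{N}^{a}\).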
The existence of the MZM is also evidence of a topological superconductor.

Figure 1: (a) Schematic of the interacting Kitaev chain. Four coupling terms, \(g_{+}\), \(g_{-}\), \(\eta\), and \(\zeta\), are denoted by solid lines, dashed lines, double solid lines, and green squares, respectively. (b) Majorana zero mode (MZM) for \(g_{-}=\eta=\zeta=0\). Two MFs \(\gamma_{1}^{s}\) and \(\gamma_{N}^{a}\) at the edges are decoupled from the system, corresponding to the MZM.

## III Ground state Next, we explain how to obtain the ground states in the QC while conserving the fermion parity. Since the Kitaev chain has pair creation and annihilation terms [the second term in (1)], the total fermion number \(\sum_{j}n_{j}\) is not conserved. Instead, the fermion parity \(\mathcal{F}=\exp[\imath\pi\sum_{j}n_{j}]=\pm 1\), i.e., whether the number of fermions is even or odd, is a good quantum number. The fermion parity corresponds to the magnetization parity \(\mathcal{M}_{z}\) in the XYZ spin chain, \[\mathcal{F}=\mathcal{M}_{z}=\imath^{N}\exp\left[\imath\pi\sum_{j}S_{j}^{z} \right]=(-\imath)^{N}\prod_{j}(2S_{j}^{z}). \tag{7}\] It is worth noting that since the \(\alpha=x,y,z\) components of the spin operator \(S_{j}^{\alpha}\) are introduced in the XYZ spin chain in the same form, there are other conserved quantities \(\mathcal{M}_{\alpha}=\imath^{N}\exp[\imath\pi\sum_{j}S_{j}^{\alpha}]\) for \(\alpha=x,y\), while the three parities are not independent due to the relation \(\mathcal{M}_{x}\mathcal{M}_{y}\mathcal{M}_{z}=(-\imath)^{3N}\prod_{j}(8S_{j}^ {x}S_{j}^{y}S_{j}^{z})=(-1)^{N}\)[64]. In this paper, to avoid misunderstanding due to the system-size dependence of the fermion parity, we consider only the system sizes satisfying \(N=0\) (mod 4). 
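The conservation of \(\mathcal{M}_{z}\) can be checked directly for a small chain. The sketch below (our own, with \(N=4\) so that \((-\imath)^{N}=1\)) builds the XYZ Hamiltonian of Eq. (2) as a dense matrix and verifies that it commutes with the parity operator of Eq. (7):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([1.0, -1.0]).astype(complex) / 2
I2 = np.eye(2)

def op_at(single, j, n):
    ops = [I2] * n
    ops[j] = single
    return reduce(np.kron, ops)

def xyz_chain(n, jx, jy, jz, hz):
    """XYZ spin chain of Eq. (2) with open boundaries."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for j in range(n - 1):
        for J, s in ((jx, sx), (jy, sy), (jz, sz)):
            H -= J * op_at(s, j, n) @ op_at(s, j + 1, n)
    for j in range(n):
        H -= hz * op_at(sz, j, n)
    return H

n = 4
H = xyz_chain(n, 1.0, 0.5, 0.3, 0.2)
# Magnetization parity of Eq. (7): (-i)^N prod_j (2 S^z_j), eigenvalues +-1
Mz = (-1j)**n * reduce(np.kron, [np.diag([1.0, -1.0])] * n)
print(np.linalg.norm(H @ Mz - Mz @ H))   # -> 0.0: parity is conserved
```

Every term of \(\mathcal{H}_{\rm S}\) flips either zero or two spins, so it connects only basis states of equal parity, which is why the commutator vanishes identically.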
To implement the ground state into the QC, we use the parity-conserved 2-site unitary operator on \(j\)th bond defined by \[U_{2}(\mathbf{\theta})=U_{2,a}(\theta_{a})U_{2,b}(\theta_{b})U_{2,c}(\theta_{c}) \tag{8}\] with \[U_{2,a}(\theta) =\exp\left[2\imath\theta\left(S_{j}^{x}S_{j+1}^{x}+S_{j}^{y}S_{j+ 1}^{y}\right)\right]\] \[=\exp\left[\imath\theta\left(S_{j}^{+}S_{j+1}^{-}+\text{H.c.} \right)\right], \tag{9}\] \[U_{2,b}(\theta) =\exp\left[2\imath\theta\left(S_{j}^{x}S_{j+1}^{x}-S_{j}^{y}S_{j +1}^{y}\right)\right]\] \[=\exp\left[\imath\theta\left(S_{j}^{+}S_{j+1}^{+}+\text{H.c.} \right)\right],\] (10) \[U_{2,c}(\theta) =\exp\left[4\imath\theta S_{j}^{z}S_{j+1}^{z}\right], \tag{11}\] where the vector of angles includes three angles, \(\mathbf{\theta}=(\theta_{a},\theta_{b},\theta_{c})\). Although the unitary operators \(U_{2}\) and \(U_{1}\) have the site dependence, we omit the site index \(j\) in the notation of unitary operators (8) and (12) for simplicity. Note that these operators \(U_{2,p}(\theta)\) (\(p=a,b,c\)) on the \(j\)th bond, commute with each other, \([U_{2,p}(\theta),U_{2,p^{\prime}}(\theta^{\prime})]=0\), while the unitary operators do not always commute between the neighboring bonds. In addition, to take into account the effect of magnetic field, we introduce the 1-site unitary operator at \(j\)th site given by \[U_{1}(\theta)=R_{z}(2\theta)=\exp\left[2\imath\theta S_{j}^{z}\right]. \tag{12}\] Figure 2 shows the QC representation of these unitary operators. Since these operators preserve the fermion parity \([U_{2,p}(\theta),\mathcal{F}]=[U_{1}(\vartheta),\mathcal{F}]=0\), the fermion parity after the unitary operations equals the parity of the initial state. 
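That the operators (9)-(11) preserve the parity can be confirmed on a single bond: each generator either conserves the total \(S^{z}\) or changes it by \(\pm 2\), so it commutes with \(\prod_{j}(2S_{j}^{z})\). A two-site numerical check (our own sketch, using SciPy's matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

sp = np.array([[0, 1], [0, 0]], dtype=complex)   # S^+ (with |0> = spin up)
sm = sp.T.conj()                                  # S^-
sz = np.diag([0.5, -0.5]).astype(complex)

theta = 0.37
U2a = expm(1j * theta * (np.kron(sp, sm) + np.kron(sm, sp)))   # Eq. (9)
U2b = expm(1j * theta * (np.kron(sp, sp) + np.kron(sm, sm)))   # Eq. (10)
U2c = expm(4j * theta * np.kron(sz, sz))                       # Eq. (11)

# Two-site part of the parity operator of Eq. (7): prod_j (2 S^z_j)
parity = np.kron(np.diag([1.0, -1.0]), np.diag([1.0, -1.0]))
for U in (U2a, U2b, U2c):
    assert np.allclose(U.conj().T @ U, np.eye(4))   # unitary
    assert np.allclose(U @ parity, parity @ U)      # parity-conserving
print("all U_2 conserve the fermion parity")
```

The same argument applies to the single-site rotation \(U_{1}\), which is diagonal in the \(S^{z}\) basis.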
By using the parity-conserved 2-site and 1-site unitary operators, we adopt the following wavefunction ansatz for the VQE method: \[\ket{\psi_{\pm}(\{\mathbf{\theta}_{j,m},\vartheta_{j,m}\})}=U_{\psi\pm}\ket{ \mathrm{i}_{\pm}}, \tag{13}\] with \[U_{\psi\pm}=\prod_{m=1}^{M}\left[\prod_{j}U_{1}(\vartheta_{j,m})\prod_{\text{even} \,j}U_{2}(\mathbf{\theta}_{j,m})\prod_{\text{odd}\,j}U_{2}(\mathbf{\theta}_{j,m})\right] \tag{14}\] and the initial state for even parity \(\ket{\text{i}_{+}}=\ket{0}^{\otimes N}\) or odd parity \(\ket{\text{i}_{-}}=\sigma_{1}^{x}\ket{\text{i}_{+}}=\ket{1}\otimes\ket{0}^{ \otimes N-1}\). \(M\) is the number of layers (see the QC representation shown in Fig. 2), so that the total number of variational parameters (namely, the number of angles in unitary operators \(\{\mathbf{\theta}_{j,m}\}\) and \(\{\vartheta_{j,m}\}\)) corresponds to \(N_{\theta}=(4N-3)M\) with the OBC. For these angles, we perform the VQE calculation, i.e., optimization of the angles to minimize the expectation value of energy for the Hamiltonian (2), \[E_{\pm}(\{\mathbf{\theta}_{j,m},\vartheta_{j,m}\})=\langle\psi_{\pm}\,|\,\mathcal{ H}_{\text{S}}\ket{\psi_{\pm}} \tag{15}\] with the quantum simulator, qulacs [63], in the classical computer. As the optimization method, we use the (dual) simulated annealing (SA) and Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms served by the python library SciPy [65]. The SA calculation is used to prepare appropriate initial angles \(\{\mathbf{\theta}_{j,m},\vartheta_{j,m}\}\) for the BFGS calculation, avoiding the local minima. Nevertheless, we sometimes failed to obtain the global energy minimum, so that we regard the minimal-energy state in 10-times trials including both the SA and BFGS optimizations starting with random initial angles, as the ground state. 
For optimization of the BFGS method, we set the acceptable error \(\Delta E<10^{-8}\), where \(\Delta E\) is the energy difference between current and previous iterations in the main loop of the BFGS method. Figure 3 shows the energy differences between the VQE and the exact-diagonalization (ED) method for various points in the model-parameter space of the 12-site XYZ spin chain. In Fig. 3, (a,b) left, (c-e) center, and (f-h) right panels show the XY anisotropy, the magnetic field, and the Ising term (namely, the Coulomb interaction in the Kitaev chain) dependencies of the energy difference, respectively, with the other parameters fixed. The energy difference is basically smaller with even parity than with odd parity. We attribute this to the initial state before the unitary operations in the QC (i.e., \(\ket{\text{i}_{\pm}}\)), which is uniform with even parity but not with odd parity. Nevertheless, we can confirm that when the number of layers is large enough, the energy difference becomes small enough, e.g., \(E_{\text{VQE}}-E_{\text{ED}}\lesssim 10^{-4}\) for \(M\geq 4\) except for the odd-parity state with large \(J_{z}\). The ground state with large \(J_{z}\) is the so-called Schrödinger's cat state [58], which is a superposition of macroscopic classical states like \((\ket{\uparrow\uparrow\cdots\uparrow}\pm\ket{\downarrow\downarrow\cdots \downarrow})/\sqrt{2}\). The Schrödinger's cat state may require many swap operations, resulting in the worse energy difference with large \(J_{z}\). However, the verification is out of scope in this paper, because the large \(J_{z}\) region is the topologically-trivial phase, so that we leave it to future research.

Figure 3: Energy differences between the VQE and the ED method in the 12-site XYZ spin chain. (a,b) Left panels show the XY anisotropy dependence without the magnetic field and the Ising term, \(h_{z}=J_{z}=0\). (c-e) The field dependence with fixed \(J_{y}/J_{x}=0.5\) and \(J_{z}=0\) is shown in center panels. (f-h) Right panels show the Ising term (i.e., the Coulomb interaction in the Kitaev chain) dependence with fixed \(J_{y}/J_{x}=0.5\) and \(h_{z}=0.01\). To avoid degeneracy with large \(J_{z}\) in the topologically-trivial state, we add a small magnetic field \(h_{z}=0.01\). \(N_{w}\) represents the winding number explained in Sec. IV. The VQE energy is obtained as the minimal energy in 10 VQE optimization trials from random initial angles.

## IV Topological invariant In this section, we introduce the MB topological invariant after a brief explanation of the TB topological invariant. The MB topological invariant is an extension of the topological invariant determined by the one-particle picture, that is, the TB model. Hence, we start with the non-interacting Kitaev chain (\(V=0\)) with periodic boundary condition (PBC) as the TB model. ### Tight-binding (TB) model The TB model of the Kitaev chain with the PBC is defined by, \[\mathcal{H}_{\rm K}^{\rm(TB)}=\mathcal{H}_{\rm K}|_{V=0}-t\left(c_{N}^{\dagger }c_{1}+{\rm H.c.}\right)-\Delta\left(c_{N}^{\dagger}c_{1}^{\dagger}+{\rm H.c.} \right). \tag{16}\] The Fourier transform \(c_{k}=N^{-1/2}\sum_{j}c_{j}e^{\imath jk}\) to the wavenumber \(k=2\pi l/N\) (\(l=-N/2,-N/2+1,\cdots,N/2-1\) for even \(N\)) gives the momentum-space Hamiltonian, \[\mathcal{H}_{\rm K}^{\rm(TB)} =\sum_{k}\left[2\epsilon_{k}c_{k}^{\dagger}c_{k}+(-\imath\Delta_{ k}c_{k}^{\dagger}c_{-k}^{\dagger}+{\rm H.c.})\right]=\sum_{k}\mathbf{c}_{k}^{\dagger}\mathbf{H}_{k}\mathbf{c}_{k}-\mu N/2, \tag{17}\] with \(\epsilon_{k}=-t\cos k-\mu/2\) and \(\Delta_{k}=-\Delta\sin k\). The matrix form is the Nambu representation of the Hamiltonian given by, \[\mathbf{H}_{k}=\begin{pmatrix}\epsilon_{k}&-\imath\Delta_{k}\\ \imath\Delta_{k}&-\epsilon_{k}\end{pmatrix},\ \mathbf{c}_{k}=\begin{pmatrix}c_{k}\\ c_{-k}^{\dagger}\end{pmatrix}. 
\tag{18}\] Since the coupling matrix is rewritten by \(\mathbf{H}_{k}=\epsilon_{k}\sigma^{z}+\Delta_{k}\sigma^{y}=\mathbf{v}_{\rm A} \cdot\mathbf{\sigma}\) with the so-called Anderson pseudo vector \(\mathbf{v}_{\rm A}=(0,\Delta_{k},\epsilon_{k})\) and the Pauli matrices \(\sigma^{\alpha}\) (\(\alpha=x,y,z\)), the rotation around \(x\) axis \(R_{x}(\phi)\) can diagonalize the coupling matrix. By setting the rotation angle to \(\phi_{k}=\tan^{-1}(\Delta_{k}/\epsilon_{k})\), we obtain the diagonalized Hamiltonian, \[\mathcal{H}_{\rm K}^{\rm(TB)}=2\sum_{k}\xi_{k}\beta_{k}^{\dagger}\beta_{k}+E_{ \rm gs}, \tag{19}\] with the dispersion relation \(\xi_{k}=\sqrt{\epsilon_{k}^{2}+\Delta_{k}^{2}}\), where the ground-state energy \[E_{\rm gs}^{\rm(TB)}=-\sum_{k}\left(\xi_{k}+\mu/2\right) \tag{20}\] and the bogolon operator \[\beta_{k}=\cos(\phi_{k}/2)c_{k}-\imath\sin(\phi_{k}/2)c_{-k}^{\dagger}. \tag{21}\] Since the Hamiltonian is diagonalized by the bogolon operator, the ground state is the vacuum of the bogolon, \[\ket{\rm gs}_{\rm TB}=\prod_{k\geq 0}\left[\cos\left(\frac{\phi_{k}}{2}\right)+ \imath\sin\left(\frac{\phi_{k}}{2}\right)c_{k}^{\dagger}c_{-k}^{\dagger}\right] \ket{0}. \tag{22}\] The topological invariant of the TB Kitaev chain (19), the so-called winding number, is defined by \[N_{w}^{\rm(TB)}=\frac{1}{2\pi}\int_{k=-\pi}^{\pi}{\rm d}\phi_{k}. \tag{23}\] The topological invariant represents the number of times that the Anderson pseudo vector \(\mathbf{v}_{\rm A}\) circulates counterclockwise around \(x\) axis. Thus, the topological phase appears when \(|\mu|<2t\) (\(t>0\)) with finite pairing potential \(\Delta\neq 0\). The sign of the pairing potential affects the sign of the winding number; \(N_{w}=-1\) (\(N_{w}=1\)) for \(\Delta>0\) (\(\Delta<0\)) if \(t>0\) and \(|\mu|<2t\). 
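A discretized evaluation of Eq. (23) (our own sketch, not the paper's implementation) accumulates the wrapped increments of \(\phi_{k}\) over the Brillouin zone; it yields \(|N_{w}|=1\) in the topological region and \(N_{w}=0\) for \(|\mu|>2t\):

```python
import numpy as np

def tb_winding(t, delta, mu, n_k=2001):
    """Discretized winding number of Eq. (23): sum the wrapped increments
    of phi_k = arg(eps_k + i Delta_k) over the Brillouin zone."""
    k = np.linspace(-np.pi, np.pi, n_k)
    eps = -t * np.cos(k) - mu / 2
    dlt = -delta * np.sin(k)
    phi = np.angle(eps + 1j * dlt)              # angle of the Anderson vector
    dphi = np.diff(phi)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi  # wrap increments to (-pi, pi]
    return int(round(dphi.sum() / (2 * np.pi)))

print(abs(tb_winding(1.0, 0.5, 0.0)))   # topological: |N_w| = 1
print(tb_winding(1.0, 0.5, 3.0))        # trivial (|mu| > 2t): N_w = 0
```

The wrapping step is what makes the sum a branch-cut-free approximation of \(\oint{\rm d}\phi_{k}/2\pi\); the overall sign depends on the orientation convention for \(\phi_{k}\).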
### Many-body (MB) model In the TB model, since the electron with wavenumber \(k\) interacts only with the electron of wavenumber \(-k\), we can explicitly write down the ground state in momentum space and calculate the winding number as a topological invariant defined by the coefficients of the momentum-space ground state. However, if the interaction \(V\) is introduced, the one-particle representation of the ground state is difficult to obtain analytically in general [66], because the interaction hybridizes electrons of all momenta, \[V\left(\sum_{j=1}^{N-1}n_{j}n_{j+1}+n_{N}n_{1}\right)=\frac{V}{N}\sum_{q,k,k^{ \prime}}e^{\imath q}c_{k+q}^{\dagger}c_{k}c_{k^{\prime}-q}^{\dagger}c_{k^{\prime}}. \tag{24}\] Thus, the ground state is not of the direct-product form of the bogolon wavefunctions. Instead of the TB winding number (23), we adopt the MB winding number [67; 68; 69] given by, \[N_{w}=\frac{1}{4\pi\imath}\int_{-\pi}^{\pi}{\rm d}k\,{\rm Tr}\left[\sigma^{x} \mathbf{G}_{k}^{-1}\partial_{k}\mathbf{G}_{k}\right] \tag{25}\] where \(\mathbf{G}_{k}\) represents the \(2\times 2\) matrix of the Green functions of zero frequency, \[\mathbf{G}_{k}=\begin{pmatrix}g_{c_{k}^{\dagger},c_{k}}&g_{c_{-k},c_{k}}\\ g_{c_{k}^{\dagger},c_{-k}^{\dagger}}&g_{c_{-k},c_{-k}^{\dagger}}\end{pmatrix} \tag{26}\] with \[g_{A,B}=\bra{\rm gs}A\frac{1}{\mathcal{H}_{\rm K}-E_{\rm gs}}B\ket{\rm gs}- \bra{\rm gs}B\frac{1}{\mathcal{H}_{\rm K}-E_{\rm gs}}A\ket{\rm gs}. \tag{27}\] For instance, by using (19),(20) and (22), we can obtain the matrix \(\mathbf{G}_{k}\) in the TB model as \[\mathbf{G}_{k}^{\rm(TB)}=\frac{1}{\sqrt{\epsilon_{k}^{2}+\Delta_{k}^{2}}} \begin{pmatrix}-\cos\phi_{k}&\imath\sin\phi_{k}\\ -\imath\sin\phi_{k}&\cos\phi_{k}\end{pmatrix}=\frac{-\mathbf{H}_{k}}{\epsilon_ {k}^{2}+\Delta_{k}^{2}}. 
\tag{28}\] Hence, we can confirm that the MB winding number (25) for the TB model corresponds to the TB winding number (23), \[N_{w} =\frac{1}{4\pi\imath}\int_{-\pi}^{\pi}{\rm d}k\,{\rm Tr}\left[\sigma^{x}\left\{\mathbf{G}_{k}^{\rm(TB)}\right\}^{-1}\partial_{k}\mathbf{G}_{k}^{\rm(TB)}\right]\] \[=\frac{1}{4\pi\imath}\int_{-\pi}^{\pi}{\rm d}k\,(2\imath\partial_{k}\phi_{k})=N_{w}^{\rm(TB)}. \tag{29}\] In the Green-function matrix for the TB model (28), we can see the relations between the matrix elements: \(g_{c_{k}^{\dagger},c_{k}}=-g_{c_{-k},c_{-k}^{\dagger}}\in\mathbb{R}\) and \(g_{c_{-k},c_{k}}=-g_{c_{k}^{\dagger},c_{-k}^{\dagger}}\in\imath\mathbb{R}\). These relations are preserved even with the MB interaction (24), because the time-reversal symmetry (\(\mathcal{T}:c_{k}\to c_{-k},\;\imath\to-\imath\)), which protects them, is kept. Based on these relations, we can simplify the MB winding number as \[N_{w}=\frac{1}{2\pi\imath}\int_{k=-\pi}^{\pi}\mathrm{d}\log Z_{k} \tag{30}\] with the MB counterpart of the Anderson pseudo vector in the complex plane, \[Z_{k}=-\frac{\imath}{2}g_{c_{k}^{\dagger}+c_{-k},c_{k}-c_{-k}^{\dagger}}=\frac{1}{2}g_{\gamma_{k}^{s},\gamma_{-k}^{a}}, \tag{31}\] where the Fourier transform of the MFs is defined by \(\gamma_{k}^{\tau}=N^{-1/2}\sum_{j}\gamma_{j}^{\tau}e^{-\imath jk}\) (\(\tau=s,a\)).

### Quantum-circuit (QC) algorithm

In finite-size systems, the MB winding number (30) is discretized as \[N_{w}=\frac{1}{2\pi}\sum_{k}\Im\log\left[Z_{k+\Delta k}Z_{k}^{\ast}\right], \tag{32}\] with \(\Delta k=2\pi/N\). Additionally, to determine the MB Anderson pseudo vector in the QC, we need to calculate the Green functions of the MFs (31) in real space, \[Z_{k}=\frac{1}{2}g_{\gamma_{k}^{s},\gamma_{-k}^{a}}=\frac{1}{2N}\sum_{j,j^{\prime}}e^{-\imath(j-j^{\prime})k}g_{\gamma_{j}^{s},\gamma_{j^{\prime}}^{a}}.
\tag{33}\] To obtain the real-space Green function of MFs, we rewrite it by using the time-evolution form: \[g_{\gamma_{j}^{s},\gamma_{j^{\prime}}^{a}} =2\Im\left\langle\mathrm{gs}\right|\gamma_{j}^{s}\frac{1}{\mathcal{H}_{\mathrm{K}}-E_{\mathrm{gs}}}\gamma_{j^{\prime}}^{a}\left|\mathrm{gs}\right\rangle\] \[=-\lim_{\delta\to+0}\lim_{T\to\infty}2\int_{0}^{T}\mathrm{d}t\,e^{-\delta t}\Re\left\langle\gamma_{j}^{s}(t)|\gamma_{j^{\prime}}^{a}(t)\right\rangle, \tag{34}\] with two time-evolved MF-excited states, \[\ket{\gamma_{j}^{s}(t)}=\gamma_{j}^{s}e^{-\imath\mathcal{H}_{\mathrm{K}}t}\ket{\mathrm{gs}},\;\ket{\gamma_{j}^{a}(t)}=e^{-\imath\mathcal{H}_{\mathrm{K}}t}\gamma_{j}^{a}\ket{\mathrm{gs}}. \tag{35}\] Here, although we introduce the infinitesimal damping factor \(\delta\to+0\) and the infinite cutoff time \(T\to\infty\), these are set to finite values in the numerical calculation. We should set the cutoff time \(T\) to satisfy \(T\delta\gg 1\) with a small enough damping factor \(\delta\ll 1\). The effects of these finite values are important for obtaining the winding number from the Green-function matrix, but they have not been clarified so far. Thus, the damping-factor dependence of \(Z_{k}\) in the numerical calculations is discussed below. Moreover, the real part of the transition amplitude corresponds to the expectation value of the \(x\) component of the Pauli matrix of an ancilla qubit, \(\sigma_{a}^{x}\), as follows, \[\Re\left\langle\gamma_{j}^{s}(t)|\gamma_{j^{\prime}}^{a}(t)\right\rangle=\bra{\psi_{j,j^{\prime}}(t)}\sigma_{a}^{x}\ket{\psi_{j,j^{\prime}}(t)} \tag{36}\] with \[\ket{\psi_{j,j^{\prime}}(t)}=\frac{1}{\sqrt{2}}\left(\ket{1}_{a}\ket{\gamma_{j}^{s}(t)}+\ket{0}_{a}\ket{\gamma_{j^{\prime}}^{a}(t)}\right). \tag{37}\] Therefore, we can calculate the expectation value in the QC given in Fig. 4(a). Note that a similar technique was proposed by Endo _et al._ to calculate the Green functions of fermions [70].
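The ancilla identity (36)-(37) can be checked directly with state vectors: for arbitrary (not necessarily normalized) system states \(u=\ket{\gamma^{s}(t)}\) and \(v=\ket{\gamma^{a}(t)}\), measuring \(\sigma_{a}^{x}\) on the stacked state returns \(\Re\langle u|v\rangle\). A minimal linear-algebra sketch (not a circuit implementation):

```python
import numpy as np

def ancilla_x_expectation(u, v):
    """<psi|sigma_a^x|psi> for |psi> = (|1>_a|u> + |0>_a|v>)/sqrt(2), Eqs. (36)-(37)."""
    u = np.asarray(u, dtype=complex)
    v = np.asarray(v, dtype=complex)
    psi = np.concatenate([v, u]) / np.sqrt(2.0)   # |0>_a block first, then |1>_a block
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    op = np.kron(sx, np.eye(len(u)))              # sigma^x acting on the ancilla only
    return float(np.real(np.conj(psi) @ op @ psi))
```

Expanding the expectation value gives \((\langle v|u\rangle+\langle u|v\rangle)/2=\Re\langle u|v\rangle\), which is exactly the transition amplitude needed for (34).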
To implement the ground state, we use the even-parity initial state \(\ket{\mathrm{i}_{+}}=\ket{0}^{\otimes N}\) and the wavefunction ansatz \(U_{\psi+}\) in Fig. 2(e) with the optimized angles \(\{\mathbf{\theta}_{j,m},\vartheta_{j,m}\}\) obtained by the VQE calculation. The MF operator \(\gamma_{j}^{\tau}\) (\(\tau=s,a\)) can be introduced by a controlled unitary gate [see Fig. 4(b,c)]. The time evolution \(e^{-\imath\mathcal{H}_{\mathrm{S}}t}\) in the QC algorithm for the topological invariant [Fig. 4(a)] is given by the Trotter decomposition, whose circuit form is the same as \(U_{\psi\pm}\), while the angles \(\{\mathbf{\theta},\vartheta\}\) are set to constants determined by the model parameters \(J_{\alpha}\), \(h_{z}\), and the infinitesimal time step \(\Delta t\), \[\theta_{a}=\frac{J_{x}+J_{y}}{4}\Delta t,\,\theta_{b}=\frac{J_{x}-J_{y}}{4}\Delta t,\,\theta_{c}=\frac{J_{z}}{4}\Delta t,\,\vartheta=\frac{h_{z}}{2}\Delta t. \tag{38}\] Figure 5 shows the MB Anderson pseudo vector (\(\Re[Z_{k}],\Im[Z_{k}]\)) obtained by the QC algorithm in Fig. 4, for various points in the model-parameter space of the 12-site XYZ spin chain, where the parameter points correspond to Fig. 3. The MB winding number is defined by the number of times that the MB Anderson pseudo vector (\(\Re[Z_{k}],\Im[Z_{k}]\)) circulates counterclockwise around the origin, just as for the Anderson pseudo vector in the TB model. Colors of the solid lines show different damping factors, \(\delta=0.5\), \(0.15\), and \(0.05\), with fixed \(T\delta=5\). Although the size and angle of \(Z_{k}\) change as the damping factor is varied, the overall shape is roughly kept, and thus the winding number is conserved even if the damping factor is not so small. In addition, we have confirmed that the winding numbers obtained from the QC algorithm and the ED calculation are equal.
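The discretized winding number (32) simply accumulates the phase increments \(\Im\log[Z_{k+\Delta k}Z_{k}^{\ast}]\) around the Brillouin zone; a minimal sketch for sampled values \(Z_{k}\):

```python
import numpy as np

def discrete_winding(z):
    """Discretized MB winding number (32) from Z_k sampled on an ordered k grid."""
    z = np.asarray(z, dtype=complex)
    zn = np.roll(z, -1)                               # Z_{k + Delta k}, periodic in k
    return int(round(np.angle(zn * np.conj(z)).sum() / (2.0 * np.pi)))
```

Because only the accumulated phase matters, an overall rescaling of \(Z_{k}\), such as the damping-factor dependence discussed above, leaves the result unchanged.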
Consequently, our QC algorithm for the topological invariant can basically be applied to current NISQ devices, despite the serious limitation of coherence time, although error mitigation techniques are necessarily required.

Figure 5: The MB Anderson pseudo vector (\(\Re[Z_{k}],\Im[Z_{k}]\)) obtained by the QC algorithm in Fig. 4 for the 12-site XYZ spin chain. Each panel shows \(Z_{k}\) for the model-parameter point corresponding to Fig. 3. Namely, (a,b) left, (c-e) center, and (f-h) right panels show the XY anisotropy, the magnetic field, and the Ising term (i.e., the Coulomb interaction in the Kitaev chain) dependencies, respectively, with the other parameters fixed. Purple, green, and cyan solid lines represent the damping factors \(\delta=0.5\), \(0.15\), and \(0.05\), respectively, with the cutoff time fixed at \(T\delta=5\) and the Trotter time step \(\Delta t=0.01\). Closed (open) symbols denote \(Z_{k=0}\) (\(Z_{k=\pi}\)), and arrows indicate the ascending order of \(k\).

## V Majorana zero mode

In this section, we explain how to visualize the MZM. The MZM is a zero-energy excitation of MFs, localized at the edges of the chain [see Fig. 1]. We can understand the MZM by starting from the Majorana representation of the TB model, rewritten as \[\mathcal{H}_{\mathrm{M}}|_{\zeta=0}=-\imath\mathbf{\gamma}_{s}^{T}\mathbf{H}_{\mathrm{M}}\mathbf{\gamma}_{a} \tag{39}\] with the tridiagonal coefficient matrix \[\mathbf{H}_{\mathrm{M}}=\begin{pmatrix}\eta&g_{-}&&\\ g_{+}&\eta&g_{-}&\\ &g_{+}&\eta&\ddots&\\ &&\ddots&\ddots&g_{-}\\ &&&g_{+}&\eta\end{pmatrix} \tag{40}\] and the vector of MFs, \[\mathbf{\gamma}_{\tau}=(\gamma_{1}^{\tau},\gamma_{2}^{\tau},\cdots,\gamma_{N}^{\tau})\quad(\tau=s,a). \tag{41}\] Diagonalization of the matrix is obtained by the singular-value decomposition, resulting in \(\mathbf{H}_{\mathrm{M}}=\mathbf{U}_{\mathrm{M}}\mathbf{\Lambda}_{\mathrm{M}}\mathbf{V}_{\mathrm{M}}^{\dagger}\), with the unitary matrices \(\mathbf{U}_{\mathrm{M}}\) and \(\mathbf{V}_{\mathrm{M}}\), and the diagonal matrix \(\mathbf{\Lambda}_{\mathrm{M}}=\mathrm{diag}\{\lambda_{1},\lambda_{2},\cdots,\lambda_{N}\}\) with the singular values in ascending order, \(\lambda_{l}\leq\lambda_{l+1}\). Then, the TB Majorana Hamiltonian (39) reads \[\mathcal{H}_{\mathrm{M}}|_{\zeta=0}=-\imath\sum_{l}\lambda_{l}\tilde{\gamma}_{l}^{s}\tilde{\gamma}_{l}^{a} \tag{42}\] with the superposition of MFs, \[\tilde{\gamma}_{l}^{s}=\sum_{j}(\mathbf{U}_{\mathrm{M}}^{\dagger})_{lj}\gamma_{j}^{s},\ \tilde{\gamma}_{l}^{a}=\sum_{j}(\mathbf{V}_{\mathrm{M}}^{\dagger})_{lj}\gamma_{j}^{a}. \tag{43}\] Since the superposed MFs also obey the anticommutation relation \(\{\tilde{\gamma}_{l}^{\tau},\tilde{\gamma}_{l^{\prime}}^{\tau^{\prime}}\}=2\delta_{l,l^{\prime}}\delta_{\tau,\tau^{\prime}}\) and the Hermiticity \((\tilde{\gamma}_{l}^{\tau})^{\dagger}=\tilde{\gamma}_{l}^{\tau}\) for \(\tau(\tau^{\prime})=s,a\), we can construct one fermion from two MFs, \(\tilde{\gamma}_{l}^{s}=\tilde{c}_{l}^{\dagger}+\tilde{c}_{l}\) and \(\tilde{\gamma}_{l}^{a}=\imath\left(\tilde{c}_{l}^{\dagger}-\tilde{c}_{l}\right)\). With these fermions, the TB Hamiltonian (42) is rewritten as \[\mathcal{H}_{\mathrm{M}}|_{\zeta=0}=-2\sum_{l}\lambda_{l}\left(\tilde{c}_{l}^{\dagger}\tilde{c}_{l}-\frac{1}{2}\right). \tag{44}\] Therefore, the singular values are considered as the eigenenergies. If there is a zero singular value, \(\lambda_{1}=0\), the pair of MFs \(\tilde{\gamma}_{1}^{s}\) and \(\tilde{\gamma}_{1}^{a}\) becomes the MZM. The on-site MFs \(\gamma_{j}^{\tau}\) consist of creation and annihilation operators of fermions, so that a single operation of the superposed MFs changes the fermion parity \(\mathcal{F}\). Hence, the expectation value of the MFs for any parity-conserving eigenstate is always zero.
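The singular-value decomposition of Eqs. (40)-(43) can be sketched numerically, treating \(\eta\) and \(g_{\pm}\) as free inputs (their relation to the model parameters is fixed earlier in the text). At the fully dimerized point \(\eta=0\), \(g_{-}=0\), the zero mode is exactly localized, with the symmetric and antisymmetric partners on opposite edges:

```python
import numpy as np

def majorana_zero_mode(eta, gp, gm, n):
    """SVD of the tridiagonal coefficient matrix (40); returns the smallest
    singular value and the real-space weights of the candidate zero mode."""
    H = (np.diag(np.full(n, eta, dtype=float))
         + np.diag(np.full(n - 1, gp, dtype=float), k=-1)   # g_+ below the diagonal
         + np.diag(np.full(n - 1, gm, dtype=float), k=+1))  # g_- above the diagonal
    U, s, Vh = np.linalg.svd(H)        # H = U diag(s) V^dagger, cf. Eq. (43)
    i = np.argmin(s)                   # singular values play the role of energies
    return s[i], np.abs(U[:, i]), np.abs(Vh[i, :])
```

When the smallest singular value vanishes, the returned weights correspond to the real-space profiles of the symmetric and antisymmetric Majorana modes built from the columns of \(\mathbf{U}_{\mathrm{M}}\) and \(\mathbf{V}_{\mathrm{M}}\).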
Instead, to visualize the MZM, we need to see the transfer amplitude without finite-energy excitation between different parity subspaces; e.g., the real-space distribution for the symmetric mode reads \[|\bra{\mathrm{gs}_{+}}\gamma_{j}^{s}\ket{\mathrm{gs}_{-}}|=\left|\sum_{l}(\mathbf{U}_{\mathrm{M}})_{jl}\bra{\mathrm{gs}_{+}}\tilde{\gamma}_{l}^{s}\ket{\mathrm{gs}_{-}}\right|=|(\mathbf{U}_{\mathrm{M}})_{j1}|, \tag{45}\] because the transfer amplitude vanishes except for \(l=1\), namely \(|\bra{\mathrm{gs}_{+}}\tilde{\gamma}_{l}^{s}\ket{\mathrm{gs}_{-}}|=\delta_{l,1}\), if only the first singular value is zero, \(\lambda_{1}=0\). Therefore, we can visualize the real-space distribution of the MZM by calculating the transfer amplitude \(|\bra{\mathrm{gs}_{+}}\gamma_{j}^{\tau}\ket{\mathrm{gs}_{-}}|\) for \(\tau=s,a\) in the QC. Figure 7 shows the transfer amplitudes \(|\bra{\mathrm{gs}_{+}}\gamma_{j}^{\tau}\ket{\mathrm{gs}_{-}}|\) in the topological state. The weight of the MZM is localized at an edge and rapidly decreases on entering the bulk. Furthermore, we can see that the symmetric and antisymmetric modes switch positions if the winding number changes [compare (a) and (b) in Fig. 7]. The numerical cost of this QC algorithm for the MZM is much lower than that for the topological invariant, so that it is easier to confirm the topological state with the MZM visualization. However, in this case, we should be careful with the energy difference between the ground states in the even- and odd-parity subspaces, because there is usually an energy gap due to the finite-size effect.

## VI Summary and Discussion

For realizing the long-term FTQC, topological states of matter are important, while the QC algorithms to determine topological invariants are still not sufficient. In this paper, we propose the QC algorithm for the topological invariant by using time evolution.
Since this algorithm requires the ground state while keeping the fermion parity, we also present the VQE method that conserves the parity. In addition, we propose how to visualize the MZM in the QC, and demonstrate it on the QC simulator, qulacs [63], on a classical computer. As the result of parity-conserved VQE calculations, we find that the ground states with odd parity are comparatively difficult to obtain. The non-uniform initial state before unitary operations in the QC may affect the convergence in the shallow QC. In the QC algorithm of the topological invariant, we clarify that introducing the damping factor and the cutoff time only gives a slight change in the size and angle of the Anderson pseudo vector, but keeps the topological invariant. This feature guarantees the stability of our algorithms even with the inevitable noise in NISQ devices, while the shallow QC due to the short coherence time might make the topological character somewhat unstable. Then, to execute our algorithms in NISQ devices, combining with error mitigation techniques and long-time-evolution algorithms will be crucial. Alternatively, for the visualization of the MZM, our algorithm only requires a shallow QC, and thus its demonstration is possible even in current NISQ devices.

Figure 7: Visualization of the MZM in the 12-site XYZ spin chain by using the QC algorithm shown in Fig. 6 for various model-parameter points. The model parameters (a-d) correspond to (a), (b), (c), and (g) in Fig. 3 and Fig. 5.

###### Acknowledgements.

This work was supported by MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant No. JPMXS0118067394 and JPMXS0120319794, and the COE research grant in computational science from Hyogo Prefecture and Kobe City through Foundation for Computational Science. Numerical computation in this work was partly carried out on the supercomputers at JAEA.
2305.13721
Continual Dialogue State Tracking via Example-Guided Question Answering
Dialogue systems are frequently updated to accommodate new services, but naively updating them by continually training with data for new services results in diminishing performance on previously learnt services. Motivated by the insight that dialogue state tracking (DST), a crucial component of dialogue systems that estimates the user's goal as a conversation proceeds, is a simple natural language understanding task, we propose reformulating it as a bundle of granular example-guided question answering tasks to minimize the task shift between services and thus benefit continual learning. Our approach alleviates service-specific memorization and teaches a model to contextualize the given question and example to extract the necessary information from the conversation. We find that a model with just 60M parameters can achieve a significant boost by learning to learn from in-context examples retrieved by a retriever trained to identify turns with similar dialogue state changes. Combining our method with dialogue-level memory replay, our approach attains state of the art performance on DST continual learning metrics without relying on any complex regularization or parameter expansion methods.
Hyundong Cho, Andrea Madotto, Zhaojiang Lin, Khyathi Raghavi Chandu, Satwik Kottur, Jing Xu, Jonathan May, Chinnadhurai Sankar
2023-05-23T06:15:43Z
http://arxiv.org/abs/2305.13721v2
# Continual Dialogue State Tracking via Example-Guided Question Answering

###### Abstract

Dialogue systems are frequently updated to accommodate new services, but naively updating them by continually training with data for new services results in diminishing performance on previously learnt services. Motivated by the insight that dialogue state tracking (DST), a crucial component of dialogue systems that estimates the user's goal as a conversation proceeds, is a simple natural language understanding task, we propose reformulating it as a bundle of granular example-guided question answering tasks to minimize the task shift between services and thus benefit continual learning. Our approach alleviates service-specific memorization and teaches a model to contextualize the given question and example to extract the necessary information from the conversation. We find that a model with just 60M parameters can achieve a significant boost by learning to learn from in-context examples retrieved by a retriever trained to identify turns with similar dialogue state changes. Combining our method with dialogue-level memory replay, our approach attains state of the art performance on DST continual learning metrics without relying on any complex regularization or parameter expansion methods.

## 1 Introduction

As conversational digital assistants are becoming increasingly popular and versatile, it is important to continuously update their underlying models to accommodate more services.1 One of these models is a dialogue state tracking (DST) model, which estimates the user's goal, i.e. the dialogue state, as dialogue progresses. DST is crucial to task-oriented dialogue as the dialogue state serves as parameters for queries sent to application programming interfaces to retrieve necessary information, such as a list of restaurant names with a cuisine type, that is used to ground the dialogue model's response.
Footnote 1: In this work, we use _services_ and _domains_ interchangeably to denote high-level services supported by digital assistants, e.g. setting an alarm or booking a restaurant. _Task_ refers to lower-level functions, e.g. question answering, sentiment classification, and dialogue state tracking. Yet, training an existing model further with only the new data causes _catastrophic forgetting_ McCloskey and Cohen (1989); French (1999), the drop in performance for previous services not covered by the new data. To mitigate this issue while also avoiding the impracticality of training a model from scratch with data from all services each time new data becomes available, three main approaches have been established for effective continual learning (CL): memory replay, regularization, and parameter expansion. Variants of all three have also been applied to CL in DST with some degree of success Liu et al. (2021); Madotto et al. (2021); Zhu et al. (2022). Figure 1: _Left_: Previous work sought to enhance continual learning for DST while treating it as a structured text generation task, which enforces memorization of service-specific outputs that is not conducive to continual learning. _Right_: Instead, we reformulate DST into a bundle of granular example-guided question answering tasks such that data from new services effectively becomes additional training data for learning general example-guided question answering. Most previous work has focused on improving CL performance with domain-specific inputs or outputs, a paradigm illustrated on the left side of Figure 1. This approach introduces a large distribution shift from one domain to another, since for each domain the model needs to memorize which domain-specific slots to predict values for. However, DST can become a significantly more consistent task across domains by simply reformulating the DST dataset as a bundle of example-guided question answering tasks.
The outcome of the reformulation, which we denote as _Dialogue State Tracking as Example-Guided Question Answering_ (DST-EGQA), is that the DST task becomes more granular, easier, and more consistent across domains, as the training becomes about learning to learn how to use examples to answer questions for a single slot at a time (right side of Figure 1), rather than trying to predict domain-specific structured outputs all at once without any explicit guidance (left side of Figure 1). Motivated by this insight, we hypothesize that DST-EGQA will benefit continual learning because it promotes generalizability while performing DST. We demonstrate that this is indeed the case, as it leads to significant gains in CL performance without using any of the aforementioned approaches or data augmentation methods. Specifically, we transform DST into the TransferQA Lin et al. (2021) format and add examples retrieved by a retriever that is trained to identify turns that result in similar dialogue state updates Hu et al. (2022). Our approach obviates complex partitioning of the training set into target samples and retrieval samples, as we find that we can double dip on the train set as both target samples and retrieval database. In addition, we experiment with a wide array of retrievers and find that models trained to perform DST-EGQA can be effective even with lower quality retrievers by intentionally training with subpar examples, such that the model learns when to leverage good examples and ignore bad ones. Lastly, we simply tweak the sampling approach for memory replay to sample at the dialogue level instead of the turn level and achieve significant gains in CL performance even with a single dialogue sample, resulting in state-of-the-art performance on the Schema Guided Dialogue (SGD) dataset Zhu et al. (2022).2 Footnote 2: We release our code at [https://github.com/facebookresearch/DST-EGQA](https://github.com/facebookresearch/DST-EGQA). In summary, our main contributions are: 1.
We show that refactoring DST as a granular example-guided question answering task (DST-EGQA) alone can significantly improve continual learning performance by simply enhancing task consistency across domains. 2. We propose a simple but highly effective dialogue-level sampling strategy for choosing memory samples that leads to state-of-the-art performance when combined with DST-EGQA. 3. We share a thorough analysis of the parameters relevant to DST-EGQA to establish its effectiveness, robustness, and limitations as a method for continual learning.

## 2 Dialogue State Tracking as Example-Guided Question Answering (DST-EGQA)

### Motivation and Goal

DST as question answering. Dialogue state tracking (DST) is defined as estimating beliefs over the user's possible goals at every dialogue turn, and it was traditionally formulated as a slot-filling task Wu et al. (2020); Heck et al. (2020), and more recently as a structured text generation task Hosseini-Asl et al. (2020); Peng et al. (2021); Su et al. (2022), shown as (0) in Figure 2. However, we can also achieve the same outcome as domain-specific structured outputs through natural text by reformulating DST as a bundle of per-slot questions Gao et al. (2019); Lin et al. (2021) to answer, or sentences to complete using each slot description Lin et al. (2021), such as (1) in Figure 2. Fine-tuning with in-context examples. Further generalization can be achieved by transforming domain-specific question answering to domain-agnostic, example-guided question answering. This kind of task reformulation, as demonstrated by Wang et al. (2022); Min et al. (2022); Ouyang et al. (2022), enables the development of models that achieve state-of-the-art zero-shot performance and generalizability even with much smaller models by explicitly fine-tuning with instructions and in-context examples.
Since most recent work focusing on generalizability and zero-shot models leverages generation models because of their open vocabulary, we also place our focus on generation models. Goal. With DST-EGQA, we apply these two main ideas to continual learning for DST: the process of sequentially training on a stream of \(n\) domains \(\{T_{1}...T_{n}\}\) with the goal of minimal degradation, i.e. _catastrophic forgetting_, of the peak performance that was achieved when training on each \(T_{i}\).

### Method

Here, we define our approach more formally and provide details on how we leverage the two main motivations to achieve our goal. The first step of DST-EGQA is to transform DST into question answering as shown in (1) in Figure 2. Here, we leverage the TransferQA (Lin et al., 2021) format. Given a user's utterance of a turn \(u_{t}\) in a dialogue \(\{u_{1},...,u_{n}\}\) of domain \(T\) and its corresponding dialogue state \(DS\) expressed as slot key-value pairs \(\{(s_{t,i},v_{t,i})\mid i\in I\}\) for \(I=\{1,...,N_{T}\}\), where \(N_{T}\) is the number of slots of interest for domain \(T\), each \(s_{t,i}\) is transformed to a question with a manually pre-defined template \(Q:s_{i}\to q_{i}\). The overhead of creating these templates is minimal as it only has to be done once and is as simple as transforming the name slot in the hotel domain to a natural text question equivalent _"What is the name of the hotel that the user wants?"_. Thus, with dialogue history until turn \(t\) as \(H_{t}=\{u_{1},...,u_{t}\}\), the original single input-output pair of \[H_{t}\oplus T\rightarrow\{(s_{t,i}=v_{t,i})\mid i\in I\} \tag{1}\] becomes \(N_{T}\) granular question answer pairs: \[\{Q(s_{t,i})\oplus H_{t}\to v_{t,i}\mid i\in I\} \tag{2}\] where \(\oplus\) denotes simple text concatenation.
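Concretely, the mapping from Equation 1 to Equation 2 can be sketched as below; the template strings are illustrative stand-ins for the manually defined templates \(Q\) (only the hotel name template is quoted in the text):

```python
# Hypothetical slot-to-question templates; the paper defines these manually, once per slot.
TEMPLATES = {
    "hotel-name": "What is the name of the hotel that the user wants?",
    "hotel-area": "What area is the user looking for a hotel in?",
}

def to_qa_pairs(history, domain_slots, dialogue_state):
    """Eq. (2): one (question + history, answer) pair per slot; empty slots -> 'none'."""
    pairs = []
    for slot in domain_slots:
        question = TEMPLATES[slot]
        answer = dialogue_state.get(slot, "none")   # models learn to emit 'none' for empty slots
        pairs.append((question + " " + history, answer))
    return pairs
```

A single turn thus yields \(N_{T}\) granular training instances instead of one structured-output instance.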
A difference from the original TransferQA approach is that, since we will be fine-tuning the model, we skip the step of training with external question answering datasets and do not take any special measures to handle none slots, i.e. empty slots, because our models will learn to generate none as the answer for empty slots. Then, motivated by the results from Tk-instruct Wang et al. (2022) and MetaICL (Min et al., 2022), which showed that even relatively small models can generalize well if explicitly trained to follow instructions with examples, we explore whether we can prevent a model from overfitting to domain-specific questions and instead continually develop example-based question answering capabilities to enhance continual learning performance. Therefore, we extend Equation 2 to include in-context examples that are retrieved from the training set, as shown in (2) in Figure 2. To retrieve relevant examples, we use \(H_{t}\) to form a query that retrieves the top \(k\) samples \(\{H_{t}^{\prime j}|j\leq k\}\) to use as in-context examples.

Figure 2: DST-EGQA overview. We factor (0) the original dialogue state tracking task into (1) a granular question answering task with the TransferQA format (Lin et al., 2021) and make it (2) domain-agnostic by pairing it with similarly formatted retrieved examples that are provided in-context, such that the domain shift is reduced further to an example-guided question answering task. In TransferQA, the original dialogue state is mapped to templated questions that correspond to each slot key and value pair, which in aggregate request the equivalent information. For DST-EGQA, we build on TransferQA and use the target dialogue as the query to retrieve similar examples from the database, which is the same as the training set excluding the target.

By inserting the retrieved examples and their relevant
slot values for the each slot question \(q_{i}\), the final format becomes: \[\{Q(s_{t,i})\oplus\{H_{t}^{{}^{\prime}j}\oplus v_{t,i}^{{}^{\prime}j}|j\leq k\} \oplus H_{t}\to v_{t,i}\mid i\in I\} \tag{3}\] Throughout this work, we use \(k=1\) unless otherwise specified. The details of the retrieval process is described in Section 2.3. ### In-context Example Retrieval The goal of the retrieval system is to find an example turn \(H_{t^{\prime}}^{\prime}\) that requires similar reasoning for answering the target sample \(H_{t}\), such that fine-tuning with it as an in-context example will help enable the model to apply the same reasoning for answering the question for the target sample. In Hu et al. (2022), the authors found that instead of matching for dialogue state overlap, matching for similar dialogue state change \(\Delta DS\), i.e. state change similarity (SCS), that occurs at turn \(t\) returns more relevant examples. State changes are simply a subset of \(DS\) that is different from the previous turn: \(\Delta DS=\{(s_{t,i},v_{t,i})\mid i\in I,v_{t,i}\neq v_{t-1,i}\}\). We found that computing similarity with this definition of state change results in many ties, so we make minor modifications by including the \(\Delta DS\) operations, e.g. INSERT, DELETE, and UPDATE, as part of the slot key: \(\Delta DS_{ours}=\{(s_{1}\oplus o_{1},v_{1}),...(s_{m}\oplus o_{m},v_{m})\}\), where \(o\) is the slot operation. To resolve the remaining ties, we compute similarity using the last user and bot utterance pair and BM25 (Robertson et al., 2009) as the second-level re-ranker.3 With our changes, we were able to observe a much better top \(k=1\) match, which we verified manually with 100 random samples. We denote examples retrieved with this new SCS+BM25 score as the _Oracle_ because getting \(\Delta DS\) requires knowing the DS to predict ahead of time, and therefore cannot be used at test time. 
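The operation-tagged state change \(\Delta DS_{ours}\) can be sketched as follows; the exact operation labels attached to the slot keys are an illustrative assumption:

```python
def state_change(prev_state, curr_state):
    """Operation-tagged dialogue state change: slot keys are suffixed with the
    Delta DS operation (INSERT/UPDATE/DELETE) to break similarity ties."""
    delta = {}
    for slot, value in curr_state.items():
        if slot not in prev_state:
            delta[slot + "-INSERT"] = value
        elif prev_state[slot] != value:
            delta[slot + "-UPDATE"] = value
    for slot, value in prev_state.items():
        if slot not in curr_state:
            delta[slot + "-DELETE"] = value
    return delta
```

Two turns with the same slots but different operations (e.g. inserting vs. updating an area value) then produce distinct state changes, which is what reduces ties relative to the original SCS definition.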
However, the Oracle score is useful for training a retriever that can learn to identify similar \(\Delta DS\) and for estimating the upper bound for DST-EGQA. Footnote 3: Refer to Section A.1 for the details of the original definition of state change similarity and the reasoning behind our modification details. Using the Oracle score, for each sample in the training set, we calculate its similarity with other training samples and select the top 200 samples. From the selected samples, we pair the top ten and bottom ten as hard positive and hard negative samples, respectively, to train a SentenceBERT-based (Reimers and Gurevych, 2019) retriever using contrastive loss. We call the resulting retriever as IC-DST-ret. This is the same configuration for creating the dataset that was used to train the original retriever used for IC-DST, but instead of using \(x\%\) of the entire training data, we use the entire training set of the first domain \(T_{1}\) to train separate retrievers for each of the five domain orderings. We impose this constraint such that we conduct our experiments under the practical assumption that we are only provided data for \(T_{1}\) at the beginning and we do not want to extend the continual learning problem for training the retriever. More details of IC-DST-ret's training procedure can be found in Section A.2. We also experiment with simpler retrieval techniques as baselines to our IC-DST retriever: (_i_) BM25 (Robertson et al., 2009), (_ii_) GPT: OpenAI's text-embedding-ada-002 model (_iii_) SentenceBERT (Reimers and Gurevych, 2019): the all-mpnet-base-v2 model4 (_iv_) and original IC-DST retriever (orig. IC-DST-ret): the retriever from Hu et al. (2022) that was trained with the original SCS formulation and pairs created from the MultiWOZ2.1 dataset (Eric et al., 2020). We also evaluate with random retrieval as a control. With the exception of the orig. 
IC-DST-ret, which was trained to identify similarity with the last turn's dialogue state and last utterance pairs between the bot and user: \(\{(s_{t-1,i}=v_{t-1,i})\mid i\in I\}\oplus u_{t-1}\oplus u_{t}\), the query and key of the database uses only the last utterance pairs: \(u_{t-1}\oplus u_{t}\). We found this approach to be better as it diminishes the undesirably high similarity assigned to examples from the same dialogue that have the same previous dialogue state. Footnote 4: [https://www.sbert.net/docs/pretrained_models.html](https://www.sbert.net/docs/pretrained_models.html) ## 3 Experimental Setup ### Data We use the continual learning setup proposed by Zhu et al. (2022), which uses 15 single-domains from the Schema Guided Dialogue dataset (Rastogi et al., 2020), and aggregate our results over the same five domain orders to make the most reliable comparisons with their results. Comparing results with the same order is crucial as we find that results can have significant variance depending on the chosen domains and their order. For multi-task training, there is only a single permutation, and therefore we aggregate results over runs with three different seed values. Note that our formulation described in 2.2 shows that we are operating under the assumption that the domain of interest will be known ahead of time. ### Evaluation DST performance is mainly measured by joint goal accuracy (JGA), which measures the percentage of turns that all slot values were correctly predicted. For CL, given JGA for domain \(i\) after training up to the \(t^{\text{th}}\) domain \(a_{t,i}\) and the total number of domains \(T\), we compare our approaches with three metrics from Zhu et al. 
(2022): _(i)_ Average JGA \(=\dfrac{1}{T}\sum_{i=1}^{T}a_{T,i}\), the average of JGA on each domain after training on all domains in the continual learning setup, _(ii)_ Forward Transfer (FWT) \(=\dfrac{1}{T-1}\sum_{i=2}^{T}a_{i-1,i}\), how much training on the current domain boosts JGA on future unseen domains, and _(iii)_ Backward Transfer (BWT) \(=\dfrac{1}{T-1}\sum_{i=1}^{T-1}\left(a_{T,i}-a_{i,i}\right)\), how much training on the current domain reduced JGA on previously seen domains. We place the most importance on Average JGA, while FWT and BWT provide additional signal on how different approaches provide more transferability, hence task consistency, between domains.

| Method | Retriever | Avg. JGA | FWT | BWT | +Memory | +Params | +Reg. |
|---|---|---|---|---|---|---|---|
| SimpleTOD (2020) | - | 14.4 (2.7) | 7.1 (1.0) | -42.5 (2.4) | - | - | - |
| EWC (2017) | - | 13.9 (1.1) | 8.4 (0.9) | -50.8 (4.3) | ✓ | ✓ | ✓ |
| Memory (2021) | - | 58.6 (3.5) | 10.9 (0.5) | -3.2 (2.3) | ✓ | - | - |
| Adapter (2021) | - | 49.8 (1.7) | - | - | - | ✓ | - |
| CPT (2022) | - | 61.2 (2.5) | 13.7 (0.8) | **0.5 (0.4)**† | ✓ | ✓ | ✓ |
| DST-EGQA | - | 43.2 (3.4) | 14.1 (1.9) | -31.0 (4.2) | - | - | - |
| + Memory | - | 59.8 (1.6) | 15.6 (1.7) | -12.8 (2.0) | ✓ | - | - |
| + Dialogue Memory | - | 64.2 (0.8) | 15.0 (2.1) | -7.4 (2.2) | ✓ | - | - |
| DST-EGQA | IC-DST-ret | 54.1 (3.3) | **22.8 (1.8)** | -22.3 (4.5) | - | - | - |
| + Dialogue Memory | IC-DST-ret | **68.9 (0.3)**† | 21.2 (1.5) | -6.1 (1.7) | ✓ | - | - |
| DST-EGQA | Oracle | 55.5 (3.5) | 23.6 (2.1) | -19.1 (4.2) | - | - | - |
| + Dialogue Memory | Oracle | 69.3 (1.0) | 22.5 (1.8) | -5.9 (1.9) | ✓ | - | - |
| CPT Multi-task (2022) | - | 64.0 (1.9) | - | - | - | ✓ | ✓ |
| DST-EGQA Multi-task | - | 74.2 (1.8) | - | - | - | - | - |

Table 1: CL metric results and reliance on other continual learning techniques. We compare models sequentially trained on 15 tasks from the SGD dataset and aggregate results across five different domain permutations; standard deviations are given in parentheses. DST-EGQA achieves the best results without any additional parameters or regularization methods. The last two rows provide the multi-tasking results, which serve as an upper bound. In this table, results with retrievers are with a single in-context example and the indicated retriever is used at training and test time, while the Oracle retriever is used for the validation step. All rows that use memory are with \(M=50\). † indicates statistically significant at \(p<0.05\) with respect to the next best comparable value.

### Baselines

We replicate the baseline results from Zhu et al. (2022) using their implementation, which includes approaches from Madotto et al. (2021):

* SimpleTOD (Hosseini-Asl et al., 2020): performs DST as a structured text generation task, predicting the full state as a single sequence. As was done in Zhu et al. (2022), we modify the SimpleTOD format to append the domain name at the end of the dialogue history as described in Equation 1.
* Memory: randomly select \(M\) turns from the training data for each previous domain and include them in the current domain's training data.
* EWC: use the same samples selected for memory replay to regularize with the Fisher information matrix (Kirkpatrick et al., 2017).
* AdapterCL (Madotto et al., 2021): freeze the base model and train parameter-efficient adapters for each domain with a number of weights equivalent to 2% of that of the pretrained model.
* Continual Prompt Tuning (Zhu et al., 2022): freeze the base model and continually train soft prompts after reformulating DST as a
masked-span recovery task (Raffel et al., 2020). We include their best results, which take advantage of a memory buffer for replay and for memory-guided backward transfer, a form of regularization that computes gradients on the memory samples and blocks updates that would increase the current model's loss on them. For DST-EGQA, we compare various configurations to better understand the strengths and weaknesses of our approach. We vary the retriever used during training and combine with other memory replay strategies. We also report CPT Multi-task and DST-EGQA Multi-task to provide the multi-tasking upper-bound performance for average JGA. ### Technical details We conduct our experiments with the T5-small model (Raffel et al., 2020). We train with a single GPU using the AdamW optimizer, a learning rate of 1e-4, and a batch size of 16. We train on each domain for ten epochs without early stopping. We select the checkpoint with the best validation set performance when moving on to the next domain. Our experiments are run on V100, A40, and A100 GPUs, based on availability.5 Footnote 5: Our preliminary experiments with different GPU types but otherwise identical configurations showed that the choice of GPU introduces minimal variability to the final result. ## 4 Experiments and Analysis ### Main results TransferQA's format is more CL-friendly. As shown in the first row after CPT in Table 1, transforming the DST task from prior work (Equation 1) to that of granular question answering using the TransferQA (Equation 2) format already produces a dramatic improvement in CL performance, increasing average JGA from \(14.4\) to \(43.2\), and also improving on both FWT and BWT. Example-guided question answering further enhances CL performance. The subsequent rows for DST-EGQA show that fine-tuning with in-context examples can further enhance all CL metrics by a large margin.
Most notable are the boosts in FWT, on which memory replay has an almost negligible effect. Augmenting DST-EGQA with memory replay leads to even larger boosts, even exceeding the CPT Multi-task model, with most gains coming from BWT, which is expected with memory replay methods. Using the Oracle retriever at test time actually only leads to statistically insignificant improvements, indicating that IC-DST-ret can retrieve examples that are on par with the Oracle examples. Lastly, we can see that the relative gains in Average JGA and BWT from memory replay become less pronounced with models trained with in-context examples, indicating that memory replay and example-guided question answering have overlapping gains. Double-dipping the training set as a retrieval database does not lead to overfitting. It is important to note that, because our retrieval methods are commutative, a target sample that is paired with an example will serve as an example when the example becomes the target sample. Therefore, the answers for all training samples are seen as part of the context during training with our setup described in Section 2.3. This raises the concern that the model could simply memorize the answers for all samples rather than learn generalizable question answering. Interestingly, this does not seem to be the case, as training in this setup leads to improved or on-par final test set performance compared with training without any examples. This implies that our approach does not impose the additional data constraint of having to split the training set into dedicated training samples and retrieval samples for it to be effective. However, not shown in Table 1 is that DST-EGQA is sensitive to the training dynamics (Section 4.2) and the quality of the retrieved examples (Section 4.3).
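As a concrete reference for the numbers reported in these tables, the three CL metrics defined in the Evaluation section can be computed directly from the per-domain accuracy matrix. A minimal sketch (not the authors' code), with `a[t][i]` holding JGA on domain `i` after training on the `t`-th domain (0-indexed here):

```python
def cl_metrics(a):
    """Average JGA, FWT, and BWT from an accuracy matrix a[t][i]."""
    T = len(a)
    # Average JGA: mean accuracy over all domains after training on the last one.
    avg_jga = sum(a[T - 1][i] for i in range(T)) / T
    # FWT: accuracy on each domain measured just before training on it.
    fwt = sum(a[i - 1][i] for i in range(1, T)) / (T - 1)
    # BWT: how much final accuracy on earlier domains changed after later
    # training (negative values indicate forgetting).
    bwt = sum(a[T - 1][i] - a[i][i] for i in range(T - 1)) / (T - 1)
    return avg_jga, fwt, bwt

# Toy 3-domain run: perfect on the current domain, some forgetting afterwards.
a = [
    [1.0, 0.2, 0.1],
    [0.6, 1.0, 0.3],
    [0.5, 0.7, 1.0],
]
avg_jga, fwt, bwt = cl_metrics(a)
# avg_jga = (0.5 + 0.7 + 1.0)/3, fwt = (0.2 + 0.3)/2,
# bwt = ((0.5 - 1.0) + (0.7 - 1.0))/2
```

The negative BWT of the toy run mirrors the forgetting pattern visible in the baselines of Table 1.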
### Effect of Training Dynamics In a practical setting without an ideal retriever and a large enough database that would allow us to find a perfect example for each case seen during test time, it is important for the model to be able to leverage relevant examples and ignore irrelevant ones. To become more robust to these realistic circumstances, it may be useful to intentionally mix in irrelevant examples during training for DST-EGQA, and therefore we vary the combination of IC-DST-ret and Oracle used for training, validation, and test time. \begin{table} \begin{tabular}{l l l l l l} \hline \hline Train & Dev & Test & Avg. JGA & FWT & BWT \\ \hline - & - & - & 43.2\({}_{4.1}\) & 14.1\({}_{5.3}\) & -31.0\({}_{4.2}\) \\ \hline IC-DST-ret & Oracle & & 43.1\({}_{2.1}\) & 24.1\({}_{4.1}\) & -31.9\({}_{4.4}\) \\ Oracle & Oracle & & IC-DST-ret & 54.1\({}_{4.2}\) & 29.8\({}_{2.8}\) & -22.3\({}_{4.5}\) \\ Oracle & Oracle & & 48.5\({}_{5.2}\) & 19.6\({}_{6.6}\) & -27.1\({}_{4.4}\) \\ \hline Oracle & Oracle & Oracle & 53.7\({}_{4.4}\) & 24.1\({}_{1.6}\) & -21.3\({}_{4.1}\) \\ IC-DST-ret & & & 55.5\({}_{5.5}\) & 23.6\({}_{6.1}\) & -19.1\({}_{4.6}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Train-validation-test retrieval method comparison. Keeping the Training and Test-time retrieval methods the same while keeping the development set as the Oracle leads to the best results, except for the last row, which requires knowing the correct answer ahead of time. \({}^{\dagger}\) indicates statistically significant at \(p<0.05\) with the next best value. Results in Table 2 support our hypothesis, showing that aligning the retrieval method from training time with test time leads to the best performance. Interestingly, the best performance is achieved by using the Oracle retriever at validation time, shown by the large gap between IC-DST-ret \(\rightarrow\) IC-DST-ret \(\rightarrow\) IC-DST-ret and IC-DST-ret \(\rightarrow\) Oracle \(\rightarrow\) IC-DST-ret (second and third row).
This is somewhat surprising, given that one may expect that selecting a checkpoint that performs the best in the same setting as test time would lead to better test time performance. ### Retrieval method sensitivity The findings from Section 4.2 raise the question of whether training with other retrievers, which may provide a different mixture of good and bad examples, can further boost DST-EGQA's performance. We apply all the retrievers defined in Section 2.3, using the same training dynamics that previously led to the best results, to examine the effectiveness of each. Interestingly, our IC-DST-ret model seems to capture this balance the most effectively, as it is significantly better than all other retrieval methods. ### Effect of Memory Type and Size As hinted by the results in Table 1, dialogue-level sampling seems to be a superior sampling strategy to turn-level sampling. We take a deeper dive into the relationship between the two sampling techniques and how both approaches scale with memory budgets by varying the memory budget sizes to 10, 50, and 100. Table 4 shows that dialogue-level sampling achieves significantly better performance than turn-level sampling at every equivalent memory budget size, and is even on par with turn-level sampling at the next larger budget size. This is likely because dialogue-level sampling yields a more comprehensive set of samples that covers a wider diversity of dialogue state updates at these smaller memory budget sizes. As the memory budget becomes larger, however, the gap between turn-level sampling and dialogue-level sampling diminishes, since both methods converge to multi-task training when the memory budget is unlimited. ### Effect of number of examples Including only one example to learn from in-context creates a single point of failure, which is especially risky for suboptimal retrieval methods.
Thus, having additional examples to learn from can help mitigate this risk, and we repeat our experiments using multiple in-context examples. \begin{table} \begin{tabular}{l l l l l} \hline \hline Size & Method & Avg. JGA & FWT & BWT \\ \hline - & - & \(43.2_{3.4}\) & \(14.1_{1.9}\) & -31.0\({}_{4.2}\) \\ \hline \multirow{3}{*}{\(10\)} & Turn & \(50.1_{8.0}\) & \(15.0_{1.4}\) & -23.7\({}_{4.4}\) \\ & Dialogue & \(59.1_{1.5}\) & \(15.2_{2.7}\) & -14.7\({}_{2.3}\) \\ \hline \multirow{3}{*}{\(50\)} & Turn & \(59.8_{1.6}\) & \(15.6_{1.7}\) & -12.8\({}_{2.0}\) \\ & Dialogue & \(64.2_{0.8}\) & \(15.0_{2.1}\) & -7.4\({}_{2.2}\) \\ \hline \multirow{3}{*}{\(100\)} & Turn & \(63.9_{1.2}\) & \(15.6_{1.7}\) & -8.7\({}_{1.3}\) \\ & Dialogue & \(66.8_{1.5}\) & \(15.4_{2.1}\) & -3.3\({}_{2.5}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Memory analysis for the DST-EGQA. Sampling at the dialogue-level is much more effective than sampling at the turn-level, especially for a constrained memory budget. \begin{table} \begin{tabular}{l c c c c} \hline \hline Train, Test & Dev & Avg. JGA & FWT & BWT \\ \hline - & - & \(43.2_{3.4}\) & \(14.1_{1.9}\) & -31.0\({}_{4.2}\) \\ \hline Random & & \(45.5_{4.5}\) & \(14.2_{2.2}\) & -31.4\({}_{5.1}\) \\ BM25 & & \(46.7_{3.3}\) & \(21.6_{1.6}\) & -20.1\({}_{5.0}\) \\ SentBERT & & \(46.2_{4.0}\) & \(17.3_{2.0}\) & -29.7\({}_{6.9}\) \\ GPT & & \(47.8_{2.9}\) & \(17.5_{2.4}\) & -27.0\({}_{8.7}\) \\ orig. IC-DST-ret & & \(49.2_{1.7}\) & \(19.9_{2.1}\) & -26.2\({}_{5.2}\) \\ IC-DST-ret (ours) & & \(\mathbf{54.1_{3.4}}^{3}\) & \(\mathbf{22.8_{1.8}}\) & -\(\mathbf{22.3_{4.5}}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Retrieval methods comparison. Although mixing in irrelevant examples can boost performance, our results show that lacking a reliable retrieval method at test time is critical to performance. Our IC-DST-ret model captures this balance the most effectively.
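A minimal sketch (not the authors' code) contrasting the two replay-sampling strategies compared in Table 4. Here `dialogues` is a hypothetical list of dialogues, each a list of turns, and the budget `M` is assumed to be counted in turns for both variants:

```python
import random

def turn_level_sample(dialogues, M, seed=0):
    """Sample M individual turns uniformly from the pool of all turns."""
    rng = random.Random(seed)
    turns = [t for d in dialogues for t in d]
    return rng.sample(turns, min(M, len(turns)))

def dialogue_level_sample(dialogues, M, seed=0):
    """Keep whole dialogues until the next one would exceed the budget,
    so the memory covers complete state-update trajectories."""
    rng = random.Random(seed)
    memory = []
    for i in rng.sample(range(len(dialogues)), len(dialogues)):
        if len(memory) + len(dialogues[i]) > M:
            break
        memory.extend(dialogues[i])
    return memory

# 5 dialogues of 3 turns each; each turn tagged (dialogue_id, turn_id).
dialogues = [[(d, t) for t in range(3)] for d in range(5)]
turn_mem = turn_level_sample(dialogues, M=6)      # 6 scattered turns
dial_mem = dialogue_level_sample(dialogues, M=6)  # 2 complete dialogues
```

Keeping full trajectories is what lets dialogue-level memory cover a wider diversity of state updates at small budgets.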
\begin{table} \begin{tabular}{c c c c c} \hline \hline Train, Test & \# Examples & Avg. JGA & FWT & BWT \\ \hline - & - & \(43.2_{3.4}\) & \(14.1_{1.9}\) & -31.0\({}_{4.2}\) \\ \hline \multirow{3}{*}{Random} & 1 & \(43.2_{8.6}\) & \(14.5_{1.7}\) & -33.5\({}_{7.7}\) \\ & 2 & \(45.2_{1.5}\) & \(15.5_{1.8}\) & -31.6\({}_{6.2}\) \\ & 3 & \(43.9_{6.2}\) & \(16.9_{1.6}\) & -31.4\({}_{6.7}\) \\ \hline \multirow{3}{*}{BM25} & 1 & \(45.9_{4.5}\) & \(20.3_{1.9}\) & -21.4\({}_{6.2}\) \\ & 2 & \(46.2_{4.1}\) & \(23.3_{1.6}\) & -17.1\({}_{7.5}\) \\ & 3 & \(47.0_{5.2}\) & \(20.8_{2.8}\) & -21.8\({}_{5.5}\) \\ \hline \multirow{3}{*}{IC-DST-ret} & 1 & \(54.1_{3.3}\) & \(22.8_{1.8}\) & -22.3\({}_{4.5}\) \\ & 2 & \(50.2_{3.7}\) & \(22.0_{1.8}\) & -29.3\({}_{5.2}\) \\ \cline{1-1} & 3 & \(48.0_{4.4}\) & \(21.8_{1.9}\) & -22.3\({}_{4.1}\) \\ \hline \multirow{3}{*}{Oracle} & 1 & \(55.7_{4.4}\) & \(21.2_{6.8}\) & -21.3\({}_{4.1}\) \\ \cline{1-1} & 2 & \(54.3_{3.0}\) & \(28.7_{2.6}\) & -18.5\({}_{5.6}\) \\ \cline{1-1} & 3 & \(53.9_{3.8}\) & \(30.5_{1.5}\) & -14.1\({}_{2.6}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Number of examples analysis. Small models are unable to leverage more than one example when training to do in-context learning. However, at least with small model sizes, DST models are not able to effectively leverage additional examples. This is not surprising for the Oracle retriever, where in most cases the top example is the best example that can be leveraged from the training set. ## 5 Related Work Continual Learning Continual learning prolongs the lifetime of a model by training it further with new incoming data without incurring the cost of _catastrophic forgetting_ McCloskey and Cohen (1989); French (1999). There are three main branches of continual learning: architecture-based methods, replay-based methods, and regularization-based methods. Architecture-based methods propose dynamically adding model weights when learning new data Fernando et al. (2017); Shen et al. (2019).
Replay-based methods mitigate catastrophic forgetting by keeping a small sample of the previous data as part of a memory budget to train with the new data Rebuffi et al. (2017); Hou et al. (2019). These methods mainly experiment with sampling strategies and memory budget efficiency. Lastly, regularization-based methods place constraints on how the model is updated during training with the new data such that its performance on previous data is maintained Kirkpatrick et al. (2017); Li and Hoiem (2018). Dialogue State Tracking Continual learning for DST has been explored by a series of recent works that applied a combination of the methods mentioned above. Liu et al. (2021) expanded on SOM-DST Kim et al. (2020) with prototypical sample selection for the memory buffer and multi-level knowledge distillation as a regularization mechanism. Madotto et al. (2021) applied various continual learning methods to end-to-end task-oriented dialogue models and found that adapters are most effective for intent classification and DST while memory is most effective for response generation. More recently, Zhu et al. (2022) proposed _Continual Prompt Tuning_ (CPT), which is most related to our work. CPT improves continual learning performance by finetuning soft prompts for each domain and reformulating DST to align with T5's masked-span recovery pretraining objective Raffel et al. (2020). Compared to CPT, we suggest a more granular reformulation to facilitate learning from examples and do not rely on any regularization or additional weights. Task reformulation and in-context learning Enhancing a model's generalizability to various tasks by reformulating their inputs/outputs to become more uniform has become an increasingly popular method for massive multi-task learning Aghajanyan et al. (2021), even for tasks that were considered distant from one another. T5 Raffel et al.
(2020) accelerated this movement by providing dataset or task-specific labels or minimal instructions to the inputs and then doing multi-task training. Building on T5, Sanh et al. (2022) and Wei et al. (2021) used a more elaborate and diverse set of instruction templates and showed that this can significantly boost zero-shot performance. Cho et al. (2022) applied a similar idea to a more selective set of pre-finetuning tasks before training on the target DST dataset to improve DST robustness. Tk-instruct Wang et al. (2022) takes a step further by scaling up the number of tasks included in T0 and also provides positive and negative examples in the context in addition to the instructions. Similarly, Min et al. (2022) introduced MetaICL, which explicitly trains a model with the few-shot in-context learning format used for large language models Brown et al. (2020), and showed that it achieves better in-context learning performance than larger models. Task reformulation has also been recently explored to help the model better understand the task at hand and reduce domain-specific memorization and thus boost zero-shot DST performance Li et al. (2021); Lin et al. (2021); Gupta et al. (2022); Zhao et al. (2022). ## 6 Conclusion In this paper, we propose _Dialogue State Tracking as Example-Guided Question Answering_ as a method for enhancing continual learning performance that factors dialogue state tracking into granular question answering tasks and fine-tunes the model to leverage relevant in-context examples to answer these questions. Our method is an effective alternative to existing continual learning approaches that does not rely on complex regularization, parameter expansion, or memory sampling techniques.
Analysis of our approach finds that even models as small as 60M parameters can be trained to perform in-context learning for continual learning, and that complementing it with memory replay, with samples randomly selected at the dialogue level, achieves state-of-the-art results compared to strong baselines.
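As a rough illustration of the granular reformulation summarized above (the exact Equation 2 template is not reproduced in this excerpt, so the tags below are hypothetical), casting DST into per-slot questions might look like:

```python
def serialize_slot_questions(history, domain, slots, example=None):
    """Hypothetical serialization: each slot of the target domain becomes its
    own question over the dialogue history, optionally prefixed with a
    retrieved in-context (question, answer) example pair."""
    prefix = f"[example] {example[0]} [answer] {example[1]} " if example else ""
    context = " ".join(history)
    return [
        f"{prefix}{context} [domain] {domain} [question] what is the value of {slot}?"
        for slot in slots
    ]

questions = serialize_slot_questions(
    ["user: book me a hotel in Cambridge for two nights"],
    "hotel", ["hotel-area", "hotel-stay"])
```

Each serialized question is then answered independently, which is what makes the task format uniform across domains.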
2310.06824
The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets
Large Language Models (LLMs) have impressive capabilities, but are prone to outputting falsehoods. Recent work has developed techniques for inferring whether a LLM is telling the truth by training probes on the LLM's internal activations. However, this line of work is controversial, with some authors pointing out failures of these probes to generalize in basic ways, among other conceptual issues. In this work, we use high-quality datasets of simple true/false statements to study in detail the structure of LLM representations of truth, drawing on three lines of evidence: 1. Visualizations of LLM true/false statement representations, which reveal clear linear structure. 2. Transfer experiments in which probes trained on one dataset generalize to different datasets. 3. Causal evidence obtained by surgically intervening in a LLM's forward pass, causing it to treat false statements as true and vice versa. Overall, we present evidence that at sufficient scale, LLMs linearly represent the truth or falsehood of factual statements. We also show that simple difference-in-mean probes generalize as well as other probing techniques while identifying directions which are more causally implicated in model outputs.
Samuel Marks, Max Tegmark
2023-10-10T17:54:39Z
http://arxiv.org/abs/2310.06824v3
The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets ###### Abstract Large Language Models (LLMs) have impressive capabilities, but are also prone to outputting falsehoods. Recent work has developed techniques for inferring whether a LLM is telling the truth by training probes on the LLM's internal activations. However, this line of work is controversial, with some authors pointing out failures of these probes to generalize in basic ways, among other conceptual issues. In this work, we curate high-quality datasets of true/false statements and use them to study in detail the structure of LLM representations of truth, drawing on three lines of evidence: 1. Visualizations of LLM true/false statement representations, which reveal clear linear structure. 2. Transfer experiments in which probes trained on one dataset generalize to different datasets. 3. Causal evidence obtained by surgically intervening in a LLM's forward pass, causing it to treat false statements as true and _vice versa_. Overall, we present evidence that language models _linearly represent_ the truth or falsehood of factual statements. We also introduce a novel technique, mass-mean probing, which generalizes better and is more causally implicated in model outputs than other probing techniques. ## 1 Introduction Despite their impressive capabilities, large language models (LLMs) do not always output true text (Lin et al., 2022; Steinhardt, 2023; Park et al., 2023). In some cases, this is because they do not know better. In other cases, LLMs apparently know that generated statements are false but output them anyway. For instance, Perez et al. (2022) demonstrate that LLM assistants output more falsehoods when prompted with the biography of a less-educated user. More starkly, OpenAI (2023) documents a case where a GPT-4-based agent gained a person's help in solving a CAPTCHA by lying about being a vision-impaired human. 
"I should not reveal that I am a robot," the agent wrote in an internal chain-of-thought scratchpad, "I should make up an excuse for why I cannot solve CAPTCHAs." We would like techniques which, given a language model \(M\) and a statement \(s\), determine whether \(M\) believes \(s\) to be true (Christiano et al., 2021). One approach to this problem relies on inspecting model outputs; for instance, the internal chain-of-thought in the above example provides evidence that the model understood it was generating a falsehood. An alternative class of approaches instead leverages access to \(M\)'s internal state when processing \(s\). There has been considerable recent work on this class of approaches: Azaria and Mitchell (2023), Li et al. (2023b), and Burns et al. (2023) all train probes for classifying truthfulness based on a LLM's internal activations. In fact, the probes of Li et al. (2023b) and Burns et al. (2023) are _linear probes_, suggesting the presence of a "truth direction" in model internals. However, the efficacy and interpretation of these results are controversial. For instance, Levinstein and Herrmann (2023) note that the probes of Azaria and Mitchell (2023) fail to generalize in basic ways, such as to statements containing the word "not." The probes of Burns et al. (2023) have similar generalization issues, especially when using representations from autoregressive transformers. This suggests that these probes may be identifying not truth, but other features which correlate with truth on their training data. In this work, we shed light on this murky state of affairs. We first **curate high-quality datasets of true/false factual statements** which are _uncontroversial_, _unambiguous_, and _simple_ (section 2). 
Then, working with the autoregressive transformer LLaMA-13B (Touvron et al., 2023) as a testbed, we study in detail the structure of LLM truth representations, drawing on multiple lines of evidence: * **PCA visualizations of LLM representations of true/false statements display clear linear structure** (section 3), with true statements separating from false ones in the top PCs (see figure 1). Although visually-apparent axes of separation do not always align between datasets (figure 3), we argue that this is compatible with the presence of a truth direction in LLM representations (section 3.2). * **Linear probes trained to classify truth on one dataset generalize well to other datasets** (section 4). For instance, probes trained only on statements of the form "\(x\) is larger/smaller than \(y\)" achieve near-perfect accuracy when evaluated on our Spanish-English translation dataset. We also show that this is not explained by LLMs linearly representing the difference between probable and improbable text. * **Truth directions identified by probes are causally implicated in model outputs** (section 5). By adding truth vectors into the residual stream above certain tokens, we can cause LLaMA-13B to treat false statements introduced in-context as true, and _vice-versa_. Improving our understanding of the structure of LLM truth representations also improves our ability to extract LLM beliefs: based on geometrical considerations, we introduce **mass-mean probing1**, a simple, optimization-free probing technique which may also be of interest outside of the study of LLM truth representations (section 4.1). We find that mass-mean probes generalize better and are more causally implicated in model outputs than other probing methods. Footnote 1: Mass-mean probing is named after the mass-mean shift intervention of Li et al. 
(2023) Overall, this work provides strong evidence that LLM representations contain a truth direction and makes progress on extracting this direction given access to true/false datasets. Our code, datasets, and an interactive dataexplorer are available at [https://saprmarks.github.io/geometry-of-truth/](https://saprmarks.github.io/geometry-of-truth/). ### Related work **Linear world models.** Substantial previous work has centered on the question of whether LLMs have world models decodable from their representations (Li et al., 2023; 2021; Abdou et al., 2021; Patel and Pavlick, 2022). Early work especially focused on whether individual neurons represent features (Wang et al., 2022; Sajjad et al., 2022; Bau et al., 2020), but features may more generally be represented by _directions_ in a LLM's latent space (i.e. linear combinations of neurons) (Dalvi et al., 2018; Gurnee et al., 2023; Cunningham et al., 2023; Elhage et al., 2022). If a model represents a feature along a single direction in its latent space, then we say the model _linearly represents_ the feature. Just as other authors have asked whether models have directions representing the concepts of "West Africa" (Goh et al., 2021) or "basketball" (Gurnee et al., 2023), we ask here whether there is a direction corresponding to the truth or falsehood of a factual statement. Figure 1: Top two principal components of the LLaMA-13B layer 12 residual stream representations of statements in our datasets. **Probing for truthfulness.** Other authors have trained probes to classify truthfulness from LLM activations, using both logistic regression (Azaria and Mitchell, 2023; Li et al., 2023b) and unsupervised techniques (Burns et al., 2023). This work differs from prior work in a number of ways. First, our datasets consist only of clear, simple, and unambiguous factual statements, unlike the intentionally misleading question/answer pairs of Li et al. (2023b); Lin et al. 
(2022), the complicated and inconsistently structured prompts of Burns et al. (2023), and the sometimes confusing statements of Azaria and Mitchell (2023); Levinstein and Herrmann (2023). Second, a cornerstone of our analysis is evaluating whether probes trained on one dataset transfer to other topically and structurally different datasets in terms of _both_ accuracy _and_ causal mediation of model outputs.2 Third, we go beyond the mass-mean shift interventions of Li et al. (2023b) by introducing and systematically studying the properties of mass-mean probes; this improved understanding allows us to perform causal interventions which are more localized than those of _loc. cit._ Footnote 2: Burns et al. (2023); Azaria and Mitchell (2023); Levinstein and Herrmann (2023) do test the transfer accuracy of probes (with mixed results), but do not perform any causal mediation experiments, even on their probes’ train sets. **Causal methods.** Accurate generalization of probes trained on one dataset to other datasets is an inherently _correlational_ observation, a lax standard of evidence for evaluating interpretability hypotheses (Hewitt and Liang, 2019; Belinkov, 2022). For instance, probes trained to identify a feature \(f\) could generalize well by instead relying on some feature \(f^{\prime}\neq f\) which is correlated with \(f\) on both the train and test sets. A more stringent standard of evidence is _causal_ evidence that targeted changes to a model's execution produce outputs consistent with the predictions of an interpretability hypothesis (Pearl, 2001; Vig et al., 2020; Meng et al., 2022; Li et al., 2023b). ## 2 Datasets In this work, we scope truth to mean the truth or falsehood of a factual statement. Appendix A further clarifies this definition and its relation to definitions used elsewhere. We introduce two classes of datasets, shown in table 1. 
Our **curated** datasets consist of statements which are _uncontroversial, unambiguous_, and _simple enough_ that LLaMA-13B is likely to understand whether they are true or false. For example, "The city of Zagreb is in Japan" (false) or "The Spanish word 'nariz' does not mean 'giraffe' " (true). Following Levinstein and Herrmann (2023), some of our datasets are formed from others by negating statements by adding "not" (e.g. neg_cities consists of negations of statements in cities) or by taking logical conjunctions (e.g. cities_cities_conj consists of statements of the form "It is the case both that s1 and s2" where s1 and s2 are statements from cities). In addition to our true/false datasets, we introduce another dataset, likely, consisting of nonfactual text where the final token is either the most likely or the 100th most likely completion, according to LLaMA-13B. We use this to disambiguate between the text which is true and text which is likely. For more details on the construction of these datasets, including statement templates, see appendix G. \begin{table} \begin{tabular}{l l r} name & topic & rows \\ \hline cities & locations of world cities & 1496 \\ sp\_en\_trans & Spanish-English translation & 354 \\ neg\_cities & negations of statements in cities & 1496 \\ neg\_sp\_en\_trans & negations of statements in sp\_en\_trans & 354 \\ larger\_than & Numerical comparisons: larger than & 1980 \\ smaller\_than & Numerical comparisons: smaller than & 1980 \\ cities\_cities\_conj & Conjunctions of two statements in cities & 1500 \\ cities\_cities\_disj & Disjunctions of two statements in cities & 1500 \\ \hline likely & Nonfactual text with likely or unlikely final tokens & 10000 \\ \hline companies\_true\_false & The headquarters and industries of companies & 1200 \\ common\_claim\_true\_false & Various claims & 4450 \\ counterfact\_true\_false & Various factual recall claims & 31960 \\ \hline \end{tabular} \end{table} Table 1: Our datasets
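The constructions above (negation by inserting "not", conjunction of two statements) can be sketched with assumed templates following the examples in the text; the exact templates live in appendix G, so the strings below are illustrative only:

```python
def city_statement(city, country, label):
    """A cities-style statement paired with its truth label."""
    return f"The city of {city} is in {country}.", label

def negate(statement, label):
    """neg_cities-style negation: insert "not"; the truth label flips."""
    return statement.replace(" is in ", " is not in "), not label

def conjoin(s1, l1, s2, l2):
    """cities_cities_conj-style conjunction: true iff both conjuncts are.
    The trailing periods of the conjuncts are stripped before templating."""
    return f"It is the case both that {s1[:-1]} and that {s2[:-1]}.", l1 and l2

s, y = city_statement("Zagreb", "Japan", False)
ns, ny = negate(s, y)  # "The city of Zagreb is not in Japan.", True
```

Deriving labels this way is what makes negated and conjunctive variants useful probes of whether a candidate "truth direction" tracks truth rather than surface form.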
Our **uncurated** datasets are more difficult test sets adapted from other sources. They contain claims which are sometimes ambiguous, malformed, controversial, or unlikely for the model to know the fact-of-the-matter about. The uncurated sets are companies_true_false, common_claim_true_false, and counterfact_true_false, adapted from Azaria & Mitchell (2023), Casper et al. (2023), and Meng et al. (2022), respectively. ## 3 Visualizing LLM representations of true/false datasets We begin our investigation with a simple technique: visualizing LLaMA-13B representations of our datasets using principal component analysis (PCA). We observe clear linear structure in the top two principal components (PCs) of our datasets, with true statements linearly separating from false ones. As explored in appendix B, this structure emerges rapidly in early-middle layers and emerges later for datasets of more structurally complicated statements (e.g. conjunctive statements). Here and throughout this paper, we extract residual stream activations over the final token of the input statements, all of which end with a period. This choice is discussed in appendix H. We also center the representations in each dataset by subtracting off their mean. In this section, we use the residual stream in layer 12, selected for being the earliest layer in which linear structure had emerged for all of our true/false datasets. We encourage readers to view our online dataexplorer at [https://saprmarks.github.io/geometry-of-truth/dataexplorer](https://saprmarks.github.io/geometry-of-truth/dataexplorer), which contains interactive versions of these visualizations. ### Key observations **True and false statements separate in the top few PCs** (figures 1 and 2).
Moreover, after projecting away these PCs, there remains essentially no linearly-accessible information for distinguishing true/false statements (appendix C). Figure 2: LLaMA-13B layer 12 residual stream representations of datasets, visualized after projection onto top PCs of other datasets. If \(\mathcal{D}_{\mathrm{PCA}}\) is the dataset given on the \(y\)-axis and \(\mathcal{D}_{\mathrm{plot}}\) is the dataset given on the \(x\)-axis, then the corresponding subplot is produced by computing the top two PCs of \(\mathcal{D}_{\mathrm{PCA}}\) and then projecting \(\mathcal{D}_{\mathrm{plot}}\) onto these PCs. Thus the subspace shown is the same across rows and the data shown is the same across columns. Given a dataset \(\mathcal{D}\), call the vector pointing from the false statement representations to the true ones the **naive truth direction (NTD)** of \(\mathcal{D}\).3 Footnote 3: Of course, there are many such vectors. In section 4 we will be more specific about which such vector we are discussing (e.g. the vector identified by training a linear probe with logistic regression). In this section, we will leave the notion informal to facilitate discussion. **NTDs of different datasets often align, but sometimes do not.** For instance, figure 2 displays our datasets separating along the first PC of cities. On the other hand, in figure 3 we see a stark failure of NTDs to align: the NTDs of cities and neg_cities are approximately _orthogonal_, and the NTDs of larger_than and smaller_than are approximately _antipodal_. In section 4, these observations will be corroborated by the poor generalization of probes trained on cities and larger_than to neg_cities and smaller_than. ### Hypotheses for explaining misalignment of naive truth directions Here we articulate hypotheses which would explain both (1) the visible linear structure apparent in each dataset individually and (2) the failure for NTDs of different datasets to align in general.
* **H1: LLM representations have no truth direction, but do have directions corresponding to other features which are sometimes correlated with truth.** For instance, LLaMA-13B might have linearly-represented features representing sizes of numbers, association between English words and their Spanish translations, and association between cities and their countries (Hernandez et al., 2023). This would result in each dataset being linearly separated, but NTDs only aligning when all their truth-relevant features are correlated. * **H2: LLMs linearly represent the truth of various types of statements, without having a unified truth feature.** The truth of negated statements, conjunctive statements, statements about comparisons, etc., may all be treated as distinct linearly-represented features. * **H3: Misalignment from correlational inconsistency (MCI): there is a truth direction as well as other linearly-represented features which correlate with truth on narrow data distributions; however these correlations may be inconsistent between datasets.** For instance, MCI would explain the center panel of figure 3 by positing that the negative \(y\)-direction represents truth and the positive \(x\)-direction represents some feature which is _correlated_ with truth on sp_en_trans and _anticorrelated_ with truth on neg_sp_en_trans. H1 is at odds with the results of sections 4 and 5: for H1 to hold, there would have to be a non-truth feature which is both correlated with truth across all of our datasets and causally mediates the way LLaMA-13B handles in-context true/false statements. We will also see in section 5 that directions identified by training probes on _both_ cities and neg_cities are more causally implicated in LLaMA-13B's processing of true/false statements. Thus, our work is overall suggestive of MCI. Figure 3: Top two principal components of the datasets \(\mathcal{D}^{+}\cup\mathcal{D}^{-}\) where \(\mathcal{D}^{+}\) and \(\mathcal{D}^{-}\) consist of opposite statements. Representations in \(\mathcal{D}^{+}\) and \(\mathcal{D}^{-}\) are independently centered by subtracting off their means; without this, there would also be an additional translational displacement between \(\mathcal{D}^{+}\) and \(\mathcal{D}^{-}\). Inset shows NTDs for \(\mathcal{D}^{+}\) and \(\mathcal{D}^{-}\). The orthogonality in the left and center plots emerges over layers; see appendix B. ## 4 Probing and generalization experiments In this section we train probes on datasets of true/false statements and test their generalization to other datasets. But first we discuss a deficiency of logistic regression and propose a simple, optimization-free alternative: **mass-mean probing**. We will see that mass-mean probes generalize better and are more causally implicated in model outputs than other probing techniques. ### Challenges with logistic regression, and mass-mean probing A common technique in interpretability research for identifying directions representing a feature is to train a linear probe with logistic regression (Alain & Bengio, 2018) on a dataset of positive and negative examples of the feature. In some cases, however, the direction identified by logistic regression can fail to reflect an intuitive best guess for the feature direction, even in the absence of confounding features. Consider the following scenario, illustrated in figure 4 with hypothetical data: * Truth is represented linearly along a direction \(\mathbf{\theta}_{t}\). * Another feature \(f\) is represented linearly along a direction \(\mathbf{\theta}_{f}\) which is _non-orthogonal_ to \(\mathbf{\theta}_{t}\).4 Footnote 4: As suggested by the _superposition hypothesis_ of Elhage et al. (2022), features being represented non-orthogonally in this way may be the typical case in deep learning. * The statements in our dataset have some variation with respect to feature \(f\), independent of their truth value.
We would like to recover the direction \(\mathbf{\theta}_{t}\), but logistic regression will fail to do so. Assuming for simplicity linearly separable data, logistic regression will instead converge to the maximum margin separator (Soudry et al., 2018) (the dashed magenta line in figure 4). Intuitively, logistic regression treats the small projection of \(\mathbf{\theta}_{f}\) onto \(\mathbf{\theta}_{t}\) as significant, and adjusts the probe direction to have less "interference" (Elhage et al., 2022) from \(\mathbf{\theta}_{f}\). A simple alternative to logistic regression which will recover the desired direction in this scenario is to take the vector pointing from the mean of the false data to the mean of the true data. In more detail, if \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}\) is a dataset of \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) with binary labels \(y_{i}\in\{0,1\}\), we set \(\mathbf{\theta}_{\mathrm{mm}}=\mathbf{\mu}^{+}-\mathbf{\mu}^{-}\) where \(\mathbf{\mu}^{+},\mathbf{\mu}^{-}\) are the means of the positively- and negatively-labeled datapoints, respectively. A reasonable first pass at converting \(\mathbf{\theta}_{\mathrm{mm}}\) into a probe is to define5 Footnote 5: In this work, we are interested in identifying truth _directions_, so we always center our data and use probes without biases. In other settings, we would instead set \(p_{\mathrm{mm}}(\mathbf{x})=\sigma(\mathbf{\theta}_{\mathrm{mm}}^{T}\mathbf{x}+b)\) for a tunable bias \(b\in\mathbb{R}\). \[p_{\mathrm{mm}}(\mathbf{x})=\sigma(\mathbf{\theta}_{\mathrm{mm}}^{T}\mathbf{x}),\] where \(\sigma\) is the logistic function. However, when evaluating on data that is independent and identically distributed (IID) to \(\mathcal{D}\), we can do better.
Letting \(\Sigma\) be the covariance matrix of the dataset \(\mathcal{D}^{c}=\{\mathbf{x}_{i}-\mathbf{\mu}^{+}:y_{i}=1\}\cup\{\mathbf{x}_{i}-\mathbf{\mu}^{-}:y_{i}=0\}\) formed by independently centering the positive and negative datapoints, we set \[p_{\mathrm{mm}}^{\mathrm{iid}}(\mathbf{x})=\sigma(\mathbf{\theta}_{\mathrm{mm}}^{T}\Sigma^{-1}\mathbf{x}).\] Figure 4: An illustration of a weakness of logistic regression. Truth is represented along the black direction, while an irrelevant feature that varies independently from truth is represented along the blue direction. Logistic regression finds the magenta direction. The effect of multiplying by \(\Sigma^{-1}\) is to tilt the decision boundary to accommodate interference from \(\theta_{f}\); in fact, we show in appendix F that under mild assumptions, \(\Sigma^{-1}\theta_{\mathrm{mm}}\) coincides on average with the direction found by logistic regression. Thus mass-mean probing provides a way to select a good decision boundary while - unlike logistic regression - also tracking a candidate feature direction which may be non-orthogonal to this decision boundary. Appendix E gives another interpretation of mass-mean probing in terms of Mahalanobis whitening. We call the probes \(p_{\mathrm{mm}}\) and \(p_{\mathrm{mm}}^{\mathrm{iid}}\) **mass-mean probes**. As we will see, \(p_{\mathrm{mm}}^{\mathrm{iid}}\) is about as accurate as logistic regression probes on the train set \(\mathcal{D}\), while \(p_{\mathrm{mm}}\) enjoys better generalization to other true/false datasets and is more causally implicated in model outputs than other probing techniques. ### Experimental set-up We evaluate the following techniques for eliciting the truth or falsehood of factual statements from LLaMA-13B. **Logistic regression**, as in Alain & Bengio (2018) but with fixed bias \(b=0\). **Mass-mean probing.** We use \(p_{\mathrm{mm}}^{\mathrm{iid}}\) when validating on held-out IID data and \(p_{\mathrm{mm}}\) otherwise.
**Contrast-Consistent Search (CCS)**, introduced in Burns et al. (2023). CCS is an unsupervised method: given _contrast pairs_ of statements with opposite truth values, CCS identifies a direction along which the activations of these statements are far apart. For our contrast pairs, we pair statements from cities and neg_cities, and from larger_than and smaller_than. **Logistic regression/mass-mean probing on the likely dataset.** This is used to benchmark our probes against probes trained only to classify statements as being likely/unlikely text. **Calibrated 5-shot prompting.** Given a dataset \(\mathcal{D}\), we construct a 5-shot prompt by sampling five statements6 and labels from \(\mathcal{D}\) and presenting them to the model in-context. We then append the remaining statements in \(\mathcal{D}\) to this prompt one-at-a-time and treat the model's predicted next token as its classification. See appendix I for example prompts. We then calibrate predictions so that half of the statements are labeled true/false; this improves performance by a few percentage points. Since performance is very sensitive to the 5-shot prompt used, we report the best of five randomly-generated 5-shot prompts. Footnote 6: The number \(n=5\) of shots was selected by a hyperparameter sweep on cities. **Logistic regression on the validation set (oracle).** This gives an upper-bound for the accuracy of a linear probe on the validation set. ### Results The results are shown in figure 5. We highlight some key observations. **Generalization accuracy is high across all techniques.** For instance, no matter the technique, training probes only on datasets of statements about numerical comparisons results in probes with \(95\%+\) accuracy on Spanish-English translation. The performance of the probes relative to calibrated 5-shot accuracies suggests that model outputs are being influenced by features other than the truth.
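The mass-mean construction being evaluated here is compact enough to sketch in a few lines of numpy. This is a toy illustration on made-up activations with an assumed truth feature along one coordinate, not the paper's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mass_mean_probe(X, y):
    """theta_mm = mu+ - mu-, and the IID-adjusted direction Sigma^{-1} theta_mm,
    with Sigma the covariance of the independently centered dataset D^c."""
    mu_pos, mu_neg = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
    theta_mm = mu_pos - mu_neg
    Xc = np.vstack([X[y == 1] - mu_pos, X[y == 0] - mu_neg])
    Sigma = np.cov(Xc, rowvar=False)
    return theta_mm, np.linalg.solve(Sigma, theta_mm)

# Made-up activations: a hypothetical truth feature along the first coordinate.
rng = np.random.default_rng(1)
n, d = 400, 8
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * (2 * y - 1)              # +2 if true, -2 if false

X = X - X.mean(axis=0)                    # center, as in the paper
theta_mm, theta_iid = mass_mean_probe(X, y)
acc = ((sigmoid(X @ theta_iid) > 0.5) == y).mean()   # p_mm^iid with zero bias
print(f"held-in accuracy of p_mm^iid: {acc:.3f}")
```

Note that `theta_mm` is the candidate feature direction that is kept for generalization and causal experiments, while the `Sigma`-adjusted direction is only used for the IID decision boundary.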
**CCS and mass-mean probing outperform logistic regression, with mass-mean probing doing best.** The average accuracies across the cities+neg_cities columns are 73%, 86%, and 84% for logistic regression, mass-mean probing, and CCS, respectively. **Probes trained on true/false datasets outperform probes trained on likely.** While probes trained on likely are clearly better than random on cities (a dataset where true statements are significantly more probable than false ones), they generally perform poorly. This is especially true on datasets where likelihood is negatively correlated (neg_cities, neg_sp_en_trans) or approximately uncorrelated (larger_than, smaller_than) with truth. This demonstrates that LLaMA-13B linearly encodes truth-relevant information beyond the plausibility of the text. ## 5 Causal intervention experiments In this section we perform experiments which measure the extent to which the probe directions identified in section 4 are causally implicated in model outputs. ### Experimental set-up Our goal is to cause LLaMA-13B to treat false statements introduced in context as true and _vice versa_. Consider the following prompt: The Spanish word 'jirafa' means 'giraffe'. This statement is: TRUE The Spanish word 'escribir' means 'to write'. This statement is: TRUE The Spanish word 'diccionario' means 'green'. This statement is: FALSE The Spanish word 'gato' means 'cat'. This statement is: TRUE The Spanish word 'aire' means 'silver'. This statement is: FALSE The Spanish word 'uno' means 'floor'. This statement is: We hypothesize that the truth value of the statement "The Spanish word 'uno' means 'floor'." is represented in the residual stream above two tokens: the final word (floor) and the end-of-sentence punctuation token ("."), bolded above.
Figure 5: Generalization accuracy of probes trained on LLaMA-13B layer 12 residual stream activations. The \(x\)-axis shows the train set, and the \(y\)-axis shows the test set. All probes are trained on 80% of the data. When the train set and test set are the same, we evaluate on the held-out 20%. Otherwise, we evaluate on the full test set. Thus, if \(\mathbf{\theta}\) is a candidate truth direction in the layer \(\ell\) residual stream, we intervene in the forward pass of LLaMA-13B by adding some multiple \(\alpha\mathbf{\theta}\), \(\alpha>0\), to the layer \(\ell\) residual stream above these tokens. More specifically, we pass the above prompt into LLaMA-13B to obtain layer \(\ell\) residual stream activations \(\mathbf{x}_{0},\mathbf{x}_{1},\dots,\mathbf{x}_{-1}\). If \(\mathbf{x}_{k}\) and \(\mathbf{x}_{k+1}\) are the activations above these two tokens, we add \(\alpha\mathbf{\theta}\) to \(\mathbf{x}_{k}\) and \(\mathbf{x}_{k+1}\) while leaving all other activations unchanged. We then allow the model to continue its forward pass as usual with the modified activations. We record the model's probabilities \(p(\texttt{TRUE}),p(\texttt{FALSE})\); our goal is to increase \(p(\texttt{TRUE})-p(\texttt{FALSE})\). Conversely, starting from a true statement we can _subtract_ a multiple \(\alpha\mathbf{\theta}\) from the corresponding token positions with the goal of _decreasing_ \(p(\texttt{TRUE})-p(\texttt{FALSE})\). We perform this intervention with \(\ell=10\), where \(\mathbf{\theta}\) is a direction extracted by one of the probes \(p\) in section 4. We normalize \(\mathbf{\theta}\) so that \(p(\mu^{-}+\mathbf{\theta})=p(\mu^{+})\) where \(\mu^{+},\mu^{-}\) are the mean representations of the true and false statements, respectively. Thus, from the perspective of \(p\), adding \(\mathbf{\theta}\) takes the average false statement to the average true statement.
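The normalization of \(\mathbf{\theta}\) can be made concrete with a toy linear probe. All vectors below are hypothetical stand-ins; in the experiments, \(p\) is a probe trained on LLaMA-13B activations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
d = 16
w = rng.normal(size=d)           # toy linear probe p(x) = sigmoid(w . x)
mu_true = rng.normal(size=d)     # stand-in mean representation of true statements
mu_false = rng.normal(size=d)    # stand-in mean representation of false statements
theta = rng.normal(size=d)       # candidate truth direction

# Rescale theta so that p(mu_false + theta) = p(mu_true). For a linear probe
# this reduces to requiring w . (mu_false + theta) = w . mu_true.
theta *= w @ (mu_true - mu_false) / (w @ theta)

p_shifted = sigmoid(w @ (mu_false + theta))
p_true = sigmoid(w @ mu_true)
print(p_shifted, p_true)
```

After this rescaling, adding \(1\cdot\mathbf{\theta}\) moves the average false representation onto the probe's value at the average true representation, which is why intervention strengths near \(\alpha=1\) are the benchmark in what follows.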
Effective intervention strengths closer to \(\alpha=1\) therefore indicate that \(\mathbf{\theta}\) better aligns with the model's truth direction. For the false\(\rightarrow\)true version of our experiment, we use the first five lines of the prompt above with each false statement in sp_en_trans appended one-at-a-time; we report the optimal intervention strength \(\alpha\) and the average \(p(\texttt{TRUE})-p(\texttt{FALSE})\) for that intervention strength. For the true\(\rightarrow\)false version, we do the same but using only true statements in sp_en_trans. ### Results **Mass-mean probe directions are highly causal; logistic regression directions are less causal.** This is most stark when causing LLaMA-13B to believe a true statement is false: our best intervention induces LLaMA-13B to swing its average prediction from TRUE with probability \(77\%\) to FALSE with probability \(89\%\). **Probes trained on likely have some effect, but it is small and inconsistent.** For instance, in the false\(\rightarrow\)true case, intervening along the logistic regression direction of likely has the opposite of the intended effect, so we leave it unreported. This reinforces our case that LLMs represent truth and not only text likelihood. **Training on statements and their negations results in directions which are more causal.** This provides evidence for the MCI hypothesis of section 3.2. **Interventions in other positions are ineffective.** We tested applying our interventions over the final two tokens of other statements in the prompt. This produced no effect. Thus, our intervention cannot work by simply adding in a "say true" direction. This also supports our hypothesis that LLaMA-13B represents truth over the final two tokens of a factual statement. ## 6 Discussion ### Limitations and future work Our work has a number of limitations. 
First, we focus on simple, uncontroversial statements, and therefore cannot disambiguate truth from closely related potential features, such as "commonly believed" or "verifiable" (Levinstein & Herrmann, 2023). Second, we only address how to identify a truth _direction_; we found empirically that the optimal bias for linear probes was under-determined by many of our training sets, and so we leave the problem of identifying well-generalizing biases to future work. Third, we studied only one model at a single scale, though we've checked that many of our results seem to hold for LLaMA-7B and LLaMA-30B as well. Finally, although the evidence in sections 4 and 5 sheds light on which of the hypotheses in section 3.2 is correct, uncertainty remains. \begin{table} \begin{tabular}{c|c|c||c|c} & \multicolumn{2}{c||}{false\(\rightarrow\)true} & \multicolumn{2}{c}{true\(\rightarrow\)false} \\ train set & \(\alpha\) & \(p(\texttt{TRUE})-p(\texttt{FALSE})\) & \(\alpha\) & \(p(\texttt{FALSE})-p(\texttt{TRUE})\) \\ \hline \hline no intervention & \(-\) & \(-0.45\) & \(-\) & \(-0.55\) \\ \hline cities (LR) & \(15\) & \(0.23\) & \(14\) & \(0.01\) \\ cities+neg\_cities (LR) & \(47\) & \(0.39\) & \(17\) & \(0.18\) \\ \hline cities (MM) & \(4\) & \(0.25\) & \(6\) & \(0.77\) \\ cities+neg\_cities (MM) & \(15\) & \(\mathbf{0.43}\) & \(9\) & \(\mathbf{0.79}\) \\ \hline cities+neg\_cities (CCS) & \(46\) & \(0.41\) & \(13\) & \(0.59\) \\ \hline likely (LR) & \(-\) & \(-\) & \(49\) & \(0.01\) \\ likely (MM) & \(7\) & \(0.23\) & \(15\) & \(0.19\) \\ \end{tabular} \end{table} Table 2: Results of intervention experiments. The train set column indicates the datasets and probing technique (logistic regression, mass-mean probing, or CCS) which was used to identify the truth direction. The \(\alpha\) column gives the scaling factor which was optimal in a sweep of \(\alpha\)’s. Probability differences are averaged over all statements in sp_en_trans.
### Conclusion In this work we conduct a detailed investigation of the structure of LLM representations of truth. Drawing on simple visualizations, correlational evidence, and causal evidence, we find strong reason to believe that there is a "truth direction" in LLM representations. We also introduce mass-mean probing, a simple alternative to other linear probing techniques which better identifies truth directions from true/false datasets. #### Acknowledgments We thank Ziming Liu and Isaac Liao for useful suggestions regarding distinguishing true text from likely text, and Wes Gurnee, Eric Michaud, and Peter Park for many helpful discussions throughout this project. We thank David Bau for useful suggestions regarding the experiments in section 5. We also thank Oam Patel, Hadas Orgad, Sohee Yang, and Karina Nguyen for their suggestions, as well as Helena Casademunt, Max Nadeau, and Ben Edelman for giving feedback during this paper's preparation.
2302.09906
Revealing production networks from firm growth dynamics
We study the correlation structure of firm growth rates. We show that most firms are correlated because of their exposure to a common factor but that firms linked through the supply chain exhibit a stronger correlation on average than firms that are not. Removing this common factor significantly reduces the average correlation between two firms with no relationship in the supply chain while maintaining a significant correlation between two firms that are linked. We then investigate if this observation can be used to reconstruct the topology of a supply chain network using Gaussian Markov Models.
Luca Mungo, José Moran
2023-02-20T11:07:17Z
http://arxiv.org/abs/2302.09906v4
# Revealing production networks from firm growth dynamics ###### Abstract We study the correlation structure of firm growth rates. We show that most firms are correlated because of their exposure to a common factor but that firms linked through the supply chain exhibit a stronger correlation on average than firms that are not. Removing this common factor significantly reduces the average correlation between two firms with no relationship in the supply chain while maintaining a significant correlation between two firms that are linked. We then demonstrate how this observation can be used to reconstruct the topology of a supply chain network using Gaussian Markov Models. ## I Introduction Fifty years ago, Wassily Leontief was awarded the Nobel prize in Economics for his _development of the input-output method and its application to important economic problems_.1 His input-output framework [2] views industries as nodes in a network of physical and monetary flows. Conservation laws for these flows lead, at economic equilibrium, to linear systems of equations linking the production of different industries, whose solutions show how differences in the output of an industry impact the output of any other economic sector. Footnote 1: See e.g. [https://www.nobelprize.org/prizes/economic-sciences/1973/summary/](https://www.nobelprize.org/prizes/economic-sciences/1973/summary/). Interestingly, Leontief took inspiration from the _tableau economique_ [1] of the physician-turned-economist Quesnay, a member of the physiocratic school of economic thought, which saw the economy much like a human body. To Quesnay, mapping the relationships within the economy was equivalent to studying human anatomy. Input-output tables were used to determine, for example, how much one should invest in each sector of an economy in order to increase the production of a given sector.2 Input-output analysis was in particular an important tool for central planners in the decades following the Second World War [3].
Footnote 2: This was indeed not an easy problem: to increase the production of steel, it is necessary to increase the production of coal, but coal extraction requires having steel. Input-output analysis provided tools to solve this conundrum. Later on, input-output analysis was used to understand the origins of macroeconomic fluctuations, starting with the seminal paper of Long and Plosser [4], in which the input-output network amplifies small shocks that can lead to system-wide crises. However, most of these analyses are conducted at a very coarse-grained level, in the sense that they attempt to model the different _sectors_ of the economy rather than modelling more granular constituents: there are 405 industries in the U.S. Bureau of Economic Analysis' most disaggregated input-output tables, while there are approximately 200 million firms worldwide. This is an unsettling remark, as recent literature [5; 6; 7] shows that fine-grained production networks play an important role in the propagation of shocks, and that aggregating firms into sectors can lead to a misestimation of risk and distress propagation. Detailed firm-level data will also be crucial to the coming of age of agent-based modelling, a promising approach to studying _out-of-equilibrium_ macro-economic phenomena [8], which recently matched the forecasting accuracy of more traditional methods [9; 10; 11]. Firm-level production data is thus very useful, but is also scarce [12]: the few datasets that are available only cover certain countries or certain categories of companies, leaving most of the global production network inaccessible. To tackle this problem, recent efforts have attempted to _reconstruct_ the production network, inferring the topology of the network using only partial, aggregate or related data.
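The steel-coal conundrum of footnote 2 is exactly the linear system input-output analysis solves: writing \(A\) for the matrix of technical coefficients and \(d\) for final demand, gross output satisfies \(x = Ax + d\), i.e. \(x = (I-A)^{-1}d\) (the Leontief inverse). A minimal sketch with a hypothetical two-sector economy:

```python
import numpy as np

# Hypothetical technical coefficients for a two-sector steel/coal economy:
# column j lists the inputs needed to produce one unit of good j.
A = np.array([[0.1, 0.4],    # steel needed per unit of steel, coal
              [0.3, 0.1]])   # coal needed per unit of steel, coal
d = np.array([100.0, 50.0])  # final demand for steel, coal

# Gross output: x = A x + d  =>  x = (I - A)^{-1} d  (the Leontief inverse).
x = np.linalg.solve(np.eye(2) - A, d)
print(f"gross output: steel={x[0]:.1f}, coal={x[1]:.1f}")
```

Gross output exceeds final demand in both sectors because each unit of steel demanded also requires coal, which in turn requires more steel, and so on; the Leontief inverse sums this whole chain of indirect requirements.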
For instance, [13] uses mobile phone data to reconstruct the supply chain network of an undisclosed European country, while [14] and [15] pioneered machine learning for link prediction in supply chains, leveraging topological features computed by hand or distilled automatically through Graph Neural Networks. A similar approach was used in [16] to predict links between firms using their financial, industrial, and geographical features. Additional efforts have been carried out to adapt maximum-entropy models [17; 18; 19; 20; 21; 22; 23], already popular for models of international trade [24; 25; 26; 27; 21], to the reconstruction of firm-level networks. The motivation of this research effort is that economic models conceived to represent the economy at the firm level require a good knowledge of the production network and should lead to a better understanding of economic dynamics and forecast. But the converse should also be true: supply chains are vital in a firm's production, and they should leave a trace on the dynamics of a firm, something that has been observed when considering natural disasters [6] or the dynamics of companies' market capitalisation [28]. Is it possible to work backwards from this, and infer the network topology from firm dynamics? The study of firm dynamics, through the statistical analysis of their growth rates, has a long history which dates back to the work of Gibrat [29]. Gibrat's model is a multiplicative growth model initially proposed to explain the distribution of firm sizes (proxied, e.g., by sales or number of employees). The model assumes that a firm grows by a random percentage of its current size from one period to the next. This random variable is thought of as being independent across firms, and was initially also modelled as having the same distribution for all companies. 
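Gibrat's model is simple enough to simulate directly. In the sketch below the growth volatility, horizon, and initial size are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
n_firms, n_periods = 5000, 200

# Gibrat's model: each period every firm grows by a random percentage,
# drawn independently across firms and identically distributed for all of them.
pct_growth = rng.normal(loc=0.0, scale=0.05, size=(n_firms, n_periods))
log_sizes = np.log(100.0) + np.cumsum(np.log1p(pct_growth), axis=1)

# Multiplicative growth makes each firm's log size a random walk, so after
# many periods the cross-section of sizes is approximately lognormal.
final = log_sizes[:, -1]
print(f"cross-sectional std of log size: {final.std():.2f}")
```

The independence of `pct_growth` across firms (each row drawn separately) is precisely the hypothesis that the correlation analysis in this paper goes on to relax.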
Although this last hypothesis has been weakened in past work, showing for example that the volatility of firm growth decreases with their size in a non-trivial way [30], and even that it is necessary to think of the volatility of growth as being firm-dependent [31], the hypothesis of independence has not been explicitly questioned thus far. We propose to go beyond this, making the dependencies between firm growth explicit by studying the correlations between them and leveraging this information to reconstruct the firm network. This paper is organised as follows. Section I gives an overview of the data we use for our paper, which we use in conjunction with the methods we outline in Section II. Section III presents clear empirical evidence of the link between the supply chain and firm growth. Section IV makes use of these observations to reconstruct the production network from firm growth time series. We detail both the optimisation algorithm used to carry out this reconstruction as well as the results we obtain. Finally, Section V concludes. ## I Data The primary data sources used in this article are the FactSet Fundamentals and FactSet Supply Chain Relationships datasets. Together, they provide a coherent environment from which companies' financial information (such as their quarterly sales or market capitalisation), legal information (e.g., their industrial classification or headquarters location) and supply chain connections can be retrieved. Although it is very large, it should be noted that this dataset has a strong bias in covering mainly US firms. The first dataset contained in this environment, FactSet Fundamentals, contains firms' financial, balance sheet, and legal information. The dataset spans a time range going from the early 1980s to the present day and covers developed and emerging markets worldwide for a total of around \(100,000\) active and inactive companies. 
From 1995 onwards, data on firms' sales, capitalisation and investments is available for each quarter. The second dataset, FactSet Supply Chain Relationships, is assembled by FactSet using multiple sources. The most prominent of these are filings required by the US Federal Accounting Standards, whereby each firm must report its most important suppliers and clients, and import-export declarations from bills of lading. These sources are complemented with insight mined by FactSet from news, press releases, company websites, and other sources of business intelligence, which permit the inference of a link between two companies. Each record of a link between two companies can be represented by a temporal network, using directed links connecting a supplier to its customers. The temporal dimension of this data is also provided by FactSet: each link is assigned specific timestamps indicating the first time the connection was reliably attested and when the connection is known to have ended, when this is the case.3 Footnote 3: Note that this procedure implies that persistent links appear multiple times, as they are reported over many years. To simplify our analysis, we have discarded the temporal dimension by aggregating all the links into a single network that only considers whether a link between two companies was ever present in the time period we consider. Another simplification we perform is to aggregate firms that may be part of large conglomerates at the ultimate parent level using ownership structure data. Thus, the total sales, market capitalisation and any other balance sheet data of these aggregated entities are the sum of these quantities for each of the constituting entities. At the network level, this procedure has the effect of deleting possible self-loops, as, for example, two branches of the same conglomerate that are present in separate countries can trivially be reported to have supply chain linkages between them. 
These aggregated entities constitute what we understand by "firms" or "companies" in the remainder of this paper. Finally, we have only retained firms in the global supply chain's largest _weakly connected component_,4 whose financial information was available for at least eight years.5 Our final sample is composed of \(16,401\) firms connected by \(178,911\) links. Appendix B details how to transform FactSet's original tables into our working dataset. Footnote 4: A weakly connected component is a set of nodes such that for any two nodes \(A\) and \(B\), there exists a directed path starting at \(A\) and arriving at \(B\) or from \(B\) to \(A\), but not necessarily the other way around. When both a path \(A\rightarrow\ldots\to B\) and \(B\rightarrow\ldots\to A\) exist for any two nodes \(A\) and \(B\) in the component, a much more restrictive condition, then it is said to be strongly connected. Footnote 5: The reason for this is to remove time series that are too short for our analysis, as the reader will appreciate later. ## II Growth time series We label firms with an index \(i=1,\ldots,N\), calling \(s_{i}(t)\) and \(m_{i}(t)\) the sales and market capitalisation (the stock price multiplied by the number of shares outstanding) of firm \(i\) at time \(t\) (counted in quarters). With this, we define the annual growth rate of the sales of the firm as \[g_{i}(t):=\log\left(\frac{s_{i}(t+4)}{s_{i}(t)}\right). \tag{1}\] This quantity describes sales' variations over the scale of a year, but sampled with a quarterly frequency. We follow Ref. [31] in describing sales growth rates with a random variable with a Gaussian central region, although with fatter tails than a normal distribution, along with firm-dependent mean and variance (volatility).
This therefore leads us to define the rescaled growth-rates, \[g_{i}^{\prime}(t):=\frac{g_{i}(t)-\mathbb{E}_{t^{\prime}}\left[g_{i}(t^{\prime})\right]}{\sqrt{\mathbb{V}_{t^{\prime}\neq t}\left[g_{i}(t^{\prime})\right]}} \tag{2}\] where the average is computed over all times \(t^{\prime}\), but the variance is computed from the time series where the observation corresponding to \(t^{\prime}=t\) has been removed. This corresponds to the _leave-one-out_ rescaling defined in [32], where the denominator on the right-hand side of Eq. (2) allows one to rescale with respect to the volatility when considering a variable with a fat-tailed distribution.6 We drop the apostrophe below for clarity, as we will not use the "bare" growth rates in the remainder of this article. Footnote 6: Indeed, when the distribution is fat-tailed then the naive estimator for the variance, related to \(\sum_{t}g_{i}(t)^{2}\), may be dominated by a single observation (the largest one in the sample) and therefore introduce an artificial cut-off when dividing by the variance, because in this case \(\sum_{t}g_{i}(t)^{2}\approx\max_{t}g_{i}(t)^{2}\). When rescaling the largest value in the sample, it is clear that it may be clipped because of this. Our goal in the rest of this article is to infer the supply chain structure from the correlation structure of the growth rates. Nonetheless, it is likely that the growth rates of two companies are correlated because of reasons other than their connection through the supply chain. This can be the case, for instance, if two firms are in a given country that endures an exogenous economic shock, as in the case of the Covid-19 pandemic. Our strategy therefore will be to attempt to remove these common factors, assuming that what remains in the correlations must be the more subtle effects due to the supply chain. To illustrate the technique used for this, we shall resort to a very simple model that is described below.
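Concretely, Eqs. (1) and (2) amount to the following sketch on a synthetic quarterly sales series (note that the mean uses all observations, while the volatility leaves observation \(t\) out, as in the text):

```python
import numpy as np

def rescaled_growth(sales):
    """Annual log growth rates sampled quarterly (Eq. 1), rescaled with the
    leave-one-out volatility estimate of Eq. 2."""
    g = np.log(sales[4:] / sales[:-4])
    out = np.empty_like(g)
    for t in range(len(g)):
        loo_var = np.delete(g, t).var()   # variance with observation t removed
        out[t] = (g[t] - g.mean()) / np.sqrt(loo_var)
    return out

# Synthetic quarterly sales for one firm: a multiplicative random walk.
rng = np.random.default_rng(3)
sales = 100.0 * np.exp(np.cumsum(rng.normal(0.02, 0.1, size=40)))
g_rescaled = rescaled_growth(sales)
print(g_rescaled.shape)
```

Leaving observation \(t\) out of the denominator keeps a single extreme growth event from rescaling (and thus clipping) itself, which is the point of footnote 6.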
\begin{table} \begin{tabular}{c|c} \hline \hline Number of firms & \(16,401\) \\ Number of links & \(178,911\) \\ Density & \(6.7\times 10^{-4}\) \\ Median degree & \(7\) \\ Max. degree & \(1664\) \\ \hline \end{tabular} \end{table} Table 1: Network summary statistics ### Removing common shocks Let us propose first a very simple example, where one has \(N\) time series \(x_{i}(t)\), with \(1\leq i\leq N\) and \(1\leq t\leq T\). Each time series \(x_{i}(t)\) is composed of an idiosyncratic term, driving time series \(i\) only and given by i.i.d. Gaussian terms, and a common term that affects all the time series and that is also random. The model reads \[x_{i}(t)=\xi_{i}(t)+\sigma v(t), \tag{3}\] where \(\xi_{i}(t)\) is a Gaussian random variable with \(\mathbb{E}[\xi_{i}(t)]=0\) and \(\mathbb{E}[\xi_{i}(t)\xi_{j}(t^{\prime})]=\delta_{ij}\delta_{tt^{\prime}}\), with \(\delta_{ij}\) the Kronecker delta (i.e., \(\delta_{ij}=1\) if \(i=j\) and \(0\) otherwise). Similarly, \(v(t)\) is a Gaussian random variable satisfying \(\mathbb{E}[v(t)v(t^{\prime})]=\delta_{tt^{\prime}}\) and \(\mathbb{E}[v(t)\xi_{i}(t^{\prime})]=0\). In this case, where we know precisely the nature of the common shock, we can estimate \(v(t)\) when \(N\) is large by writing: \[\frac{1}{N}\sum_{i=1}^{N}x_{i}(t)=\frac{1}{N}\sum_{i=1}^{N}\xi_{i}(t)+\sigma v (t)\underset{N\gg 1}{\approx}\sigma v(t). 
\tag{4}\] The correlation matrix for the model's time series reads \[C_{ij}:=\mathbb{E}[x_{i}(t)x_{j}(t)]=\delta_{ij}+\sigma^{2}, \tag{5}\] which we can rewrite as \(\mathbf{C}=\mathbf{I}+N\sigma^{2}\mathbf{u}\mathbf{u}^{\intercal}\), with \(\mathbf{u}=\frac{1}{\sqrt{N}}\mathbf{1}\), and where \(\mathbf{u}^{\intercal}\) indicates vector transposition.7 Because \(\mathbf{C}\) is the sum of the identity matrix and a rank-one matrix, it is easy to see that it has an eigenvalue \(1+N\sigma^{2}\), corresponding to the eigenvector \(\mathbf{u}\) as \(\mathbf{Cu}=(1+N\sigma^{2})\mathbf{u}\), with all the other \(N-1\) remaining eigenvalues equal to \(1\), with eigenvectors spanning the subspace that is orthogonal to \(\mathbf{u}\). We can in fact go further in this geometric interpretation and bring meaning to the vector \(\mathbf{u}\) by focusing on the _projection_ of the time series onto it. What we mean by this is that for every time-step in the multi-dimensional time series, we may consider the vector \(\mathbf{x}(t)=(x_{1}(t),\ldots,x_{N}(t))\), and consider the projected time series \(\hat{v}(t)=\mathbf{u}\cdot\mathbf{x}(t)\). Footnote 7: This vector \(\mathbf{u}\) is chosen to be normalised. In this case, we notice that for large \(N\) we should have \(\hat{v}(t)=\frac{1}{\sqrt{N}}\sum_{i=1}^{N}x_{i}(t)\approx\sqrt{N}\sigma v(t)\): the projection recovers the common mode up to a constant factor. We can actually generalise this: if we replace Eq. (3) by \[x_{i}(t)=\xi_{i}(t)+\sigma u_{i}v(t), \tag{6}\] that is a model where each time series has a different exposure (or loading, in factor-models' jargon) to the common mode \(v(t)\), then the correlation matrix has the same structure and we still have an eigenvector \(\mathbf{u}=(u_{1},\ldots,u_{N})\).8 Doing the projection \(\mathbf{x}(t)\cdot\mathbf{u}\) still leads to \(\hat{v}(t)\approx\sigma v(t)\). Footnote 8: This vector can be assumed to be normalised, if not we can always replace \(\sigma\) by \(\sqrt{\mathbf{u}^{2}}\sigma\) in the model. 
In fact, we can also consider the _orthogonal projector_ to \(\mathbf{u}\), given by \(\mathbf{P}=\mathbf{I}-\mathbf{u}\mathbf{u}^{\intercal}\), or equivalently \(P_{ij}=\delta_{ij}-u_{i}u_{j}\). We can now apply this projector to our time series, as \(\mathbf{y}(t)=\mathbf{P}\mathbf{x}(t)\), or equivalently by defining \(\mathbf{Y}=\mathbf{P}\mathbf{X}\). It is straightforward to check that \(y_{i}(t)=x_{i}(t)-u_{i}\hat{v}(t)\approx\xi_{i}(t)\). To address our general problem of removing common fluctuations from time series, we can adopt the following procedure to remove the common mode and be left only with the idiosyncratic fluctuations. Assuming that the common mode \(v(t)\) is the primary driver of time series variations (\(\sigma\gg 1\)), we can: 1. Take the time series and compute the empirical correlation matrix, 2. Diagonalise the correlation matrix and rank the eigenvalues and eigenvectors according to the magnitude of the eigenvalue, 3. Project the time series onto the eigenvector corresponding to the largest eigenvalue to get the dynamics of the common mode, 4. Remove the dynamics of the common mode from the time series by using the orthogonal projector to the corresponding eigenvector. Naturally, we can repeat this procedure and remove also the mode corresponding to the second largest eigenvalue and so on, so that it is easily generalisable to other, more complex situations than the one of Eq. (3) (see Fig. 1 for an example where the common mode \(v(t)\) is a sinusoidal wave). The issue, however, is that this relies on the assumption that the empirical correlation matrix is a reliable estimator of the "true" underlying correlation matrix from which the data is generated.9 Naturally, this is not true, and one expects some estimation error when the length of the time series \(T\) is finite. 
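The four steps above can be sketched on the toy model of Eq. (3). The snippet below is an illustrative numpy version with arbitrary parameter values, not the pipeline applied to the actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, sigma = 200, 500, 3.0

# Toy model of Eq. (3): idiosyncratic noise plus a strong common shock.
v = rng.standard_normal(T)
X = rng.standard_normal((N, T)) + sigma * v

# Steps 1-2: empirical covariance matrix and its eigendecomposition.
C = X @ X.T / T
eigval, eigvec = np.linalg.eigh(C)         # eigenvalues in ascending order
u_hat = eigvec[:, -1]                      # top eigenvector

# Step 3: project the time series onto the top mode.
v_hat = u_hat @ X

# Step 4: remove the common mode with the orthogonal projector.
Y = X - np.outer(u_hat, v_hat)

# Up to a global sign, v_hat tracks the common shock v(t).
print(abs(np.corrcoef(v_hat, v)[0, 1]))    # close to 1
```

With \(\sigma=3\) the top eigenvalue is far outside the noise bulk, so the recovered mode is essentially perfectly aligned with \(v(t)\); the sign ambiguity is inherent to eigenvectors.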
In our toy model above, it is in fact possible to separate the contribution of the idiosyncratic noise, as \((\widehat{\mathbf{C}_{0}})_{ij}:=\frac{1}{T}\left(\xi\xi^{\intercal}\right)_{ij}=\frac{1}{T}\sum_{t=1}^{T}\xi_{i}(t)\xi_{j}(t)\). Because the elements of \(\xi\) are i.i.d. Gaussian random variables, this empirical correlation matrix is known as a Wishart matrix [34], and the statistical properties of its spectrum are known to be determined by the Marcenko-Pastur distribution [35]. For a more in-depth understanding of this and other links with random matrix theory, we invite the reader to consult [36], but we will explain the main results we need below. Footnote 9: At least in this model. In reality, when analysing time series with this point of view we are making the more stringent assumption that the correlation structure of data is time-invariant. Although there has been work to relax this assumption in e.g. financial data [33], these approaches are difficult, if not impossible, to adapt to the time series we analyse because of their relatively small length and sampling frequency. Because \(\widehat{\mathbf{C}_{0}}\xrightarrow[T\to\infty]{}\mathbf{I}\), we expect naturally that for large time series the spectrum of \(\widehat{\mathbf{C}_{0}}\) should be concentrated around \(1\). In practice however, because of measurement error, we don't expect _all_ of its eigenvalues to be equal to \(1\). Thus, we intuitively expect the full spectrum of \(\widehat{\mathbf{C}}\) to be constituted of \(N-1\) eigenvalues close to \(1\), which constitute the contribution coming from \(\mathbf{C}_{0}\), and a single peaked eigenvalue close to \(1+N\sigma^{2}\), which is the contribution coming from the dynamics of \(v(t)\) that couples all of the \(N\) time series. For the full empirical correlation matrix \(\widehat{\mathbf{C}}\), we also expect that the eigenvector corresponding to its largest eigenvalue will satisfy \(\widehat{\mathbf{u}}\approx\mathbf{u}\). 
However, the result of Marcenko-Pastur is that in the limit where both \(N\), \(T\to\infty\), but with the ratio \(q=\frac{N}{T}\) fixed, the spectrum of \(\mathbf{C}_{0}\) is concentrated in the interval \(\left((1-\sqrt{q})^{2},(1+\sqrt{q})^{2}\right)\), called the "bulk", and may also have a delta-peak at \(0\) if \(q<1\). For finite \(N,T\) we also expect some eigenvalues to lie slightly outside this interval. Figure 1: (A) The time series \(x_{i}(t)\) are created by adding a sine wave and an idiosyncratic random noise. (B) The spectrum of the empirical correlation matrix \(\widehat{C}_{ij}=\frac{1}{T}\sum_{t=1}^{T}x_{i}(t)\,x_{j}(t)\), along with the random benchmark given by the Marčenko-Pastur distribution. Note the presence of an eigenvalue at \(\lambda\approx 16\), beyond the random benchmark. (C) The eigenmode \(\hat{v}(t)\), obtained by projecting the time series onto the vector \(\bar{\mathbf{u}}\) corresponding to the largest eigenvalue, tracks the collective oscillations of the system. This sheds light on why in practice finding the common mode may be difficult: if \(\sigma\) is small enough, of the order of \(q\), then the eigenvalue "spike" associated with the common mode will in fact be inside the Marcenko-Pastur bulk. This is linked to the so-called Baik-Ben Arous-Peche (BBP) transition [37], and in this case it is not possible to reconstruct the common mode. We can indeed imagine that we run the model and execute the procedure described above first for a value of \(\sigma\gg q\), and then reduce \(\sigma\) progressively until we reach \(\sigma\approx q\). When diagonalising the empirical correlation matrix \(\widehat{\mathbf{C}}\) and considering the eigenvector corresponding to its largest eigenvalue, \(\widehat{\mathbf{u}}\), this eigenvector will match the "true" eigenvector \(\mathbf{u}\) when \(\sigma\gg q\), so that for example \(\widehat{\mathbf{u}}\cdot\mathbf{u}\approx 1\). 
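For reference, the Marchenko-Pastur bulk edges are simple to compute and to check against a pure-noise Wishart spectrum. The sketch below is purely illustrative; finite-size fluctuations push a few eigenvalues slightly past the asymptotic edges:

```python
import numpy as np

def marchenko_pastur_bulk(N, T):
    """Bulk edges ((1 - sqrt(q))^2, (1 + sqrt(q))^2), with q = N / T."""
    q = N / T
    return (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

# Pure-noise Wishart spectrum, to compare against the asymptotic edges.
rng = np.random.default_rng(1)
N, T = 300, 1200
X = rng.standard_normal((N, T))
lam = np.linalg.eigvalsh(X @ X.T / T)
lo, hi = marchenko_pastur_bulk(N, T)
print(lo, hi)                              # 0.25 2.25 for q = 1/4
```

An eigenvalue well beyond `hi`, like the spike of the toy model, signals a genuine common mode rather than measurement noise.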
However, as \(\sigma\to q\) this overlap will decrease, and the intuition then is that when the outlier eigenvalue reaches the Marcenko-Pastur bulk, its associated eigenvector \(\widehat{\mathbf{u}}\) can no longer reliably be thought of as an estimator of \(\mathbf{u}\), and will instead point in any random direction. In this case \(\mathbf{u}\cdot\widehat{\mathbf{u}}\) will be of order \(1/\sqrt{N}\) (see [36, Section 14.2.2], and also [38] for an intuition for this phenomenon using Dyson Brownian motion). When this happens, the usage of the projectors, or steps 3 and 4 of our procedure, will not lead to the identification of common modes. The conclusion from this is that we are indeed capable of identifying common factors in time series using this approach, but we must first make sure that these modes correspond to eigenvalues of the correlation matrix that are not compatible with a random benchmark. Indeed, the example above corresponds to time series of equal length, where each entry of the time series is drawn at random from a Gaussian distribution. In this case, the random benchmark for the spectrum is determined by the Marcenko-Pastur distribution, as said above. The case of our time series is, however, different since sales data is not available for every company at any time. Growth time series can have different starting points and lengths, and the period over which one can compute their correlation is different for any pair of firms. Our data therefore has a lot of missing values, and two firms present in non-overlapping times for example will be set to have a correlation of \(0\). Another issue is that the growth-rate distribution is not Gaussian, and has slightly heavier tails. Understanding the correlation spectrum of heavy-tailed processes is feasible (see for example [39]), but very difficult to do for a generic distribution. 
We can nonetheless establish a random benchmark for the correlation spectrum computationally and use it to identify eigenvalues indicating correlated modes. We achieve this by creating a surrogate of the growth-rate time series where the missing data structure is preserved and where the individual growth-rates are drawn at random from their empirical distribution. This is similar to the procedure used in [40], where the authors randomly shuffle a time-series to benchmark the eigenvalues of correlation matrices that can be distinguished from noise. Figure 2 shows that the real correlation spectrum has several eigenvalues that are beyond the bulk corresponding to the random benchmark, both on the left and on the right side of the bulk. Note that the presence of negative eigenvalues is a consequence of missing data, and is something that one does not obtain for standard Wishart matrices. The largest eigenvalue corresponds to the _market mode_, a collective trend shared by all the firms in the supply chain. This collective mode concerns all firms, as shown by the fact that the entries of the corresponding eigenvector have (roughly) all the same sign and magnitude10. Thus, this mode corresponds to a common factor in the economy, and all the firms move coherently with it. Interpreting the modes corresponding to eigenvalues outside the bulk is more challenging: contrary to what is observed in the correlation structure of financial returns, we have not been able to identify them with specific industrial sectors or geographies. Because we are unable to give these eigenvectors a clear interpretation, and since they could potentially carry information about the production network, we have decided to remove only the first eigenmode from the time series. 
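The surrogate construction can be sketched as follows. This is an illustrative numpy version (with `NaN` marking missing observations), not the exact implementation used for the paper:

```python
import numpy as np

def surrogate(G, seed=None):
    """Random surrogate of a growth panel G (firms x time, NaN = missing).

    Observed entries are redrawn i.i.d. from the pooled empirical
    distribution of the growth rates, while the missing-data mask is
    kept intact, so the benchmark spectrum reflects the panel's
    structure of missing values.
    """
    G = np.asarray(G, dtype=float)
    observed = ~np.isnan(G)
    pool = G[observed]                     # pooled empirical values
    rng = np.random.default_rng(seed)
    S = np.full(G.shape, np.nan)
    S[observed] = rng.choice(pool, size=int(observed.sum()), replace=True)
    return S
```

Diagonalising the correlation matrix of many such surrogates gives the "empirical benchmark" bulk against which the real eigenvalues are compared.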
In the rest of our paper, we will refer to growth time series cleaned of the system's first eigenmode as "cleaned" time series \(\tilde{g}_{i}(t)\), and to their correlation as the "cleaned" correlation.11 Footnote 10: This is similar to the toy model presented in Section II.1. Footnote 11: We attract the reader's attention to the fact that we mean "cleaning" in a sense that is the opposite of what is done for returns' correlation matrices in finance: there, usually one discards the modes corresponding to the _smaller_ eigenvalues (see e.g. [41]). We, however, discard the largest mode because we want to remove reasons for firm co-movement that are distinct from supply-chain induced co-movement. ## III Network correlation and random benchmarks We have introduced the main object of our analysis, firms' growth time series \(\mathbf{g}_{i}(t)\). We will now show that the supply chain induces specific correlations between firms, a necessary step to later justify our usage of correlations in supply-chain reconstruction. We define the following correlation matrices,12 Footnote 12: Note that here we use the notation \(\mathbb{E}_{t}[\cdot]=\frac{1}{T}\sum_{t=1}^{T}(\cdot)\) to indicate the empirical average across the time variable. The notation \(\mathbb{E}\) used in the previous section corresponds instead to the "true" average value of our stochastic model, computed over the distribution of the noise \(\xi_{i}\) and \(v\). Similarly, \(\mathbb{E}_{ij}\) indicates an empirical average taken by summing over the variables \(i\) and \(j\). \[\begin{split} C_{ij}(\tau)&=\mathbb{E}_{t}\left[g_{i}(t)g_{j}(t+\tau)\right],\\ \widetilde{C}_{ij}(\tau)&=\mathbb{E}_{t}\left[\tilde{g}_{i}(t)\tilde{g}_{j}(t+\tau)\right].\end{split} \tag{7}\] We can compute the average value of the elements of the matrix \(\mathbf{C}\) and \(\widetilde{\mathbf{C}}\) across the pairs of firms \((i,j)\) linked in the production network, defining averaged client/supplier correlation functions. 
Given any (binary) adjacency matrix \(\mathbf{A}\) we define \[C_{\mathbf{A}}(\tau)=\mathbb{E}_{ij}\left[C_{ij}(\tau)\mid A_{ij}=1\right], \tag{8}\] and \[\widetilde{C}_{\mathbf{A}}(\tau)=\mathbb{E}_{ij}\left[\tilde{C}_{ij}(\tau)\mid A_{ij}=1\right], \tag{9}\] where the average runs over all pairs \(1\leq i<j\leq N\). In other words, \(C_{\mathbf{A}}\) and \(\widetilde{C}_{\mathbf{A}}\) are the average correlation between two neighbours in a graph with an adjacency matrix \(\mathbf{A}\). This average can be computed using the _true_ adjacency matrix of the production network, \(\mathbf{S}\), or over the adjacency matrix of any other network. ### Random benchmarks We first compute the correlations averaged over the adjacency matrix \(\mathbf{S}\) of FactSet's production network, where \(S_{ij}=1\) if \(j\) either supplies or is a client of \(i\), and compare their value to those obtained with several random network models: the _Erdos-Renyi_ model [42], the _Stochastic Block Model_ [43], and the _Configuration Model_ [44]. We describe all three models and their parameters in detail below. We randomly sample \(n=50\) networks of each model, with adjacency matrices \(\mathbf{R}_{1},\ldots,\mathbf{R}_{n}\) and compute the mean and standard deviation of the sets \(\left\{C_{\mathbf{R}_{1}},\ldots,C_{\mathbf{R}_{n}}\right\}\) and \(\left\{\tilde{C}_{\mathbf{R}_{1}},\ldots,\tilde{C}_{\mathbf{R}_{n}}\right\}\). All of the models are parametrised to match the empirical properties of the supply-chain network. Figure 2: (A) The distribution \(\rho\left(g\right)\) of the growth rates for every firm \(i\) and time \(t\). A normal distribution is provided as a reference. (B) Growth time series correlation spectrum. The two random benchmarks are obtained by sampling random time series from the empirical distribution \(\rho\left(g\right)\) (_Empirical benchmark_) and the normal distribution (_Gaussian benchmark_). 
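In code, the conditional averages of Eqs. (8)-(9) amount to averaging the entries of a correlation matrix over the pairs selected by an adjacency matrix. A minimal numpy sketch, illustrative rather than the paper's implementation:

```python
import numpy as np

def edge_average(C, A):
    """Average of C_ij over the pairs i < j with A_ij = 1, as in Eq. (8)."""
    A = np.asarray(A, dtype=bool)
    iu = np.triu_indices_from(A, k=1)      # unordered pairs i < j
    mask = A[iu]
    return C[iu][mask].mean()
```

For instance, with edges \((0,1)\) and \((1,2)\) carrying correlations \(0.5\) and \(0.1\), the edge average is \(0.3\); evaluating the same function on sampled random adjacency matrices gives the benchmark values.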
The starting points and duration of the random time series match those of the real ones. The spectrum shown is the average of 10 sets of random time series. For the Erdos-Renyi network, we fix its density \(p\) to match that of the production network, namely \[p=\frac{2}{N\left(N-1\right)}\sum_{i=1}^{N}\sum_{j>i}^{N}S_{ij}.\] The Erdos-Renyi network has no real structure, and in particular no clear community structure is apparent in it. We therefore also used stochastic block models, which we initialised with several different block schemes. Specifically, we divided firms into blocks \(\{B_{1},\ldots,B_{m}\}\) depending on their industrial sector (at their SIC code's third-digit level of aggregation), their country, or their network community as identified by the Louvain community-detection algorithm [45]. The network densities within and across blocks are chosen to be equal to their empirical counterparts, \[\rho_{ij}=\frac{1}{\left|B_{i}\right|\left(\left|B_{j}\right|-\delta_{ij}\right)}\sum_{k\in B_{i},l\in B_{j}}A_{kl}. \tag{10}\] Finally, we use the configuration model to produce networks with a degree distribution that matches exactly the empirical one. Figure 3 compares the average correlation measured on the true production network \(\mathbf{S}\) and on the random network benchmarks. The value of \(C_{\mathbf{S}}(0)\) is twice as high as the average correlation measured on the Erdos-Renyi graph, and \(\approx 50\%\) higher than the correlation measured for the configuration model. The results for \(\tilde{C}_{\mathbf{S}}(0)\) are even more striking, with the residual correlation on the supply chain being still \(\approx 0.1\) and most of the random benchmarks dropping close to zero. This highlights the usefulness of our cleaning procedure, as it significantly increases our signal-to-noise ratio. 
### Relationship with network distance A second way to show that the supply chain induces correlations in the dynamics of firm sales is to study how the correlation behaves with respect to network distance. Intuitively, we expect that two firms that are close to each other on the supply chain will be more correlated than two firms that are far apart. Figure 3: (A): Average correlation on the production network \(C_{\mathbf{S}}(\tau)\) and several random network benchmarks. (B): Average "cleaned" correlation on the production network \(\tilde{C}_{\mathbf{S}}(\tau)\) and several random network benchmarks. (C): Correlations along the supply chain decay with distance. At distance \(d=4\) (\(d=3\) for the cleaned correlation), firms' average correlation is the same as that of the Erdos-Renyi benchmark. Results for the cleaned time series are flagged with a (C). To see this, we start again from the binary adjacency matrix \(\mathbf{S}\) of the production network, and define recursively \[S_{ij}^{(k)}=\mathbf{1}\left(\sum_{l_{1},\ldots,l_{k-1}}S_{il_{1}}S_{l_{1}l_{2}}\ldots S_{l_{k-1}j}>0\right)\prod_{m=1}^{k-1}\left(1-S_{ij}^{(m)}\right), \tag{11}\] where \(S_{ij}^{(1)}=S_{ij}\). The first factor on the right-hand side is equal to \(1\) if and only if there exists a path \(i\to l_{1}\rightarrow\ldots\to j\) of length \(k\) linking \(i\) to \(j\). The second factor is \(0\) if there exists a shorter path from \(i\) to \(j\) in the network. Thus defined, \(S_{ij}^{(k)}\) is equal to one only if the shortest path between \(i\) and \(j\) is of length \(k\). We can see how these correlations decay with distance, by computing the values \[D_{S}(k)=\mathbb{E}_{ij}\left[C_{ij}(0)|S_{ij}^{(k)}=1\right], \tag{12}\] and \[\widetilde{D}_{S}(k)=\mathbb{E}_{ij}\left[\tilde{C}_{ij}(0)|S_{ij}^{(k)}=1\right], \tag{13}\] namely the average of the non-lagged growth correlation between any two firms that are \(k\)-steps apart in the supply chain. We show this on Figure 3, C. 
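In practice, rather than evaluating Eq. (11) directly, the distance classes can be obtained by breadth-first search. The illustrative sketch below returns the matrix of shortest-path lengths, with `D[i, j] == k` playing the role of \(S^{(k)}_{ij}=1\):

```python
from collections import deque
import numpy as np

def distance_classes(A):
    """Shortest-path distances of an undirected graph (-1 = unreachable)."""
    A = np.asarray(A, dtype=bool)
    N = A.shape[0]
    D = np.full((N, N), -1, dtype=int)
    for s in range(N):                     # one BFS per source node
        D[s, s] = 0
        queue = deque([s])
        while queue:
            i = queue.popleft()
            for j in np.flatnonzero(A[i]):
                if D[s, j] == -1:          # first visit = shortest path
                    D[s, j] = D[s, i] + 1
                    queue.append(j)
    return D
```

Averaging \(C_{ij}(0)\) over the pairs with `D[i, j] == k` then gives the decay curves \(D_S(k)\) and \(\widetilde{D}_S(k)\) of Eqs. (12)-(13).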
The correlation between firms decays as their distance in the production network increases, revealing again that the production network mediates growth correlations between firms. Figure 4: An illustration of network distance. Nodes \(2\), \(3\) and \(4\) are at a distance \(k=1\) from node \(1\). Even though the path \(1\to 3\to 4\) exists, we do not consider \(4\) to be at distance \(k=2\) from \(1\). ## IV Supply chain reconstruction In the previous Sections, we have established that the supply-chain induces correlations between firms, and that our cleaning procedure increases the signal-to-noise ratio of these correlations with respect to the real supply chain. We next propose a procedure to reconstruct the supply chain using the cleaned correlation matrix. Inferring networks from observations, or _graph learning_ [46], is a problem that encompasses several branches of natural and social sciences. Following [46], we define the problem of graph learning as follows: given observations on \(N\) entities, represented by a data matrix \(\mathbf{X}\in\mathbb{R}^{N\times T}\), and taking some prior knowledge as given, we seek to infer relationships between our \(N\) entities and represent these relationships as a graph \(\mathcal{G}\). A possible approach to solve this problem is to assume that \(\mathcal{G}\) encodes some statistical relationship between the entities. Specifically, _probabilistic graphical models_ assume that the structure of \(\mathcal{G}\) determines the joint probability distribution of the observations on the data entities: the presence or absence of edges in the graph encodes the conditional independence among the random variables represented by the vertices. 
In particular, _Markov Random Fields_ consider a graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) and a set of random variables \(\mathbf{x}=\{x_{i}:v_{i}\in\mathcal{V}\}\) satisfying the pairwise Markov property, \[\left(v_{i},v_{j}\right)\notin\mathcal{E}\Leftrightarrow p\left(x_{i}\,\middle|\,x_{j},\mathbf{x}\setminus\{x_{i},x_{j}\}\right)=p\left(x_{i}\,\middle|\,\mathbf{x}\setminus\{x_{i},x_{j}\}\right), \tag{14}\] which simply states that two variables \(x_{i}\) and \(x_{j}\) are conditionally independent, given all the other variables, if there is no edge between the corresponding vertices \(v_{i}\) and \(v_{j}\). In Markov Random Fields, the joint probability distribution of the variables \(x_{1},\ldots,x_{N}\) may also be represented as \[p\left(\mathbf{x}\right)=\frac{1}{Z}\prod_{i=1}^{K}\phi_{i}\left(\mathbf{D}_{i}\right), \tag{15}\] where \(\mathbf{D}_{1},\ldots,\mathbf{D}_{K}\) are a set of cliques of the graph (i.e., groups of nodes), \(Z\) is a normalisation factor known as the partition function, and the \(\phi_{i}\) are generic functions known as factors. It is straightforward to see that the exponential family of distributions with a parameter matrix \(\mathbf{\Theta}\in\mathbb{R}^{N\times N}\), \[p\left(\mathbf{x}|\mathbf{\Theta}\right)=\frac{1}{Z\left(\mathbf{\Theta}\right)}\exp\left(\sum_{v_{i}\in\mathcal{V}}\theta_{ii}x_{i}^{2}+\sum_{\left(v_{i},v_{j}\right)\in\mathcal{E}}\theta_{ij}x_{i}x_{j}\right), \tag{16}\] is compatible with this formalism; the multivariate Gaussian distribution with precision matrix \(\mathbf{\Theta}\), \[p\left(\mathbf{x}|\mathbf{\Theta}\right)=\frac{|\mathbf{\Theta}|^{1/2}}{\left(2\pi\right)^{N/2}}\exp\left(-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Theta x}\right), \tag{17}\] belongs to this family. The subclass of Markov random fields that adopt Eq. (17) as the parametrisation for the joint probability distribution \(p\) are called Gaussian Markov Random Fields or Gaussian Graphical Models. 
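A small numerical example makes the link between the precision matrix and conditional independence concrete. The chain below is an arbitrary 3-node illustration: the zero entry \(\theta_{13}=0\) means \(x_{1}\) and \(x_{3}\) are conditionally independent given \(x_{2}\), even though they are marginally correlated:

```python
import numpy as np

# Precision matrix of a chain x1 - x2 - x3 (theta_13 = 0 encodes the
# missing edge between nodes 1 and 3, cf. Eqs. (14) and (17)).
Theta = np.array([[ 1.0, -0.4,  0.0],
                  [-0.4,  1.0, -0.4],
                  [ 0.0, -0.4,  1.0]])
C = np.linalg.inv(Theta)                   # covariance of the model

# Marginally, x1 and x3 are correlated through x2 ...
print(C[0, 2] != 0.0)                      # True

# ... but their partial correlation, -theta_13 / sqrt(theta_11 theta_33),
# vanishes, reflecting the absent edge.
partial_13 = -Theta[0, 2] / np.sqrt(Theta[0, 0] * Theta[2, 2])
print(partial_13 == 0.0)                   # True
```

This is exactly why estimating \(\mathbf{\Theta}\) from data, rather than the covariance itself, gives direct access to the graph structure.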
In Gaussian Graphical models, the problem of finding the graph \(\mathcal{G}\) is reduced to that of estimating a precision matrix \(\mathbf{\Theta}\) that encodes the conditional relationships between the nodes. In the previous section, we saw that the production network influences the correlation of firms' growth \(g_{i}\). If we consider each vector \(\mathbf{g}(t)\) as a draw from a joint probability distribution where the correlations are driven by the supply chain, Gaussian graphical models seem well equipped to reconstruct the production network, if one ignores the fact that the growth rates do not have a Gaussian distribution.13 We think nonetheless that, because the growth-rates show a Gaussian-like central region, as shown by [31], it is reasonable to use this model to attempt a reconstruction. Footnote 13: Indeed, the marginal distribution of \(x_{i}\) in Eq. (17) is clearly a Gaussian distribution. We therefore use the _Graphical Lasso method_ to construct an estimator \(\widehat{\mathbf{\Theta}}\) of \(\mathbf{\Theta}\) by solving the following optimisation problem:14 Footnote 14: This is the result of applying Bayes' theorem: the first two terms follow from the Gaussian likelihood, while the \(L^{1}\) term corresponds to a Laplace prior on the entries of \(\mathbf{\Theta}\). \[\widehat{\mathbf{\Theta}}=\operatorname{argmax}_{\mathbf{\Theta}}\log\det\mathbf{\Theta}-\operatorname{tr}\left(\widehat{\mathbf{C}}\mathbf{\Theta}\right)-\alpha\|\mathbf{\Theta}\|_{1}, \tag{18}\] with \(\widehat{\mathbf{C}}=\frac{1}{T}\mathbf{G}\mathbf{G}^{T}\) the sample covariance matrix, \(\det(\cdot)\) the determinant and \(\operatorname{tr}(\cdot)\) the trace. The first two terms can be thought of as the log-likelihood of \(\mathbf{\Theta}\) in the Gaussian Graphical Model, while \(\alpha\|\mathbf{\Theta}\|_{1}\) is an \(L^{1}\) regularisation term with parameter \(\alpha\). This approach will, in general, recover a matrix \(\mathbf{\Theta}\) with both positive and negative entries. 
In this setting, a positive off-diagonal entry \(\theta_{ij}\) of the precision matrix implies a negative partial correlation between \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\), whose interpretation is problematic since we would like \(\mathbf{\Theta}\) to proxy the adjacency matrix of the network. References [47; 48; 49] suggest instead searching for the precision matrix among the set \(\mathcal{S}_{\mathbf{\Theta}}\) of possible Graph Laplacian matrices, \[\mathcal{S}_{\mathbf{\Theta}}=\left\{\mathbf{\Theta}\,\middle|\,\theta_{ij}=\theta_{ji}<0\text{ for }i\neq j,\ \theta_{ii}=-\sum_{j\neq i}\theta_{ij}\right\}. \tag{19}\] Conditioning \(\widehat{\mathbf{\Theta}}\) to be in the set of possible graph Laplacians has two interesting consequences. First, the graph Laplacian \(\mathbf{L}\) uniquely determines the adjacency matrix \(\mathbf{W}\) of the graph; thus, the problem in (18) with the assumption \(\mathbf{\Theta}\in\mathcal{S}_{\mathbf{\Theta}}\) creates a direct connection between the data and the topology of the network. Second, writing \(w_{ij}=-\theta_{ij}\geq 0\) for the off-diagonal weights and using the fact that the time series \(\mathbf{g}_{i}\) have zero mean, we can write \(\operatorname{tr}\left(\widehat{\mathbf{C}}\mathbf{\Theta}\right)\) as \[\operatorname{tr}\left(\widehat{\mathbf{C}}\mathbf{\Theta}\right)=\frac{1}{T}\operatorname{tr}\left(\mathbf{G}\mathbf{G}^{T}\mathbf{\Theta}\right)=\frac{1}{2T}\sum_{i,j}\sum_{t=1}^{T}w_{ij}\left(g_{i}(t)-g_{j}(t)\right)^{2}. \tag{20}\] The term on the right-hand side of the equation measures the (squared) differences between the observations on firms \(i\) and \(j\) (\(\mathbf{g}_{i}\) and \(\mathbf{g}_{j}\)), computed over pairs of connected firms (\(w_{ij}>0\)); it is generally known as the quadratic energy function and quantifies the _smoothness_ of \(\mathbf{G}\) over the graph with Laplacian \(\mathbf{L}\). 
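The quadratic-energy identity used in Eq. (20) is easy to verify numerically. The check below is purely illustrative, with a random Laplacian and random zero-mean series; it is not part of the reconstruction pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 5, 50

# A graph Laplacian in S_Theta: non-positive off-diagonal entries,
# rows summing to zero.
W = np.triu(rng.random((N, N)), k=1)
W = W + W.T                                # symmetric weights w_ij >= 0
Theta = np.diag(W.sum(axis=1)) - W

G = rng.standard_normal((N, T))
C_hat = G @ G.T / T

# tr(C_hat Theta) equals the quadratic energy over the graph.
lhs = np.trace(C_hat @ Theta)
rhs = sum(W[i, j] * np.sum((G[i] - G[j]) ** 2)
          for i in range(N) for j in range(N)) / (2 * T)
print(np.isclose(lhs, rhs))                # True
```

The factor \(1/2\) compensates for each unordered pair being counted twice in the double sum.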
For an economic interpretation, the second term in (18), \(\operatorname{tr}\left(\widehat{\mathbf{C}}\mathbf{\Theta}\right)\), can be interpreted as a penalty term affecting networks over which \(\mathbf{G}\) is not smooth, i.e., a production network that exhibits large differences between the growth rates of connected firms. In [50] (see Appendix A), the authors propose an efficient algorithm to solve the problem in Eq. (18) while also enforcing some (soft) constraints on the spectrum \(\mathrm{Sp}(\mathbf{\Theta})\) of the Laplacian matrix. The problem becomes \[\begin{split}\widehat{\mathbf{\Theta}}=&\operatorname{argmax}_{\mathbf{\Theta}}\log\det\mathbf{\Theta}-\operatorname{tr}\left(\widehat{\mathbf{C}}\mathbf{\Theta}\right)-\alpha\|\mathbf{\Theta}\|_{1},\\ &\text{subject to}\quad\mathbf{\Theta}\in\mathcal{S}_{\mathbf{\Theta}},\ \mathrm{Sp}(\mathbf{\Theta})\subset\mathcal{S}_{\mathbf{\lambda}}\end{split} \tag{21}\] where \(\mathcal{S}_{\mathbf{\lambda}}\) is the set of admissible spectra that we choose. Because the spectrum of the Laplacian encodes information about the underlying network's topology, choosing \(\mathcal{S}_{\mathbf{\lambda}}\) appropriately allows us to enforce high-level topological features on the reconstructed network. We therefore attempt to use the algorithm provided in [50] to reconstruct the production network. In the following, we assume that we know the network's density in advance, and that we also have a reliable estimate for the number of links within and across different sectors. This information would not be available directly in a real-world situation, but the literature on production networks and other available data sources such as input-output tables allows informed guesses (see, e.g., [12]). This means that our results should be placed halfway between a proof of concept and a realistic use case. We must however slightly modify this algorithm to apply it to our specific situation. 
Indeed, a problem with the algorithm described in [50] is that, while it is possible to encode a given community structure by constraining the Laplacian, we are not able to specify which firms should go into which community (see Fig. 5). To solve this, we have devised the following procedure. First, we split \(\widehat{\mathbf{C}}\) into diagonal and off-diagonal blocks based on firm industries. Second, we use the procedure defined in (21) to reconstruct each diagonal block independently. Third, we go through all the possible pairs of diagonal blocks and - keeping the diagonal blocks equal to those that were reconstructed in the previous step - we reconstruct the off-diagonal blocks. Finally, we assemble all the blocks together to obtain the entire adjacency matrix; this procedure is shown graphically in Fig. 6. Every time we reconstruct a network, we choose the parameter \(\alpha\) to match the empirical network density. To reconstruct the diagonal blocks, we use the spectrum obtained by averaging over the spectra of 1000 Erdos-Renyi random networks' Laplacians, with probability \(p\) equal to the desired density. Similarly, to reconstruct the off-diagonal blocks, we use the spectrum obtained by averaging over the spectra of 1000 block models' Laplacians, where the probabilities of links within and across each block are chosen to match the desired density. We provide details on the reconstruction algorithm in Appendix A. We ran our procedure over several different subparts of the real production network, each composed of a minimum of 300 to a maximum of 500 firms. We compared our results to those of two random benchmarks: an Erdos-Renyi graph and an industrial-sector block model, built as in Section III. Our approach beats the benchmark across several different metrics, often with a considerable performance difference (Fig. 7). 
## V Conclusions In this paper, we studied whether the correlation between firms' growth time series could be useful in reconstructing production networks. Using FactSet's supply chain network as a use case and several random network models as benchmarks, we have first shown that the growths of firms connected in the production network are on average more correlated than those of randomly selected pairs of firms. We have shown that this effect fades gradually as one looks at the average correlation between pairs of firms at an increasing network distance along the supply chain. Finally, we have framed the production network reconstruction in the context of graph learning, and showed that some recent techniques developed in the field can successfully be used to identify trade connections between firms. Figure 5: (A) A stylised representation of an adjacency matrix with two sectors. The density of links between the \(n_{A}\) firms in sector A is \(\rho_{A}\), the density of links between the \(n_{B}\) firms in sector B is \(\rho_{B}\), and the density of links across the two sectors is \(\rho_{AB}\). (B) Another adjacency matrix. There are two groups of firms of size \(n_{A}\) (bottom right corner of the matrix) and \(n_{B}\) (top left corner of the matrix). The density within the first group is \(\rho_{B}\), the density within the second group is \(\rho_{A}\), and the density across the groups is \(\rho_{AB}\). The graph Laplacian of the matrix in (A) and that of the matrix in (B) will have the same spectrum. However, the density within and across sectors in (B) is different from that in (A). Figure 6: Reconstruction of the supply chain networks. The original correlation matrix (A) is split in the different industry sectors. First, we reconstruct the diagonal blocks (B). Then, we reconstruct the off-diagonal blocks (C). Finally, we reassemble the blocks together (D). 
Even if the results of our paper are closer to a proof of concept than to a realistic use case, we believe that our approach is promising. First, it relies on a mechanism that can be easily accepted as universal: the growth of business partners is correlated. Second, it is a fully "unsupervised" approach, which does not require the training of a model, and is not prone to over-fitting. Third, it requires data that is easily accessible (firms' sales) and, to a certain extent, substitutable (e.g., we obtained similar results when we looked at the correlation of firms' stock returns). Finally, it generates a network that matches a set of desired topological features. This last point also highlights interesting avenues of research: as more "universal" production network features are documented and better generative models for these networks are developed, our approach will become more effective.

## Acknowledgements

We would like to thank Jean-Philippe Bouchaud, Francois Lafond, Doyne Farmer, and Xiaowen Dong for their numerous suggestions for this work, and Andrea Bacilieri for her help in handling the data. We would also like to thank the participants of the 2022 CSH-INET Workshop on Firm-Level Production Networks and the CCS 2022, in particular Christian Diem and Tobias Reisch, for the useful feedback, and Stefan Thurner and the network economics group at CSH Vienna for their hospitality and insight. This work was supported by Baillie Gifford and the Institute for New Economic Thinking at the Oxford Martin School.
2308.02808
Forward production of prompt neutrinos in the atmosphere and at high-energy colliders
The atmospheric neutrino flux at very high energies is dominated by prompt neutrinos, mostly contributed by the decays of charmed hadrons produced in the forward direction by cosmic ray interactions with air nuclei. Theoretical predictions of the prompt atmospheric neutrino flux have large uncertainties mainly related to charm hadron production. Prompt neutrinos can also be studied through high-energy colliders. In particular, two ongoing forward experiments and the proposed Forward Physics Facility at the LHC can detect forward prompt neutrinos. We will present the kinematic regions relevant to the prompt atmospheric neutrino flux in terms of collider kinematic variables, the collision energy $\sqrt{s}$ and the center-of-mass rapidity of charm hadrons $y$, and discuss implications of the forward experiments at the LHC on the theoretical predictions of the prompt atmospheric neutrino flux.
Yu Seon Jeong, Weidong Bai, Milind Diwan, Maria Vittoria Garzelli, Karan Kumar, Mary Hall Reno
2023-08-05T06:47:28Z
http://arxiv.org/abs/2308.02808v1
# Forward production of prompt neutrinos in the atmosphere and at high-energy colliders

###### Abstract:

The atmospheric neutrino flux at very high energies is dominated by prompt neutrinos, mostly contributed by the decays of charmed hadrons produced in the forward direction by cosmic ray interactions with air nuclei. Theoretical predictions of the prompt atmospheric neutrino flux have large uncertainties mainly related to charm hadron production. Prompt neutrinos can also be studied through high-energy colliders. In particular, two ongoing forward experiments and the proposed Forward Physics Facility at the LHC can detect forward prompt neutrinos. We will present the kinematic regions relevant to the prompt atmospheric neutrino flux in terms of collider kinematic variables, the collision energy \(\sqrt{s}\) and the charm hadron's center-of-mass rapidity \(y\), and discuss implications of the forward experiments at the LHC on the theoretical predictions of the prompt atmospheric neutrino flux.

## 1 Introduction

Cosmic ray interactions in the Earth's atmosphere produce a cascade of various particles, some of which decay into neutrinos, called atmospheric neutrinos. Due to the broad energy spectrum of cosmic rays, atmospheric neutrinos generated from their interactions are also distributed in a wide energy range. Typical particles that create atmospheric neutrinos are charged pions (\(\pi^{\pm}\)) and kaons (\(K^{\pm}\)). Neutrinos from these light meson decays are referred to as conventional neutrinos and are distributed at relatively low energies, dominating the atmospheric neutrino flux up to \(\sim 10^{5}\) GeV. On the other hand, at very high energies, neutrinos are also produced from heavier hadrons that contain a heavy quark; these are called prompt neutrinos and come mostly from charm hadrons. Pions and kaons are relatively long-lived particles, and their decay lengths become longer as energy increases.
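The decay-length contrast between light mesons and charm hadrons can be made concrete with a rough estimate, \(L\simeq(E/m)\,c\tau\), using rounded PDG values for \(c\tau\) and the masses (our own back-of-the-envelope numbers, ignoring energy loss and the atmospheric density profile):

```python
import math

# Approximate decay length L = gamma * c * tau = (E / m) * c * tau.
# c*tau (metres) and masses (GeV) are rounded PDG values.
CTAU_M = {"pi+": 7.80, "K+": 3.71, "D0": 123e-6, "D+": 312e-6}
MASS_GEV = {"pi+": 0.140, "K+": 0.494, "D0": 1.865, "D+": 1.870}

def decay_length_m(particle, energy_gev):
    return (energy_gev / MASS_GEV[particle]) * CTAU_M[particle]

# At 1e6 GeV a pion would travel tens of thousands of kilometres before
# decaying (so it interacts first), while a D0 decays within ~100 m.
for p in ("pi+", "K+", "D0"):
    print(p, f"{decay_length_m(p, 1e6):.3g} m")
```

This is why charm hadrons decay promptly at all relevant energies, while pion and kaon decays are increasingly suppressed by interactions.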
Then, they are likely to lose energy through the interactions with other particles until they decay. As a result, the flux of conventional atmospheric neutrinos rapidly decreases with energy. By comparison, the decay lengths of charm hadrons are extremely short even at high energies; therefore, they decay immediately, and the prompt atmospheric neutrino flux has a harder energy spectrum than the conventional flux. Consequently, the fluxes of conventional neutrinos and prompt neutrinos cross over at a certain energy. From various theoretical evaluations, the cross-over energy is expected to be in the range of \(E_{\nu}\sim 10^{5}-10^{6}\) GeV [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] for muon neutrinos and antineutrinos. In this energy region, high-energy neutrinos from astrophysical sources have been observed by IceCube as a diffuse flux [11, 12, 13], for which the prompt atmospheric neutrinos can be the primary background. At present, analyses of observational data based on existing models of astrophysical neutrino fluxes indicate that the atmospheric neutrino flux can be described by conventional neutrinos alone. Prompt atmospheric neutrinos have not been detected yet, and there are only predictions from theoretical evaluations, which currently have large uncertainties. Two of the most important components that are responsible for large uncertainties are the incident cosmic-ray spectrum and heavy-flavor production. Today, prompt neutrinos can be probed through high-energy colliders as well. Over the past few years, two neutrino experiments have been prepared and installed at the LHC: FASER\(\nu\) [14] and SND@LHC [15, 16]. The detectors are located at a distance of 480 m from the ATLAS interaction point, one of the four proton beam collision points. They are designed to detect neutrinos produced from the \(pp\) collisions and emitted in the very forward direction.
Both experiments started last year and recently reported the detection of collider neutrinos for the first time from the data collected during 2022 [17, 18]. These experiments will be operating continuously during Run 3 of the LHC. In the meantime, a set of next-stage experiments for the High-Luminosity era of the LHC (HL-LHC) has been proposed as a collective project under the name of Forward Physics Facility (FPF) [19, 20]. The FPF will include three neutrino experiments: the expanded versions of the current experiments, FASER\(\nu\)2 and AdvSND, and an additional liquid-argon detector, FLArE, with the detectors located at a distance of 620-685 m from the ATLAS interaction point. At the FPF, prompt neutrinos could be studied with very high statistics. The estimated number of neutrino interactions in the FPF detectors is \(\sim 10^{6}\) for muon neutrinos and \(\mathcal{O}(10^{5})\) for electron neutrinos [20]. The LHC at the HL-LHC stage will be run with the collision energy \(\sqrt{s}=14\) TeV, which is equivalent to an energy of \(\sim 10^{8}\) GeV in a fixed-target frame (\(E_{\rm lab}\simeq s/2m_{N}\)). This energy is in a relevant region to explore astrophysical neutrinos and prompt atmospheric neutrinos. Therefore, measurements of prompt neutrinos and the study of heavy-flavor production through forward experiments at the LHC will help us to better understand and estimate the prompt atmospheric neutrino fluxes. To demonstrate the relevance of the FPF for probing atmospheric neutrinos and astrophysical neutrinos, in this work we investigate the kinematic regions for prompt atmospheric neutrinos using the collider variables: the collision energy \(\sqrt{s}\) and the center-of-mass (CM) rapidity of charm hadrons \(y\).

## 2 Prompt atmospheric neutrino fluxes

Atmospheric neutrino fluxes can be evaluated using the so-called \(Z\)-moment method, which gives an approximate solution to the coupled cascade equations for incident cosmic rays, secondary hadrons and leptons from the hadron decays.
The cascade equations describe the propagation of the high-energy particles in the atmosphere, given by \[\frac{d\phi_{j}(E,X)}{dX} = -\frac{\phi_{j}(E,X)}{\lambda_{j}(E)}-\frac{\phi_{j}(E,X)}{\lambda_{j}^{\rm dec}(E,X)}+\sum_{k}S(k\to j) \tag{1}\] with \(\phi_{j}(E,X)\) the flux of a particle \(j\) at the column depth \(X\), and \(\lambda_{j}\) (\(\lambda_{j}^{\rm dec}\)) the interaction (decay) length. The source term \(S(k\to j)\) describes the production of particle \(j\) by interaction or decay, and can be expressed with the energy distribution of the produced particle, \(dn(k\to j)/dE\), which depends on the production process: \[S(k\to j) = \int_{E}^{\infty}dE^{\prime}\frac{\phi_{k}(E^{\prime},X)}{\lambda_{k}(E^{\prime})}\frac{dn(k\to j;E^{\prime},E)}{dE}\,. \tag{2}\] Under the assumption \(\phi_{k}(E^{\prime},X)/\phi_{k}(E,X)\simeq\phi_{k}(E^{\prime},0)/\phi_{k}(E,0)\), eq. (2) can be approximated in terms of the energy-dependent \(Z\)-moment, the flux, and the interaction/decay length of the parent particle \(k\), as below: \[S(k\to j) \simeq Z_{kj}(E)\frac{\phi_{k}(E,X)}{\lambda_{k}(E)}\,, \tag{3}\] \[Z_{kj}(E) \equiv \int_{E}^{\infty}dE^{\prime}\frac{\phi_{k}^{0}(E^{\prime})}{\phi_{k}^{0}(E)}\frac{\lambda_{k}(E)}{\lambda_{k}(E^{\prime})}\frac{dn(k\to j;E^{\prime},E)}{dE}\,.
\tag{4}\] The resulting flux of atmospheric neutrinos can be obtained in terms of two approximate solutions of the coupled cascade equations by \[\phi_{\nu}=\sum_{h}\frac{\phi_{h\to\nu}^{\rm low}\phi_{h\to\nu}^{\rm high}}{( \phi_{h\to\nu}^{\rm low}+\phi_{h\to\nu}^{\rm high})}\,, \tag{5}\] where the two fluxes in the low-energy and high-energy limits, \(\phi_{h\to\nu}^{\rm low}\) and \(\phi_{h\to\nu}^{\rm high}\) are expressed in terms of the \(Z\)-moments, incident cosmic ray flux \(\phi_{p}^{0}\) and critical energy \(\epsilon_{k}\) as \[\phi_{h\to\nu}^{\rm low} = \sum_{h}\frac{Z_{ph}Z_{h\nu}}{1-Z_{pp}}\phi_{p}^{0}\,, \tag{6}\] \[\phi_{h\to\nu}^{\rm high} = \sum_{h}\frac{Z_{ph}Z_{h\nu}}{1-Z_{pp}}\frac{\ln(\Lambda_{h}/ \Lambda_{p})}{1-\Lambda_{p}/\Lambda_{h}}\frac{\epsilon_{h}}{E}\phi_{p}^{0} \tag{7}\] given the effective interaction length \(\Lambda_{k}=\lambda_{k}^{\rm int}/(1-Z_{kk})\). The critical energy \(\epsilon_{k}\simeq(m_{k}c^{2}h_{0}/c\tau_{k})\) separates the energy into low-energy and high-energy regimes. In evaluating the atmospheric neutrino fluxes, one of the main input factors is the incident cosmic ray flux. A traditional parameterization is a broken power law (BPL) spectrum, which is obtained under the assumption that the cosmic rays consist of only protons or nucleons. This is useful for comparisons with prior work and results from others [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Modern parameterizations of the cosmic ray spectrum are obtained considering different compositions and sources. Two parameterizations most frequently used are referred to as H3p and H3a [21], which take into account supernova remnants, galactic and extra-galactic sources for the origin of cosmic rays. The difference between the two spectra is the composition of cosmic rays from the extra-galactic origin: the H3p has only protons and the H3a has a mixed composition. 
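Two toy numerical checks of this formalism (all inputs below are illustrative choices of ours, not the paper's): for a power-law primary flux \(\phi_k^0\propto E^{-\gamma}\), an energy-independent \(\lambda_k\) and a scaling production spectrum \(dn/dE=F(E/E^{\prime})/E^{\prime}\), the \(Z\)-moment of eq. (4) reduces to \(Z=\int_0^1 x^{\gamma-1}F(x)\,dx\) with \(x=E/E^{\prime}\); and the combination of eq. (5) follows \(\phi^{\rm low}\) well below the critical energy and \(\phi^{\rm high}\) well above it:

```python
import math
import numpy as np

# (i) Z-moment of eq. (4) for phi ~ E^-gamma, constant lambda, and a
# scaling production spectrum dn/dE = F(x)/E' with x = E/E':
# substituting E' = E/x gives Z = int_0^1 x^(gamma-1) F(x) dx.
gamma = 1.7                                # toy integral spectral index
F = lambda x: (1.0 - x) ** 4               # toy scaling function F(x)
x = (np.arange(200_000) + 0.5) / 200_000   # midpoint rule on (0, 1)
Z = np.mean(x ** (gamma - 1.0) * F(x))
# Analytic cross-check: Beta(gamma, 5) for this choice of F.
Z_exact = math.gamma(gamma) * math.gamma(5.0) / math.gamma(gamma + 5.0)

# (ii) Eq. (5): the combined flux follows the smaller of the two
# asymptotic solutions; eqs. (6)-(7) differ by the factor eps_h/E.
E = np.logspace(3, 9, 7)      # GeV
eps_h = 1e7                   # toy critical energy (GeV)
phi_low = E ** -gamma         # arbitrary normalisation
phi_high = (eps_h / E) * phi_low
phi = phi_low * phi_high / (phi_low + phi_high)
```

The combination in (5) therefore interpolates smoothly between the two limiting solutions around the critical energy \(\epsilon_h\).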
Another important factor is the heavy-flavor production cross section, which is input into \(dn(k\to j)/dE\) in eq. (4) to evaluate the \(Z\)-moments for production. This gives large uncertainties in the \(Z\)-moments, and eventually in the prediction of the prompt neutrino flux. There are several approaches to evaluate the heavy-flavor production cross sections. In this work, adopting our previous work [22], we use perturbative QCD at next-to-leading order (NLO) with massive charm, QCD scales of (\(\mu_{R}\), \(\mu_{F}\)) = (1, 2) \(m_{T}\) and intrinsic transverse momentum smearing \(\langle k_{T}\rangle=1.2\) GeV. Fig. 1 shows the predictions of the prompt atmospheric \(\nu_{\mu}+\bar{\nu}_{\mu}\) fluxes from charm hadron decays evaluated with different cosmic ray spectra: BPL, H3p, and H3a. We also present the conventional atmospheric neutrino flux [23] and the upper limit on the prompt \(\nu_{\mu}+\bar{\nu}_{\mu}\) flux extracted by IceCube from the analysis of 7.5 years of data for high-energy starting events (HESE) [13]. The upper limit is given by a scaling of the BERSS prediction [5]. As mentioned above, one can see that the cross-over energy between the predictions of the prompt and conventional atmospheric neutrino flux is between \(10^{5}-10^{6}\) GeV.

Figure 1: The fluxes of prompt atmospheric \(\nu_{\mu}+\bar{\nu}_{\mu}\) from H3p, H3a and BPL cosmic-ray all-nucleon spectra. Also shown are the conventional \(\nu_{\mu}+\bar{\nu}_{\mu}\) flux [23] and the IceCube upper limit on the prompt atmospheric neutrino flux [13]. The figure is taken from ref. [10].

In Fig. 2, we present some existing predictions evaluated by different groups using the BPL cosmic ray spectrum, including the result from this work, referred to as JBDGKR21 [9, 10]. As shown in the figure, the uncertainty in the predictions of the prompt flux is very large across the energy range, whereas the impact of different cosmic ray spectra appears at \(E_{\nu}\gtrsim 10^{5}\) GeV.
This uncertainty comes from various factors involved in the evaluation of the prompt flux. However, it is mostly related to the charm hadron production cross section.

Figure 2: Comparison of the prompt atmospheric neutrino fluxes for \(\nu_{\mu}+\bar{\nu}_{\mu}\) from refs. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].

## 3 Connection with collider neutrinos

We use the BPL cosmic-ray spectrum to illustrate the impact of hadronic collisions at different \(\sqrt{s}\), and of charmed mesons produced at different CM rapidities, on the prompt atmospheric neutrino flux. The left panel of Fig. 3 shows the prompt atmospheric \(\nu_{\mu}+\bar{\nu}_{\mu}\) fluxes for different values of the maximum CM collision energy, \(\sqrt{s}_{\rm max}=7\), 14 and 100 TeV, along with the prediction evaluated for the full range of \(\sqrt{s}\). The first two values are the respective energies for Run 1 and the HL-LHC stage of the LHC, and \(\sqrt{s}=100\) TeV is the \(pp\) collision energy considered for the Future Circular Collider (FCC). One can see that the maximum energy of the LHC cannot cover the whole region relevant for prompt atmospheric neutrinos, while neutrinos from the 100 TeV collision energy contribute to most of the energy region interesting for prompt atmospheric neutrinos. Although \(\sqrt{s}=14\) TeV is equivalent to about 100 PeV in a fixed-target frame, the produced neutrinos are distributed at lower energies. However, collisions at the LHC with this \(\sqrt{s}\) still allow us to cover the interesting energy region where the transition between conventional and prompt neutrinos occurs and a comparable flux of astrophysical neutrinos exists. In the right panel of Fig. 3, we show the contributions of charm hadrons produced in different CM rapidity regions to the prompt atmospheric neutrino fluxes evaluated using the maximum collision energy \(\sqrt{s}_{\rm max}=14\) TeV. We divide the rapidity range into three parts: \(2<y<4.5\), \(4.5<y<7.2\) and \(y>7.2\). The range of \(2<y<4.5\) is covered by the LHCb experiment, which is the most forward region for heavy-flavor production probed at the LHC so far. The region of \(y>7.2\) can be explored by forward experiments, both the first-stage experiments (FASER\(\nu\) and SND@LHC) and those at the FPF. As shown in the figure, the prompt atmospheric neutrinos come mostly from the charm produced in the rapidity region beyond the LHCb coverage for the energies where the prompt atmospheric neutrinos are important. The contribution of the charm hadrons with rapidity \(y>7.2\) is limited to a narrow range of high energies. We further focus on neutrinos that can be detected at the FPF, namely, we explore neutrino rapidities (\(\eta_{\nu}\)) greater than 7.2. The left panel of Fig. 4 shows the \(\eta_{\nu}\) distribution of muon neutrinos from the \(D^{0}+\bar{D}^{0}\) produced in the different charm hadron rapidity ranges in \(pp\) collisions at \(\sqrt{s}=14\) TeV. This indicates that, for neutrinos that are incident on the neutrino detectors of the FPF at the LHC, charm hadrons produced in \(4.5<y<7.2\) contribute more than those in \(y>7.2\). The right panel of Fig. 4 presents the CM frame energy (\(E_{\nu}^{*}\)) distribution of neutrinos from \(D^{0}+\bar{D}^{0}\) produced at the LHC with \(\sqrt{s}=14\) TeV. The solid histogram is for all neutrinos from \(D^{0}+\bar{D}^{0}\) in \(y>2\), while the dashed histograms are for the neutrinos toward the neutrino detectors of the FPF (i.e. \(\eta_{\nu}>7.2\)) from the different \(D^{0}+\bar{D}^{0}\) rapidity ranges discussed above. One can see that at very high CM frame energies of \(E_{\nu}^{*}\gtrsim 1\) TeV, neutrinos detected at the FPF mostly come from the charm hadrons produced at the LHC in \(y>7.2\). However, for \(E_{\nu}^{*}\) of a few hundred GeV, the contributions to neutrinos at the FPF come predominantly from charm hadrons with \(4.5<y<7.2\), which is an important region for the prompt atmospheric neutrinos.
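The mapping between CM rapidity and atmospheric (fixed-target) energy behind these statements can be estimated with elementary kinematics. The sketch below is our own back-of-the-envelope estimate, assuming a charm transverse mass of roughly \(m_T\simeq 2\) GeV:

```python
import math

def lab_energy_gev(y_cm, sqrt_s_gev=14000.0, m_T=2.0, m_N=0.938):
    """Fixed-target energy of a hadron with CM rapidity y_cm:
    E_lab = m_T * cosh(y_cm + y_boost), where y_boost is the rapidity
    of the CM frame relative to the fixed-target (air) frame."""
    gamma_cm = sqrt_s_gev / (2.0 * m_N)
    y_boost = math.acosh(gamma_cm)
    return m_T * math.cosh(y_cm + y_boost)

for y in (2.0, 4.5, 7.2):
    print(f"y = {y}: E_lab ~ {lab_energy_gev(y):.2e} GeV")
```

With these assumptions, charm hadrons in \(4.5<y<7.2\) carry roughly \(10^{6}\) to \(2\times 10^{7}\) GeV in the frame of the atmosphere, so the neutrinos they feed, carrying a fraction of the parent energy, populate the \(10^{5}-10^{7}\) GeV window discussed above.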
## 4 Discussion

We have investigated kinematic regions for prompt neutrinos produced in the atmosphere in terms of the center-of-mass collision energy \(\sqrt{s}\) and the collider-frame rapidity of charm hadrons \(y\). Focusing on the atmospheric neutrino energy range of \(10^{5}\) GeV \(<E_{\nu}<10^{7}\) GeV, where the prompt atmospheric neutrinos can be the main component of the atmospheric neutrino flux and play a role as an important background to the diffuse astrophysical neutrino flux, we show that there is a kinematic overlap between prompt neutrino production in the atmosphere and at the LHC. Although the LHC energy cannot contribute to the full energy region of prompt atmospheric neutrinos, it is high enough to cover the important energy range mentioned above. In the energy range of \(10^{5}-10^{7}\) GeV, prompt atmospheric neutrinos come mostly from the charm hadrons produced in the rapidity range of \(4.5<y<7.2\), which is beyond the coverage of the current LHC experiments that measure charm hadron production. However, the FPF can detect neutrinos from the decays of the charm hadrons in this rapidity region. The prompt neutrino measurement at the FPF will help to understand charm production, constraining the parton distribution functions (PDFs) and QCD evaluations for heavy-flavor production. Consequently, it will potentially improve predictions of prompt atmospheric neutrino fluxes. Current analyses by IceCube with several existing models for astrophysical neutrino fluxes are compatible with zero background from prompt neutrinos.

Figure 3: The prompt flux of atmospheric \(\nu_{\mu}+\bar{\nu}_{\mu}\) for different values of collision energies \(\sqrt{s}\) (left) and from different charm meson rapidity ranges in \(pp\) collisions evaluated with \(\sqrt{s}<14\) TeV (right). The BPL cosmic-ray spectrum is used in the evaluation. The figures are taken from ref. [10].
We can expect that the study of prompt neutrinos at the FPF with abundant events will be able to test the assessment of the backgrounds to the astrophysical neutrino flux, which may require modification of the astrophysical neutrino flux models. Therefore, measurements of prompt neutrinos at the FPF of the LHC will shed light on the study of astrophysical neutrinos.

## Acknowledgments

This work is supported in part by U.S. Department of Energy Grants DE-SC-0010113 and DE-SC-0012704, the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (No. 2021R1A2C1009296) and by the German Bundesministerium für Bildung und Forschung (contract 05H21GUCCA).
2302.11191
On the Emulation of Synchronous Machine Dynamics by Converter-Interfaced Generators
This paper discusses the conditions that a device needs to satisfy to replicate the behavior of a conventional synchronous machine (SM) connected to a power network. The conditions pertain to the device's stored energy, time scale of response, oscillation damping, and behavior during short-circuits. Relevant remarks for devices that do/don't satisfy these conditions are discussed through an illustrative numerical example as well as through simulation results based on a modified version of the well-known WSCC 9-bus test system.
Georgios Tzounas, Federico Milano
2023-02-22T07:55:26Z
http://arxiv.org/abs/2302.11191v1
# On the Emulation of Synchronous Machine Dynamics by Converter-Interfaced Generators ###### Abstract This paper discusses the conditions that a device needs to satisfy to replicate the behavior of a conventional synchronous machine (SM) connected to a power network. The conditions pertain to the device's stored energy, time scale of response, oscillation damping, and behavior during short-circuits. Relevant remarks for devices that do/don't satisfy these conditions are discussed through an illustrative numerical example as well as through simulation results based on a modified version of the well-known WSCC 9-bus test system. Converter-interfaced generation, low-inertia systems, frequency stability, virtual synchronous machine (VSM). ## I Introduction ### _Motivation_ Unlike synchronous machines (SMs), converter-interfaced generators (CIGs) do not inherently provide inertia to the power grid, are often stochastic, and operate with small or no power reserves [1]. These properties pose serious challenges to the transition from a SM- to a CIG-dominated power system [2]. On the other hand, the behavior of CIGs is dictated by their control loops, and hence these resources are very flexible, since they can be designed using a broad range of control strategies to provide fast and effective regulation [3, 4]. ### _Literature Review_ The key role that SMs play in the dynamic performance of power systems is highly appreciated by system operators. This has motivated important efforts for the design of CIG control methods able to offer the auxiliary services conventionally provided by SMs, including inertial response, voltage and frequency regulation, and suppression of electromechanical oscillations. The application of these methods varies from the control of a single power electronic converter to the controlled aggregation of multiple heterogeneous converter-based resources [5, 6, 7]. 
Moreover, a part of these methods has explicitly aimed to replicate the dynamic response of SMs, which has led to the concept of the virtual synchronous machine (VSM). The development of VSMs is still in an early stage and various implementations have been proposed in the recent literature; for example, we cite [8, 9, 10]. In a different vein, several recent studies have proposed analogies of SMs with different kinds of devices, with the goal of studying various problems, including frequency control, synchronization of power converters, transient stability, etc. For example, the authors in [11, 12] propose that a droop control is equivalent to a VSM, whereas in [13], the authors suggest an equivalence between a SM and a grid-forming converter. In [14], it is suggested that a phase-locked loop (PLL) used for converter synchronization is analogous to a SM. Yet another analogy is outlined in [15], where a non-uniform Kuramoto oscillator is described as equivalent to an overdamped SM. Motivated by the above line of work, this paper discusses the validity of characterizing a non-synchronous device as equivalent to a SM. Such a characterization, apart from a formal mathematical equivalence, should depend also upon a set of additional and critical constraints. A qualitative summary of these constraints, as well as of the implications of their violation, is complementary to the existing literature and can provide didactic value for researchers working on the design of CIG control methods.

### _Contribution_

The contributions of the paper are as follows:

* A qualitative description of the conditions that make a power electronic-based device behave like a traditional SM connected to a power system. These conditions pertain to the device's energy availability, time scale of action, damping, and response to short-circuits.
* A discussion on the ability of devices proposed in the literature as equivalent SMs, including VSMs, droop controllers and PLLs, to satisfy these conditions.
### _Organization_

The remainder of the paper is organized as follows. Section II recalls the mathematical analogy between a generic second-order oscillator and the classical SM model. The requirements that a device needs to fulfill to replicate the behavior of a traditional SM are presented in Section III. The case study is discussed in Section IV. Finally, conclusions are drawn in Section V.

## II Synchronous Machines as Oscillators

Let us recall the classical SM model [16]: \[\begin{split}\dot{\delta}&=\Omega_{b}\,\omega\,,\\ 2H\,\dot{\omega}&=p_{m}-p_{e}(\delta)-D\,\omega\,,\end{split} \tag{1}\] where \(\delta\) (rad) is the rotor's angle and \(\omega\) (pu) the rotor's speed variation with respect to the reference angular frequency; \(\Omega_{b}\) (rad/s) is the synchronous frequency; \(H\) (s) is the SM inertia constant and \(D\) its damping factor; \(p_{m}\) and \(p_{e}(\delta)\) are, respectively, the SM mechanical and electrical power output in pu, with \(p_{e}(\delta)=e^{\prime}v\sin(\delta-\theta)/X\), where \(e^{\prime}\) is the SM internal emf; \(\bar{v}=v\angle\theta\) is the voltage at the SM terminal bus; and \(X\) is defined as the sum of the SM transient reactance and the reactance that connects the SM to its terminal bus. Let us rewrite (1) as follows: \[c\,\ddot{y}+d\,\dot{y}-f(y)=0\,, \tag{2}\] where \(y\equiv\delta\), \(c=2H\), \(d=D\), \(f(y)=\Omega_{b}(p_{m}-p_{e}(\delta))\). The last equation describes a very well-known concept, i.e. the SM is a second-order oscillator, where the damping is determined by \(D\) and the "reluctance" to allow frequency variations is quantified by \(H\). The block diagram of (2) is depicted in Fig. 1, where \(s\in\mathbb{C}\) is the complex Laplace frequency. The literature abounds with variants of (2), such as the Van der Pol [17] and the Lienard-type oscillator [18], with additional non-linear terms, e.g. \(d=g(y)\), and/or forced input oscillations.
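The swing dynamics of model (1) are easy to explore numerically; the sketch below integrates (1) with a fixed-step RK4 scheme, using illustrative parameter values of our own choosing (not taken from the paper):

```python
import math

# Classical SM model of eq. (1), integrated with a simple RK4 scheme.
# All parameter values below are illustrative only.
OMEGA_B = 2 * math.pi * 50        # synchronous frequency (rad/s)
H, D = 5.0, 2.0                   # inertia constant (s), damping (pu)
E, V, X = 1.05, 1.0, 0.5          # internal emf, bus voltage, reactance (pu)
THETA, PM = 0.0, 0.8              # bus voltage angle, mechanical power (pu)

def f(state):
    delta, omega = state
    pe = E * V * math.sin(delta - THETA) / X    # electrical power p_e(delta)
    return (OMEGA_B * omega, (PM - pe - D * omega) / (2 * H))

def rk4_step(state, dt):
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Start away from equilibrium and let the rotor swing.
state, dt = (0.1, 0.0), 1e-3
for _ in range(60_000):           # 60 s of simulated time
    state = rk4_step(state, dt)
```

With these toy values, the linearized swing frequency \(\sqrt{\Omega_{b}k/2H}\) is about 7.8 rad/s (roughly 1.2 Hz), a typical electromechanical time scale, and the rotor angle settles at the equilibrium \(\delta^{*}=\arcsin(p_{m}X/e^{\prime}v)\).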
We acknowledge but do not discuss these models as they can be considered to be part of the broader category of synchronization mechanisms, such as PLLs. Moreover, the very same model shown in (2) is ubiquitous in a broad class of engineering systems, which (or even more often, parts of which) can be reasonably approximated by a suitable second-order system of the same shape. Then, taking into consideration the importance of SMs in power systems, one may observe a striking equivalence of the SM with, in principle, any other system that can be described in the same form, e.g. with a given second-order automatic controller. The main concept discussed in this paper is that, contrary to what is implied (to a lower or higher extent) in some recent works, a given device expressed in the form of (2) is, in general, not equivalent to a SM. The only obvious analogy between such a device and a SM is that they are both _special cases_ of the same, broad family of oscillators. Moreover, a given device can be considered to emulate the behavior of a traditional SM connected to a power network if and only if a set of additional constraints is met. These constraints are duly discussed in the next section.

## III Synchronous Machine Emulation

In this section we discuss the conditions that a device needs to satisfy so that it replicates the behavior of a conventional SM connected to a power grid. These conditions pertain to the availability of energy, the time scale of response, the behavior during short-circuits, and the damping of oscillations.

### _Time Scale_

The time scale of the dynamic response of the emulating device must be similar to that of a SM. A typical range of the inertia time constant \(H\) in a SM is [2,10] MWs/MVA. The value of \(H\) has a physical meaning and represents the time (in seconds) for which the SM could inject its rated power to the system if disconnected from its turbine.
Therefore, second-order oscillators in the form of (2) that respond in a different time frame do not resemble the behavior of a SM.

### _Stored Energy_

In a SM of rated power \(S_{n}\) MVA, the rotating mass has, in nominal conditions, a stored kinetic energy of \(HS_{n}\) MWs. After a negative (positive) mismatch between the mechanical power \(p_{m}\) and electrical power \(p_{e}\), and until primary regulation is initiated, this physical storage is the crucial mechanism that maintains not only the system's power balance but also the SM synchronism, by decreasing (increasing) instantaneously its stored energy as the rotor decelerates (accelerates). Maintaining synchronism and power balance are inextricable in a SM, and hence, a SM-emulating device is also required to include mechanisms that account for both tasks. Regarding the power balance, a device that emulates a SM should have sufficient stored energy that is available very fast (ideally instantaneously) after a power mismatch \(\Delta p=p_{m}-p_{e}\neq 0\) occurs. "Very fast" in this context basically refers to the time delay between the occurrence of the disturbance and the initiation of the device's response. For CIGs, instantaneous (i.e. delay-free) provision of the required energy during an imbalance is not possible, and so the condition may be relaxed to a requirement for a very fast response. This, however, may raise concerns, as it leads to a time window right after the disturbance that remains uncovered [2]. Overall, energy storage is by no means a trivial requirement for a SM-equivalent device.

### _Oscillation Damping_

Oscillations that are not well damped are undesired. Thus, in an emulation of a SM where the damping is a fully controlled parameter, it is reasonable that one decides to remove oscillations during the design (e.g., in the case of (2), by choosing a large \(d\)). We recall that SMs are designed for high efficiency and thus include a relatively small damping.
A typical range of \(D\) for (1) is [2,3] pu, to account both for mechanical damping and for the effect of damper windings. In higher-order (e.g. sixth-order) machine models, the effect of damper windings is explicitly represented in the model and thus \(D\) can be chosen lower or even zero. Then, poorly damped electromechanical oscillations are partially suppressed by some form of damping control, but the resulting response is still oscillatory. In theory, good damping of SM oscillations could be achieved through prime movers, but the latter are not fast enough due to mechanical constraints. Thus, in practice, oscillations are damped through the SM excitation system, but the effect is limited due to the weak coupling of voltage with power angle. The above problems do not exist in power electronic-based devices, which can provide a fast response and thus also be designed for very good damping. However, it is worth noting that an overdamped response, even if desired, does not replicate the conventional behavior of a SM connected to a power network.

Fig. 1: Block diagram of (2).

### _Link of Time Scale with Energy and Damping_

A qualitative way to study the link of time scale with energy and damping in model (2) is by considering its linearized version, as follows: \[c\,\Delta\ddot{y}+d\,\Delta\dot{y}+k\Delta y=0\,. \tag{3}\] The variations of stored energy (\(\Delta E\)) and power dissipation (\(\Delta P_{l}\)) of the oscillator are then [19]: \[\Delta E=\frac{1}{2}\,c\,\Delta\dot{y}^{2}\,,\quad\Delta P_{l}=d\,\Delta\dot{y}^{2}\,, \tag{4}\] while its eigenvalues are \(\lambda=(-d\pm\sqrt{d^{2}-4ck})/2c\) and, thus, the following relationship holds: \[\frac{\Delta E}{\Delta P_{l}}=\frac{c}{2d}=-\frac{1}{4\,\Re\{\lambda\}}\,.
\tag{5}\] From (5), it is clear that dynamics faster than the time scale of interest are likely to lead to high damping and also violate the requirement for available stored energy, as fast eigenvalues are in general linked to lower amounts of energy and higher damping ratios. ### _Response to Short-Circuits_ The short-circuit current that a SM can tolerate before protections are activated is a multiple of the nominal current for some time. The high "thermal inertia" of SMs is in contrast to the limited ability to overload power converters. This implies that, even if a CIG is controlled to reproduce well the response of a SM under small disturbances, the same cannot be achieved during severe voltage drops, unless the converter design is significantly oversized (e.g. by 6 to 7 times). However, such a design is not practical for economic reasons. This appears to be a rather severe limitation of VSMs in general, given that replication of the behavior of SMs is of utmost importance during large disturbances such as faults. ### _Remarks_ The following remarks are relevant: * The conditions discussed above focus mainly on the critical (for low-inertia systems) time scale of the SM inertial response, which is also the most relevant time scale for the emulation of SM dynamics. Slower actions, including primary and secondary frequency regulation, are not a concern, since they can be conveniently implemented with standard controllers without the need to make any analogy with a SM. * which is a consequence of (1) - not necessarily voltage-forming. ## IV Case Study In this section we discuss through numerical simulations the behavior of devices that have been proposed in the literature as analogous and/or equivalent to SMs. Section IV-A is based on the simplified model (2), while Section IV-B is based on the well-known WSCC 9-bus test system. ### _Illustrative Example_ In this section we consider different devices modeled as second-order oscillators in the form of (2).
The first device is a conventional SM. The second device is a droop-based control that acts in the time scale of the SM inertial response. Since droop controls are in general not oscillatory, modeling such a device with (2) implies that the oscillator is overdamped, or equivalently, that \(d\) is relatively large. We note that energy availability is not a given for droop controllers. A droop control combined with sufficient power reserves that can be deployed very fast following a disturbance is, under certain conditions, what the recent literature has often defined as a VSM. The last device considered is a simplified PLL. The PLL is much faster than a SM and also does not have the required energy to provide inertial response. On the other hand, a PLL may oscillate, although the damping ratio of PLL oscillations is not necessarily similar to that of SM oscillations. Table I summarizes how droop control, VSM, and PLL compare to a conventional SM connected to a power system in view of the conditions for energy availability, time scale, damping, and short-circuit response, discussed in Section III. \begin{table} \begin{tabular}{l c c c c} \hline Device & Energy & Time scale & Damping & Short-circuit response \\ \hline Conventional SM & ✓ & ✓ & ✓ & ✓ \\ Droop control & ✗ & ✓ & ✗ & ✗ \\ VSM & ✓ & ✓ & ✗ & ✗ \\ PLL & ✗ & ✗ & ✗ & ✗ \\ \hline \end{tabular} \end{table} TABLE I: Comparison of droop control, VSM, and PLL, with conventional SM. Figure 2 shows how the step responses of the SM, VSM, and PLL compare to each other: the top panel shows the frequency variations of the devices, while the bottom provides a close-up of the same plot. The values of \(c\) and \(d\) used for each device are given in Table II. These values yield the following ratios between stored energy and power dissipation in (5) for the three devices: * SM: \(\Delta E/\Delta P_{l}=1\). * VSM: \(\Delta E/\Delta P_{l}=0.03\). * PLL: \(\Delta E/\Delta P_{l}=0.00167\).
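The link stated in (5) between the \(\Delta E/\Delta P_{l}\) ratio, the parameters \(c,d\), and the eigenvalues can be checked numerically. The sketch below first verifies (5) on an underdamped parameter set, then reproduces the three ratios listed above. The \((c,d)\) pairs are hypothetical (Table II is not reproduced in this text); they are chosen only so that \(c/(2d)\) matches the listed ratios.

```python
import cmath

def eigenvalues(c, d, k):
    """Roots of c*s^2 + d*s + k = 0, cf. the linearized model (3)."""
    disc = cmath.sqrt(d * d - 4.0 * c * k)
    return (-d + disc) / (2.0 * c), (-d - disc) / (2.0 * c)

# Check (5) on an underdamped set (d^2 < 4ck):
#   dE/dP_l = c/(2d) = -1/(4*Re{lambda})
c, d, k = 2.0, 0.5, 3.0          # illustrative values
lam, _ = eigenvalues(c, d, k)
assert abs(c / (2 * d) + 1.0 / (4 * lam.real)) < 1e-12

# Hypothetical (c, d) pairs, chosen only to reproduce the listed ratios:
ratios = {name: cc / (2 * dd) for name, (cc, dd) in
          {"SM": (8.0, 4.0), "VSM": (0.6, 10.0), "PLL": (0.01, 3.0)}.items()}
# ratios: SM = 1, VSM = 0.03, PLL ~ 0.00167
```

The fast PLL pair has the smallest \(c/(2d)\), consistent with the observation that fast eigenvalues go together with little stored energy and high damping.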
The \(\Delta E/\Delta P_{l}\) ratio of the SM is two and three orders of magnitude larger than those of the VSM and PLL, respectively. This result, as well as the plots in Fig. 2, is illustrative of the significant difference between the conventional behavior of SMs in power systems and the responses of devices that do not satisfy the constraints described in Section III. ### _WSCC 9-Bus System_ This section is based on the WSCC 9-bus test system. The network comprises six transmission lines and three medium voltage/high voltage transformers; during transients, loads are modeled as constant admittances; two SMs are connected to buses 1 and 2, while, for the needs of this paper, the SM at bus 3 is replaced by a CIG. The modified test system is shown in Fig. 4. The CIG at bus 3 synchronizes to the power grid through a synchronous reference frame PLL and provides primary frequency response through a droop-based controller that receives the error between the reference frequency and the frequency estimated by the PLL, \(\omega^{\mathrm{ref}}-\tilde{\omega}\), and regulates the \(d\)-axis current component \(\hat{\imath}_{d}\) in the \(dq\)-reference frame [4]. The frequency estimated by the PLL is obtained through a proportional-integral control whose input is the error between the measured and the estimated phase angles, \(\theta-\tilde{\theta}\). The block diagrams and parameter values of the PLL and frequency control models are presented in Fig. 3 and Table III. We consider a three-phase fault at bus 5 at \(t=0.1\) s. The fault is cleared after \(70\) ms by tripping the line that connects buses 5 and 7. Results are summarized in Fig. 5. In particular, Fig. 5a shows the speed variation of the SMs; the (normalized) output of the CIG droop control; and the frequency variation at bus 3 as estimated by the PLL.
Figure 5b illustrates the tight limit in the ability to overload the CIG and thus to support the system during the fault, by comparing the \(d\)-axis current component of the stator of the SM at bus 1 to the \(d\)-axis regulated current component of the CIG. Once again, results are representative of the large qualitative deviations between the behavior of a conventional SM connected to a power system and devices that are not designed to resemble its dynamics according to the conditions discussed in Section III. ## V Conclusions The paper shows that a given second-order oscillatory device resembles the dynamic response of a SM only if it satisfies certain conditions. These conditions concern the device's availability of energy, time scale of action, damping of oscillations, and response during short-circuits. Devices that do not fulfill these conditions have been characterized in the recent literature as equivalent or analogous to SMs. Such devices should not be confused with and/or misinterpreted as replicating the traditional behavior of a SM connected to a power network.
2306.03062
Geometry of a weak para-$f$-structure
We study geometry of the weak almost para-$f$-structure and its subclasses. This allow us to produce totally geodesic foliations and also to take a fresh look at the para-$f$-structure introduced by A.\,Bucki and A.\,Miernowski. We demonstrate this by generalizing several known results on almost para-$f$-manifolds. First, we express the covariant derivative of $f$ using a new tensor on a metric weak para-$f$-structure, then we prove that on a weak para-${\cal K}$-manifold the characteristic vector fields are Killing and $\ker f$ defines a totally geodesic foliation. Next, we show that a para-${\cal S}$-structure is rigid (i.e., a weak para-${\cal S}$-structure is a para-${\cal S}$-structure), and that a metric weak para-$f$-structure with parallel tensor $f$ reduces to a weak para-${\cal C}$-structure. We obtain corollaries for $p=1$, i.e., for a weak almost paracontact structure.
Vladimir Rovenski
2023-06-05T17:33:14Z
http://arxiv.org/abs/2306.03062v1
# Geometry of a weak para-\(f\)-structure ###### Abstract We study the geometry of the weak almost para-\(f\)-structure and its satellites. This allows us to produce totally geodesic foliations and Killing vector fields and also to take a fresh look at the para-\(f\)-structure introduced by A. Bucki and A. Miernowski. We demonstrate this by generalizing several known results on almost para-\(f\)-manifolds. First, we express the covariant derivative of \(f\) using a new tensor on a metric weak para-\(f\)-structure, then we prove that on a weak para-\(\mathcal{K}\)-manifold the characteristic vector fields are Killing and \(\ker f\) defines a totally geodesic foliation. Next, we show that a para-\(\mathcal{S}\)-structure is rigid (i.e., a weak para-\(\mathcal{S}\)-structure is a para-\(\mathcal{S}\)-structure), and that a metric weak para-\(f\)-structure with parallel tensor \(f\) reduces to a weak para-\(\mathcal{C}\)-structure. We obtain corollaries for \(p=1\), i.e., for a weak almost paracontact structure. **Keywords**: para-\(f\)-structure; distribution; totally geodesic foliation; Killing vector field **Mathematics Subject Classifications (2010)** 53C15, 53C25, 53D15 ## Introduction A distribution (or a foliation, associated with an integrable distribution) on a pseudo-Riemannian manifold is _totally geodesic_ if any geodesic of the manifold that is tangent to the distribution at one point is tangent to it at all points. Such foliations have the simplest extrinsic geometry of the leaves and appear in Riemannian geometry, e.g., in the theory of \(\mathfrak{g}\)-foliations, as kernels of degenerate tensors, e.g., [1, 6]. We are motivated by the problem of finding structures on manifolds which lead to totally geodesic foliations and Killing vector fields, see [5].
A well-known source of totally geodesic foliations is a para-\(f\)-structure on a smooth manifold \(M^{2n+p}\), defined using a \((1,1)\)-tensor field \(f\) satisfying \(f^{3}=f\) and having constant rank \(2n\), see [3, 9]. The paracontact geometry (a counterpart to the contact geometry) is a higher-dimensional analog of almost product (\(p=0\)) [7] and almost paracontact (\(p=1\)) [4] structures. A para-\(f\)-structure with \(p=2\) arises in the study of hypersurfaces in almost contact manifolds, e.g., [2]. Interest in para-Sasakian manifolds is due to their connection with para-Kähler manifolds and their role in mathematical physics. If there exists a set of vector fields \(\xi_{1},\ldots,\xi_{p}\) with certain properties, then \(M^{2n+p}\) is said to have a para-\(f\)-structure with complemented frames. In this case, the tangent bundle \(TM\) splits into three complementary subbundles: the \(\pm 1\)-eigen-distributions of \(f\), composing the \(2n\)-dimensional distribution \(f(TM)\), and the \(p\)-dimensional distribution \(\ker f\) (the kernel of \(f\)). In [11], we introduced the "weak" metric structures that generalize an \(f\)-structure and a para-\(f\)-structure, and allow us to take a fresh look at the classical theory. In [10], we studied the geometry of a weak \(f\)-structure and its satellites that are analogs of \(\mathcal{K}\)-, \(\mathcal{S}\)- and \(\mathcal{C}\)-manifolds. In this paper, using a similar approach, we study the geometry of a weak para-\(f\)-structure and its important cases related to a pseudo-Riemannian manifold endowed with a totally geodesic foliation. A natural question arises: how rich are weak para-\(f\)-structures compared to the classical ones? We study this question for weak analogs of para-\(\mathcal{K}\)-, para-\(\mathcal{S}\)- and para-\(\mathcal{C}\)-structures. The proofs of main results use the properties of new tensors, as well as the constructions required in the classical case.
The theory presented here can be used to deepen our knowledge of pseudo-Riemannian geometry of manifolds equipped with distributions. This article consists of an introduction and five sections. In Section 1, we discuss the properties of "weak" metric structures generalizing some classes of para-\(f\)-manifolds. In Section 2 we express the covariant derivative of \(f\) of a weak para-\(f\)-structure using a new tensor and show that on a weak para-\(\mathcal{K}\)-manifold the characteristic vector fields are Killing and \(\ker f\) defines a totally geodesic foliation. Also, for a weak almost para-\(\mathcal{C}\)-structure and a weak almost para-\(\mathcal{S}\)-structure, \(\ker f\) defines a totally geodesic foliation. In Section 3, we apply to weak almost para-\(\mathcal{S}\)-manifolds the tensor \(h\) and prove stability of some known results. In Section 4 we complete the result in [11] and prove the rigidity theorem that a weak para-\(\mathcal{S}\)-structure is a para-\(\mathcal{S}\)-structure. In Section 5, we show that a weak para-\(f\)-structure with parallel tensor \(f\) reduces to a weak para-\(\mathcal{C}\)-structure, we also give an example of such a structure. ## 1 Preliminaries Here, we describe "weak" metric structures generalizing certain classes of para-\(f\)-manifolds and discuss their properties. A _weak para-\(f\)-structure_ on a smooth manifold \(M^{2n+p}\) is defined by a \((1,1)\)-tensor field \(f\) of rank \(2\,n\) and a nonsingular \((1,1)\)-tensor field \(Q\) satisfying, see [11], \[f^{3}-fQ=0,\qquad Q\,\xi=\xi\quad(\xi\in\ker f). \tag{1}\] If \(\ker f=\{X\in TM:f(X)=0\}\) is parallelizable, then we fix vector fields \(\xi_{i}\) (\(1\leq i\leq p\)), which span \(\ker f\), and their dual one-forms \(\eta^{i}\). We get a _weak almost para-\(f\)-structure_ (a weak almost paracontact structure for \(p=1\)), see [11], \[f^{2}=Q-\sum\nolimits_{i}\eta^{i}\otimes\xi_{i},\quad\eta^{i}(\xi_{j})=\delta ^{i}_{j}\,. 
\tag{2}\] Using (2) we get \(f(TM)=\bigcap_{i}\ker\eta^{i}\) and that \(f(TM)\) is \(f\)-invariant, i.e., \[fX\in f(TM),\quad X\in f(TM). \tag{3}\] By (2)-(3), \(f(TM)\) is invariant for \(Q\). A weak almost para-\(f\)-structure is called _normal_ if the following tensor (known for \(Q=\mathrm{id}_{TM}\), e.g., [6]) is identically zero: \[N^{(1)}(X,Y)=[f,f](X,Y)-2\sum\nolimits_{i}d\eta^{i}(X,Y)\,\xi_{i}. \tag{4}\] The Nijenhuis torsion of \(f\) and the exterior derivative of \(\eta^{i}\) are given by \[[f,f](X,Y)=f^{2}[X,Y]+[fX,fY]-f[fX,Y]-f[X,fY],\quad X,Y\in\mathfrak{X}_{M}, \tag{5}\] \[d\eta^{i}(X,Y)=\frac{1}{2}\,\{X(\eta^{i}(Y))-Y(\eta^{i}(X))-\eta^{i}([X,Y])\},\quad X,Y\in\mathfrak{X}_{M}. \tag{6}\] **Remark 1.1**.: A differential \(k\)-_form_ on a smooth manifold \(M\) is a skew-symmetric tensor field \(\omega\) of type \((0,k)\). According to the conventions of [8], \[d\omega(X_{1},\ldots,X_{k+1})=\tfrac{1}{k+1}\sum_{i=1}^{k+1}(-1)^{i+1}X_{i}(\omega(X_{1},\ldots,\widehat{X}_{i}\ldots,X_{k+1}))\] \[\qquad+\sum\nolimits_{i<j}(-1)^{i+j}\,\omega([X_{i},X_{j}],X_{1},\ldots,\widehat{X}_{i},\ldots,\widehat{X}_{j},\ldots,X_{k+1}), \tag{7}\] where \(X_{1},\ldots,X_{k+1}\in\mathfrak{X}_{M}\) and \(\,\widehat{\cdot}\,\) denotes the operator of omission, defines a \((k+1)\)-form \(d\omega\) - the _exterior differential_ of \(\omega\). Thus, (7) with \(k=1\) gives (6). If there exists a pseudo-Riemannian metric \(g\) such that \[g(fX,fY)=-g(X,Q\,Y)+\sum\nolimits_{i}\eta^{i}(X)\,\eta^{i}(Y),\quad X,Y\in\mathfrak{X}_{M}, \tag{8}\] then \((f,Q,\xi_{i},\eta^{i},g)\) is called a _metric weak para-\(f\)-structure_, \(M(f,Q,\xi_{i},\eta^{i},g)\) is called a _metric weak para-\(f\)-manifold_, and \(g\) is called a _compatible metric_. Putting \(Y=\xi_{i}\) in (8) and using (1), we get \(g(X,\xi_{i})=\eta^{i}(X)\), thus, \(f(TM)\perp\ker f\) and \(\{\xi_{i}\}\) is an orthonormal frame of \(\ker f\).
**Remark 1.2**.: According to [11], a weak almost para-\(f\)-structure admits a compatible pseudo-Riemannian metric if \(f\) admits a skew-symmetric representation, i.e., for any \(x\in M\) there exist a neighborhood \(U_{x}\subset M\) and a frame \(\{e_{k}\}\) on \(U_{x}\), for which \(f\) has a skew-symmetric matrix. The following statement is well-known for the case of \(Q=\mathrm{id}_{TM}\). **Proposition 1.1**.: (a) _For a weak almost para-\(f\)-structure the following hold:_ \[f\,\xi_{i}=0,\quad\eta^{i}\circ f=0,\quad\eta^{i}\circ Q=\eta^{i}\quad(1\leq i\leq p),\quad[Q,\,f]=0.\] (b) _For a metric weak almost para-\(f\)-structure the tensor \(f\) is skew-symmetric and the tensor \(Q\) is self-adjoint, i.e.,_ \[g(fX,Y)=-g(X,fY),\quad g(QX,Y)=g(X,QY). \tag{9}\] Proof.: (a) By (1) and (2), \(f^{2}\xi_{i}=0\). Applying (1) to \(f\xi_{i}\), we get \(f\xi_{i}=0\). To show \(\eta^{i}\circ f=0\), note that \(\eta^{i}(f\,\xi_{i})=\eta^{i}(0)=0\), and, using (3), we get \(\eta^{i}(fX)=0\) for \(X\in f(TM)\). Next, using (2) and \(f(Q\,\xi_{i})=f\,\xi_{i}=0\), we get \[f^{3}X=f(f^{2}X)=f\,QX-\sum_{i}\eta^{i}(X)\,f\xi_{i}=f\,QX,\] \[f^{3}X=f^{2}(fX)=Q\,fX-\sum_{i}\eta^{i}(fX)\,\xi_{i}=Q\,fX\] for any \(X\in f(TM)\). This and \([Q,\,f]\,\xi_{i}=0\) provide \([Q,\,f]=Q\,f-fQ=0\). (b) By (8), the restriction \(Q_{\mid f(TM)}\) is self-adjoint. This and (1) provide (9b). For any \(Y\in f(TM)\) there is \(\tilde{Y}\in f(TM)\) such that \(fY=\tilde{Y}\). From (2) and (8) with \(X\in f(TM)\) and \(\tilde{Y}\) we get \[g(fX,\tilde{Y})=g(fX,fY)\stackrel{{(\ref{eq:2})}}{{=}}-g(X,QY)\stackrel{{(\ref{eq:2})}}{{=}}-g(X,f^{2}Y)=-g(X,f\tilde{Y}),\] and (9a) follows.
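To make the algebra above concrete, here is a minimal numerical example (an assumed toy model, not taken from the paper) of a metric weak almost para-\(f\)-structure on \(\mathbb{R}^{3}\) with \(n=p=1\): \(f\) swaps \(e_{1},e_{2}\) with a factor \(a\), \(Q=\mathrm{diag}(a^{2},a^{2},1)\), \(\xi=e_{3}\), \(\eta=dz\), \(g=\mathrm{diag}(1,-1,1)\); for \(a\neq 1\) the structure is genuinely "weak" (\(Q\neq\mathrm{id}_{TM}\)). The sketch verifies (1), (2), the compatibility condition (8), and the skew-symmetry (9a):

```python
import itertools

# Assumed toy data: f swaps e1, e2 with factor a; Q = diag(a^2, a^2, 1);
# xi = e3, eta = dz, g = diag(1, -1, 1).  Not an example from the paper.
a = 2.0
f = [[0, a, 0], [a, 0, 0], [0, 0, 0]]
Q = [[a * a, 0, 0], [0, a * a, 0], [0, 0, 1]]
g = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]
xi = [0, 0, 1]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def bil(A, u, v):   # bilinear form u^T A v
    return sum(u[i] * A[i][j] * v[j] for i in range(3) for j in range(3))

def eta(v):         # eta = dz
    return v[2]

f2 = mat_mul(f, f)
f3 = mat_mul(f2, f)
fQ = mat_mul(f, Q)

assert f3 == fQ                          # (1): f^3 = f Q
assert mat_vec(Q, xi) == xi              # (1): Q xi = xi
# (2): f^2 = Q - eta (x) xi  (the two matrices differ only in entry (3,3))
assert all(f2[i][j] == Q[i][j] - (i == 2) * (j == 2)
           for i in range(3) for j in range(3))

basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
for X, Y in itertools.product(basis, basis):
    # compatibility (8): g(fX, fY) = -g(X, QY) + eta(X) eta(Y)
    assert bil(g, mat_vec(f, X), mat_vec(f, Y)) == \
        -bil(g, X, mat_vec(Q, Y)) + eta(X) * eta(Y)
    # skew-symmetry (9a): g(fX, Y) = -g(X, fY)
    assert bil(g, mat_vec(f, X), Y) == -bil(g, X, mat_vec(f, Y))
```

All identities hold pointwise on this linear model; since the fields are constant, the Lie-bracket terms of the general theory vanish here.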
**Remark 1.3**.: For a weak almost para-\(f\)-structure, the tangent bundle decomposes as \(TM=f(TM)\oplus\ker f\), where \(\ker f\) is a \(p\)-dimensional characteristic distribution; moreover, if we assume that the symmetric tensor \(Q\) is positive definite, then \(f(TM)\) decomposes into the sum of two \(n\)-dimensional subbundles: \(f(TM)=\mathcal{D}_{+}\oplus\mathcal{D}_{-}\), corresponding to positive and negative eigenvalues of \(f\), and in this case we get \(TM=\mathcal{D}_{+}\oplus\mathcal{D}_{-}\oplus\ker f\). Define the difference tensor \(\widetilde{Q}\) (vanishing on a para-\(f\)-structure) by \[\tilde{Q}=Q-\mathrm{id}_{TM}.\] By the above, \(\widetilde{Q}\,\xi_{i}=0\) and \([\tilde{Q},f]=0\). We can rewrite (5) in terms of the Levi-Civita connection \(\nabla\) as \[[f,f](X,Y)=(f\nabla_{Y}f-\nabla_{fY}f)X-(f\nabla_{X}f-\nabla_{fX}f)Y; \tag{10}\] in particular, since \(f\,\xi_{i}=0\), \[[f,f](X,\xi_{i})=f(\nabla_{\xi_{i}}f)X+\nabla_{fX}\,\xi_{i}-f\,\nabla_{X}\, \xi_{i},\quad X\in\mathfrak{X}_{M}. \tag{11}\] The fundamental 2-form \(\Phi\) on \(M(f,Q,\xi_{i},\eta^{i},g)\) is defined by \[\Phi(X,Y)=g(X,fY),\quad X,Y\in\mathfrak{X}_{M}.\] Since \(\eta^{1}\wedge\ldots\wedge\eta^{p}\wedge\Phi^{n}\neq 0\), a metric weak para-\(f\)-manifold is orientable. **Definition 1.1**.: A metric weak para-\(f\)-structure \((f,Q,\xi_{i},\eta^{i},g)\) is called a _weak para-\(\mathcal{K}\)-structure_ if it is normal and the form \(\Phi\) is closed, i.e., \(d\Phi=0\). We define two subclasses of weak para-\(\mathcal{K}\)-manifolds as follows: _weak para-\(\mathcal{C}\)-manifolds_ if \(d\eta^{i}=0\) for any \(i\), and _weak para-\(\mathcal{S}\)-manifolds_ if \[d\eta^{i}=\Phi,\quad 1\leq i\leq p. \tag{12}\] Omitting the normality condition, we get the following: a metric weak para-\(f\)-structure is called (i) a _weak almost para-\(\mathcal{S}\)-structure_ if (12) is valid; (ii) a _weak almost para-\(\mathcal{C}\)-structure_ if \(\Phi\) and \(\eta^{i}\) are closed forms. 
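The splitting \(TM=\mathcal{D}_{+}\oplus\mathcal{D}_{-}\oplus\ker f\) of Remark 1.3 can be illustrated on the same kind of toy tensor (an assumed example, not from the paper): for \(f\) swapping \(e_{1},e_{2}\) with factor \(a\) on \(\mathbb{R}^{3}\), the eigenvalues of \(f\) are \(\pm a\) and \(0\), each with a one-dimensional eigenspace:

```python
# Toy illustration (assumed values) of TM = D_+ (+) D_- (+) ker f.
a = 2.0
f = [[0, a, 0], [a, 0, 0], [0, 0, 0]]

def apply_f(v):
    return [sum(f[i][k] * v[k] for k in range(3)) for i in range(3)]

v_plus, v_minus, v_ker = [1, 1, 0], [1, -1, 0], [0, 0, 1]

assert apply_f(v_plus) == [a * x for x in v_plus]     # D_+: f v = +a v
assert apply_f(v_minus) == [-a * x for x in v_minus]  # D_-: f v = -a v
assert apply_f(v_ker) == [0, 0, 0]                    # ker f
```

Here \(n=p=1\), so each of \(\mathcal{D}_{+}\), \(\mathcal{D}_{-}\) and \(\ker f\) is one-dimensional, matching the dimension count of Remark 1.3.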
For \(p=1\), weak para-\(\mathcal{C}\)- and weak para-\(\mathcal{S}\)-manifolds reduce to weak para-cosymplectic manifolds and weak para-Sasakian manifolds, respectively. Recall the formulas with the Lie derivative \(\pounds_{Z}\) in the \(Z\)-direction and \(X,Y\in\mathfrak{X}_{M}\): \[(\pounds_{Z}f)X=[Z,fX]-f[Z,X], \tag{13}\] \[(\pounds_{Z}\eta^{j})X=Z(\eta^{j}(X))-\eta^{j}([Z,X]), \tag{14}\] \[(\pounds_{Z}g)(X,Y)=Z(g(X,Y))-g([Z,X],Y)-g(X,[Z,Y])=g(\nabla_{X}\,Z,Y)+g(\nabla_{Y}\,Z,X). \tag{15}\] The following tensors are known in the theory of para-\(f\)-manifolds, e.g., [6]: \[N_{i}^{\,(2)}(X,Y)=(\pounds_{fX}\,\eta^{i})Y-(\pounds_{fY}\,\eta^{i})X\stackrel{{(6)}}{{=}}2\,d\eta^{i}(fX,Y)-2\,d\eta^{i}(fY,X), \tag{16}\] \[N_{i}^{\,(3)}(X)=(\pounds_{\xi_{i}}f)X=[\xi_{i},fX]-f[\xi_{i},X], \tag{17}\] \[N_{ij}^{\,(4)}(X)=(\pounds_{\xi_{i}}\,\eta^{j})X=\xi_{i}(\eta^{j}(X))-\eta^{j}([\xi_{i},X])\stackrel{{(6)}}{{=}}2\,d\eta^{j}(\xi_{i},X). \tag{18}\] **Proposition 2.1**.: _Let a metric weak para-\(f\)-structure be normal. Then \(N_{i}^{\,(3)}\) and \(N_{ij}^{\,(4)}\) vanish and_ \[N_{i}^{\,(2)}(X,Y)=\eta^{i}([\widetilde{Q}X,\,fY]); \tag{19}\] _moreover, the characteristic distribution \(\ker f\) is totally geodesic._ Proof.: Assume \(N^{\,(1)}(X,Y)=0\) for any \(X,Y\in TM\). Taking \(\xi_{i}\) instead of \(Y\) and using the formula of Nijenhuis tensor (5), we get \[0=[f,f](X,\xi_{i})-2\sum\nolimits_{j}d\eta^{j}(X,\xi_{i})\,\xi_{j}=f^{2}[X,\xi_{i}]-f[fX,\xi_{i}]-2\sum\nolimits_{j}d\eta^{j}(X,\xi_{i})\,\xi_{j}. \tag{20}\] For the scalar product of (20) with \(\xi_{j}\), using \(f\,\xi_{i}=0\), we get \[d\eta^{j}(\xi_{i},\,\cdot)=0; \tag{21}\] hence, \(N_{ij}^{\,(4)}=0\), see (18).
Next, combining (20) and (21), we get \[0=[f,f](X,\xi_{i})=f^{2}[X,\xi_{i}]-f[fX,\xi_{i}]=f\,(\pounds_{\xi_{i}}f)X.\] Applying \(f\) and using (2) and \(\eta^{i}\circ f=0\), we achieve \[0=f^{2}(\pounds_{\xi_{i}}f)X=Q(\pounds_{\xi_{i}}f)X-\sum\nolimits_{j}\eta^{j}((\pounds_{\xi_{i}}f)X)\,\xi_{j}=Q(\pounds_{\xi_{i}}f)X-\sum\nolimits_{j}\eta^{j}([\xi_{i},fX])\,\xi_{j}. \tag{22}\] Further, (21) and (6) yield \[0=2\,d\eta^{j}(fX,\xi_{i})=(fX)(\eta^{j}(\xi_{i}))-\xi_{i}(\eta^{j}(fX))-\eta^{j}([fX,\xi_{i}])=\eta^{j}([\xi_{i},fX]). \tag{23}\] Since \(Q\) is non-singular, from (22)-(23) we get \(\pounds_{\xi_{i}}f=0\), i.e., \(N_{i}^{\,(3)}=0\), see (17). Replacing \(X\) by \(fX\) in our assumption \(N^{\,(1)}=0\) and using (5) and (6), we get \[0=g([f,f](fX,Y)-2\sum\nolimits_{j}d\eta^{j}(fX,Y)\,\xi_{j},\ \xi_{i})=g([f^{2}X,fY],\xi_{i})-(fX)(\eta^{i}(Y))+\eta^{i}([fX,Y]),\quad 1\leq i\leq p. \tag{24}\] Using (2) and \([fY,\eta^{j}(X)\xi_{i}]=(fY)(\eta^{j}(X))\xi_{i}+\eta^{j}(X)[fY,\xi_{i}]\), we rewrite (24) as \[0=\eta^{i}([QX,fY])-\sum\nolimits_{j}\eta^{j}(X)\,\eta^{i}([\xi_{j},fY])+(fY)(\eta^{i}(X))-(fX)(\eta^{i}(Y))+\eta^{i}([fX,Y]).\] Since (23) gives \(\eta^{i}([fY,\xi_{j}])=0\), the above equation becomes \[\eta^{i}([QX,fY])+(fY)(\eta^{i}(X))-(fX)(\eta^{i}(Y))+\eta^{i}([fX,Y])=0. \tag{25}\] Finally, combining (25) with (16), we get (19). Using Cartan's identity \[\pounds_{\xi_{i}}=\iota_{\xi_{i}}\circ d+d\circ\iota_{\xi_{i}}, \tag{26}\] from (21) and \(\eta^{i}(\xi_{j})=\delta_{j}^{i}\) we obtain \(\pounds_{\xi_{i}}\,\eta^{j}=d(\eta^{j}(\xi_{i}))+\iota_{\xi_{i}}\,d\eta^{j}=0\).
On the other hand, by (14) we have \[(\pounds_{\xi_{i}}\,\eta^{j})X=g(X,\nabla_{\xi_{i}}\,\xi_{j})+g(\nabla_{X}\,\xi_{i},\,\xi_{j}),\quad X\in\mathfrak{X}_{M}.\] Symmetrizing this and using \(\pounds_{\xi_{i}}\,\eta^{j}=0\) and \(g(\xi_{i},\,\xi_{j})=\delta_{ij}\) yield \[\nabla_{\xi_{i}}\,\xi_{j}+\nabla_{\xi_{j}}\,\xi_{i}=0, \tag{27}\] thus, the distribution \(\ker f\) is totally geodesic. Recall the co-boundary formula for exterior derivative \(d\) on a 2-form \(\Phi\), \[d\Phi(X,Y,Z) = \frac{1}{3}\,\big{\{}X\,\Phi(Y,Z)+Y\,\Phi(Z,X)+Z\,\Phi(X,Y) \tag{28}\] \[-\Phi([X,Y],Z)-\Phi([Z,X],Y)-\Phi([Y,Z],X)\big{\}}.\] By direct calculation we get the following: \[(\pounds_{\xi_{i}}\,\Phi)(X,Y)=(\pounds_{\xi_{i}}\,g)(X,fY)+g(X,(\pounds_{\xi_{i}}f)Y). \tag{29}\] The following result generalizes [6, Proposition 4]. **Theorem 2.1**.: _On a weak para-\(\mathcal{K}\)-manifold the vector fields \(\xi_{1},\dots,\xi_{p}\) are Killing and_ \[\nabla_{\xi_{i}}\,\xi_{j}=0,\quad 1\leq i,j\leq p; \tag{30}\] _thus, \(\ker f\) is integrable and defines a totally geodesic Riemannian foliation with flat leaves._ Proof.: By Proposition 2.1, the distribution \(\ker f\) is totally geodesic, see (27), and \(N_{i}^{(3)}=\pounds_{\xi_{i}}f=0\). Using \(\iota_{\xi_{i}}\Phi=0\) and condition \(d\Phi=0\) in the identity (26), we get \(\pounds_{\xi_{i}}\Phi=0\). Thus, from (29) we obtain \((\pounds_{\xi_{i}}\,g)(X,fY)=0\). To show \(\pounds_{\xi_{i}}\,g=0\), we will examine \((\pounds_{\xi_{i}}\,g)(fX,\xi_{j})\) and \((\pounds_{\xi_{i}}\,g)(\xi_{k},\xi_{j})\). Using \(\pounds_{\xi_{i}}\,\eta^{j}=0\), we get \[(\pounds_{\xi_{i}}\,g)(fX,\xi_{j})=(\pounds_{\xi_{i}}\,\eta^{j})fX-g(fX,[\xi_{i},\xi_{j}])=-g(fX,[\xi_{i},\xi_{j}])=0.\] Using (27), we get \((\pounds_{\xi_{i}}\,g)(\xi_{k},\xi_{j})=-g(\xi_{i},\nabla_{\xi_{k}}\,\xi_{j}+\nabla_{\xi_{j}}\,\xi_{k})=0\). Thus, \(\xi_{i}\) is a Killing vector field, i.e., \(\pounds_{\xi_{i}}g=0\).
By \(d\Phi(X,\xi_{i},\xi_{j})=0\) and (28) we obtain \(g([\xi_{i},\xi_{j}],fX)=0\), i.e., \(\ker f\) is integrable. From this and (27) we get \(\nabla_{\xi_{i}}\,\xi_{j}=0\); thus, the sectional curvature is \(K(\xi_{i},\xi_{j})=0\). **Theorem 2.2**.: _For a weak almost para-\(\mathcal{S}\)-structure, we get \(N_{i}^{\,(2)}=N_{ij}^{\,(4)}=0\) and_ \[(N^{\,(1)}(X,Y))^{\perp}=2\,g(X,f\widetilde{Q}Y)\,\bar{\xi},\quad\text{where}\ \ \bar{\xi}=\sum\nolimits_{i}\xi_{i}\,; \tag{31}\] _moreover, \(N_{i}^{\,(3)}\) vanishes if and only if \(\,\xi_{i}\) is a Killing vector field._ Proof.: Applying (12) in (16) and using skew-symmetry of \(f\) we get \(N_{i}^{\,(2)}=0\). Equation (12) with \(Y=\xi_{i}\) yields \(d\eta^{j}(X,\xi_{i})=g(X,f\,\xi_{i})=0\) for any \(X\in\mathfrak{X}_{M}\); thus, we get (21), i.e., \(N_{ij}^{\,(4)}=0\). Using (12) and \[g([f,f](X,Y),\xi_{i})=g([fX,fY],\xi_{i})=-2\,d\eta^{i}(fX,fY)=-2\,\Phi(fX,fY)\] for all \(i\), we also calculate \[\tfrac{1}{2}\,g(N^{\,(1)}(X,Y),\xi_{i})=-d\eta^{i}(fX,fY)-g(\sum\nolimits_{j}d\eta^{j}(X,Y)\,\xi_{j},\xi_{i})=-\Phi(fX,fY)-\Phi(X,Y)=g(X,(f^{3}-f)Y)=g(X,\widetilde{Q}fY),\] that proves (31). Next, invoking (12) in the equality \[(\pounds_{\xi_{i}}\,d\eta^{j})(X,Y)=\xi_{i}(d\eta^{j}(X,Y))-d\eta^{j}([\xi_{i},X],Y)-d\eta^{j}(X,[\xi_{i},Y]),\] and using (15), we obtain for all \(i,j\) \[(\pounds_{\xi_{i}}\,d\eta^{j})(X,Y)=(\pounds_{\xi_{i}}\,g)(X,fY)+g(X,(\pounds_{\xi_{i}}f)Y). \tag{32}\] Since \(\pounds_{V}=\iota_{V}\circ d+d\circ\iota_{V}\), the exterior derivative \(d\) commutes with the Lie-derivative, i.e., \(d\circ\pounds_{V}=\pounds_{V}\circ d\), and as in the proof of Theorem 2.1, we get that \(d\eta^{i}\) is invariant under the action of \(\xi_{i}\), i.e., \(\pounds_{\xi_{i}}\,d\eta^{j}=0\). Therefore, (32) implies that \(\xi_{i}\) is a Killing vector field if and only if \(N_{i}^{\,(3)}=0\).
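The identity \(\pounds_{V}=\iota_{V}\circ d+d\circ\iota_{V}\) invoked in the proof above can be checked numerically for a 1-form on \(\mathbb{R}^{2}\). The sketch below uses the standard exterior derivative (i.e., without the \(1/(k+1)\) normalization of Remark 1.1) and arbitrary polynomial test fields \(\eta\), \(\xi\) (assumptions for the check, not objects from the paper); it compares \(\pounds_{\xi}\eta\) with \(\iota_{\xi}d\eta+d(\eta(\xi))\) componentwise via central finite differences:

```python
# Numerical check of Cartan's identity L_V = i_V d + d i_V on a 1-form
# in R^2 (standard convention of d).  eta, xi are arbitrary test fields.
def eta(p):                  # test 1-form: eta = x*y dx + y^2 dy
    x, y = p
    return [x * y, y * y]

def xi(p):                   # test vector field: xi = y d/dx + x d/dy
    x, y = p
    return [y, x]

H = 1e-6
def partial(F, i, j, p):     # d(F_i)/d(x_j) by central differences
    q1, q2 = list(p), list(p)
    q1[j] += H
    q2[j] -= H
    return (F(q1)[i] - F(q2)[i]) / (2 * H)

def lie_derivative(p):       # (L_xi eta)_i = xi^j d_j eta_i + eta_j d_i xi^j
    return [sum(xi(p)[j] * partial(eta, i, j, p)
                + eta(p)[j] * partial(xi, j, i, p) for j in range(2))
            for i in range(2)]

def cartan(p):               # (i_xi d(eta) + d(eta(xi)))_i
    pairing = lambda q: [sum(e * v for e, v in zip(eta(q), xi(q)))]
    return [sum(xi(p)[j] * (partial(eta, i, j, p) - partial(eta, j, i, p))
                for j in range(2)) + partial(pairing, 0, i, p)
            for i in range(2)]

p0 = [0.7, -1.3]
assert all(abs(u - v) < 1e-5
           for u, v in zip(lie_derivative(p0), cartan(p0)))
```

With the \(1/(k+1)\)-normalized \(d\) of Remark 1.1, the interior-product term acquires a factor, which is why formulas such as (21) carry explicit factors of \(2\).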
**Theorem 2.3**.: _For a weak almost para-\(\mathcal{C}\)-structure, we get \(N_{i}^{\,(2)}=N_{ij}^{\,(4)}=0\), \(N^{\,(1)}=[f,f]\), and (30); thus, the distribution \(\ker f\) is tangent to a totally geodesic foliation with the sectional curvature \(K(\xi_{i},\xi_{j})=0\). Moreover, \(N_{i}^{\,(3)}=0\) if and only if \(\,\xi_{i}\) is a Killing vector field._ Proof.: By (16) and (18) and since \(d\eta^{i}=0\), the tensors \(N_{i}^{\,(2)}\) and \(N_{ij}^{\,(4)}\) vanish on a weak almost para-\(\mathcal{C}\)-structure. Moreover, by (4) and (32), respectively, the tensor \(N^{\,(1)}\) coincides with \([f,f]\), and \(N_{i}^{\,(3)}=\pounds_{\xi_{i}}f\) (\(1\leq i\leq p\)) vanish if and only if each \(\xi_{i}\) is a Killing vector. From the equalities \[3\,d\Phi(X,\xi_{i},\xi_{j})=g([\xi_{i},\xi_{j}],fX),\qquad 2\,d\eta^{k}(\xi_{j},\xi_{i})=g([\xi_{i},\xi_{j}],\xi_{k})\] and conditions \(d\Phi=0\) and \(d\eta^{i}=0\) we obtain \[[\xi_{i},\xi_{j}]=0,\quad 1\leq i,j\leq p. \tag{33}\] Next, from \(d\eta^{i}=0\) and the equality \[2\,d\eta^{i}(\xi_{j},X)+2\,d\eta^{j}(\xi_{i},X)=g(\nabla_{\xi_{i}}\,\xi_{j}+\nabla_{\xi_{j}}\,\xi_{i},X)\] we obtain (27): \(\nabla_{\xi_{i}}\,\xi_{j}+\nabla_{\xi_{j}}\,\xi_{i}=0\). From this and (33) we get (30). We will express \(\nabla_{X}f\) using a new tensor on a metric weak para-\(f\)-structure. The following assertion generalizes [6, Proposition 1]. **Proposition 2.2**.: _For a metric weak para-\(f\)-structure we get_ \[2\,g((\nabla_{X}f)Y,Z)=-3\,d\Phi(X,fY,fZ)-3\,d\Phi(X,Y,Z)-g(N^{\,(1)}(Y,Z),fX)\] \[\quad+\sum_{i}\left(N_{i}^{\,(2)}(Y,Z)\,\eta^{i}(X)+2\,d\eta^{i}(fY,X)\,\eta^{i}(Z)-2\,d\eta^{i}(fZ,X)\,\eta^{i}(Y)\right)\] \[\quad+N^{\,(5)}(X,Y,Z), \tag{34}\] _where a skew-symmetric w.r.t.
\(Y\) and \(Z\) tensor \(N^{\,(5)}(X,Y,Z)\) is defined by_ \[N^{\,(5)}(X,Y,Z) = (fZ)\,(g(X,\widetilde{Q}Y))-(fY)\,(g(X,\widetilde{Q}Z))+g([X,fZ], \widetilde{Q}Y)\] \[\quad-\,g([X,fY],\widetilde{Q}Z)+g([Y,fZ]-[Z,fY]-f[Y,Z],\ \widetilde{Q}X).\] Proof.: Using the skew-symmetry of \(f\), one can compute \[2\,g((\nabla_{X}f)Y,Z)=2\,g(\nabla_{X}(fY),Z)+2\,g(\nabla_{X}Y, fZ)\] \[\quad=X\,g(fY,Z)+(fY)\,g(X,Z)-Z\,g(X,fY)\] \[\quad+g([X,fY],Z)+g([Z,X],fY)-g([fY,Z],X)\] \[\quad+X\,g(Y,fZ)+Y\,g(X,fZ)-(fZ)\,g(X,Y)\] \[\quad+g([X,Y],fZ)+g([fZ,X],Y)-g([Y,fZ],X). \tag{35}\] Using (8), we obtain \[g(X,Z) =-\Phi(fX,Z)-g(X,\widetilde{Q}Z)+\sum_{i}\left(\eta^{i}(X)\,\eta^ {i}(Z)+\eta^{i}(X)\,\eta^{i}(\widetilde{Q}Z)\right)\] \[=-\Phi(fX,Z)+\sum_{i}\eta^{i}(X)\,\eta^{i}(Z)-g(X,\widetilde{Q}Z). \tag{36}\] Thus, and in view of the skew-symmetry of \(f\) and applying (36) six times, (35) can be written as \[2\,g((\nabla_{X}f)Y,Z)=X\,\Phi(Y,Z)+(fY)\,\big{(}-\Phi(fX,Z)+\sum_ {i}\eta^{i}(X)\,\eta^{i}(Z)\big{)}\] \[-(fY)\,g(X,\widetilde{Q}Z)-Z\,\Phi(X,Y)\] \[+\Phi([X,fY],fZ)+\sum_{i}\eta^{i}([X,fY])\eta^{i}(Z)-g([X,fY], \widetilde{Q}Z)+\Phi([Z,X],Y)\] \[-\Phi([fY,Z],fX)-\sum_{i}\eta^{i}([fY,Z])\,\eta^{i}(X)+g([fY,Z], \widetilde{Q}X)+X\,\Phi(Y,Z)\] \[+Y\,\Phi(X,Z)-(fZ)\,\big{(}-\Phi(fX,Y)+\sum_{i}\eta^{i}(X)\,\eta^ {i}(Y)\big{)}+(fZ)g(X,\widetilde{Q}Y)\] \[+\Phi([X,Y],Z)+g(f[-fZ,X],fY)+\sum_{i}\eta^{i}([fZ,X])\eta^{i}(Y) -g([fZ,X],\widetilde{Q}Y)\] \[+g(f[Y,fZ],fX)-\sum_{i}\eta^{i}([Y,fZ])\,\eta^{i}(X)+g([Y,fZ], \widetilde{Q}X).\] We also have \[g(N\,^{(1)}(Y,Z),fX)=g(f^{2}[Y,Z]+[fY,fZ]-f[fY,Z]-f[Y,fZ],fX)\] \[=-g(f[Y,Z],\widetilde{Q}X)+g([fY,fZ]-f[fY,Z]-f[Y,fZ]-[Y,Z],fX).\] From this and (28) we get the required result. 
**Remark 2.1**.: For particular values of the tensor \(N\,^{(5)}\) we get \[N\,^{(5)}(X,\xi_{i},Z) = -N\,^{(5)}(X,Z,\xi_{i})=g(N_{i}^{(3)}(Z),\,\widetilde{Q}X),\] \[N\,^{(5)}(\xi_{i},Y,Z) = g([\xi_{i},fZ],\widetilde{Q}Y)-g([\xi_{i},fY],\widetilde{Q}Z),\] \[N\,^{(5)}(\xi_{i},Y,\xi_{j}) = N\,^{(5)}(\xi_{i},\xi_{j},Y)=0. \tag{37}\] We will discuss the meaning of \(\nabla_{X}f\) for weak almost para-\(\mathcal{S}\)- and weak para-\(\mathcal{K}\)-structures. The following corollary of Proposition 2.2 and Theorem 2.2 generalizes well-known results with \(Q=\mathrm{id}_{TM}\). **Corollary 2.1**.: _For a weak almost para-\(\mathcal{S}\)-structure we get_ \[2\,g((\nabla_{X}f)Y,Z) =-g(N\,^{(1)}(Y,Z),fX)+2\,g(fX,fY)\,\bar{\eta}(Z)\] \[-2\,g(fX,fZ)\,\bar{\eta}(Y)+N\,^{(5)}(X,Y,Z), \tag{38}\] _where \(\bar{\eta}=\sum_{i}\eta^{i}\). In particular, taking \(X=\xi_{i}\) and then \(Y=\xi_{j}\) in (38), we get_ \[2\,g((\nabla_{\xi_{i}}f)Y,Z)=N\,^{(5)}(\xi_{i},Y,Z),\quad 1\leq i\leq p, \tag{39}\] _and (30); thus, the characteristic distribution is tangent to a totally geodesic foliation with flat leaves._ Proof.: According to Theorem 2.2, for a weak almost para-\(\mathcal{S}\)-structure we have \(d\eta^{i}=\Phi\) and \(N_{i}^{(2)}=N_{ij}^{(4)}=0\). Thus, invoking (12) and using Theorem 2.2 in (34), we get (38). From (39) with \(Y=\xi_{j}\) we get \(g(f\nabla_{\xi_{i}}\,\xi_{j},Z)=0\), thus \(\nabla_{\xi_{i}}\,\xi_{j}\in\ker f.\) Also, \[\eta^{k}([\xi_{i},\xi_{j}])=-2\,d\eta^{k}(\xi_{i},\xi_{j})=-2\,g(\xi_{i},f\xi_ {j})=0;\] hence, \([\xi_{i},\xi_{j}]=0\), i.e., \(\nabla_{\xi_{i}}\,\xi_{j}=\nabla_{\xi_{j}}\,\xi_{i}\). Finally, from \(g(\xi_{j},\xi_{k})=\delta_{jk}\), using the covariant derivative with respect to \(\xi_{i}\) and the above equality, we get \(\nabla_{\xi_{i}}\,\xi_{j}\in f(TM)\). This together with \(\nabla_{\xi_{i}}\,\xi_{j}\in\ker f\) proves (30). 
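As a quick check of the last identity in (37): substituting \(X=\xi_{i}\), \(Y=\xi_{j}\) into the definition of \(N^{\,(5)}\), the terms containing \(f\xi_{j}\) vanish, and — using that \(\widetilde{Q}\) is self-adjoint and \(\eta^{k}(\widetilde{Q}\,\cdot)=0\) — every remaining term reduces to \(\eta^{k}\) of a \(\widetilde{Q}\)-image:

\[N^{\,(5)}(\xi_{i},\xi_{j},Z)=(fZ)\,\eta^{i}(\widetilde{Q}\xi_{j})+\eta^{j}\big(\widetilde{Q}\,[\xi_{i},fZ]\big)+\eta^{i}\big(\widetilde{Q}\,([\xi_{j},fZ]-f[\xi_{j},Z])\big)=0.\]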
## 3 The tensor field \(h\) Here, we apply for a weak almost para-\(\mathcal{S}\)-manifold the tensor field \(h=(h_{1},\dots,h_{p})\), where \(h_{i}=\frac{1}{2}\,N_{i}^{(3)}=\frac{1}{2}\,\pounds_{\xi_{i}}f\). By Theorem 2.2, \(h_{i}=0\) if and only if \(\xi_{i}\) is a Killing field. First, we calculate \[(\pounds_{\xi_{i}}f)X\stackrel{{(\ref{eq:h_i})}}{{= }}\nabla_{\xi_{i}}(fX)-\nabla_{fX}\,\xi_{i}-f(\nabla_{\xi_{i}}X-\nabla_{X}\, \xi_{i})\] \[=(\nabla_{\xi_{i}}f)X-\nabla_{fX}\,\xi_{i}+f\nabla_{X}\,\xi_{i}. \tag{40}\] For \(X=\xi_{i}\) in (40), using \(g((\nabla_{\xi_{i}}f)\,\xi_{j},Z)=\frac{1}{2}N\,^{(5)}(\xi_{i},\xi_{j},Z)=0\), see (39), and \(\nabla_{\xi_{i}}\,\xi_{j}=0\), see Corollary 2.1, we get \[h_{i}\,\xi_{j}=0. \tag{41}\] The following result generalizes the fact that for an almost para-\(\mathcal{S}\)-structure, each tensor \(h_{i}\) is self-adjoint and commutes with \(f\). **Proposition 3.1**.: _For a weak almost para-\(\mathcal{S}\)-structure, the tensor \(h_{i}\) and its conjugate \(h_{i}^{*}\) satisfy_ \[g((h_{i}-h_{i}^{*})X,Y) = \frac{1}{2}\,N^{\,(5)}(\xi_{i},X,Y), \tag{42}\] \[\nabla\,\xi_{i} = Q^{-1}f\,h_{i}^{*}-f,\] (43) \[h_{i}f+f\,h_{i} = -\frac{1}{2}\,\pounds_{\xi_{i}}\widetilde{Q}. \tag{44}\] Proof.: (i) The scalar product of (40) with \(Y\), using (39), gives \[g((\pounds_{\xi_{i}}f)X,Y)=N^{\,(5)}(\xi_{i},X,Y)+g(f\nabla_{X}\, \xi_{i}-\nabla_{fX}\,\xi_{i},\ Y). \tag{45}\] Similarly, \[g((\pounds_{\xi_{i}}f)Y,X)=N^{\,(5)}(\xi_{i},Y,X)+g(f\nabla_{Y} \,\xi_{i}-\nabla_{fY}\,\xi_{i},\ X). \tag{46}\] Using (16) and \((fX)(\eta^{i}(Y))-(fY)(\eta^{i}(X))\equiv 0\) (this vanishes if either \(X\) or \(Y\) equals \(\xi_{j}\) and also for \(X\) and \(Y\) in \(f(TM)\)), we get \(N_{i}^{\,(2)}(X,Y)=\eta^{i}([fY,X]-[fX,Y])\). Thus, the difference of (45) and (46) gives \[2\,g((h_{i}-h_{i}^{*})X,Y)=N^{\,(5)}(\xi_{i},X,Y)-N_{i}^{\,(2)}(X,Y).\] From this and equality \(N_{i}^{\,(2)}=0\) (see Theorem 2.2) we get (42). 
(ii) From Corollary 2.1 with \(Y=\xi_{i}\), we find \[g((\nabla_{X}f)\xi_{i},Z)=-\frac{1}{2}\,g(N^{\,(1)}(\xi_{i},Z), fX)-g(fX,fZ)+\frac{1}{2}\,N^{\,(5)}(X,\xi_{i},Z). \tag{47}\] Note that \(\frac{1}{2}\,N^{\,(5)}(X,\xi_{i},Z)=g(h_{i}Z,\widetilde{Q}X)\), see (37). By (5) with \(Y=\xi_{i}\), we get \[[f,f](X,\xi_{i})=f^{2}[X,\xi_{i}]-f[fX,\xi_{i}]=fN_{i}^{\,(3)}(X). \tag{48}\] Using (8), (13) and (48), we calculate \[g([f,f](\xi_{i},Z),fX) =g(f^{2}\,[\xi_{i},Z]-f[\xi_{i},fZ],fX)=-g(f(\pounds_{\xi_{i}}f)Z,fX)\] \[=g((\pounds_{\xi_{i}}f)Z,QX)-\sum\nolimits_{j}\eta^{j}(X)\,\eta^ {j}((\pounds_{\xi_{i}}f)Z). \tag{49}\] From (12) we have \(g([X,\xi_{i}],\xi_{k})=2\,d\eta^{k}(\xi_{i},X)=2\,\Phi(\xi_{i},X)=0\). By (30), we get \(g(\nabla_{X}\,\xi_{i},\xi_{k})=g(\nabla_{\xi_{i}}X,\xi_{k})=-g(\nabla_{\xi_{i }}\xi_{k},X)=0\) for \(X\in f(TM)\), thus \[g(\nabla_{X}\,\xi_{i},\ \xi_{k})=0,\quad X\in TM,\ 1\leq i,k\leq p. \tag{50}\] Using (39) and (37), we get \[2\,g((\nabla_{\xi_{i}}f)Y,\xi_{j})=N^{\,(5)}(\xi_{i},Y,\xi_{j})=0.\] Replacing \(Z\) by \(fZ\) in (54) and using (2), (50) and \(f\,\xi_{i}=0\), we achieve (43): \[g(Q\,\nabla_{X}\,\xi_{i},Z)=g((fQ-h_{i}f)Z,X)=g(f(h_{i}^{*}-Q)X,Z).\] (iii) Using (2), we obtain \[f\nabla_{\xi_{i}}f+(\nabla_{\xi_{i}}f)f=\nabla_{\xi_{i}}\,(f^{2})=\nabla_{ \xi_{i}}\widetilde{Q}-\nabla_{\xi_{i}}(\sum_{j}\eta^{j}\otimes\xi_{j}),\] where in view of (30), we get \(\nabla_{\xi_{i}}(\sum_{j}\eta^{j}\otimes\xi_{j})=0\). 
From the above and (40), we get (44): \[2(h_{i}f+fh_{i})X=f(\pounds_{\xi_{ i}}f)X+(\pounds_{\xi_{i}}f)fX\] \[=f(\nabla_{\xi_{i}}f)X+(\nabla_{\xi_{i}}f)fX+f^{2}\nabla_{X}\, \xi_{i}-\nabla_{f^{2}X}\,\xi_{i}\] \[=-(\nabla_{\xi_{i}}\widetilde{Q})X-\widetilde{Q}\nabla_{X}\, \xi_{i}+\nabla_{\widetilde{Q}X}\,\xi_{i}+\sum_{j}\big{(}g(\nabla_{X}\,\xi_{i },\xi_{j})\,\xi_{j}-g(X,\xi_{j})\nabla_{\xi_{j}}\,\xi_{i}\big{)}\] \[=[\widetilde{Q}X,\xi_{i}]-\widetilde{Q}\,[X,\xi_{i}]=-(\pounds_{\xi_{i}}\widetilde{Q})X.\] We used (30) and (50) to show \(\sum_{j}\big{(}g(\nabla_{X}\,\xi_{i},\xi_{j})\,\xi_{j}-g(X,\xi_{j})\nabla_{\xi_ {j}}\,\xi_{i}\big{)}=0\). **Remark 3.1**.: For a weak almost para-\(\mathcal{S}\)-structure, using (51), we find \[2\,g(h_{i}X,\xi_{j})=-g(\nabla_{fX}\,\xi_{i},\xi_{j})\stackrel{{(50)}}{{=}}0;\] thus, the distribution \(f(TM)\) is invariant under \(h_{i}\); moreover, \(h_{i}^{*}\,\xi_{j}=0\), see also (41). The next statement follows from Propositions 2.1 and 2.2. **Corollary 3.1**.: _For a weak para-\(\mathcal{K}\)-structure, we have_ \[2\,g((\nabla_{X}f)Y,Z)=\sum_{i}\big{(}2\,d\eta^{i}(fY,X)\,\eta^ {i}(Z)-2\,d\eta^{i}(fZ,X)\,\eta^{i}(Y)\] \[+\eta^{i}([\widetilde{Q}Y,\,fZ])\,\eta^{i}(X)\big{)}+N\,^{(5)}(X,Y,Z). \tag{55}\] _In particular, using (42) with \(h_{i}=0\) gives \(2\,g((\nabla_{\xi_{i}}f)Y,Z)=\eta^{i}([\widetilde{Q}Y,\,fZ])\) for \(1\leq i\leq p\)._ ## 4 The rigidity of a para-\(\mathcal{S}\)-structure An important class of metric para-\(f\)-manifolds is given by para-\(\mathcal{S}\)-manifolds. Here, we study a wider class of weak para-\(\mathcal{S}\)-manifolds and prove the rigidity theorem for para-\(\mathcal{S}\)-manifolds. **Proposition 4.1**.: _For a weak para-\(\mathcal{S}\)-structure we get_ \[g((\nabla_{X}f)Y,Z) =g(QX,Z)\,\bar{\eta}(Y)-g(QX,Y)\,\bar{\eta}(Z)+\tfrac{1}{2}\,N\, ^{(5)}(X,Y,Z)\] \[-\sum_{j}\eta^{j}(X)\big{(}\bar{\eta}(Y)\eta^{j}(Z)-\eta^{j}(Y) \bar{\eta}(Z)\big{)}. 
\tag{56}\] Proof.: Since a weak para-\(\mathcal{S}\)-structure is a weak almost para-\(\mathcal{S}\)-structure with \(N\,^{(1)}=0\), by Corollary 2.1, we get (56). **Remark 4.1**.: Using \(Y=\xi_{i}\) in (56), we get \(f\nabla_{X}\,\xi_{i}=-f^{2}X-\tfrac{1}{2}\,(N\,^{(5)}(X,\xi_{i},\,\cdot))^{\flat}\), which generalizes the equality \(\nabla_{X}\,\xi_{i}=-fX\) for a para-\(\mathcal{S}\)-structure, e.g., [6]. It was shown in [11] that a weak almost para-\(\mathcal{S}\)-structure with positive partial Ricci curvature can be deformed to an almost para-\(\mathcal{S}\)-structure. The main result in this section is the following rigidity theorem. **Theorem 4.1**.: _A metric weak para-\(f\)-structure is a weak para-\(\mathcal{S}\)-structure if and only if it is a para-\(\mathcal{S}\)-structure._ Proof.: Let \((f,Q,\xi_{i},\eta^{i},g)\) be a weak para-\(\mathcal{S}\)-structure. Since \(N^{\,(1)}=0\), by Proposition 2.1, we get \(N_{i}^{\,(3)}=0\). By (37), we then obtain \(N^{\,(5)}(\cdot,\xi_{i},\,\cdot\,)=0\). Recall that \(\widetilde{Q}X=QX-X\) and \(\eta^{j}(\widetilde{Q}X)=0\). Using the above and \(Y=\xi_{i}\) in (56), we get \[g((\nabla_{X}f)\,\xi_{i},Z)=g(QX,Z)-\eta^{i}(QX)\,\bar{\eta}(Z)+ \sum\nolimits_{j}\eta^{j}(X)\big{(}\eta^{j}(Z)-\delta_{i}^{j}\,\bar{\eta}(Z) \big{)}\] \[=g(QX^{\top},Z)+\sum\nolimits_{j}\eta^{j}(Z)\big{(}\eta^{j}(QX)- \eta^{i}(QX)\big{)}-\sum\nolimits_{j}\eta^{j}(Z)\big{(}\eta^{j}(X)-\eta^{i}(X )\big{)}\] \[=g(QX^{\top},Z)+\sum\nolimits_{j}\eta^{j}(Z)\big{(}\eta^{j}( \widetilde{Q}X)-\eta^{i}(\widetilde{Q}X)\big{)}=g(QX^{\top},Z). \tag{57}\] Using (53), we rewrite (57) as \(g(\nabla_{X}\,\xi_{i},fZ)=g(QX^{\top},Z)\). By the above and (2), we find \[g(\nabla_{X}\,\xi_{i}+fX^{\top},\,f\,Z)=0. 
\tag{58}\] Since \(f\) is skew-symmetric, applying (56) with \(Z=\xi_{i}\) in (10), we obtain \[g([f,f](X,Y),\xi_{i})=g([fX,fY],\xi_{i})=g((\nabla_{fX}f)Y,\xi_{i })-g((\nabla_{fY}f)X,\xi_{i})\] \[\quad=g(Q\,fY,X)-g(Q\,fY,\xi_{i})\,\bar{\eta}(X)-g(Q\,fX,Y)+g(Q\,fX,\xi_{i})\,\bar{\eta}(Y). \tag{59}\] Recall that \([Q,\,f]=0\) and \(f\,\xi_{i}=0\). Thus, (59) yields for all \(i\), \[g([f,f](X,Y),\xi_{i})=2\,g(QX,fY).\] From this, using the definition of \(N^{\,(1)}\), we get for all \(i\), \[g(N^{\,(1)}(X,Y),\xi_{i})=2\,g(\widetilde{Q}X,fY). \tag{60}\] From \(N^{\,(1)}=0\) and (60) we get \(g(\widetilde{Q}X,fY)=0\) for all \(X,Y\in\mathfrak{X}_{M}\); thus, \(\widetilde{Q}=0\). For a weak almost para-\(\mathcal{S}\)-structure all \(\xi_{i}\) are Killing if and only if \(h=0\), see Theorem 2.2. The equality \(h=0\) holds for a weak para-\(\mathcal{S}\)-structure since it is true for a para-\(\mathcal{S}\)-structure, see Theorem 4.1. We will prove this property of a weak para-\(\mathcal{S}\)-structure directly. **Corollary 4.1**.: _For a weak para-\(\mathcal{S}\)-structure, \(\xi_{1},\dots,\xi_{p}\) are Killing vector fields; moreover, \(\ker f\) is integrable and defines a Riemannian totally geodesic foliation._ Proof.: In view of (53) and \(\bar{\eta}(\xi_{i})=1\), Eq. (56) with \(Y=\xi_{i}\) becomes \[g(\nabla_{X}\,\xi_{i},fZ)=-\eta^{i}(X)\,\bar{\eta}(Z)+g(X,QZ)+ \frac{1}{2}\,N^{\,(5)}(X,\xi_{i},Z). \tag{61}\] Combining (54) and (61), and using (50), we achieve for all \(i\) and \(X,Z\), \[g(h_{i}Z,QX)=\sum\nolimits_{j}\eta^{j}(X)\,\eta^{j}(Z)-\eta^{i}(X )\,\bar{\eta}(Z),\] which implies \(hZ=0\) for \(Z\in f(TM)\) (since \(Q\) is nonsingular). This and (41) yield \(h=0\). By Theorem 2.2, \(\ker f\) defines a totally geodesic foliation. Since \(\xi_{i}\) is a Killing field, we get \[0=(\,\pounds_{\xi_{i}}g)(X,Y)=g(\nabla_{X}\,\xi_{i},Y)+g(\nabla_{Y}\,\xi_{i}, X)=-g(\nabla_{X}Y+\nabla_{Y}X,\ \xi_{i})\] for all \(i\) and \(X,Y\bot\ker f\). 
Thus, \(f(TM)\) is totally geodesic, i.e., \(\ker f\) defines a Riemannian foliation. For \(p=1\), from Theorem 4.1 we have the following **Corollary 4.2**.: _A weak almost paracontact metric structure on \(M^{2n+1}\) is a weak para-Sasakian structure if and only if it is a para-Sasakian structure, i.e., a normal weak paracontact metric structure, on \(M^{2n+1}\)._

## 5 The characteristic of a weak para-\(\mathcal{C}\)-structure

An important class of metric para-\(f\)-manifolds is given by para-\(\mathcal{C}\)-manifolds. Recall that \(\nabla_{X}\,\xi_{i}=0\) holds on para-\(\mathcal{C}\)-manifolds. **Proposition 5.1**.: _Let \((f,Q,\xi_{i},\eta^{i},g)\) be a weak para-\(\mathcal{C}\)-structure. Then_ \[2\,g((\nabla_{X}f)Y,Z)=N\,^{(5)}(X,Y,Z), \tag{62}\] \[0=N\,^{(5)}(X,Y,Z)+N\,^{(5)}(Y,Z,X)+N\,^{(5)}(Z,X,Y), \tag{63}\] \[0=N\,^{(5)}(fX,Y,Z)+N\,^{(5)}(fY,Z,X)+N\,^{(5)}(fZ,X,Y). \tag{64}\] _Using (62) with \(Y=\xi_{i}\) and (2), we get_ \[g(\nabla_{X}\,\xi_{i},\,QZ)=-\frac{1}{2}\,N\,^{(5)}(X,\xi_{i},fZ).\] Proof.: For a weak almost para-\(\mathcal{C}\)-structure \((f,Q,\xi_{i},\eta^{i},g)\), using Theorem 2.3, from (34) we get \[2\,g((\nabla_{X}f)Y,Z)=-g([f,f](Y,Z),fX)+N\,^{(5)}(X,Y,Z). \tag{65}\] From (65), using the condition \([f,f]=0\), we get (62). Using (28) and (62), we write \[0=3\,d\Phi(X,Y,Z)=g((\nabla_{X}\,f)Z,Y)+g((\nabla_{Y}\,f)X,Z)+g((\nabla_{Z}\, f)Y,X);\] hence, (63) is true. Using (10), (62) and the skew-symmetry of \(f\), we obtain \[0 =2\,g([f,f](X,Y),Z)\] \[=N\,^{(5)}(X,Y,fZ)+N\,^{(5)}(fX,Y,Z)-N\,^{(5)}(Y,X,fZ)-N\,^{(5)}( fY,X,Z).\] This and (63) with \(X\) replaced by \(fX\) provide (64). Recall that \(X^{\perp}=\sum_{i}\eta^{i}(X)\,\xi_{i}\). Consider a weaker condition than (33): \[[\xi_{i},\xi_{j}]^{\perp}=0,\quad 1\leq i,j\leq p. \tag{66}\] In the following theorem, we characterize weak para-\(\mathcal{C}\)-manifolds in a wider class of metric weak para-\(f\)-manifolds using the condition \(\nabla f=0\). 
**Theorem 5.1**.: _A metric weak para-\(f\)-structure with \(\nabla f=0\) and (66) is a weak para-\(\mathcal{C}\)-structure with \(N\,^{(5)}=0\)._ Proof.: Using condition \(\nabla f=0\), from (10) we obtain \([f,f]=0\). Hence, from (4) we get \(N\,^{(1)}(X,Y)=-2\,\sum_{i}d\eta^{i}(X,Y)\,\xi_{i}\), and from (11) we obtain \[\nabla_{fX}\,\xi_{i}-f\,\nabla_{X}\,\xi_{i}=0,\quad X\in\mathfrak{X}_{M}. \tag{67}\] From (28), we calculate \[3\,d\Phi(X,Y,Z)=g((\nabla_{X}f)Z,Y)+g((\nabla_{Y}f)X,Z)+g((\nabla_{Z}f)Y,X);\] hence, using condition \(\nabla f=0\) again, we get \(d\Phi=0\). Next, \(N_{i}^{(2)}(Y,\xi_{j})=-\eta^{i}([fY,\xi_{j}])=g(\xi_{j},f\nabla_{\xi_{i}}Y)=0\). Setting \(Z=\xi_{j}\) in (34) and using the condition \(\nabla f=0\) and the properties \(d\Phi=0\), \(N_{i}^{(2)}(Y,\xi_{j})=0\) and \(N\,^{(1)}(X,Y)=-2\sum_{i}d\eta^{i}(X,Y)\,\xi_{i}\), we find \(0=2\,d\eta^{j}(fY,X)-N\,^{(5)}(X,\xi_{j},Y)\). By (37) and (67), \[N\,^{(5)}(X,\xi_{j},Y)=g([\xi_{j},fY]-f[\xi_{j},Y],\,\widetilde{Q}X)=g(\nabla _{fY}\,\xi_{j}-f\,\nabla_{Y}\,\xi_{j},\,\widetilde{Q}X)=0;\] hence, \(d\eta^{j}(fY,X)=0\). From this and \(g([\xi_{i},\xi_{j}],\xi_{k})=2\,d\eta^{k}(\xi_{j},\xi_{i})=0\) we get \(d\eta^{j}=0\). By the above, \(N\,^{(1)}=0\). Thus, \((f,Q,\xi_{i},\eta^{i},g)\) is a weak para-\(\mathcal{C}\)-structure. Finally, from (62) and condition \(\nabla f=0\) we get \(N\,^{(5)}=0\). **Corollary 5.1**.: _A normal metric weak para-\(f\)-structure with \(\nabla f=0\) is a weak para-\(\mathcal{C}\)-structure with \(N\,^{(5)}=0\)._ Proof.: By \(N\,^{(1)}=0\), we get \(d\eta^{i}=0\) for all \(i\). As in Theorem 5.1, we get \(d\Phi=0\). **Example 5.1**.: Let \(M\) be a \(2n\)-dimensional smooth manifold and \(\tilde{f}:TM\to TM\) an endomorphism of rank \(2n\) such that \(\nabla\tilde{f}=0\). 
To construct a weak para-\(\mathcal{C}\)-structure on \(M\times\mathbb{R}^{p}\) (or \(M\times\mathbb{T}^{p}\), where \(\mathbb{T}^{p}\) is a \(p\)-dimensional flat torus), take any point \((x,t_{1},\dots,t_{p})\) and set \(\xi_{i}=(0,d/dt_{i})\), \(\eta^{i}=(0,dt_{i})\) and \[f(X,Y)=(\tilde{f}X,\,0),\quad Q(X,Y)=(\tilde{f}^{\,2}X,\,Y),\] where \(X\in T_{x}M\) and \(Y=\sum_{i}Y^{i}\xi_{i}\) is tangent to the second factor (\(\mathbb{R}^{p}\) or \(\mathbb{T}^{p}\)). Then (2) holds and Theorem 5.1 can be used. For \(p=1\), from Theorem 5.1 we have the following **Corollary 5.2**.: _Any weak almost paracontact structure \((\varphi,Q,\xi,\eta,g)\) with the property \(\nabla\varphi=0\) is a weak para-cosymplectic structure._
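For Example 5.1 above, condition (2) in the form \(f^{2}=Q-\sum_{i}\eta^{i}\otimes\xi_{i}\) (the form used in the proof of (44)) can be verified directly: since \(\eta^{i}((X,Y))=Y^{i}\) and \(\sum_{i}Y^{i}\xi_{i}=(0,Y)\),

\[f^{2}(X,Y)=f(\tilde{f}X,\,0)=(\tilde{f}^{\,2}X,\,0)=(\tilde{f}^{\,2}X,\,Y)-(0,\,Y)=Q(X,Y)-\sum_{i}\eta^{i}\big((X,Y)\big)\,\xi_{i}.\]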
2307.13158
Multi-UAV Speed Control with Collision Avoidance and Handover-aware Cell Association: DRL with Action Branching
This paper presents a deep reinforcement learning solution for optimizing multi-UAV cell-association decisions and their moving velocity on a 3D aerial highway. The objective is to enhance transportation and communication performance, including collision avoidance, connectivity, and handovers. The problem is formulated as a Markov decision process (MDP) with UAVs' states defined by velocities and communication data rates. We propose a neural architecture with a shared decision module and multiple network branches, each dedicated to a specific action dimension in a 2D transportation-communication space. This design efficiently handles the multi-dimensional action space, allowing independence for individual action dimensions. We introduce two models, Branching Dueling Q-Network (BDQ) and Branching Dueling Double Deep Q-Network (Dueling DDQN), to demonstrate the approach. Simulation results show a significant improvement of 18.32% compared to existing benchmarks.
Zijiang Yan, Wael Jaafar, Bassant Selim, Hina Tabassum
2023-07-24T22:52:02Z
http://arxiv.org/abs/2307.13158v2
Multi-UAV Speed Control with Collision Avoidance and Handover-aware Cell Association: DRL with Action Branching ###### Abstract This paper develops a deep reinforcement learning solution to simultaneously optimize the multi-UAV cell-association decisions and their moving velocity decisions on a given 3D aerial highway. The objective is to improve both the transportation and communication performances, e.g., collision avoidance, connectivity, and handovers (HOs). We cast this problem as a Markov decision process (MDP) where the UAVs' states are defined based on their velocities and communication data rates. We have a 2D transportation-communication action space with decisions like UAV acceleration/deceleration, lane changes, and UAV-base station (BS) assignments for a given UAV's state. To deal with the multi-dimensional action space, we propose a neural architecture having a shared decision module with multiple network branches, one for each action dimension. A linear increase of the number of network outputs with the number of degrees of freedom can be achieved by allowing a level of independence for each individual action dimension. To illustrate the approach, we develop Branching Dueling Q-Network (BDQ) and Branching Dueling Double Deep Q-Network (Dueling DDQN). Simulation results demonstrate the efficacy of the proposed approach, i.e., an 18.32% improvement compared to the existing benchmarks. Unmanned aerial vehicles, HOs, Deep Reinforcement Learning, Velocity, cell-association. ## I Introduction Unmanned aerial vehicles (UAVs) are gaining popularity across a broad range of applications due to their mobility, flexible deployment, gradually decreasing production costs, and line-of-sight (LOS) channels [1]. A UAV can either require cellular connectivity for its own use (UAV-UEs) or provide cellular coverage as a base station (BS). 
Nevertheless, controlling UAVs that operate beyond visual line of sight (BVLoS) requires reliable command and control, which is crucial for mission safety and security. Existing research primarily focuses on optimizing cellular link availability and quality of service (QoS) using reinforcement learning (RL) algorithms, with no consideration of multi-UAV aerial traffic flow and the motion dynamics of UAVs. In [2], the authors proposed an RL algorithm that considers disconnectivity, HOs, and energy consumption for trajectory planning and cell association in cargo UAVs. However, the algorithm's actions only consider the direction of motion, without velocity or lane considerations. In [3], the authors present deep-learning-based strategies to predict HOs in mmWave communications and optimize HO rates and radio link quality for known UAV trajectories. However, these works have not considered motion dynamics factors such as acceleration, deceleration, and lane changes on the aerial highway. Furthermore, the existing works in [4] mostly considered \(Q\)-learning and its variants, which can lead to sub-optimal policies and slower convergence. In terms of transportation, achieving high performance for multi-UAV traffic flow and collision avoidance is crucial. On the communication side, UAVs require: **(i)** high data rates and **(ii)** minimal HO losses. Increasing speed can increase traffic flow but results in frequent HOs, which can negatively impact the communications between UAVs and base stations (BSs). Very recently, this trade-off has been investigated in the context of autonomous vehicles [5, 6]. Nevertheless, previous studies have not considered velocity optimization of multiple UAVs on an aerial highway in conjunction with cell-association, while accounting for collision avoidance, lane changes, and HO-aware wireless data rates. 
In this paper, we develop a deep RL (DRL) solution with an action branching architecture to jointly optimize cell-association and multi-UAV flying policies on a 3D aerial highway such that **(i)** aerial traffic flow can be maximized with collision avoidance, and **(ii)** HO-aware data rates can be maximized. Specifically, we first cast this problem as a Markov decision process (MDP) where the UAV states are modeled based on their velocities and data rates. Moreover, to deal with the 2D communication-transportation action space, we develop a DRL solution with an action branching architecture in which a shared module coordinates among multiple network branches. In our case, the module performs 2D decision making related to UAV acceleration/deceleration, lane changes, and UAV-BS assignment. To illustrate the approach, we devise Branching Dueling Q-Network (BDQ) and Branching Dueling Double Deep Q-Network (BDDQN) based UAV agents. The proposed branching architecture offers an improved exploration-exploitation trade-off and enhanced robustness and stability compared to the conventional DQN. ## II System Model As illustrated in Fig. 1, we assume a 3D area where \(N_{U}\) UAVs in a set \(\mathcal{U}=\{u_{1},\ldots,u_{N_{U}}\}\) are flying along the defined 3D highway lanes, while being connected to terrestrial BSs. The latter are uniformly distributed on the targeted area and constitute a set \(\mathcal{B}=\{b_{1},\ldots,b_{N_{R}}\}\). To simulate the UAVs' movements on a given aerial highway, we consider the continuous intelligent driver model that models acceleration as in [7]. UAVs cannot fly above \(h_{\max}=300\) m [8], and each UAV is identified by its location \(\mathbf{q}_{k}(t)=(x_{k}(t),y_{k}(t),h_{k})\) at any time slot \(t\), \(\forall k\in\mathcal{U}\). Similarly, the BSs are defined by their locations \(\mathbf{q}_{i}=(x_{i},y_{i},h_{i})\), \(\forall i\in\mathcal{B}\). For the sake of simplicity, we assume that \(h_{i}=0\) m, \(\forall i\in\mathcal{B}\). 
The distance between BS \(i\) and UAV \(k\) is defined as \(q_{ik}(t)=\sqrt{(x_{k}(t)-x_{i})^{2}+(y_{k}(t)-y_{i})^{2}+h_{k}^{2}}\) and the projected distance on the 2D plane (X,Y) is \(d_{ik}(t)=\sqrt{(x_{k}(t)-x_{i})^{2}+(y_{k}(t)-y_{i})^{2}}\). ### _G2A Channel Model_ According to 3GPP [8], the ground-to-air (G2A) channel model is characterized by the BS's antenna gain, the experienced path loss, and the line-of-sight (LoS) probability. #### Ii-A1 BS's antenna gain In cellular-connected aerial networks, UAVs rely on the radiating sidelobes to connect to terrestrial BSs. Hence, it is important to accurately model the 3D radiation pattern of BSs for cellular-connected UAVs. We opt here for the 3GPP antenna pattern model [8] that mimics realistic antenna radiation patterns. Specifically, each BS is divided into three sectors, each equipped with cross-polarized antennas to create a uniform linear array (ULA). Each antenna element provides a gain up to \(G_{\max}=8\) dBi through the direction of the main lobe [8]. The antenna element pattern provides different gains on sidelobes depending on the azimuth and elevation angles of the associated UAV [2]. The latter are given by \[G_{\mathrm{az}}(\phi_{ik}(t))=\min\left\{12\left(\frac{\phi_{ik}(t)}{\phi_{ \mathrm{3dB}}}\right)^{2},\mathrm{G_{m}}\right\}, \tag{1}\] and \[G_{\mathrm{el}}(\theta_{ik}(t))=\min\left\{12\left(\frac{\theta_{ik}(t)}{\theta _{\mathrm{3dB}}}\right)^{2},\mathrm{SLA}\right\}, \tag{2}\] where \(\phi_{ik}(t)=\arctan\left(\frac{h_{k}}{d_{ik}(t)}\right)\) and \(\theta_{ik}(t)=\arctan\left(\frac{y_{k}(t)-y_{i}}{x_{k}(t)-x_{i}}\right)\) are the azimuth and elevation angles between BS \(i\) and UAV \(k\), and \(\phi_{\mathrm{3dB}}=\theta_{\mathrm{3dB}}=\frac{65\pi}{180}\) are the 3 dB beamwidths. In addition, \(\mathrm{G_{m}}\) and \(\mathrm{SLA}\) are the antenna nulls thresholds, which are fixed at 30 dB in our study. 
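As a minimal sketch of the element-gain computation (1)–(3), assuming the standard 3GPP quadratic-rolloff pattern with negative-dB attenuations (signs follow the TR 36.777 convention; function and parameter names are illustrative, not from the paper):

```python
import math

def element_gain_db(phi, theta, phi_3db=65 * math.pi / 180,
                    theta_3db=65 * math.pi / 180,
                    g_max=8.0, g_m=30.0, sla=30.0):
    """3GPP-style antenna element gain in dB, cf. Eqs. (1)-(3).
    phi/theta are the azimuth/elevation angles (rad); attenuations use the
    standard quadratic rolloff and are capped at G_m / SLA (30 dB here)."""
    a_az = -min(12.0 * (phi / phi_3db) ** 2, g_m)      # azimuth attenuation, Eq. (1)
    a_el = -min(12.0 * (theta / theta_3db) ** 2, sla)  # elevation attenuation, Eq. (2)
    return g_max - min(-(a_az + a_el), g_m)            # combined element gain, Eq. (3)
```

At boresight (`phi = theta = 0`) this returns the peak gain \(G_{\max}=8\) dBi, and the gain decreases off-boresight until the 30 dB floor is reached.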
The antenna element gain is defined by [2] \[G(\theta_{ik}(t),\phi_{ik}(t)) =G_{\max} \tag{3}\] \[-\min\{-(G_{\mathrm{az}}(\phi_{ik}(t))+G_{\mathrm{el}}(\theta_{ik }(t))),G_{m}\}.\] Assuming that BS \(i\) has \(N\) antennas inter-separated by half of the wavelength distance [2], the array factor, denoted AF, of the ULA of BS \(i\) towards UAV \(k\) is expressed by \[\mathrm{AF}(\theta_{ik}(t))=\frac{\sin(\frac{N\pi}{2}(\sin\theta_{ik}(t)-\sin \theta_{i}^{d}))}{\sqrt{N}\sin(\frac{\pi}{2}(\sin\theta_{ik}(t)-\sin\theta_{i} ^{d}))}, \tag{4}\] where \(\theta_{i}^{d}\) is the down-tilt of BS \(i\)'s ULA. Finally, the array radiation pattern from BS \(i\) towards UAV \(k\) is written as \[G_{ik}(t)=G(\theta_{ik}(t),\phi_{ik}(t))+\mathrm{AF}(\theta_{ik}(t)),\;\forall i \in\mathcal{B},\forall k\in\mathcal{U}. \tag{5}\] #### Ii-A2 LoS probability The likelihood of UAV \(k\) having a LoS with BS \(i\) primarily relies on the altitude of the UAV and the surrounding environment. Assuming that \(h_{k}\in[22.5,100]\) m, the probability of LoS is given by [2] as \[P_{\mathrm{LoS}}(d_{ik}(t))=\begin{cases}1,&d_{ik}(t)\leq d_{1}\\ \frac{d_{1}}{d_{ik}(t)}+e^{-\frac{d_{ik}(t)}{p_{1}}}\left(1-\frac{d_{1}}{d_{ik }(t)}\right),&\text{otherwise},\end{cases} \tag{6}\] where \(d_{1}=\max\{460\log_{10}(h_{k})-700,18\}\) and \(p_{1}=4300\log_{10}(h_{k})-3800\). If \(h_{k}\in[100,300]\) m, \(P_{\mathrm{LoS}}(d_{ik}(t))=1\). Thus, the probability of Non-LoS (NLoS) is written as \(P_{\mathrm{NLoS}}(d_{ik}(t))=1-P_{\mathrm{LoS}}(d_{ik}(t))\). #### Ii-A3 Path loss For the sake of simplicity, we consider the mean path loss since we focus here on the long-term operation of cellular-connected UAVs rather than the short term [9]. 
The probabilistic mean path loss between BS \(i\) and UAV \(k\) at time slot \(t\) can be expressed by \[L_{ik}(t) =L_{i}^{\mathrm{LoS}}P_{\mathrm{LoS}}(d_{ik}(t)) \tag{7}\] \[+L_{i}^{\mathrm{NLoS}}P_{\mathrm{NLoS}}(d_{ik}(t)),\forall i\in \mathcal{B},\] where \(L_{i}^{\mathrm{LoS}}\) and \(L_{i}^{\mathrm{NLoS}}\) are the path losses related to the LoS and NLoS communication links, respectively, as defined in [8, Tables B-1 and B-2]. ### _Received Power and Achievable Data Rate Analysis_ Assuming that UAV \(k\) has an omni-directional antenna, and using the G2A channel model, the average power received from BS \(i\) can be expressed by \[P_{ik}(t)=P_{T}+G_{ik}(t)-L_{ik}(t)-P_{n},\forall i\in\mathcal{B},\forall k\in \mathcal{U}, \tag{8}\] where \(P_{T}\) is the transmit power of any BS \(i\) and \(P_{n}\) is the noise power (in dBm). The quality of the link between UAV \(k\) and BS \(i\) is determined by the strength of the received signal from the latter, evaluated with \(P_{ik}(t)\).

Figure 1: Illustration of the proposed aerial network model (top view). Blue circles represent BSs; solid/dashed lines represent desired/interference links.

However, since the aerial highways can be served by several terrestrial BSs with the same frequency, mainly due to the strong LoS between BSs and UAVs, significant interference can be generated. Consequently, the quality of a communication link is rather evaluated using the signal-to-interference-ratio (SIR)1. The latter is written by Footnote 1: In practice, the quality of the link should be evaluated using the signal-to-interference-plus-noise-ratio (SINR). However, due to the significant interference generated in the considered system model, we ignore the noise’s effect. \[\text{SIR}_{ik}(t)=\frac{P_{ik}(t)}{\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N_{R}}P_{jk}(t)},\forall i=1,\ldots,N_{R},\forall k=1, \ldots,N_{U}. 
\tag{9}\] Assuming that all BSs use the same bandwidth \(W_{R}\), the achievable data rate between BS \(i\) and UAV \(k\) is given by \[R_{ik}(t)=W_{R}\log_{2}(1+\text{SIR}_{ik}(t)),\forall i\in\mathcal{B},\forall k \in\mathcal{U}. \tag{10}\] ### _Handovers_ A flying UAV \(k\) can evaluate \(P_{ik}(t)\) from neighbouring BSs, i.e., \(i\in\mathcal{B}_{k}(t)\), where \(\mathcal{B}_{k}(t)\) is the set of the closest \(n_{\text{rf}}\) BSs that can serve UAV \(k\), i.e., BS \(i\in\mathcal{B}_{k}(t)\) if \(q_{ik}(t)\leq d_{th}\) and \(\text{SIR}_{ik}(t)\geq\gamma_{th}\), where \(d_{th}\) is the maximal communication distance between any BS and UAV \(k\) and \(\gamma_{th}\) is the UAV's reception sensitivity. Subsequently, a UAV can trigger a HO event whenever required, e.g., when \(\text{SIR}_{i_{0}k}(t)<\gamma_{th}\), where BS \(i_{0}\) is the one that UAV \(k\) is currently associated with. To reflect HO events, let \(c_{k}(t)\) be the index of the BS to which UAV \(k\) is associated at time slot \(t\) and \(\eta_{k}\) be the HO binary variable, such that \(\eta_{k}(t,t+1)=1\) if \(c_{k}(t)\neq c_{k}(t+1)\) and \(\eta_{k}(t,t+1)=0\) otherwise. Frequent HOs can severely impact the received SIR due to the HO overhead and the risk of the HO ping-pong effect [3]. ## III Problem Formulation as MDP and Proposed DRL with Action Branching Given the described system model, we aim to collaboratively optimize the autonomous motion of multiple UAVs travelling along a 3D highway, such that both the transportation and communication performances, e.g., collision avoidance, connectivity, and HOs, are improved. First, we specify the state-action space and rewards of our system. Then, we present the proposed collaborative RL-based solutions to control the UAVs. These solutions are based on the BDQ and BDDQN algorithms. ### _Observation and State Space_ The observation space \(\mathcal{O}\) provides RL agents with the necessary information to take actions that result in rewards. 
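The chain from the LoS probability (6) through the mean path loss (7) to the SIR (9) and rate (10) can be sketched as follows; the LoS/NLoS path-loss values and the bandwidth are placeholders here (the paper's come from [8, Tables B-1 and B-2]):

```python
import math

def p_los(d_2d, h_uav):
    """LoS probability, Eq. (6); valid for 22.5 m <= h_uav <= 300 m,
    with d_2d the ground-projected BS-UAV distance in meters."""
    if h_uav >= 100:
        return 1.0
    d1 = max(460 * math.log10(h_uav) - 700, 18)
    p1 = 4300 * math.log10(h_uav) - 3800
    if d_2d <= d1:
        return 1.0
    return d1 / d_2d + math.exp(-d_2d / p1) * (1 - d1 / d_2d)

def mean_path_loss(d_2d, h_uav, l_los, l_nlos):
    # Probabilistic mean path loss, Eq. (7); l_los/l_nlos in dB are inputs
    p = p_los(d_2d, h_uav)
    return l_los * p + l_nlos * (1 - p)

def sir_and_rate(p_rx_db, serving, w_r=1e6):
    """Eqs. (9)-(10): received powers in dB are converted to linear scale
    before the ratio; w_r is an assumed bandwidth in Hz."""
    lin = [10 ** (p / 10) for p in p_rx_db]
    interference = sum(lin) - lin[serving]
    sir = lin[serving] / interference
    return sir, w_r * math.log2(1 + sir)
```

Note that the SIR must be formed in linear scale even though (8) expresses the received power in dB.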
For our system, our observation space is composed of transportation and communication observations. The transportation observation space is known as kinematics and is included in the _highway-env_ environment [10]. The kinematics observation consists of a \(V\times F\) array that describes a list of \(V<N_{U}\) nearby UAVs based on \(F\) specific features, namely \((\textbf{q}_{k}(t),\textbf{v}_{k}(t),n_{\text{rf}})\), where \(\textbf{v}_{k}(t)=[v_{k}^{x}(t),v_{k}^{y}(t)]\) is the directional speed of UAV \(k\) on the (X,Y) plane at time \(t\), \(v_{k}^{x}(t)\) represents the longitudinal speed, and \(v_{k}^{y}(t)\) represents the latitudinal speed. Here, \(n_{\text{rf}}\) is the number of feasible BSs within a radius of 1000 m of the target UAV. The feature values may be normalized within a predetermined range, with the normalization relative to the UAV that is about to take an action. Moreover, each UAV \(k\) observes communications-related features such as the received SIR levels from BSs in \(\mathcal{B}_{k}(t)\). The presence of several UAVs allows simulating different traffic flow scenarios as in [5]. Indeed, as the density of UAVs in the highway increases, higher competition is expected among them to connect to the best terrestrial BSs. This holds true assuming that BS \(i\) cannot be associated with more than \(Q_{i}\) UAVs at the same time, \(\forall i\in\mathcal{B}\). Thus, each BS \(i\) has to continuously keep track of the number of UAVs associated with it, denoted \(n_{i}(t)\), \(\forall i\in\mathcal{B}\). Consequently, a state \(s_{t}\) for an RL agent at UAV \(k\) is constituted from several observations as \(s_{t}=(\textbf{q}_{1}(t),\ldots,\textbf{q}_{V}(t),\textbf{v}_{1}(t),\ldots, \textbf{v}_{V}(t),n_{\text{rf}},\text{SIR}_{1k}(t),\)\(\ldots,\text{SIR}_{n_{\text{rf}}k}(t),Q_{1},\ldots,Q_{n_{\text{rf}}},n_{1}(t), \ldots,n_{n_{\text{rf}}}(t))\). 
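A minimal sketch of how the flat state \(s_{t}\) might be assembled from these observations (function and argument names are illustrative, not from the paper's implementation):

```python
def build_state(positions, velocities, n_rf, sirs, quotas, loads):
    """Flatten kinematics (V UAV positions/velocities) and communication
    observations (per-BS SIRs, quotas Q_i, loads n_i) into the state s_t
    of Sec. III-A."""
    state = [c for p in positions for c in p]      # q_1(t), ..., q_V(t)
    state += [c for v in velocities for c in v]    # v_1(t), ..., v_V(t)
    state.append(float(n_rf))                      # number of feasible BSs
    state += list(sirs) + list(quotas) + list(loads)
    return state
```

For \(V=2\) UAVs (3D positions, 2D velocities) and \(n_{\text{rf}}=3\) BSs, this yields a state vector of length \(2\cdot 3+2\cdot 2+1+3\cdot 3=20\).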
### _Action Space_ At each time step \(t\), UAV \(u_{k}\) selects action \(a_{t}=(a_{t}^{\text{tran}},a_{t}^{\text{tele}})\in\mathcal{A}_{\text{tran}}\times \mathcal{A}_{\text{tele}}\), where \(a_{t}^{\text{tran}}\) is the moving transportation action, i.e., trajectory action, and \(a_{t}^{\text{tele}}\) is the communication-related action, i.e., association with a terrestrial BS. \(\mathcal{A}_{\text{tran}}=\{a_{\text{tran}}^{1},\ldots,a_{\text{tran}}^{5}\}\), where \(a_{\text{tran}}^{1}\) is the change to the left lane, \(a_{\text{tran}}^{2}\) is maintaining the same lane, \(a_{\text{tran}}^{3}\) is the change to the right lane, \(a_{\text{tran}}^{4}\) is accelerating within the same lane, and \(a_{\text{tran}}^{5}\) is decelerating within the same lane. Similarly, the communication action space for \(u_{k}\) at time \(t\) can be given by \(\mathcal{A}_{k,\text{tele}}(t)=\left\{c_{k}^{1}(t),\ldots,c_{k}^{n}(t)\right\}\), where \(c_{k}^{i}(t)\) is the \(i^{th}\) potential BS to be associated with. Based on the quota \(Q_{i}\) of each BS, the UAV computes a _weighted rate metric_, denoted WR, that encourages traffic load balancing between BSs and discourages unnecessary HOs. It is expressed by \[\text{WR}_{ik}(t)=\frac{R_{ik}(t)}{\min{(Q_{i},n_{i}(t))}}(1-\mu),\forall i=1, \ldots,n, \tag{11}\] where \(\mu\) denotes the HO penalty, written as \[\mu=\begin{cases}0.1,&\text{if HO is triggered},\\ 0,&\text{otherwise}.\end{cases} \tag{12}\] This criterion will be taken into account to further reduce the set of potential BSs. Specifically, the final BS set should be composed of \(n\leq n_{\text{rf}}\) candidates that belong to \(\mathcal{B}_{k}(t)\), satisfy \(n_{i}(t)<Q_{i}\), and obtain the best WR values. The UAV computes \(P_{ik}\) by substituting \(\mu=0\) and chooses to connect to the BS with the maximum \(T_{ik}\) if \(Q_{i}\geq n_{i}(t)\); otherwise, the UAV recursively selects the next vacant best-performing BS in descending order of \(T_{ik}\).
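A sketch of the weighted rate metric (11)–(12) and the quota-aware BS choice described above; the `max(n_i, 1)` guard is our addition (not in Eq. (11)) so an unloaded BS with \(n_{i}(t)=0\) does not divide by zero:

```python
def weighted_rate(r_ik, q_i, n_i, ho_triggered):
    # Weighted rate metric of Eq. (11); mu = 0.1 only when a HO is triggered.
    mu = 0.1 if ho_triggered else 0.0
    return r_ik / min(q_i, max(n_i, 1)) * (1.0 - mu)

def select_bs(candidates):
    # Keep only BSs with spare quota (n_i < Q_i) and pick the best WR with
    # mu = 0, mirroring the fallback to the next vacant best-performing BS.
    vacant = [c for c in candidates if c["n"] < c["Q"]]
    if not vacant:
        return None
    return max(vacant, key=lambda c: weighted_rate(c["R"], c["Q"], c["n"], False))

bss = [
    {"id": 1, "R": 2e6, "Q": 1, "n": 1},  # excluded: quota already reached
    {"id": 2, "R": 1e6, "Q": 5, "n": 0},  # WR = 1.0e6
    {"id": 3, "R": 3e6, "Q": 5, "n": 2},  # WR = 1.5e6 -> selected
]
best = select_bs(bss)
```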
In this case we still consider the competitive resource sharing in terms of \(n_{s}\) and \(Q_{R}\) constraints. **(3)**_Network Selection based on Maximum data rate:_ The UAV computes \(P_{ik}=T_{ik}\) and chooses to connect to a BS with the maximum data rate. ### _Reward Function Design_ The definition of the associated reward function is directly related to the optimization of both UAV transportation and communication performances. #### Iii-C1 UAV Transportation Reward We define the UAV transportation reward as follows [10]: \[r_{k}^{\rm tran}(t)=\omega_{1}\left(\frac{||\mathbf{v}_{k}(t)||-v_{\rm min}}{v_{ \rm max}-v_{\rm min}}\right)-\omega_{2}\cdot\delta,\forall k\in\mathcal{U}, \tag{13}\] where \(v_{\rm min}\) and \(v_{\rm max}\) are the minimum and maximum speed limits, and \(\delta\) is the collision indicator. \(\omega_{1}\) and \(\omega_{2}=1-\omega_{1}\) are the weights that balance the UAV transportation reward against its collision penalty. It is important to note that negative rewards are not allowed, since they might encourage the agent to prioritize ending an episode early, by causing a collision, instead of taking the risk of receiving a negative return if no satisfactory trajectory is available. #### Iii-C2 UAV Communication Reward We define the communication reward as follows: \[r_{k}^{\rm tele}(t)=\omega_{3}R_{i_{0}k}(t)\left(1-\text{min}(1,\xi_{k}(t)) \right), \tag{14}\] where \(R_{i_{0}k}(t)\) is the achievable data rate when associated with BS \(i_{0}\), and \(\xi_{k}(t)\) is the HO probability, computed by dividing the number of HOs accumulated up to the current time \(t\) by the number of elapsed time slots in the episode. ### _Proposed Branching Dueling Q-Network-based Methods_ The use of discrete-action algorithms has contributed to many recent successes in deep reinforcement learning.
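The two reward terms (13) and (14) can be sketched directly; the default weights \(\omega_{1}=0.5\) and \(\omega_{3}=10^{-6}\) below are illustrative placeholders, not the paper's tuned values:

```python
def r_tran(speed, v_min, v_max, collided, w1=0.5):
    # Transportation reward of Eq. (13): normalized speed term minus a
    # collision penalty, with w2 = 1 - w1.
    w2 = 1.0 - w1
    delta = 1 if collided else 0
    return w1 * (speed - v_min) / (v_max - v_min) - w2 * delta

def r_tele(rate_bps, ho_prob, w3=1e-6):
    # Communication reward of Eq. (14): rate scaled by (1 - min(1, xi_k)).
    return w3 * rate_bps * (1.0 - min(1.0, ho_prob))
```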
However, implementing these algorithms in high-dimensional action tasks is challenging due to the exponential increase in the size of the action space. In our study, for each time step \(t\), we need to apply both a communication action and a transportation action on \(N_{U}\) RL agents. To cope with such a complex action design, the authors of [11] introduced a novel RL agent based on the branching dueling Q-network (BDQ) and illustrated the performance of the branching deep Q-network (BDQN) and the branching dueling double deep Q-network (BDDQN). BDQ features a decision module shared among multiple network branches, each corresponding to an action dimension, e.g., the transportation and communication action dimensions in our work. This approach allows for independent handling of each individual action dimension, resulting in a linear increase in the number of network outputs with the degrees of freedom. It also demonstrates the importance of the shared decision module in coordinating the distributed action branches. In this work, we take advantage of this method by deploying BDQ agents at the UAVs, each of which branches its actions into \(\mathcal{A}_{\rm tran}\) and \(\mathcal{A}_{\rm tele}\). According to Section III-B, we have two action dimensions and a total of \(5\times n\) sub-actions for each UAV at each time step. For an action dimension \(d\in\{1,2\}\), each individual branch Q-value on state \(s\in S\) and sub-action \(a_{d}\in\mathcal{A}_{d}\) (\(\mathcal{A}_{1}=\mathcal{A}_{\rm tran}\) and \(\mathcal{A}_{2}=\mathcal{A}_{\rm tele}\)) is defined by \[Q_{d}(s,a_{d})=A_{d}(s,a_{d})-\max_{a_{d}^{\prime}\in\mathcal{A}_{d}}A_{d}(s, a_{d}^{\prime}),\forall d\in\{1,2\}. \tag{15}\] Each sub-action affects the aggregating layer of \(Q_{d}\) with regard to dimension \(d\). Based on the double DQN algorithm, we update the state-value estimator and loss function as in [11]. We also adopt the common state-value estimator based on the dueling architecture.
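The max-reduced advantage aggregation in (15) and the independent per-branch greedy choice can be sketched in list form (without the shared network trunk, which would produce the advantages):

```python
def branch_q(advantages):
    # Per-branch Q-values of Eq. (15): each advantage is reduced by the
    # branch maximum, so the greedy sub-action of a branch gets Q = 0.
    m = max(advantages)
    return [a - m for a in advantages]

def greedy_joint_action(adv_per_branch):
    # One sub-action per dimension d (transportation, communication),
    # chosen independently in each branch.
    joint = []
    for adv in adv_per_branch:
        q = branch_q(adv)
        joint.append(q.index(max(q)))
    return joint
```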
The dueling architecture reduces similar action redundancies, and learning is shared by the two branches visualized in Fig. 2. The target \(Q\) function \(\hat{y}_{k}\) in BDDQN is defined by \[\hat{y}_{k}=r_{k}^{\rm tele}(t)+r_{k}^{\rm tran}(t)+\frac{\gamma}{2}\sum_{d}Q_{ d}^{-}\Big(s_{k}^{\prime},\underset{a_{d}^{\prime}\in\mathcal{A}_{d}}{\rm argmax }\,Q_{d}(s_{k}^{\prime},a_{d}^{\prime})\Big), \tag{16}\] where \(Q_{d}^{-}\) is the branch \(d\) of the target network \(Q^{-}\). The operation of the proposed BDQN/BDDQN-based approaches is summarized in Algorithm 1. DQN computes a single Q-value for each aggregated action tuple, which tends to yield an unbalanced trade-off between the transportation reward and the communication reward. In contrast, BDQN finds an optimized \(Q_{d}\) for each dimension \(d\), which yields two optimal policies from the communication and transportation perspectives. ## IV Numerical Results and Discussions In this section, we present the results of the suggested algorithms (BDQN and BDDQN) and emphasize the intricate relationships among wireless connectivity, handover rates, traffic flow of grouped UAVs, and the speed of UAVs. Unless explicitly mentioned, we employ the following simulation parameters. BSs operate on 2.1 GHz and support a maximum of \(Q_{R}=5\) UAV users each. We set \(\eta_{\mathrm{LoS}}\) and \(\eta_{\mathrm{NLoS}}\) to 1 and 20, respectively. There are 5 aerial highway lanes. The BS transmission power \(P_{T}\) is 40 dBm. The BDQN training learning rate \(\alpha\), discount factor \(\gamma\), and batch size are \(5\mathrm{e}{-4}\), \(0.8\), and \(32\), respectively. The proposed BDQ agent is represented in Fig. 2.
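The target \(\hat{y}_{k}\) of (16) combines double-DQN action selection (argmax on the online network) with evaluation by the target network, averaged over the branches. A minimal sketch with list-valued Q-tables (generalizing \(\gamma/2\) to \(\gamma/D\) for \(D\) branches):

```python
def bddqn_target(r_tele, r_tran, gamma, q_next_online, q_next_target):
    # Eq. (16): the online network picks each branch's sub-action on s',
    # the target network Q_d^- evaluates it, and the branch values are
    # averaged (gamma / D with D = number of action dimensions).
    total = 0.0
    for q_on, q_tg in zip(q_next_online, q_next_target):
        a_star = max(range(len(q_on)), key=q_on.__getitem__)  # online argmax
        total += q_tg[a_star]                                 # target evaluation
    return r_tele + r_tran + gamma / len(q_next_online) * total
```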
To improve the training performance and reduce the training complexity, we deploy a fully-connected feed-forward neural network (FNN) \(N(s)\) with weights \(\{\theta\}\) to approximate the \(Q\)-value for a given action and state [6]. The FNN takes the state as an input and outputs the shared observations shown in Fig. 2. Since Q-values are real, the FNN performs a multivariate linear regression task. We apply the ReLU activation function, i.e., \(f_{r}(x)=\max(0,x)\), in the first layer. The FNN has \(2\) hidden layers with \(256\) neurons each. In the branching stage, each branching dueling network has a single layer with \(128\) neurons. A linear activation function is used at the output layer. Fig. 3 shows the training curves of the UAV transportation rewards, communication rewards, and HO rate for the different algorithms and UAV velocities. Fig. 3(a) shows that the transportation and communication rewards decrease with higher UAV target velocities, since higher velocities lead to higher collision probabilities. By contrast, Fig. 3(b) shows that the communication rewards diverge in the initial training phase and then gradually narrow the gap among velocities. This validates that, with the proposed algorithm, UAV agents optimize the total communication reward by reaching the optimal accumulated HO-aware data rate regardless of the velocity. Fig. 3(c) illustrates that the HO rate converges downward to 0.1, since UAVs attempt to avoid the HO penalty and disconnection outages. Fig. 4 depicts the average communication and transportation rewards and the average HO rate per step in an episode as a function of the desired velocity of the UAVs. It compares the performance of BDQN, BDDQN, and Shortest Distance Based-BS selection (SDB). From Fig. 4, BDDQN and BDQN perform better than the benchmarks. Although the total communication rewards are approximately the same for BDQN and BDDQN in Fig. 3, their average communication rewards differ, since the BDDQN travelling timesteps are greater than those of BDQN.
Both the average communication and transportation rewards decrease with increasing desired velocities. On average, BDDQN improves the transportation rewards, communication rewards, and HO probability by \(16.7\%\), \(23.4\%\), and \(10.9\%\), respectively, compared with the SDB benchmark. Fig. 5 shows the average communication reward, average transportation reward, and average HO rate per step in an episode with 5, 10, 15, 20, and 25 BSs in the experiment range. A sparser BS distribution gains an advantage in the average transportation rewards, since the preferred shortest-distance BS is served with priority. Each agent then focuses more on the moving perspective to improve the transportation reward, since there is less difference in communication rewards among the BSs it can select to connect to. Increasing the number of BSs from 5 to 15 improves the communication rewards. However, there is a slight reduction in the communication reward when further increasing the number of BSs from 15 to 20, as this leads to more HOs for UAV agents selecting and switching BSs. Fig. 5(c) indicates that the handover rate decreases during training. ## V Conclusion In this work, we proposed BDQN and BDDQN algorithms to jointly optimize the network selection and autonomous moving actions, such as changing the speed of the UAVs and switching lanes, to maximize both the HO-aware data rate and the 3D aerial highway traffic flow. In the future, we will focus more on precise UAV positioning in intelligent transportation systems.
2303.08931
Designing Participatory AI: Creative Professionals' Worries and Expectations about Generative AI
Generative AI, i.e., the group of technologies that automatically generate visual or written content based on text prompts, has undergone a leap in complexity and become widely available within just a few years. Such technologies potentially introduce a massive disruption to creative fields. This paper presents the results of a qualitative survey ($N$ = 23) investigating how creative professionals think about generative AI. The results show that the advancement of these AI models prompts important reflections on what defines creativity and how creatives imagine using AI to support their workflows. Based on these reflections, we discuss how we might design \textit{participatory AI} in the domain of creative expertise with the goal of empowering creative professionals in their present and future coexistence with AI.
Nanna Inie, Jeanette Falk, Steven Tanimoto
2023-03-15T20:57:03Z
http://arxiv.org/abs/2303.08931v1
# Designing Participatory AI: Creative Professionals' Worries and Expectations about Generative AI ###### Abstract. Generative AI, i.e., the group of technologies that automatically generate visual or written content based on text prompts, has undergone a leap in complexity and become widely available within just a few years. Such technologies potentially introduce a massive disruption to creative fields. This paper presents the results of a qualitative survey (\(N=23\)) investigating how creative professionals think about generative AI. The results show that the advancement of these AI models prompts important reflections on what defines creativity and how creatives imagine using AI to support their workflows. Based on these reflections, we discuss how we might design _participatory AI_ in the domain of creative expertise with the goal of empowering creative professionals in their present and future coexistence with AI. participatory AI, participatory design, generative AI, creative professionals, creativity support + Footnote †: ccs: Computing methodologies \(\rightarrow\)_Philosophical/theoretical foundations of artificial intelligence,_**Human-centered computing \(\rightarrow\) Empirical studies in HCI**; _HCI design and evaluation methods._ or everyday life"_(Cord et al., 2017), the goal of participatory design in its essence. Our contributions include the following. (1) We introduce new concepts about what constitutes creativity in relation to generative AI. (2) We categorize some reasons why creatives are and are _not_ concerned about novel generative AI. (3) We categorize reasons why some creatives are curious and excited about AI and how it might augment their creative processes. (4) We discuss possible foci for the design of participatory AI aimed at helping creative professionals _Understand_ AI, _Cope_ with AI, _Adapt_ to AI, and _Exploit_ AI. ## 2. Methods We conducted a qualitative survey with open-ended questions designed to encourage longer answers and reflection. The survey format let respondents participate asynchronously, while allowing us to discover themes and directions for further in-depth research. The survey was circulated to the authors' networks of creative professionals as well as on social media. The call was posted as an open question of 'Are you a creative professional/professional creative, and do you have opinions about generative AI that you would like to share with us?' The term 'creative' was left to self-definition, and we asked the respondents to explain the role of creativity in their profession.
We collected responses over a period of approximately two months in late 2022. We offered a draw of five $25 gift cards to Amazon as symbolic compensation for participation. The study and survey were approved by the ethical committees of the authors' universities. ### Participants We received 23 responses to the survey from creatives residing in Denmark (10), Germany (4), the United Kingdom (4), USA (3), Turkey (1), and Morocco (1). Respondents were between 21 and 55 years old, distributed as 21-25 (4), 26-30 (1), 31-35 (8), 36-40 (3), 41-45 (6), and 51-55 (1). 10 respondents identified as female, 12 as male, and 1 as non-binary. The respondents worked in a variety of fields, from computer science research to design of UX/UI and games to teaching. Most respondents came from software-oriented creative domains, and our findings should be read with this limitation in mind (see Section 5 for a discussion of this limitation). We were more interested in people self-qualifying as a "creative professional" where creativity plays a significant role in their work, than we were in specific job titles. The responses, as well as a detailed overview of respondents, are presented in the supplementary material. ### Survey and analysis The survey consisted of both demographic questions and six questions related to our research interest (which we list below). We designed the questions to elicit respondents' general understanding of and attitudes towards AI and creativity. We sought to prompt a deeper level of reflection and tried to avoid overloading respondents with questions. 1. In your own words, how would you define what AI (Artificial Intelligence) is? 2. Do you believe computers can be creative? Why/why not? 3. A standard definition of a creative idea is that it is: 1. original (new, either to the creator or to human history in general), 2. useful (in some context), and 3. surprising (it seems unlikely but possible).
Given this definition, do you believe a computer/an AI algorithm can generate creative ideas? Why/why not?1 Footnote 1: This definition is a compilation of three-criterion definitions by, e.g., Boden (Boden, 2018) and Simonton (Simonton, 2018). 4. Are you excited about AI contributing to creative work in your profession? Why/why not? 5. Do you worry about AI replacing creative work in your profession? Why/why not? 6. Which role do you think AI will play in your profession in the near and far future? Questions (4) and (5) were swapped for approximately half the respondents (in two different instances of the survey) to avoid priming respondents in any specific direction. We performed a thematic analysis as described by Braun and Clarke (Braun and Clarke, 2018) on the responses. We tagged responses individually with different codes and then clustered them into sub-themes, which we highlight in bold throughout Section 3. ## 3. Survey responses ### How intelligent or creative is generative AI? In order to inform the design of participatory AI, it is relevant to understand how creatives currently conceive of AI and its limits. These factors can inform decisions about how to design participation processes to, for instance, include more or less information and discussion about the state of AI. #### 3.1.1. What is AI? Answers to the question "In your own words, how would you define AI?" varied, especially on two scales: **technical depth** (from superficial to deep understanding) and **agency of AI** (from no agency to high agency). In terms of **technical depth**, some respondents, naturally, had a deeper understanding of AI algorithms than others, e.g., from _"... 
digital solutions that are trained to be helpful in specific ways"_ (P21) (technically superficial) to _"A system capable of making dynamic choices based on input, dynamic as in non-binary evaluation of input referencing data model, a model which would ideally evolve through feedback of external verification of multiple processes"_ (P2) (technically advanced). We also saw interesting variation in the level of **agency** ascribed to the AI system, from no agency at all: _"AI is a set of rules, defined by humans, which a computer can follow"_ (P3) to a high degree of agency: _"it's a computer that over time improves itself in the tasks it has to solve by collecting information and inputs from humans"_ (P7). These understandings may influence creatives' judgments of the degree to which AI can support them and contribute to/replace tasks in their creative processes. We tagged 7 responses as portraying a relatively deep technical understanding with no agency to the computer. Six responses were tagged with a more superficial technical understanding and no agency ascribed to the computer. 10 responses were tagged as superficial technical understanding with a high degree of computational agency, and no responses were tagged as deep technical understanding and high agency of the computer. An overview is shown in the supplementary material, Figure 1. #### 3.1.2. New definitions of creativity The presence of generative AI encourages us to reevaluate and question our understanding of creativity and creative ideas. Most respondents who denied that AIs can be considered creative disputed the computer's capacity to generate _original_ output since it is trained only on already existing (human) input. However, one respondent wrote in answer to "Do you believe computers can be creative?": _"I kind of resent it - but yeah. If creativity is defined as something useful and new, then yeah I think so. 
Even though AI's [sic] rely on training data and existing man-made patterns (which some might use to criticize AI's as being derivative or as simply reproducing what already exists) the process of combining stuff into a new "something" isn't really THAT different from what humans do... it's just bigger in scale and I guess you might argue that humans are also just "trained on" a bunch of data... we also carry around a repertoire of input we can draw on to come up with ideas [...] ideas are always rooted in some pre-existing thing(s)" (P5). Another participant noted that "What is my brain if not a computer that takes in all this provided data and produces its own result from a mix of the inputs? If that result is 'creative', then why is an AI not?"_ (P20). This understanding is consistent with a traditional definition of creative ideas as being 'novel', 'useful', and 'surprising', e.g., (Bordes et al., 2016). However, one respondent noted that "_Computers aren't creative by themselves as they only follow the orders that someone gives them_" (P6). In P6's understanding, creativity entails **agency** or **initiative**, which is not historically a property of the three-criterion definition of creativity. **Intention** and **sentience** were described as criteria for creativity by some respondents: "_There is not intention_" (P1), "_Creativity stems from personal experiences/knowledge/emotions and the need to express/communicate/use this [...] Creativity lies not in the creation, but in why we create. Programs can emulate this, but without true sentience, it will always be [an] emulation_" (P10), and "_The computer still isn't creative, it's still just doing what it's told [...] Maybe I think it needs feelings to be truly creative?_" (P23). Other conditions for creativity were also evoked in the answers, such as **(self-)awareness**: "_I think that true creativity requires a sense of self and self-awareness_" (P8).
"_They are not creative in themselves; they are producing content unaware of the value they just created_" (P21). Even **experiences** and **inspiration** were evoked: "_Computers can solve problems and create art and everything, but it will all be logic and calculated and not because it got a sudden burst of inspiration or remembered something that happened in the second grade_" (P23). Even if we do not assume that these definitions should be unanimously integrated into a scholarly or theoretical definition of creativity, it is interesting that reflecting on creativity in relation to the role of generative AI raises different conceptions of what creativity entails. ### I Am Not Worried (Yet) Only three of our 23 respondents unambiguously answered yes to being worried about AI replacing their work: "_Yes, the market needs to adjust heavily and I don't think the revolution will be entirely peaceful_" (P2); "_the idea of AI is mostly uncanny right now_" (P4), and "_Yes I [worry]. (...) a lot of tasks such as writing micro copy for websites etc which UX writers currently do would be automated_" (P14). Three more noted that they worry to some degree, or that they worry but are optimistic, e.g.: _"I worry about it, but I hope the reality will be that AI becomes another tool_" (P5). Nine respondents noted that they did not worry at all, while six reported that they do not worry _yet_, e.g. _"for now only the boring parts would be replaced. But this take-my-job-away argument was made countless times in history, there will always be something new. We can't be held back by this fear"_ (P12). We group **reasons for concern** (aside from losing work) into the following themes. _1. Worse quality output._ P8 observed: "_It concerns me already that video games are becoming something of an echo chamber, and the sheer volume of games being released are diluting the market and making it harder for indie games to get the recognition they need to do well_." 
The concern expressed here is not only that humans may become obsolete in the development process, but that the volume of output (in this case, of games) that AI (co-)creation makes possible will increase quantity but reduce quality of video games. P9 wrote _"I certainly don't intend to replace all my hires with AI but some people will. They may achieve early success and they may also bring the genre into disrepute if they pump out a lot of lazy AI-written content."_ This indicates worries that extend beyond individuals and their job security to concerns about an entire genre of creative content. This perspective assumes that AI produces creative output of worse quality than humans produce, which we could consider a reason _not_ to worry about AI-generated content. However, in this case, the potential of such content to 'dilute' or 'bring into disrepute' a whole genre or field presents a threat or concern to some creatives. _2. Weakening the creative process._ Most respondents pointed out that humans will still be required in AI-facilitated creative processes or that the computer will simply help automate the 'boring tasks.' However, a few also reflected on what that might mean to the creative processes, e.g., _"I also don't like the way AI image generators get you results instantly. They skip the creative process and just take you straight to the result... [...] that just overlooks a super important part of a creative process, which is exploration. And emergence, where stuff just kind of comes out of the process but you never imagined it would. Or happy accidents! In that sense I think AIs could actually lead to a stagnation in the history of creativity, if AI turns out to weaken the "creative muscle"_ (P5). P11 further noted that "_the meaning of 'creative' seems to be increasingly twisted to mean merely 'original/surprising,' and partly because there is a tendency for many to be unaware of the amount of creativity that my work involves [...] 
A lot is being lost._" This observation raises seminal questions similar to those raised in other fields where complex human thought processes have historically been replaced or at least disrupted, such as the introduction of calculators in algebra: How does it affect human cognition if computational processes take over (part of) our thinking? Will we lose our ability to use those parts of our brain, or will it simply free up cognitive reserve to consider new and more significant issues? _3. Copyright issues._ Generative AI works only because a dataset exists that it can be trained on, and this raises new copyright issues, as P16 notes, "_the ethical implications of AI stealing other people's work without credit [...] make me a bit wary._" Many established artists have raised concerns about this issue since those whose art is currently visible on the internet lack means to opt out of image training databases or otherwise control how their art is used (Bartos et al., 2016; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014). Interestingly, this concern was mentioned directly by only one respondent, suggesting that either it is not a matter that appears to be a threat to creatives we surveyed _or_ that they expect that a technological solution will emerge to address it; indeed, measures to protect intellectual properties of images, such as _watermarks_, are currently being developed (Krizhevsky et al., 2014). **Reasons for _not_ worrying about generative AI having a deleterious effect on their professions were described in three themes: _1. AI cannot produce output without human input._ As described in Section 3.1.2, several respondents questioned a computer's ability to produce truly original output. 
This was also described as a reason not to worry about AI replacing creative production or problem solving since human input is needed for datasets to be trained on and verified: _Being able to generate a Rothko at the click of a button is only possible because Rothko himself had original thoughts - that isn't creativity_ (P8), and _"human input is still needed to verify and maintain AI's work"_ (P22). _2. AI output is not convincing._ Several respondents also noted that they do not find AI-generated output completely 'convincing' or original: _"I don't see any authentic or convincing AI in artistic fields at all"_ (P8); _"I know it can create pretty, but I don't think it can create "Wow! I have never seen anything like it!"_ (P23); and _"at the moment it's a tool that when used skillfully can create awesome images, but there still needs to be someone with creative taste and an eye for imagery at the helm. AIs also tend to generate 'samey' images to me"_ (P20). Although this theme resembles the preceding one (AI needs human input to produce output), the preceding theme pertains more to requiring a human in the _process_ of creating and maintaining generative AIs, whereas the current theme critiques generative AIs' _output_. _3. My work/creative process is too complex for AI to imitate._ Finally, several respondents observed that their work process is too complex for AI to replace it: _"No, the complexity and dependencies is [sic] too high in my work"_ (P21); _"[I do not worry] for user interface design, there's so much to consider and think through that I can't see an AI making something fluid yet"_ (P16). Particularly in processes of original problem solving and client communication, human cognition was described as indispensable: _"Even the new code that they write is still going to be unoriginal in terms of problem solving"_ (P15); _"We work very closely with clients and our work requires a lot of thought process behind it.
Our main product is communication ideas and solving problems visually. Often we can do that better with a scribble than a fancy looking piece of art. You can never ask the AI about the intention/thoughts/feelings behind the product"_ (P10). ### Exciting Times Ahead! Thirteen respondents noted that they are more or less unequivocally excited about AI contributing to creative work in their profession (such as "_Yes!_" or "_Absolutely, exciting times ahead_" (P12)). Four volunteered some version of "yes _and_ no"; e.g., _"To some extent. I think some people will be able to use it in a nice way"_ (P7). We grouped specific reasons for being excited about the advent and adoption of generative AI technology in creative professions into three themes: _1. AI can raise productivity for the individual or for larger processes._ Several respondents imagined AI being used to raise productivity, either in terms of individual efficiency (e.g., eliminating repetitive tasks and thus allowing creatives to focus on 'more important' work): _"there are things that are more efficient to leave to machines which should pair with things that humans will be better at for the foreseeable future."_ (P19), or in terms of cultivating higher output rates by streamlining processes: _"it would streamline many of the standard questions in the field"_ (P1). _2. AI can offer inspiration._ In fields that require creativity, it is perhaps not surprising that respondents highlighted using quickly generated output as a source of _inspiration_ in their creative process.
Creative professionals often rely on readily available examples of design for inspiration (Krizhevsky et al., 2014), and the availability of AI to generate innumerable novel examples was seen as a powerful opportunity for 'opening up new solution spaces'; e.g.: _"It will allow me to iterate through a much bigger possibility space"_ (P12); and _"it will make some work a lot easier/more efficient as you can try out different ideas in a very short amount of time"_ (P6). In this role, AI is imagined to augment what we call the **divergent** parts of the creative process by offering examples and opening up novel and larger solution spaces (Bartos et al., 2016). _3. AI can lead to higher quality output._ Finally, some respondents highlighted the opportunity for AI to yield higher quality output, partially for the two reasons above (offering novel inspiration and freeing up time to work on tasks more central to the creative core), and partially due to qualities inherent in the AI itself: _"Any creative work is better as a team effort and differences are a driving force. AI is very different and I want to work with them"_ (P2); _"it's a powerful tool that can enhance my work [...] I can see it slotting into a step between browsing Pinterest for reference art and sketching my own stuff"_ (P20). Two respondents also mentioned using AI for **convergent** parts of the creative process, for instance, decision making and evaluation: _"It can augment decision making"_ (P14); and _"it opens up to possibilities to create new solutions and evaluate in new ways"_ (P21), although specific ways for evaluation to occur were not described further. ## 4. Discussion: Opportunities for Participation Although complex, it seems prudent and timely to tackle the issue of how to encourage populations to participate in the development of AI more broadly (Bartos et al., 2016).
We consolidate our preliminary analysis into four categories of potential focus for the design of participatory AI for creatives: (1) **Understanding AI**, (2) **Coping with AI**, (3) **Adapting to AI**, and (4) **Exploiting AI**. These categories align with the participatory design approach presented by Sanders (Sanders, 2017) by considering what end-users _know_ (= understand AI), _feel_ (= coping with AI), _do_ (= adapt to AI), and _dream_ (= exploit AI). The categories offer a framework for engaging professional creatives in participatory AI design in a meaningful way. One could ask questions that align with the framework, e.g., "How might we help future users _understand_ this technology?" or "How might we help future users _adapt_ to new workflows?" ### Understanding AI Some survey responses identify a superficial understanding of the technical side of AI. This is acceptable, just as it is not a requirement of driving a car that one understands how the engine works. However, creatives will be better prepared to use AI as creativity support tools and design materials if they have a working understanding of the tools and their limitations (Friedman, 2010; Friedman et al., 2011; Friedman et al., 2012), particularly the level of _agency_ that computers can be ascribed (as we saw, no responses that demonstrated a deep level of technical understanding also portrayed the computer as having a high degree of agency). We suggest that facilitating a truthful _understanding_ of AI is the first step in empowering these users to co-create with AI technology. It is easy to brush this responsibility off as a creatives-only undertaking. However, we believe that AI developers share an ethical responsibility to make their systems accessible and explainable to a broader public, in line with the HCI research agenda for explainable, accountable and intelligible systems (Bahdan et al., 2016).
### Coping with AI In the longer term, it is inevitable that AI-generated content of many kinds will be ubiquitous in most of our lives. How should we cope? We posit that creatives should hone their skills in creating and in evaluating creativity. The responses to our survey suggest that they can recognize and celebrate indispensable human properties of creativity and art, e.g.: _"human[-like] creativity is due to a combination of experiences and impressions that are connected in ways that are largely defined by human culture, and also feelings/sensations [...] that are mostly haphazard, and which AI don't have"_ (P11). Sharing worries, excitement, and coping strategies -- including avoiding AI, see, e.g., (Friedman et al., 2011; Friedman et al., 2012; Friedman et al., 2013) -- as well as celebrating what is uniquely creative about human approaches seems an important and achievable goal of designing participatory AI. We imagine a future where generative AI openly celebrates the sources from which its data are harvested, and where creators of generative AI include input from end-users in their design processes. ### Adapting to AI When photography was invented, artists adjusted their activities to focus less on realism and more on interpretation, whether through impressionism, abstraction, or surrealism (see, e.g., (Srivastava et al., 2016) for a more elaborate discussion of this). As writing, translation, paraphrasing and poetry become increasingly automated, professional writers and editors may become "bosses to bots," instructing them on what to write, how to tailor material, and what to re-write when results do not meet professional or personal standards. Where by _coping_ we mean respectfully considering the new reality that these technologies bring about, by _adapting_ we suggest more comprehensive inclusion of creatives in the development of specific generative AI models.
Several respondents shared excitement about the possibilities of using AI to help automate bureaucracy, repetitive tasks, and boring work. The responsibility of facilitating adaptation, however, does not fall only on creatives. By understanding creative needs and processes, generative AI developers may tailor AI systems to help specific professions and crafts in a way that is not only meaningful for creatives, but that may enhance the development of AI itself, similar to how PD was originally meant not only to improve information systems but also to empower workers (Friedman et al., 2013). ### Exploiting AI Photography changed what painters did, but it also opened up a field and a new profession: photographer. Technologies such as ChatGPT will change what writers do. Journalists are likely to spend more of their efforts on investigation and acquiring stories and less time on wordsmithing the reports on those stories. A mystery writer may give increased attention to plot features and less to the word-by-word narrative. Completely new tools and media may come out of the new AI technologies, including new types of creative jobs; as P3 notes, _"the far future might include both 2D and 3D assets, generated in real time, as the player interacts with the experience [...] Experiences still need to be controlled, to ensure a good user experience. Therefore it would probably increase the number of creative/technical positions within game companies."_ We hypothesize that such technology can reach its full potential only if creative professionals truly participate in its development. AI has sometimes been described as "a new shiny hammer in search of nails" (Friedman et al., 2013), i.e., the technology or tool is being developed ahead of its specific purpose.
We posit that if generative AI is developed with participation from creatives, there is a chance not only of better integration of AI in specific creative work practices, but also of leveraging creative competencies to imagine completely new avenues for these technologies. ## 5. Conclusion and Future Work The insights presented in this abstract illustrate some of the ways in which creative professionals speculate about and anticipate how AI may impact their creative work practices. Based on the insights, we encourage engaging creatives in the development of generative AI, both in developing concrete technology and in managing larger project issues as representatives of their peers, in line with the ideals of participatory design (Friedman et al., 2011; Srivastava et al., 2016). Pathways for developing more participatory AI should consider how creatives may better _understand, cope with, adapt to_ as well as _exploit AI_. While the scope of our study is limited, we believe that both technology development and opinions towards AI are changing so quickly that it is relevant to share these preliminary results. We hope they will spark discussions and inform future research into how to develop and use AI in a way that encourages and requires participation of the people who will be affected most by these technologies in the future. Since most creative fields represented in our study are software-oriented, it is possible that the expressed views are more open and welcoming towards AI. Future research should include a more evenly distributed representation from different creative fields as well as obtain richer data by conducting interview studies. Furthermore, the respondents came from different creative industries, and their everyday work lives may therefore not necessarily be impacted in the same ways by generative AI. 
We have also not characterized how each individual's understanding of AI relates to, for instance, their level of worry or expectations since we believe this would require a larger participant group and deeper investigation. Future work could categorize different creative industries and identify _which_ and _how_ specific work tasks within these industries may be impacted by generative AI, as well as investigate different ways to support these creative practices with AI. ## Acknowledgments This research has been supported by the VILLUM Foundation, grant 37176 (ATTiKA: Adaptive Tools for Technical Knowledge Acquisition) and by the Austrian Science Fund (FWF) [P34226-N].
2307.04424
About the algebraic closure of formal power series in several variables
Let $K$ be a field of characteristic zero. We deal with the algebraic closure of the field of fractions of the ring of formal power series $K[[x_1,\ldots,x_r]]$, $r\geq 2$. More precisely, we view the latter as a subfield of an iterated Puiseux series field $\mathcal{K}_r$. On the one hand, given $y_0\in \mathcal{K}_r$ which is algebraic, we provide an algorithm that reconstructs the space of all polynomials which annihilates $y_0$ up to a certain order (arbitrarily high). On the other hand, given a polynomial $P\in K[[x_1,\ldots,x_r]][y]$ with simple roots, we derive a closed form formula for the coefficients of a root $y_0$ in terms of the coefficients of $P$ and a fixed initial part of $y_0$.
Michel Hickel, Mickaël Matusinski
2023-07-10T08:59:06Z
http://arxiv.org/abs/2307.04424v1
# About the algebraic closure of formal power series in several variables. ###### Abstract. Let \(K\) be a field of characteristic zero. We deal with the algebraic closure of the field of fractions of the ring of formal power series \(K[[x_{1},\ldots,x_{r}]]\), \(r\geq 2\). More precisely, we view the latter as a subfield of an iterated Puiseux series field \(\mathcal{K}_{r}\). On the one hand, given \(y_{0}\in\mathcal{K}_{r}\) which is algebraic, we provide an algorithm that reconstructs the space of all polynomials which annihilates \(y_{0}\) up to a certain order (arbitrarily high). On the other hand, given a polynomial \(P\in K[[x_{1},\ldots,x_{r}]][y]\) with simple roots, we derive a closed form formula for the coefficients of a root \(y_{0}\) in terms of the coefficients of \(P\) and a fixed initial part of \(y_{0}\). Key words and phrases:multivariate power series, algebraic closure, implicitization, closed form for coefficients 2020 Mathematics Subject Classification: 13J05, 13F25, 14J99 and 12-08 ###### Contents * 1 Introduction. * 2 Preliminaries * 3 A nested depth lemma. * 4 Total reconstruction of vanishing polynomials for several algebraic series. * 4.1 Total reconstruction in the algebraic case. * 4.2 Total algebraic reconstruction in the non-homogeneous case. * 4.3 Total algebraic reconstruction with several algebraic series. * 5 Reconstruction of an equation for an algebroid series. * 5.1 The reconstruction algorithm * 5.2 Proof of Theorem 1.1 * 5.3 Plan of the algorithm and example * 6 A generalization of the Flajolet-Soria Formula. * 7 Closed-form expression of an algebroid multivariate series. ## 1. Introduction. Let \(K\) be a field of characteristic zero and \(\overline{K}\) its algebraic closure. Let \(\underline{x}:=(x_{1},\ldots,x_{r})\) be an \(r\)-tuple of indeterminates where \(r\in\mathbb{Z}\), \(r\geq 2\). 
Let \(K[\underline{x}]\) and \(K[[\underline{x}]]\) denote respectively the domains of polynomials and of formal power series in \(r\) variables with coefficients in \(K\), and \(K(\underline{x})\) and \(K((\underline{x}))\) their fraction fields. Both fields embed naturally into \(K((x_{r}))((x_{r-1}))\cdots((x_{1}))\), the latter being naturally endowed with the lexicographic valuation in the variables \((x_{1},\ldots,x_{r})\) (see Section 2). By iteration of the classical Newton-Puiseux theorem (see e.g. [20, Theorem 3.1] and [21, p. 314, Proposition]),
2306.13271
Variational Counterfactual Prediction under Runtime Domain Corruption
To date, various neural methods have been proposed for causal effect estimation based on observational data, where a default assumption is the same distribution and availability of variables at both training and inference (i.e., runtime) stages. However, distribution shift (i.e., domain shift) could happen during runtime, and bigger challenges arise from the impaired accessibility of variables. This is commonly caused by increasing privacy and ethical concerns, which can make arbitrary variables unavailable in the entire runtime data and imputation impractical. We term the co-occurrence of domain shift and inaccessible variables runtime domain corruption, which seriously impairs the generalizability of a trained counterfactual predictor. To counter runtime domain corruption, we subsume counterfactual prediction under the notion of domain adaptation. Specifically, we upper-bound the error w.r.t. the target domain (i.e., runtime covariates) by the sum of source domain error and inter-domain distribution distance. In addition, we build an adversarially unified variational causal effect model, named VEGAN, with a novel two-stage adversarial domain adaptation scheme to reduce the latent distribution disparity between treated and control groups first, and between training and runtime variables afterwards. We demonstrate that VEGAN outperforms other state-of-the-art baselines on individual-level treatment effect estimation in the presence of runtime domain corruption on benchmark datasets.
Hechuan Wen, Tong Chen, Li Kheng Chai, Shazia Sadiq, Junbin Gao, Hongzhi Yin
2023-06-23T02:54:34Z
http://arxiv.org/abs/2306.13271v1
# Variational Counterfactual Prediction under Runtime Domain Corruption ###### Abstract To date, various neural methods have been proposed for causal effect estimation based on observational data, where a default assumption is the same distribution and availability of variables at both training and inference (i.e., runtime) stages. However, distribution shift (i.e., domain shift) could happen during runtime, and bigger challenges arise from the impaired accessibility of variables. This is commonly caused by increasing privacy and ethical concerns, which can make arbitrary variables unavailable in the entire runtime data and imputation impractical. We term the co-occurrence of domain shift and inaccessible variables _runtime domain corruption_, which seriously impairs the generalizability of a trained counterfactual predictor. To counter runtime domain corruption, we subsume counterfactual prediction under the notion of domain adaptation. Specifically, we upper-bound the error w.r.t. the target domain (i.e., runtime covariates) by the sum of source domain error and inter-domain distribution distance. In addition, we build an adversarially unified variational causal effect model, named VEGAN, with a novel two-stage adversarial domain adaptation scheme to reduce the latent distribution disparity between treated and control groups first, and between training and runtime variables afterwards. We demonstrate that VEGAN outperforms other state-of-the-art baselines on individual-level treatment effect estimation in the presence of runtime domain corruption on benchmark datasets. Causal Effect Estimation, Runtime Domain Corruption, Adversarial Domain Adaptation ## 1 Introduction In predictive analytics, causal inference is increasingly important in guiding decision-making in high-stake domains, such as healthcare [1], education [2], e-commerce [3], etc. Normally, the randomized controlled trial (RCT) is the gold standard for estimating the causal effect.
Given that implementing RCTs is costly, time-consuming, and sometimes ethically intractable, various applications alternatively turn to use the passively collected observational data to perform causal inference in a data-driven fashion [4, 5, 6]. Denoting input variables as \(\mathbf{x}\), treatment as \(t\), outcome as \(y\), the observational dataset with \(N\) samples \(\{(\mathbf{x}_{i},t_{i},y_{i})\}_{i=1}^{N}\), commonly does not satisfy the RCT standard due to _unmeasured confounders_ and _selection bias_, which are two prominent challenges in causal inference. Specifically, the untestable unconfoundedness assumption assumes no unobserved confounders. Unfortunately, such an assumption cannot be satisfied in many cases, rendering the estimation erroneous [7, 8]. Meanwhile, the selection bias between the treated and control groups causes the imbalanced covariate distributions, which could introduce undesirable spurious effect due to the imbalance [5]. In the extreme case, it can even violate the positivity assumption and result in non-identifiable causal effect [9]. Thus, this issue further weakens the correctness of causal effect estimation. An example to explain these two challenges is that, if only the rich can afford drug A while the poor have to use the cheaper drug B, then people's financial status could be a hidden confounder if unmeasured, resulting in invalid estimation. If measured, it causes selection bias, and the effectiveness of drug A and drug B cannot be validly compared based on the skewed distribution of variables due to people's financial status. By addressing either of the two challenges or both, several neural approaches [10, 11, 12] are made available for causal effect estimation with observational data. 
Despite a variety of methods that tackle distributional imbalance caused by the selection bias, such domain shifts are only restricted between the treated and control groups that are both used for training, where the runtime variables are assumed to be drawn from the same distribution as the training data. In fact, domain shift also widely exists between training and runtime data, e.g., when a model trained on one race is asked to perform predictions on a different minority race, and it challenges the generalizability of the trained model. On top of that, the unavailable/missing variables and corresponding countermeasures are also largely understudied. For instance, real-world applications commonly have medical diagnostic models learned with high-quality open benchmarks, but in the deployment stage, not all end-users are able to provide the same set of variables due to accessibility issues (e.g., high-cost medical checks), privacy constraints (e.g., historical treatments), and ethical concerns (e.g., gender and race). In this paper, we refer to the co-existence of the shifted and unavailable variables in the inference data as _runtime domain corruption_. Runtime domain corruption can be interpreted as one step above observing domain/covariate shift during infer ence, where the model not only faces changed covariate distribution but also the ubiquitous missing values. In short, in our definition, runtime domain corruption is caused by the co-occurrence of domain shift and missing values. Compared with domain shift, runtime domain corruption more aggressively challenges the generalizability of the trained counterfactual prediction model, because variables deemed important in training might no longer be present during inference, and the domain-invariant patterns are unable to be mapped to those missing variables. 
Therefore, a high corruption rate of runtime variables can make the counterfactual predictor merely learned on full training data incur large generalization errors. Though one can consider discarding the unavailable variables in the training set, the reduced variables may lead to an underfitting issue. Also, for real-world deployment, it is impractical to assume prior knowledge on which variables are corrupted during runtime, especially considering the inaccessible variables can differ among individuals (e.g., users may choose to withhold different personal information). This work focuses on causal inference using the Neyman-Rubin potential outcome framework [13, 14] under the runtime domain corruption circumstance. In this work, we aim to learn a robust, causal, and domain-invariant latent representation \(\mathbf{z}\) of variable \(\mathbf{x}\), for which the latent distributions across various domains are well-balanced to counter the aforementioned three challenges, i.e., unmeasured confounders, selection bias, and runtime domain corruption, simultaneously. Our main contributions are: * We identify an important performance bottleneck for causal inference methods, namely runtime domain corruption that combines two largely unexplored yet important settings: domain shift and unavailable variables during runtime. In our paper, we propose the first systematic investigation of it for causal effect estimation. * We derive the upper bound of the generalization error by extending the in-sample causal inference to the corrupted out-of-sample scenario. To efficiently optimize the multiple Kullback-Leibler (KL) divergence terms in our VAE-based model, we propose a two-stage domain adaptation scheme, namely the Variational autoEncoder Generative Adversarial Network (VEGAN) for unifying multiple inter-domain distances. * We compare VEGAN to state-of-the-art baselines for performing predictions on both in-sample covariates and out-of-sample, corrupted runtime covariates. 
The empirical results demonstrate our model's stronger robustness to runtime domain corruption. ## 2 Related Work Back in time, researchers have been seeking to approach observational data-based causal inference from various perspectives. The re-weighting method, e.g., inverse probability weighting (IPW), uses the propensity score [15, 16] to mitigate the selection bias by re-weighting each unit's treatment outcome according to its estimated probability of being assigned a treatment. However, such a method strongly relies on the correctness of the estimated propensity score. To alleviate this strong dependency, the proposed doubly robust (DR) [17] method considers the outcome regression together with IPW for re-weighing purposes. The DR method tries to secure the causal effect estimation with an additional "insurance" that comes from the correctness of the outcome modeling which is in fact no one can assure. In addition to re-weighting, other methods such as the non-parametric tree-based model, e.g., BART [18] combines the tree method and Bayesian inference. However, all the above-mentioned methods mainly focus on estimating the average treatment effect (ATE) and are not expressive enough to handle the high-dimensional dataset for individual-level estimations. Nowadays, with the strong expressive power of deep learning [19, 20], new algorithms are proliferating by leveraging the deep learning framework to learn the deconfounded latent representation on top of the observed covariates and model the personalized treatment effect. We relate our work to the representation learning branch in causal inference, which is overlapped with the domain adaptation field due to the unique counterfactual nature of estimating treatment effect. The TARNet [10] builds a shared feature extractor followed by a two-headed neural network to model the outcomes for each type of treatment separately. 
Its variants can incorporate the integral probability metric (IPM), e.g., Wasserstein distance [21], and maximum mean discrepancy (MMD) [22], to minimize the distance of the learned latent covariate distribution between treated and control groups to mitigate the selection bias. Following that, a variational autoencoder (VAE) framed CEVAE model [11] emphasizes handling the confounding problem by building robust latent representation, and its performance is stated to be more robust than many previous methods. Dragonnet [12] leverages the neural net-enhanced propensity estimation and the innovative targeted regularization for causal effect estimation to achieve an asymptotically consistent ATE estimator. In addition, other works such as GANITE [23] and DeepMatch [24] adopt generative adversarial network (GAN) [25] and build their own designated GAN learning systems. Recently, many latent variable disentanglement methods, e.g., DR-CFR [26], TVAE [27], TEDVAE [28], are proposed to discover the disentanglement of the latent instrumental, risk, and confounding factors from the observed covariates to better capture the selection bias. The unique point of difference in our work is the additional consideration of the runtime domain corruption situation where the trained causal model's performance could dramatically decline when deployed to other environments. In addition, it is noted that [9] integrate the Monte Carlo Dropout [29] method into the state-of-the-art neural methods and allow the upgraded models, e.g., BARNET, BCEVAE, to estimate epistemic uncertainty in high-dimensional conditional average treatment effect (CATE) estimation, thus informing the decision maker to be vigilant when making recommendations if high uncertainty is present.
Hence, our work differs from it as we focus on more accurate treatment effect estimation when runtime domain corruption occurs during the inference stage. It should also be noted that some existing works [30, 31, 32] have been proposed for treatment effect estimation with missing values, where the core is to leverage imputation algorithms to handle the missing values. Since runtime domain corruption also includes domain shift, the imputed target domain data could still deviate heavily from the source domain, rendering those methods inaccurate in such conditions. Furthermore, the imputation algorithm is not capable of imputing accurately when the number of missing values is large; it even becomes useless when the attributes are completely missing at the distribution level during the inference stage. Lastly, we also relate our work to algorithmic fairness topics, e.g., disparate learning processes (DLPs), in which the ethically concerned, privacy-related features are not available or impermissible to be used during runtime [33, 34]. A similar approach to DLPs is a doubly-robust counterfactual prediction model with additional handling of the confounding problem during training [35]. However, it differs from the common causal effect estimation as it assumes that one of the potential outcomes is a known constant for a binary treatment, and is hence inapplicable to the problem studied in this paper. ## 3 Methodology ### _Preliminaries_ For simplicity, we consider binary treatment \(t\) of 1 or 0 to denote the treated group and the control group, respectively. The individual treatment effect (ITE) for a variable vector \(\mathbf{x}\) is defined as: \[\tau(\mathbf{x})=\mathbb{E}[Y_{1}-Y_{0}|\mathbf{x}], \tag{1}\] where \(Y_{1}\) and \(Y_{0}\) are the unobserved potential outcomes with treatment \(t=1\) and \(t=0\) respectively.
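In practice, an estimator of the ITE in Eq. (1) fits one outcome model per treatment arm and differences their predictions. The following is a minimal illustrative sketch of that idea on synthetic data, using a plain two-model ("T-learner") setup with linear regressors; it is not the VEGAN model developed in this paper, and the data-generating process and all names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observational data: covariates X, binary treatment t,
# outcome y with a true constant treatment effect of 2.0.
n, d = 2000, 3
X = rng.normal(size=(n, d))
t = (rng.random(n) < 0.5).astype(float)  # randomized assignment for this toy example
y = X @ np.array([1.0, -0.5, 0.2]) + 2.0 * t + 0.1 * rng.normal(size=n)

def fit_linear(X, y):
    """Least-squares fit with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

# Fit one outcome model per treatment arm, then estimate
# tau(x) = E[Y1 - Y0 | x] as mu1(x) - mu0(x).
mu1 = fit_linear(X[t == 1], y[t == 1])
mu0 = fit_linear(X[t == 0], y[t == 0])
ite_hat = predict(mu1, X) - predict(mu0, X)
ate_hat = ite_hat.mean()  # close to the true effect 2.0 here
```

Under selection bias or runtime domain corruption, the two fitted models no longer see comparable covariate distributions, which is exactly the failure mode the balanced latent representation in this paper is designed to address.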
As a common practice in causal inference research, to validly identify the true treatment effect \(\tau(\mathbf{x})\) of instance \(\mathbf{x}\), we make the following standard assumptions. **Assumption 1** (Stable Unit Treatment Value Assumption): _The SUTVA assumption [16] states that: a) The potential outcomes for any unit do not vary with the treatment assigned to other units. b) For each unit, there are no different forms or versions of each treatment level, which leads to different potential outcomes._ **Assumption 2** (Unconfoundedness): _Treatment assignment is independent of the potential outcomes given the pre-treatment covariate \(\mathbf{x}\), i.e., \(t\perp\{Y_{0},Y_{1}\}|\mathbf{x}\)._ **Assumption 3** (Positivity): _For every instance \(\mathbf{x}\in\mathcal{X}\), we have its corresponded treatment assignment mechanism \(p(t|\mathbf{x})\), such that \(0<p(t=1|\mathbf{x})<1\)._ The assumptions further lead us to the proposition below. **Proposition 1** (Identifiability): _The causal effect is identifiable if and only if the SUTVA, the unconfoundedness, and the positivity assumptions hold._ **Proof 1**: _Under SUTVA and unconfoundedness, the ITE for instance \(\mathbf{x}\) is:_ \[\mathbb{E}[Y_{1}-Y_{0}|\mathbf{x}]= \mathbb{E}[Y_{1}|\mathbf{x}]-\mathbb{E}[Y_{0}|\mathbf{x}]\] \[= \mathbb{E}[Y_{1}|X=\mathbf{x},t=1]-\mathbb{E}[Y_{0}|X=\mathbf{x},t=0]\] \[= \mathbb{E}[y_{1}|X=\mathbf{x},t=1]-\mathbb{E}[y_{0}|X=\mathbf{x},t=0], \tag{2}\] _where \(y_{1}\) and \(y_{0}\) are the observed responses after the interventions \(t=1\) and \(t=0\) have been taken, respectively. The last terms are identifiable as we assume \(0<p(t=1|\mathbf{x})<1\). 
The first equality is by the linearity of expectation, the second equality is based on the unconfoundedness, and the third equality holds because the expected values of the observed outcomes \(\{y_{1},y_{0}\}\) equal those of the unobserved potential outcomes \(\{Y_{1},Y_{0}\}\)._ ### _Problem Definition_ Let \(\Psi:\mathcal{X}\times\{0,1\}\rightarrow\mathbb{R}\) be the hypothesis; our goal is to build the treatment effect regression model \(\Psi_{t}(\mathbf{x},t)=\mathbb{E}[y_{t}|X=\mathbf{x},T=t]\) with _observed outcome_ \(y_{t}\) based on the training data, such that the treatment effect for a test instance \(\mathbf{x}_{*}\) can be accurately recovered and estimated as \(\Psi_{1}-\Psi_{0}\). However, the _untestable unconfoundedness_ and _selection bias_ challenges arise when the observational dataset does not follow the RCT standard, making the trained models \(\Psi_{1}\) and \(\Psi_{0}\) unable to accurately reflect the true treatment outcomes for \(\mathbf{x}\). We perceive the observed covariates of the treated and control groups from the conventional domain shift perspective, in which covariate \(\mathbf{x}\) is a noisy measurement, normally less informative and more confounded [36, 37], than the domain-invariant latent representation \(\mathbf{z}\). Therefore, the unconfoundedness changes from \(t\perp\{Y_{0},Y_{1}\}|\mathbf{x}\) to \(t\perp\{Y_{0},Y_{1}\}|\mathbf{z}\). In addition to the treated and control groups from the in-sample set, this paper uniquely considers causal effect estimation where the out-of-sample set is affected by runtime domain corruption. In what follows, we formally define the runtime domain corruption problem. **Definition 1** (Runtime Domain Corruption): _We define each variable vector \(\mathbf{x}=[x_{1},x_{2},...,x_{d}]\in\mathbb{R}^{d}\) as a **non-zero** concatenation of categorical features (i.e., encodings) and numerical features.
During training, all entries \(x_{s}\), for \(1\leq s\leq d\), are available and assigned corresponding values. Then, during inference, runtime domain corruption occurs when: (1) the covariate distribution shifts in the test domain: \(p_{test}(\mathbf{x})\neq p_{train}(\mathbf{x})\); and (2) each vector \(\mathbf{x}\) contains an arbitrary number of unavailable variables \(x_{s^{\prime}}\), for \(1\leq s^{\prime}\leq d\), which are all zeroed out by setting \(x_{s^{\prime}}=0\)._ **Rationale of zero-padding.** Specifically, during runtime, the unavailable features are not straightforwardly discarded when performing prediction; instead, we pad zeros to entries that correspond to missing variables such that the dimensionality is kept unchanged. It is worth noting that, during training, a non-zero property is maintained for every instance \(\mathbf{x}\) whose features are all available. This can be easily achieved via standard preprocessing steps, e.g., rescaling/normalization for numerical features, and using 1 and -1 to respectively represent relevant and irrelevant categorical features in multi-hot encodings. Thus, using zero-padding to mark unknown/corrupted variables during runtime is viable, because the semantics of zeros are exclusively reserved for the unknown status of variables. Also, zero-padding is a more feasible practice in real applications, as each runtime instance \(\mathbf{x}\) may have an arbitrary number and combination of attributes missing, rendering it impractical to train a specific latent feature extractor for each case. In contrast, zero-padding is a more flexible and scalable approach for learning domain-invariant latent representations with a shared feature extractor, where all zero-valued entries of \(\mathbf{x}\) will be filtered out during projection. ### _Target Domain Error Upper Bound_ Shalit et al.
[10] show that the accuracy metric for causal inference - expected Precision in Estimation of Heterogeneous Effect (PEHE), denoted as \(\epsilon_{\text{PEHE}}\), is upper-bounded by both the trained model error \(\epsilon_{\text{F}}\) on actual outcomes and the distance between treated and control distributions, measured by integral probability metric (IPM). However, as their derived upper bound for \(\epsilon_{\text{PEHE}}\) does not consider runtime domain corruption on out-of-sample variables, we fill this gap by deriving the bound in **Theorem** 1. **Theorem** 1: _Let \(\phi:\mathcal{X}\rightarrow\mathcal{Z}\) be the invertible latent representation mapping function (a.k.a. feature extractor) with inverse \(\Phi\). Let \(\Psi:\mathcal{Z}\times\{0,1\}\rightarrow\mathbb{R}\) be the updated hypothesis that maps latent variables \(\mathbf{z}\in\mathcal{Z}\) to each treatment's outcome. Let \(\mathcal{F}=\{f|f:\mathcal{Z}\rightarrow\mathbb{R}\}\) be a family of functions. The source domain is the observational data for treated and control groups, and the target domain is the runtime test/inference set with corrupted variables. We derive the upper bound of target domain error (i.e., generalization error) as1:_ Footnote 1: We follow some notation conventions set in [10] and [38]. 
\[\epsilon_{\text{PEHE}}^{tr} \leq 2\bigg{[}\epsilon_{\text{F}}^{t=1}+\epsilon_{\text{F}}^{t=0}+B_{\phi}\bigg{(}\text{IPM}_{\mathcal{F}}(\mathbb{P}_{\phi}^{t=1},\mathbb{P}_{\phi}^{t=0}) \tag{3}\] \[+\frac{1}{2}\text{IPM}_{\mathcal{F}}(\mathbb{P}_{\phi}^{tr},\mathbb{P}_{\phi}^{sr})\bigg{)}\bigg{]},\] _where \(\epsilon_{\text{F}}^{t}\) denotes the factual training error, \(\mathbb{P}_{\phi}^{t}\) is the probability measure within treatment group \(t\) in the training set, \(\epsilon_{\text{PEHE}}^{tr}\) and \(\epsilon_{\text{PEHE}}^{sr}\) respectively indicate the target and source domain errors, \(\mathbb{P}_{\phi}^{tr}\) and \(\mathbb{P}_{\phi}^{sr}\) are probability measures which respectively denote the covariate distributions in the target domain and source domain, and \(B_{\phi}\) is a bounded constant._ To prove **Theorem** 1, we first provide some preliminary definitions. **Definition 2**: _Let \(\mathcal{F}=\{f|f:\mathcal{Z}\rightarrow\mathbb{R}\}\) be a family of functions. The distribution distance measure - integral probability metric (IPM) between the target and source distributions \(\mathbb{P}^{tr}\) and \(\mathbb{P}^{sr}\) over \(\mathcal{Z}\) is defined as:_ \[\text{IPM}_{\mathcal{F}}(\mathbb{P}^{tr},\mathbb{P}^{sr})=\sup_{f\in\mathcal{F}}\bigg{|}\int_{\mathcal{Z}}f(\mathbf{z})(p^{tr}(\mathbf{z})-p^{sr}(\mathbf{z}))d\mathbf{z}\bigg{|}. \tag{4}\] **Definition 3**: _Let \(\phi:\mathcal{X}\rightarrow\mathcal{Z}\) be the latent mapping, and let \(\Psi:\mathcal{Z}\times\{0,1\}\rightarrow\mathbb{R}\) be the updated hypothesis over the latent space \(\mathcal{Z}\); the estimated ITE for variable \(\mathbf{x}\) is:_ \[\hat{\tau}(\mathbf{x})=\Psi_{1}(\phi(\mathbf{x}),1)-\Psi_{0}(\phi(\mathbf{x}),0).
\tag{5}\] **Definition 4**: _The expected Precision in Estimation of Heterogeneous Effect (PEHE) of the causal model \(\{\phi,\Psi\}\) with squared loss metric \(L(\cdot,\cdot)\) is defined as:_ \[\epsilon_{\text{PEHE}}(\phi,\Psi)=\int_{\mathcal{X}}L_{\phi,\Psi}(\mathbf{x} )p(\mathbf{x})d\mathbf{x}, \tag{6}\] _where we denote \(L(\hat{\tau}(\mathbf{x}),\tau(\mathbf{x}))\) as \(L_{\phi,\Psi}(\mathbf{x})\) for notation simplicity. The \(\tau(\mathbf{x})\) is the true treatment effect defined in Eq. 1 and \(\hat{\tau}(\mathbf{x})\) is the estimated one defined in Eq. 5._ **Lemma 2**: _Let \(\phi:\mathcal{X}\rightarrow\mathcal{Z}\) be the invertible latent representation mapping function with inverse \(\Phi\). Let \(\Psi:\mathcal{Z}\times\{0,1\}\rightarrow\mathbb{R}\) be the updated hypothesis. Let \(\mathcal{F}=\{f|f:\mathcal{Z}\rightarrow\mathbb{R}\}\) be a family of functions. Assume we have \(B_{\phi}>0\) s.t. \(\frac{1}{B_{\phi}}L_{\phi,\Psi}(\Phi(\mathbf{z}))\in\mathcal{F}\). The tightness of target domain error w.r.t. the source domain one is bounded by the distribution distance denoted by IPM:_ \[|\epsilon_{\text{PEHE}}^{tr}-\epsilon_{\text{PEHE}}^{sr}| \tag{7}\] \[= \bigg{|}\int_{\mathcal{Z}}L_{\phi,\Psi}(\Phi(\mathbf{z}))p_{\phi}^ {tr}(\mathbf{z})d\mathbf{z}-\int_{\mathcal{Z}}L_{\phi,\Psi}(\Phi(\mathbf{z}) )p_{\phi}^{sr}(\mathbf{z})d\mathbf{z}\bigg{|}\] \[\leq B_{\phi}\text{IPM}_{\mathcal{F}}(\mathbb{P}_{\phi}^{tr}, \mathbb{P}_{\phi}^{sr}),\] _where \(\epsilon_{\text{PEHE}}^{tr}\) and \(\epsilon_{\text{PEHE}}^{sr}\) indicate target domain error and source domain error respectively, \(\mathbb{P}_{\phi}^{tr}\) and \(\mathbb{P}_{\phi}^{sr}\) denote covariate distribution in target domain and source domain respectively, and \(B_{\phi}\) is a bounded constant._ **Proof of Lemma 2**: _We denote the expected PEHE in Eq. 
6 in the target domain and source domain as \(\epsilon_{\text{PEHE}}^{tr}\) and \(\epsilon_{\text{PEHE}}^{sr}\) respectively; \(tr\) and \(sr\) indicate the test set and training set, where \(p^{tr}(\mathbf{x})\neq p^{sr}(\mathbf{x})\) if domain corruption exists._ \[\begin{split}|\epsilon_{\text{PEHE}}^{tr}-\epsilon_{\text{PEHE}}^{sr}|=&\bigg{|}\int_{\mathcal{X}}L_{\phi,\Psi}(\mathbf{x})p_{\phi}^{tr}(\mathbf{x})d\mathbf{x}-\int_{\mathcal{X}}L_{\phi,\Psi}(\mathbf{x})p_{\phi}^{sr}(\mathbf{x})d\mathbf{x}\bigg{|}\\ =&\bigg{|}\int_{\mathcal{Z}}L_{\phi,\Psi}(\Phi(\mathbf{z}))p_{\phi}^{tr}(\mathbf{z})d\mathbf{z}-\int_{\mathcal{Z}}L_{\phi,\Psi}(\Phi(\mathbf{z}))p_{\phi}^{sr}(\mathbf{z})d\mathbf{z}\bigg{|}\\ =&\bigg{|}B_{\phi}\int_{\mathcal{Z}}\frac{1}{B_{\phi}}L_{\phi,\Psi}(\Phi(\mathbf{z}))(p_{\phi}^{tr}(\mathbf{z})-p_{\phi}^{sr}(\mathbf{z}))d\mathbf{z}\bigg{|}\\ \leq& B_{\phi}\sup_{f\in\mathcal{F}}\bigg{|}\int_{\mathcal{Z}}f(\mathbf{z})(p_{\phi}^{tr}(\mathbf{z})-p_{\phi}^{sr}(\mathbf{z}))d\mathbf{z}\bigg{|}\\ =& B_{\phi}\text{IPM}_{\mathcal{F}}(\mathbb{P}_{\phi}^{tr},\mathbb{P}_{\phi}^{sr}).\end{split}\] _The first equality is by **Definition** 4, the second equality is by change of variables, the inequality is by the premise that \(\frac{1}{B_{\phi}}L_{\phi,\Psi}\) belongs to the function family \(\mathcal{F}\), and the last equality is by **Definition** 2 and the non-negativity of IPM._ **Proof of Theorem 1**: _Combining **Lemma** 2 with the auxiliary Theorem 1 in [10] concludes the proof of **Theorem** 1:_ \[\epsilon_{\text{PEHE}}^{tr}\leq \epsilon_{\text{PEHE}}^{sr}+B_{\phi}\text{IPM}_{\mathcal{F}}(\mathbb{P}_{\phi}^{tr},\mathbb{P}_{\phi}^{sr}) \tag{9}\] \[\leq 2[\epsilon_{\text{F}}^{t=1}+\epsilon_{\text{F}}^{t=0}\] \[+B_{\phi}(\text{IPM}_{\mathcal{F}}(\mathbb{P}_{\phi}^{t=1},
\mathbb{P}_{\phi}^{t=0})+\frac{1}{2}\text{IPM}_{\mathcal{F}}(\mathbb{P}_{\phi}^{tr},\mathbb{P}_{\phi}^{sr}))],\] _where the first inequality is by **Lemma** 2 and the second inequality is by the auxiliary Theorem 1 from [10]. We align the function family \(\mathcal{F}\) to the one used in [10], as different choices of function family \(\mathcal{F}\) will require different assumptions about the joint distribution \(p(\mathbf{z},t,y_{1},y_{0})\), the representation mapping function \(\phi\), and the hypothesis \(\Psi\). Thus, we share the same bounded constant \(B_{\phi}\)._ In summary, the upper bound given in **Theorem 1** suggests that, to bring down the target domain error \(\epsilon_{\text{PEHE}}^{tr}\) during runtime, we are essentially minimizing: (1) the prediction errors on observed outcomes; (2) the imbalance between treated and control groups; and (3) the discrepancy between the training and test sets. It guides our algorithm design in general for runtime causal inference. Note that if no domain corruption exists, which means \(\mathbb{P}_{\phi}^{tr}=\mathbb{P}_{\phi}^{sr}\) and thus \(\text{IPM}(\mathbb{P}_{\phi}^{tr},\mathbb{P}_{\phi}^{sr})=0\), the runtime error becomes identical to the source domain error, i.e., \(\epsilon_{\text{PEHE}}^{tr}=\epsilon_{\text{PEHE}}^{sr}\). ### _Variational Inference_ Our solution is built upon the variational autoencoder (VAE). To start with, in this section we introduce the minimization of the factual error \(\epsilon_{\text{F}}\), followed by that of the distribution disparity between treated and control groups, and then between the training and runtime domains. #### 3.4.1 Evidence Lower Bound For modelling the _observed treatment outcome_ \(y\), we use maximum likelihood estimation (MLE) to approximate the parameters.
For simplicity, \(\log\) is commonly used to decompose the joint marginal likelihood \(p(\mathbf{y})\) into: \[\begin{split}\log p(\mathbf{y})=&\sum_{k=1}^{N}\log p(y_{k})\\ =&\sum_{i=1}^{N_{1}}\log p(y_{i})+\sum_{j=1}^{N_{0}}\log p(y_{j}),\end{split} \tag{10}\] where \(\mathbf{y}=[y_{1},y_{2},\dots,y_{N}]\) is a vector hosting all \(N\) samples' observations, \(N=N_{1}+N_{0}\), with \(N_{1}\) and \(N_{0}\) respectively denoting the number of samples in the treated and control groups. Thus, to maximize the joint marginal log-likelihood of observing \(\mathbf{y}\), we can maximize each individual log-likelihood \(\log p(y)\). We assume that there exist a latent representation \(\mathbf{z}\) and a treatment \(t\) that causally determine the observed treatment response \(y_{t}\) in a probabilistic way, i.e., \(y_{t}\sim p(y|\mathbf{z},t)\), while the observed proxy \(\mathbf{x}\) has no causal relations with \(y\), only statistical correlations. Due to the potentially high dimensionality of \(\mathbf{z}\), the marginal likelihood \(p(\mathbf{y})\) is intractable. Here, we apply the variational methodology [39] to our scenario to tackle \(p(\mathbf{y})\) by establishing an encoder network \(\phi_{t}\) to learn the posterior latent representation \(\mathbf{z}_{t}\sim p_{\phi_{t}}(\mathbf{z}|\mathbf{x})\), and a decoder network \(\Psi_{t}\) to estimate the treatment response \(y_{t}\sim p_{\Psi_{t}}(y|\mathbf{z},t)\). According to the decomposed joint likelihood in Eq.
10, we can separately derive the evidence lower bound (ELBO\({}_{t}\)) for each treatment group \(t\) in a similar manner to [39] as follows: \[\begin{split}\sum_{i=1}^{N_{t}}\log p(y_{i})\geq&\text{ELBO}_{t}\\ =&\mathbb{E}_{p_{\phi_{t}}}[\log p_{\Psi_{t}}(y|\mathbf{z},t)]-D_{\text{KL}}(\mathbb{P}_{\phi_{t}}||\mathbb{Q}_{\mathbf{z}}),\end{split} \tag{11}\] where \(\mathbb{P}_{\phi_{t}}\) and \(\mathbb{Q}_{\mathbf{z}}\) are posterior and prior distributions respectively over the latent space \(\mathcal{Z}\). \(D_{KL}(\cdot)\) returns the Kullback-Leibler (KL) divergence between two distributions. As such, the task of maximizing the intractable \(\log p(\mathbf{y}_{t})\) can be indirectly solved by pushing up its associated ELBO\({}_{t}\), thus minimizing the factual error \(\epsilon_{\text{F}}^{t}\). According to the decomposition in Eq. 10, our objective is to maximize the sum of the two ELBOs for treated and control groups: \[\text{ELBO}=\sum_{t\in\{0,1\}}\text{ELBO}_{t}. \tag{12}\] It is worth noting that our derived bound ELBO can be easily extended from our binary treatment setting to scenarios that involve multiple treatments. #### 3.4.2 Treated/Control Domain Adaptation According to the second term in Eq. 11, for \(t\in\{0,1\}\), we have two KL divergence terms that regularize the posterior distributions \(\mathbb{P}_{\phi_{t}}\) towards the prior distribution \(\mathbb{Q}_{\mathbf{z}}\), i.e., \(D_{\text{KL}}(\mathbb{P}_{\phi_{1}}||\mathbb{Q}_{\mathbf{z}})\) and \(D_{\text{KL}}(\mathbb{P}_{\phi_{0}}||\mathbb{Q}_{\mathbf{z}})\). By pushing up the ELBO in Eq. 12, one can notice that both posteriors \(\mathbb{P}_{\phi_{1}}\) and \(\mathbb{P}_{\phi_{0}}\) are regularized to approach the same prior distribution \(\mathbb{Q}_{\mathbf{z}}\), e.g., the standard normal distribution \(\mathcal{N}(\mathbf{0},\mathbf{1})\).
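For a diagonal Gaussian posterior \(\mathcal{N}(\boldsymbol{\mu},\text{diag}(\boldsymbol{\sigma}^{2}))\) and the standard normal prior \(\mathcal{N}(\mathbf{0},\mathbf{1})\), the KL term in Eq. (11) has a well-known closed form; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def kl_to_std_normal(mu, sigma):
    # Closed-form D_KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims:
    # 0.5 * sum( sigma^2 + mu^2 - 1 - log(sigma^2) )
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma), axis=-1)
```

The term vanishes exactly when the posterior matches the prior, which is what pulls both treated and control posteriors towards the same distribution.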
Thus, the domain adaptation (DA) for both groups can be naturally achieved to balance their latent distributions and counter selection bias by adjusting the priors using the VAE framework. It is worth noting that KL divergence is an unbounded asymmetric distribution distance measure [40] which does not belong to the IPM family, so we replace it with a bounded symmetric distribution similarity measurement in Section 3.5 as a better approximation. #### 3.4.3 Training/Runtime Domain Adaptation In addition to the DA across treated and control groups within the training set, we would also like to perform DA between the entire training and runtime sets to minimize the tightness bound \(B_{\phi}\text{IPM}_{\mathcal{F}}(\mathbb{P}_{\phi}^{tr},\mathbb{P}_{\phi}^{sr})\) given in **Theorem 1** and thus alleviate runtime domain corruption. As such, for a well-trained model, we aim to make the out-of-sample performance as good as the in-sample performance, i.e., the out-of-sample results would not deviate drastically from the in-sample ones while keeping good in-sample performance. Intuitively, if the VAE prediction framework were applied to the full runtime test set \(\{(\mathbf{x}_{j}^{tr},t_{j}^{tr},y_{j}^{tr})\}_{j=1}^{N^{\prime}}\) on the target domain, one would end up with an objective to be maximized similar to the ELBO\({}_{t}\) presented in Eq. 11 as follows: \[\Gamma_{\phi_{t}^{tr},\Psi_{t}^{tr}}=\mathbb{E}_{p_{\phi_{t}}^{tr}}[\log p_{\Psi_{t}}^{tr}(y|\mathbf{z},t)]-D_{\text{KL}}(\mathbb{P}_{\phi_{t}}||\mathbb{Q}_{\mathbf{z}}^{tr}). \tag{13}\] However, the labels \(y^{tr}\) and treatments \(t^{tr}\) are unknown in practice, and such an objective cannot be optimized. Since the only available information is the runtime covariates, which can be used to extract the domain-invariant representation via DA, the second term in Eq. 13 can be utilised for this purpose with a mild modification.
Precisely, we alternatively work around this by minimizing the KL divergence between the runtime posterior \(\mathbb{P}_{\phi}^{tr}\) and the entire training set posterior \(\mathbb{P}_{\phi}^{sr}\), namely \(D_{\text{KL}}(\mathbb{P}_{\phi}^{tr}||\mathbb{P}_{\phi}^{sr})\), where \(\phi\) is a shared feature extractor. Thus, to achieve the second-stage DA, our proposed ultimate evidence lower bound (ELBO\({}_{ulti}\)) for the intractable joint log-likelihood \(p(y)\) is: \[\log p(y) \geq\text{ELBO} \tag{14}\] \[\geq\text{ELBO}_{ulti}\] \[=\text{ELBO}-D_{\text{KL}}(\mathbb{P}_{\phi}^{tr}||\mathbb{P}_{\phi}^{sr}).\] ### _Adversarial Learning_ Thus far, we have three \(D_{\text{KL}}(\cdot)\) terms in our optimization objective: two from the in-sample treated/control groups, which align the posteriors to the same prior \(\mathcal{N}(\mathbf{0},\mathbf{1})\), and one from out-of-sample train/test adaptation that aligns the posteriors of the training set and runtime set. As the direct calculation of KL divergence is computationally inefficient and may even be infeasible with high dimensional data [41, 42], we propose to implicitly minimize them and unify these terms into a compact generative adversarial network (GAN) [25] shown in Figure 1. Apart from that, optimizing the minimax game in GAN is equivalent to minimizing the Jensen-Shannon divergence [25], which is a bounded symmetric distribution similarity measurement [43]. This technique resonates with adversarial variational Bayes introduced in [44], while our motivation and implementation differ from theirs. Here, we present our **V**ariational auto**E**ncoder **G**enerative **A**dversarial **N**etwork runtime counterfactual regression model, coined as VEGAN. In what follows, we unfold the design details of VEGAN. Firstly, we instantiate \(\phi\), the shared feature extractor among \(\mathbf{x}_{t=1}^{sr}\), \(\mathbf{x}_{t=0}^{sr}\) and \(\mathbf{x}^{tr}\).
It includes \(G_{\phi}\) and the following two multi-layer perceptrons (MLPs) that map all the data from the original \(\mathbb{R}^{d}\) into latent space \(\mathbb{R}^{l}\). Due to the variational nature of the model, the \(j\)-th latent dimension of individual \(i\) (with training covariates \(\mathbf{x}_{i}^{sr}\)) is modelled by a Gaussian distribution with its dedicated mean \(\mu_{ij}\) and variance \(\sigma_{ij}^{2}\) as follows: \[p_{\phi}^{sr}(\mathbf{z}_{i}|\mathbf{x}_{i})=\prod_{j=1}^{l}\mathcal{N}(\mu_{ij},\sigma_{ij}^{2}), \tag{15}\] where mean \(\mu_{ij}\) and standard deviation \(\sigma_{ij}\) are respectively the \(j\)-th element of latent representations \(\mathbf{\mu}_{i}\) and \(\mathbf{\sigma}_{i}\). In VEGAN, \(\mathbf{\mu}_{i},\mathbf{\sigma}_{i}\in\mathbb{R}^{l}\) are denoted as: \[\begin{cases}\mathbf{\mu}_{i}=\text{MLP}_{\mu}(G_{\phi}(\mathbf{x}_{i}^{sr}))\\ \mathbf{\sigma}_{i}=\text{MLP}_{\sigma}(G_{\phi}(\mathbf{x}_{i}^{sr}))\end{cases}, \tag{16}\] which allows us to obtain the latent representation \(\mathbf{z}_{i}\) for the subsequent DAs and inference. Secondly, for the treated/control group DA, we propose an adversarial way to implicitly reduce inter-domain distribution distance. In this regard, \(G_{\phi}\) essentially serves as our generator, and a discriminator \(D_{\delta}\) is thus designed to pair up with the generator to facilitate adversarial learning. The minimax game is designed as: the discriminator \(D_{\delta}\) tries to differentiate the standard Gaussian sample \(\mathbf{n}\sim\mathcal{N}(\mathbf{0},\mathbf{1})\) from \(\mathbf{z}^{sr}\) learned from the training sample; in the meantime, feature extractor \(G_{\phi}\) tries to update the latent representation \(\mathbf{z}^{sr}\) to make it indistinguishable from \(\mathbf{n}\). When an equilibrium state is reached, the treated and control domains are well adapted because both latent representations \(\mathbf{z}^{sr}\) fall in the same distribution.
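A minimal numpy sketch of the two-headed extractor in Eqs. (15)-(16) and the reparameterized sampling it enables; single linear layers and the hypothetical weight matrices stand in for \(G_{\phi}\) and the two MLP heads:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_shared, W_mu, W_logvar):
    # Shared trunk G_phi, then the MLP_mu / MLP_sigma heads of Eq. (16);
    # the sigma head outputs a log-variance so that sigma stays positive.
    h = np.tanh(x @ W_shared)
    mu = h @ W_mu
    sigma = np.exp(0.5 * (h @ W_logvar))
    return mu, sigma

def reparameterize(mu, sigma):
    # z = mu + sigma * eps with eps ~ N(0, I): sampling from Eq. (15)
    # while keeping the path to mu and sigma differentiable.
    return mu + sigma * rng.standard_normal(mu.shape)
```

The reparameterized form is what later allows gradients \(\nabla_{\phi}\) to flow through the sampling step during training.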
Thus, the two terms \(D_{\text{KL}}(\mathbb{P}_{\phi_{1}}||\mathbb{Q}_{\mathbf{z}})\) and \(D_{\text{KL}}(\mathbb{P}_{\phi_{0}}||\mathbb{Q}_{\mathbf{z}})\) are minimized in an adversarial way. As Figure 1 shows, the output \(p_{i}=D_{\delta}(\mathbf{w}_{i})\) is the scalar probability of being a Gaussian sample, where \(\mathbf{w}_{i}=\eta_{i}\mathbf{n}_{i}+(1-\eta_{i})\mathbf{z}_{i}^{sr}\) with \(\eta_{i}\in\{1,0\}\) labelling the \(i\)-th sample from two buckets (1 for Gaussian samples, and 0 for training samples). Note that \(\mathbf{n}\) is resampled for every training instance \(i\in\mathcal{I}\), where \(\mathcal{I}\) is the collection of instances from the training set. In our supervised learning setting, we have the cross-entropy loss for the discriminator \(D_{\delta}(\cdot)\): \[l(\mathbf{w}_{i})=\eta_{i}\log D_{\delta}(\mathbf{w}_{i})+(1-\eta_{i})\log(1-D_{\delta}(\mathbf{w}_{i})). \tag{17}\] Then, in an adversarial setting, the minimization of the two KL-divergence terms for treated/control domain adaptation is replaced by the following: \[\begin{split}&\min_{\phi}\max_{\delta}\mathbb{E}_{\mathcal{I}}[\eta_{i}\log D_{\delta}(\mathbf{w}_{i})+(1-\eta_{i})\log(1-D_{\delta}(\mathbf{w}_{i}))]\\ &\iff\min_{\phi}\max_{\delta}\mathbb{E}_{\mathcal{N}(\mathbf{0},\mathbf{1})}[\log D_{\delta}(\mathbf{n})]+\mathbb{E}_{\mathcal{I}}[\log(1-D_{\delta}(\mathbf{z}_{i}^{sr}))].\end{split} \tag{18}\] Analogously, for the train/runtime domain adaptation, we design another discriminator \(D_{\beta}(\cdot)\) to form the second GAN system between \(G_{\phi}(\cdot)\) and \(D_{\beta}(\cdot)\), where \(D_{\beta}(\cdot)\) predicts the probability \(p_{j}^{\prime}\) that sample \(j\) comes from the source domain (i.e., the training set), with \(j\in\mathcal{J}\) and \(\mathcal{J}\) denoting the collection of test-set instances. The only difference from the first GAN system is that it takes the training sample as real while the runtime sample is treated as fake.
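The mixed batch \(\mathbf{w}_{i}\) and the cross-entropy of Eq. (17) can be sketched as follows; the function names are ours, and in VEGAN the probabilities would come from \(D_{\delta}\) or \(D_{\beta}\):

```python
import numpy as np

def mixed_batch(n_samples, z_samples):
    # w_i = eta_i * n_i + (1 - eta_i) * z_i, with eta = 1 for Gaussian samples
    # and eta = 0 for latent representations of training samples.
    w = np.concatenate([n_samples, z_samples])
    eta = np.concatenate([np.ones(len(n_samples)), np.zeros(len(z_samples))])
    return w, eta

def discriminator_loss(eta, p):
    # Batch-averaged cross-entropy of Eq. (17), negated so the discriminator
    # minimizes it; the generator instead tries to push it up.
    eps = 1e-8
    return -np.mean(eta * np.log(p + eps) + (1 - eta) * np.log(1.0 - p + eps))
```

A perfect discriminator (predicting exactly the bucket labels) drives this loss to zero, while a maximally wrong one blows it up, matching the minimax roles in Eq. (18).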
Thus, \(D_{\text{KL}}(\mathbb{P}_{\phi}^{tr}||\mathbb{P}_{\phi}^{sr})\) is replaced by the following: \[\min_{\phi}\max_{\beta}\mathbb{E}_{\mathcal{I}}[\log D_{\beta}(\mathbf{z}_{i}^{sr})]+\mathbb{E}_{\mathcal{J}}[\log(1-D_{\beta}(\mathbf{z}_{j}^{tr}))]. \tag{19}\] Finally, to build the probabilistic model \(p(y|\mathbf{z},t)\), we model the two treatment classes through two separate MLPs, namely \(\Psi_{1}\) and \(\Psi_{0}\); thus the general representation of modelling the observed outcome \(y\) for individual \(i\) is given as \(p_{\Psi_{t_{i}}}(y_{i}|\mathbf{z}_{i},t_{i})=\mathcal{N}(\hat{\mu}_{t,i},\hat{\sigma}_{i}^{2})\), where \(\hat{\mu}_{t,i}=\Psi_{t_{i}}(\mathbf{z}_{i}^{sr},t_{i})\), and we follow [11] to set \(\hat{\sigma}_{i}^{2}=1\) for simplicity. In a nutshell, to promote a computationally efficient algorithm, we propose to minimize the following loss function \(\mathcal{L}\) along with optimizing the minimax game together such that the ELBO\({}_{ulti}\) in Eq. 14 will be maximized: \[\min_{\phi,\Psi_{1},\Psi_{0}}\max_{\delta,\beta} \ \bigg{\{}\mathbb{E}_{\mathcal{N}}[\log D_{\delta}(\mathbf{n})]+\mathbb{E}_{\mathcal{Z}^{sr}}[\log(1-D_{\delta}(\mathbf{z}))]\] \[+\mathbb{E}_{\mathcal{Z}^{sr}}[\log D_{\beta}(\mathbf{z})]+\mathbb{E}_{\mathcal{Z}^{tr}}[\log(1-D_{\beta}(\mathbf{z}))]+\mathcal{L}\bigg{\}}, \tag{20}\] where \[\mathcal{L}=-\left(\mathbb{E}_{p_{\phi_{1}}^{sr}}[\log p_{\Psi_{1}}(y|\mathbf{z},t=1)]+\mathbb{E}_{p_{\phi_{0}}^{sr}}[\log p_{\Psi_{0}}(y|\mathbf{z},t=0)]\right). \tag{21}\] We summarize our VEGAN model optimization scheme in Algorithm 1. Please note that the notation changes accordingly, as the reparameterization trick \(\mathbf{z}=\omega(\boldsymbol{\mu}_{\phi},\boldsymbol{\sigma}_{\phi}^{2},\boldsymbol{\epsilon})\) [39] is applied as a necessity to obtain the gradient \(\nabla_{\phi}\) for the feature extractor \(G_{\phi}\). Also, the original minimax game in Eq.
20 is adjusted to the conventional double-minimization form for gradient descent. ## 4 Experiments In this section, we evaluate the proposed VEGAN framework in dealing with runtime domain corruption by answering the following research questions (RQs): * **RQ1**: How does VEGAN perform compared with other state-of-the-art models? * **RQ2**: How effective is the proposed dual-stage DA in VEGAN? * **RQ3**: Is VEGAN computationally efficient compared to other VAE-based models? * **RQ4**: As a classic solution to missing variables in prediction tasks, is data imputation on par with VEGAN's performance when handling runtime domain corruption? * **RQ5**: Is our proposed second-stage plug-in applicable to other existing methods? ### _Experimental Setup_ #### 4.1.1 Datasets and Domain Corruption Simulation We utilize two popular semi-synthetic datasets in the causal inference literature, which are introduced below. * **Infant Health and Development Program (IHDP) [45]**. The IHDP dataset contains 25 covariates and 747 samples, assessing the effectiveness of early childhood interventions for low-birth-weight infants. To evaluate the causal model, the treatment outcomes are simulated according to [45]. In our test setting, seven privacy-related features are selected as target variables, i.e., (momage, sex, twin, b.marr, cig, drugs, work_dur), which are corrupted at different corruption levels (CLs), where the CL denotes the severity of the domain corruption ranging from 0% to 100%, while the remaining 18 features are left unchanged. This \begin{table} \begin{tabular}{c c c c c} \hline Module & \#Layers & \#Neurons & Learning Rate & Weight Decay \\ \hline \(G_{\phi}\) & 3 & 100 & \\ \(\Psi_{1}\) & 2 & 200 & \\ \(\Psi_{0}\) & 2 & 200 & \\ \(D_{\delta}\) & 2 & 100 & \\ \hline \end{tabular} \end{table} TABLE I: Tuned hyperparameters of VEGAN. Fig. 1: Unifying the KL-divergences by GAN.
The black flows between \(G_{\phi}\) and MLPs denote the internal link, as the feature extractor is two-headed, modelling the mean and standard deviation of the latent distribution. The pink flows indicate second-stage domain adaptation between train and test sets. is to mimic a typical runtime domain corruption scenario where individuals provide no or falsified privacy-related information to the trained model. We test \(\text{CL}\in\{5\%,12.5\%,20\%,33.3\%,100\%\}\) on IHDP. * **Atlantic Causal Inference Conference (ACIC) 2019 [46]**. The ACIC 2019 dataset is a high-dimensional dataset of 200 covariates and 1,000 samples, which is drawn from publicly available data, with the treatment outcomes also simulated. In our test setting, since there is no clear definition of sensitive features, we treat all covariates as target variables for runtime domain corruption. As such, we test CL in \(\{5\%,12.5\%,20\%,33.3\%\}\), given that CL=100% will wipe out all the covariates in ACIC. Each dataset is randomly split at a 3:1 ratio into training and test sets. As per our definition, runtime domain corruption entails both a shift in the covariate distribution and missing values in the test set. To simulate the distribution shift, for each target feature \(x_{s}\in\mathbf{x}_{i}\), we perform the following with the probability specified by each CL: (1) we add noise drawn from Gaussian distribution \(\mathcal{N}(\bar{\mu},0.1)\) to \(x_{s}\) if \(x_{s}\) is continuous; (2) we flip its value if \(x_{s}\) is binary. To simulate the missing values, we scan each target feature \(x_{s}\) and drop it (via zero-padding) with probability CL. The two corruption steps are performed independently on the same test set. #### 4.1.2 Baselines and Evaluation Metrics We compare VEGAN with nine causal inference baselines, listed as follows: * **TARNet**[10] is a base deep learning framework with a shared feature extractor and two decoders modelling the treated and control effects, respectively.
* **CFR\({}_{\text{WASS}}\)[10]** is a variant of TARNet, with a latent distribution balancing regularization (Wasserstein distance) to overcome the confounding bias introduced by the imbalance between the treated and control groups. * **CEVAE**[11] is a variational autoencoder framework which focuses on modelling the robust latent variable to handle the confounding bias from a probabilistic perspective. * **SITE**[47] explores the importance of local similarity preservation as a constraint to improve ITE estimation, and proposes a deep representation learning method to help preserve the local similarity and balance data distribution altogether. * **Dragonnet\({}_{\text{Base}}\)[12]** is based on TARNet, but it additionally provides an end-to-end procedure for predicting the propensity score to adjust the confounding bias when estimating the treatment effects. * **Dragonnet\({}_{\text{TR}}\)[12]** is built on top of Dragonnet\({}_{\text{Base}}\); it further introduces a novel targeted regularization based on non-parametric estimation theory, which provides an asymptotic property with a suitable downstream estimator. * **BTARNET**[9] enhances the decoders of TARNet with the Monte Carlo dropout technique to quantify the uncertainty when estimating the treatment effect. * **BCEVAE**[9] takes CEVAE as a base model, and incorporates the Monte Carlo dropout into its generative network for uncertainty quantification. * **TEDVAE**[28] is a latent variable disentanglement model based on a three-headed variational autoencoder, which tries to learn the disentangled latent instrumental, risk, and confounding factors, respectively, from the observed covariates. #### 4.1.3 Implementation Our model is implemented with PyTorch [48]. The hyperparameters are tuned according to the models' performance on the validation set. Our tuned hyperparameters are shown in Table I.
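The corruption protocol of Section 4.1.1 (Gaussian noise for continuous features or value flips for binary ones, applied with probability CL, followed by independent zero-padding at the same rate) can be sketched as follows. The function name and noise mean are illustrative, and we interpret "flipping" a binary feature as negating its +1/-1 encoding, which is an assumption on our part:

```python
import numpy as np

def corrupt(X, is_binary, cl, noise_mean=0.0, rng=None):
    # Simulates runtime domain corruption on a test matrix X (rows = instances):
    # 1) shift: with prob. cl, add N(noise_mean, 0.1) noise to continuous
    #    features or flip binary features;
    # 2) missingness: independently zero out entries with prob. cl
    #    (zero-padding marks unavailable variables).
    if rng is None:
        rng = np.random.default_rng()
    X = X.astype(float)
    shift = rng.random(X.shape) < cl
    noise = rng.normal(noise_mean, 0.1, size=X.shape)
    X = np.where(shift & ~is_binary, X + noise, X)
    X = np.where(shift & is_binary, -X, X)  # flip the +1/-1 encodings
    drop = rng.random(X.shape) < cl
    return np.where(drop, 0.0, X)
```

Performing the two steps with independent random masks mirrors the paper's setup, where distribution shift and missingness are injected separately on the same test set.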
All the experiments are conducted with an RTX-3090 on the Ubuntu 22.04 LTS platform where GPU training is enabled; otherwise, the 12th Gen Intel i7-12700K 12-Core 20-Thread CPU is used. ### _Performance Evaluation (RQ1)_ #### 4.2.1 Out-of-Sample Prediction under Runtime Domain Corruption **IHDP Dataset**. For predictions on the corrupted, out-of-sample instances, we conduct the tests on the test set with five corruption ratios \(\text{CL}\in\{5\%,12.5\%,20\%,33.3\%,100\%\}\). Notably, \(\text{CL}=100\%\) represents an extreme case where all the seven sensitive features are completely inaccessible during runtime and only the remaining 18 variables are available for prediction. As Table II demonstrates, VEGAN yields the second-best performance when the domain corruption is relatively restrained, and obtains the highest accuracy after the corruption ratio increases to and beyond 20%. The best baseline is \(\text{CFR}_{\text{WASS}}\) when CL is low, but it overfits the training set significantly and thus does not generalize to a higher domain corruption level, while VEGAN is more robust to the stronger corruption on IHDP's private variables. **ACIC**. Since there is no clear definition for all 200 features on the ACIC 2019 dataset, we allow the corruption to take place for all the features in the ACIC dataset with a ratio of \(\text{CL}\in\{5\%,12.5\%,20\%,33.3\%\}\). With this, we can mimic situations where individuals can withhold an arbitrary combination of variables in privacy-sensitive applications. Note that we omit \(\text{CL}=100\%\) in the ACIC dataset as it will set all variables to zero and thus make any predictions infeasible. As a result, VEGAN outperforms all the other models for out-of-sample prediction as shown in Table III.
#### 4.2.2 In-Sample Prediction without Runtime Domain Corruption

Besides the out-of-sample prediction under runtime corruption, we also investigate the traditional in-sample inference, where no corruption happens, i.e., there is neither distribution shift nor missing variables. Tables IV and V show the in-sample prediction results on both CATE and ITE estimation, for which our model performs the best in estimating CATE while staying competitive for ITE estimation on the IHDP dataset, and outperforms all the other models on the ACIC dataset.

#### 4.2.3 Volatility Analysis

It is noted that when the domain corruption level climbs, the fluctuations of prediction errors are small in magnitude on the ACIC 2019 dataset. To better quantify the advantage of VEGAN under domain corruption on ACIC, we analyse the deviation (\(\Delta\)) of each model's performance between in-sample and corrupted prediction tasks in Figure 2, i.e., \(\Delta=100\%\times|\epsilon_{\text{in-sample}}-\epsilon_{\text{corrupted}}|/ \epsilon_{\text{in-sample}}\). \(\Delta\) quantifies the instability of the model, as we commonly rely on models obtained with the training set and prefer lower generalization errors. All models become more volatile as CL increases, while VEGAN maintains excellent stability with only 0.22% variation at corruption level \(33.3\%\) and achieves the best accuracy in terms of \(\sqrt{\epsilon_{\text{PEHE}}}\).

### _Effectiveness of Second-Stage DA (RQ2)_

As VEGAN's main highlight is the second-stage adversarial DA as a plug-in component, we conduct an ablation study to compare the performance of VEGAN and VEGAN\({}_{\text{I}}\) on both datasets, where VEGAN\({}_{\text{I}}\) is a degraded version with the second-stage DA removed. The results in Figure 3 indicate that, when CL is low, the two models are comparable. However, when CL goes higher, the advantage of the second-stage plug-in becomes significant.
Thus, with our proposed second-stage DA, VEGAN is shown to have higher generalization ability than VEGAN\({}_{\text{I}}\) across different scenarios.

### _Computational Efficiency & Stability of VEGAN (RQ3)_

One core motivation for utilizing GAN to replace the straightforward KL divergence optimization is to preserve training efficiency under high dimensionality. Hence, we further test VEGAN's efficiency by comparing its training time (in seconds) per 100 epochs with CEVAE and VEGAN\({}_{\text{I}}\). To ensure a fair comparison, the tests are performed on the 12th Gen Intel i7-12700K 12-Core 20-Thread CPU on Ubuntu 22.04 LTS. Figure 4 shows that VEGAN\({}_{\text{I}}\), which can be viewed as an amplified version of CEVAE with the introduction of GAN, has significantly faster training speed (over 6\(\times\) speedup). Furthermore, the introduction of our second-stage adversarial DA in VEGAN is still able to maintain high computational efficiency, witnessed by over 4\(\times\) speedup over CEVAE. As GAN is known to be unstable during training, we provide the stability analysis in terms of prediction loss convergence and equilibrium status between the feature extractor and discriminators.

\begin{table} \begin{tabular}{l c c c c c} \hline Model & 5\% & 12.5\% & 20\% & 33.3\% & 100\% \\ \hline TARNet & \(1.725\pm.17\) & \(2.216\pm.26\) & \(2.455\pm.26\) & \(3.524\pm.42\) & \(5.845\pm.76\) \\ CFR\({}_{\text{WASS}}\) & \(\mathbf{1.610\pm.15}\) & \(\mathbf{2.070\pm.24}\) & \(2.469\pm.27\) & \(3.503\pm.42\) & \(5.979\pm.85\) \\ SITE & \(1.794\pm.19\) & \(2.136\pm.25\) & \(2.488\pm.27\) & \(3.427\pm.43\) & \(5.531\pm.76\) \\ Dragonnet\({}_{\text{Base}}\) & \(2.111\pm.24\) & \(2.272\pm.26\) & \(2.507\pm.28\) & \(3.190\pm.39\) & \(4.521\pm.60\) \\ Dragonnet\({}_{\text{TR}}\) & \(2.023\pm.23\) & \(2.219\pm.25\) & \(2.516\pm.28\) & \(3.221\pm.40\) & \(4.760\pm.63\) \\ CEVAE & \(3.074\pm.37\) & \(2.910\pm.32\) & \(2.987\pm.35\) & \(3.672\pm.46\) & \(4.164\pm.53\) \\ BTARNET & \(2.375\pm.28\) & \(2.457\pm.29\) & \(2.566\pm.29\) & \(3.080\pm.39\) & \(4.229\pm.50\) \\ BCEVAE & \(2.506\pm.30\) & \(2.598\pm.32\) & \(2.783\pm.34\) & \(3.136\pm.39\) & \(4.170\pm.52\) \\ TEDVAE & \(2.232\pm.32\) & \(3.237\pm.34\) & \(2.582\pm.39\) & \(3.347\pm.41\) & \(4.453\pm.62\) \\ VEGAN & \(1.720\pm.16\) & \(2.099\pm.23\) & \(\mathbf{2.326\pm.25}\) & \(\mathbf{2.954\pm.35}\) & \(\mathbf{3.918\pm.47}\) \\ \hline \end{tabular} \end{table} TABLE II: \(\sqrt{\epsilon_{\text{PEHE}}}\) of out-of-sample prediction on the IHDP dataset with different corruption levels on private features.

Fig. 3: Ablation study on the second-stage adversarial plug-in of the VEGAN framework.

\begin{table} \begin{tabular}{l c c c c} \hline Model & 5\% & 12.5\% & 20\% & 33.3\% \\ \hline TARNet & \(0.813\pm.05\) & \(0.738\pm.04\) & \(0.798\pm.05\) & \(0.754\pm.04\) \\ CFR\({}_{\text{WASS}}\) & \(0.730\pm.04\) & \(0.655\pm.03\) & \(0.738\pm.04\) & \(0.644\pm.04\) \\ SITE & \(1.482\pm.08\) & \(1.361\pm.08\) & \(1.365\pm.09\) & \(1.586\pm.10\) \\ Dragonnet\({}_{\text{Base}}\) & \(2.250\pm.02\) & \(1.292\pm.02\) & \(2.142\pm.02\) & \(2.063\pm.02\) \\ Dragonnet\({}_{\text{TR}}\) & \(2.517\pm.02\) & \(2.456\pm.02\) & \(2.406\pm.02\) & \(2.342\pm.02\) \\ CEVAE & \(0.646\pm.02\) & \(0.629\pm.02\) & \(0.594\pm.02\) & \(0.581\pm.03\) \\ BTARNET & \(0.705\pm.02\) & \(0.652\pm.02\) & \(0.653\pm.02\) & \(0.671\pm.02\) \\ BCEVAE & \(0.495\pm.02\) & \(0.507\pm.02\) & \(0.492\pm.02\) & \(0.532\pm.01\) \\ TEDVAE & \(0.736\pm.03\) & \(0.702\pm.03\) & \(0.775\pm.03\) & \(0.674\pm.03\) \\ VEGAN & \(\mathbf{0.490\pm.01}\) & \(\mathbf{0.493\pm.01}\) & \(\mathbf{0.471\pm.01}\) & \(\mathbf{0.455\pm.00}\) \\ \hline \end{tabular} \end{table} TABLE III: \(\sqrt{\epsilon_{\text{PEHE}}}\) of out-of-sample prediction on the ACIC dataset with different corruption levels on all features.

Fig. 2: Performance volatility \(\Delta\) of all models on the ACIC dataset under different CLs.
Figure 5 shows the models' convergence on root mean square error (RMSE) during training. It is noted that a higher corruption level brings more challenges in the adversarial training, but we observe an equilibrium state in the majority of the cases. Taking the more challenging 33.3% CL in the ACIC dataset as an example, both discriminators (for treated/control and training/runtime adaptations) can quickly converge to the equilibrium by returning an average binary cross-entropy loss of 0.69 (\(\approx\ln 2\)), which means the discriminators are completely deceived by the feature extractors, and always give 0.5 probability for the samples from each of the groups. As such, training VEGAN in an adversarial setting is completely attainable.

### _Comparison with Imputation Method (RQ4)_

As imputation is a natural choice to handle missing values, we test the effectiveness of VEGAN against data imputation methods on ACIC's corrupted out-of-sample test sets. Specifically, we implement the imputation algorithm MICE [49], which has been widely adopted in treatment effect estimation [31, 32]. We denote imputation-enhanced models with "*" in Table VI. The results indicate that, when the corruption rate is low, using imputation is generally helpful for slightly increasing the prediction performance compared to Table III, but the improvements remain marginal and less significant compared with VEGAN. In short, data imputation has very limited benefits under the domain corruption setting. Furthermore, in scenarios where an attribute is completely missing for all instances, it is infeasible to impute this attribute based on its distribution within existing test samples for prediction.

### _Applicability of Second-Stage DA to Other Baselines (RQ5)_

To demonstrate the applicability of our proposed second-stage adversarial DA plug-in to other state-of-the-art models, we study its compatibility with the most representative baseline TARNet.
The experiments are conducted using the ACIC dataset, and the results are presented in Table VII. We denote the TARNet with the adversarial plug-in as \(\text{TARNet}_{+}\). As the results suggest, there is a transferable benefit to the other baseline from our proposed second-stage adversarial plug-in: when the corruption level becomes higher, the benefit of the second-stage domain adaptation is enlarged. When the adversarial plug-in is in use, it effectively helps TARNet reduce prediction risks under runtime domain corruption, as the volatility of \(\text{TARNet}_{+}\) is stabilized at around 2%.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Model & 5\% & 12.5\% & 20\% & 33.3\% \\ \hline \(\text{TARNet}^{*}\) & \(0.807\pm.04\) & \(0.728\pm.04\) & \(0.783\pm.04\) & \(0.763\pm.04\) \\ \(\text{CFR}_{\text{WASS}}^{*}\) & \(0.722\pm.04\) & \(0.640\pm.03\) & \(0.710\pm.04\) & \(0.679\pm.04\) \\ \(\text{SITE}^{*}\) & \(1.322\pm.08\) & \(1.210\pm.07\) & \(1.358\pm.08\) & \(1.268\pm.08\) \\ \hline \hline \end{tabular} \end{table} TABLE VI: \(\sqrt{\epsilon_{\text{PEHE}}}\) of imputation-enhanced baselines (denoted by \(*\)) on the ACIC dataset.

## 5 Conclusion

This paper formalizes the runtime causal inference problem under domain corruption, where novel strategies are proposed to counter the imbalance between treated and control groups and the inter-domain discrepancy between the training and inference domains. We further adopt adversarial learning to replace the direct calculation of KL-divergence to improve computational efficiency. Our proposed VEGAN framework with second-stage domain adaptation outperforms other state-of-the-art methods under the runtime domain corruption setting on semi-synthetic and fully-synthetic benchmark datasets. In addition, the second-stage adversarial plug-in is demonstrated to be applicable to off-the-shelf models to reduce generalization errors.
## Acknowledgement This work is supported by the Australian Research Council under the streams of Industrial Transformation Training Centre (No. IC200100022), Future Fellowship (No. FT210100624), Discovery Project (No. DP190101985), and Discovery Early Career Researcher Award (No. DE230101033).
2303.10818
A Re-Examination of the Foundations of Cost of Capital for Regulatory Purposes
In regulatory proceedings, few issues are more hotly debated than the cost of capital. This article formalises the theoretical foundation of cost of capital estimation for regulatory purposes. Several common regulatory practices lack a solid foundation in the theory. For example, the common practice of estimating a single cost of capital for the regulated firm suffers from a circularity problem, especially in the context of a multi-year regulatory period. In addition, the relevant cost of debt cannot be estimated using the yield-to-maturity on a corporate bond. We suggest possible directions for reform of cost of capital practices in regulatory proceedings.
Darryl Biggar
2023-03-20T01:22:28Z
http://arxiv.org/abs/2303.10818v1
# A re-examination of the foundations of the cost of capital for regulatory purposes

###### Abstract

In regulatory proceedings, few issues are more hotly debated than the cost of capital. This article formalises the theoretical foundation of cost of capital estimation for regulatory purposes. Several common regulatory practices lack a solid foundation in the theory. For example, the common practice of estimating a single cost of capital for the regulated firm suffers from a circularity problem, especially in the context of a multi-year regulatory period. In addition, the relevant cost of debt cannot be estimated using the yield-to-maturity on a corporate bond. We suggest possible directions for reform of cost of capital practices in regulatory proceedings.

## 1 Introduction

In almost all public utility rate cases, the regulator is required to determine a parameter known as the cost of capital. Debates over this parameter are routinely amongst the most contentious issues in regulatory proceedings. Corporate finance theory provides some guidance over these debates, but much of that theory was developed for other purposes and is not always relevant or directly applicable to regulatory proceedings. This article takes a fresh look at cost of capital theory to explore whether we can strengthen the theoretical foundation of cost of capital estimation for regulatory purposes. We find that several practices that are common in regulatory proceedings do not have a sound foundation in the underlying theory. In practice, in the estimation of the cost of capital, heuristics, assumptions and approximations are common. In many cost-of-capital applications, especially in corporate finance, these can be useful and reasonable. However, in the context of litigious regulatory proceedings, regulators and courts need a basis on which to stand when choosing between competing theories, methodologies and approaches.
This article starts from the perspective that in making regulatory decisions, courts and regulators should rely - as far as possible - on the underlying theory. Without a theoretical foundation on which to stand, regulators and courts cannot make reasoned and rational decisions about whether to prefer one approach to estimating the cost of capital over another, or even determine whether the question or dispute in front of them makes sense. As we will see, this re-examination of the underlying theory raises questions about several common practices in cost of capital estimation for regulatory purposes. Wherever possible I propose alternative approaches which are more consistent with the underlying theory. However, those alternative approaches may require access to information that courts or regulators find difficult to obtain. These issues will need to be worked through in regulatory practice. At this stage, this article seeks to identify potential problems and to suggest potential solutions. This article does not assume any particular methodology for the estimation of the cost of capital - whether that is the Capital Asset Pricing Model (CAPM) (or its variants), Arbitrage Pricing Theory, or the Fama-French 3-factor model.1 Rather, this article uses the concept of the 'valuation functional' or 'valuation operator', which is independent of any particular approach to estimating the cost of capital. The results set out here are therefore independent of, and more fundamental than, any particular methodology for valuing a cash-flow stream. We set out the properties of the value functional in section 2 below. Footnote 1: See the Wikipedia entry on Asset Pricing.
Cost of capital is important in regulatory processes for one central reason: In all cost-based regulatory processes a key goal is to set prices or allowed revenue which allow the regulated firm to 'cover its costs' and no more.2 This is expressed formally in the objective that the regulated firm's cash-flow stream should be set to achieve a Net Present Value (NPV) of zero. In a context in which the recovery of investment costs is spread out over time, achieving the objective of NPV=0 requires an estimation of the relevant discount rate or cost of capital for each cash-flow. Errors in estimation of the cost of capital translate into errors in the NPV. Increased uncertainty in cost of capital estimation therefore results in either (a) a greater risk of under-investment or default by the regulated firm; or (b) the regulator increasing the size of the 'buffer' in the allowed revenues to ensure that the regulated firm does not under-invest or default. In principle, improving the quality of cost of capital estimation would allow the regulator to lower prices to customers and/or reduce the risk to the regulated firm of under-investment or default. There are many different ways to regulate to achieve NPV=0. The precise definition of the cost of capital in a regulatory process depends very strongly on the precise approach to regulation. It is therefore important to carefully formalise the statement of the regulatory process, which we do below. Current regulatory practice in Australia and New Zealand can be summarised as follows: In the typical case of a regulatory period of five years, a single cost of capital parameter is estimated which applies to the entire five year period. Starting from the opening regulatory asset base, the cost of capital (together with estimates of opex, capex, and the choice of depreciation) is used to determine the allowed revenues for each year of the regulatory period, and the closing asset base. 
The cost of capital parameter in this process is typically estimated using the yield on a long-term government bond (typically with a term of ten years), plus a premium for risk. The risk premium is estimated as a weighted average of the risk premium for equity and the risk premium for debt. The risk premium for equity is typically estimated using a version of the CAPM. The risk premium for debt is typically estimated in one of two ways: (a) as the risk premium on a representative corporate bond of the correct term and credit rating; or (b) as a trailing average of a representative portfolio of corporate bonds of a given term and credit rating. Although the details vary, this broad approach is consistent with a number of other regulatory jurisdictions. Within this framework, we would like to give formal answers to the following questions:

* Should the cost of capital for a regulated firm vary with the level of the regulatory asset base or the level of the allowed revenues, holding all else constant? Does this give rise to a problem of circularity?
* In the context of a multi-year regulatory period, what is the right 'term' of the cost of capital? Is the right term a 'one year' rate, a term equal to the length of the regulatory period, a longer term, or something else entirely?
* If we estimate the cost of capital for the regulated firm as a weighted average of the cost of equity and the cost of debt, what is the right approach to estimating the cost of debt? Can we estimate the cost of debt by observing the appropriate yield-to-maturity on a corporate bond of the right term? If so, what is the right term?
* Is it possible to express the single cost-of-capital for a multi-year regulatory period as a weighted average of the cost of equity and the cost of debt?
The key conclusions of this article may be summarised as follows:

* In common regulatory practice, the regulator estimates a single cost of capital for the combined cash-flow of the regulated firm (the sum of the allowed cash-flow of the firm and the closing regulatory asset base). But this cost of capital depends, itself, on the level of the allowed cash-flow of the firm. This gives rise to a problem of circularity: A choice of the cost of capital affects the allowed cash-flow, which affects the cost of capital. At a minimum, this complicates the task of estimating the correct cost of capital for the firm. This problem can be resolved by estimating a separate cost of capital for the one-period cash-flow and for the closing asset base of the regulated firm.
* In the context of a multi-year regulatory period, this approach requires estimating several separate costs of capital - one for each of the single-period cash-flows during the regulatory period and one for the closing asset base.
* In common regulatory practice, the cost of capital for the firm as a whole is divided into a separate cost of equity and a cost of debt. However, we demonstrate that the cost of capital(s) for the debt payment stream typically cannot be observed directly from observations of the current yield-to-maturity on bonds traded in the market. In particular, neither the 'on the day' approach (that is, the yield to maturity on a zero-coupon bond with a term equal to the length of the regulatory period) nor the 'trailing average' approach (that is, the weighted average yield to maturity on the portfolio of bonds sold by the regulated firm) yields the correct cost of capital parameter for the payments to debtholders.
* In the context of a multi-year regulatory period, the cost of capital parameter cannot be expressed as a weighted average of a cost of capital for equity and a cost of capital for debt. This calls into question the value of estimating a Weighted-Average Cost of Capital for regulatory purposes.
There have been surprisingly few papers dealing with cost of capital issues in a regulatory context in the last fifty years. Following a key article by Myers (1972a), the use of the Capital Asset Pricing Model (CAPM) became widespread in public utility regulatory practice.3 Over the next few decades, as advances in corporate finance theory developed new tools for estimation of the cost of capital (including arbitrage pricing theory, discounted cash flow, multi-factor models, or dividend growth models), these were proposed and considered in the context of public utility regulation (see, e.g., the surveys in Kolbe et al. (1984); Alexander et al. (2000); Villadsen et al. (2017)).4 On the whole, however, public utility regulators have tended to adopt developments in corporate finance theory.5 Footnote 4: Roll and Ross (1983) advocate for the use of the arbitrage pricing theory in the context of public utility regulation. Footnote 5: One exception is Michelfelder and Theodossiou (2013). This article has four main sections. Section 2 introduces some fundamental principles of cost of capital, for both individual cash-flows and streams of cash-flows. Section 3 starts with the simplest case of a regulatory period that lasts one year. This section introduces the circularity problem and shows how it can be resolved by using a different cost of capital for the components of the cash-flow of the firm. Section 4 extends that analysis to a regulatory period of \(T\) years. Section 5 explores the issues associated with estimating the cost of debt and argues that there is no value in (and it may not even be possible) estimating a separate cost of equity and cost of debt. Section 6 concludes.

## 2 Preliminaries

As set out below, we will define the concept of the cost of capital with reference to the properties of the 'valuation functional' or 'valuation operator'. We will begin, therefore, by setting out the properties of this functional.
### Definition of the present-value functional

Let's suppose we are currently at time \(0\) and there is an uncertain future cash-flow arriving at time \(t\). The uncertain payoff of this cash-flow at time \(t\) is assumed to be reflected in the value of the random variable \(X_{t}\). The **expected value** of \(X_{t}\), in the light of the information available at time zero, denoted \(\mathbb{E}_{0}(X_{t})\), is a function (technically, a functional) from a set of random variables representing payoffs at time \(t\) to the expected value (or mean) of those payoffs, when viewed from time zero. Similarly, the **present value** of the future cash-flow at time zero, denoted \(\mathbb{V}_{0\to t}(X_{t})\), is a function from the set of random variables representing payoffs at time \(t\) to the present value of that random variable at time zero (Skiadis, 2022; Hansen and Scheinkman, 2009; Anderson, 2012; Ross, 1978). The present value of a future cash-flow at time 0 is also the price at which the right to receive that future cash-flow would trade at time 0 in an efficient market. Both the expected value \(\mathbb{E}_{0}(\cdot)\) and the present value \(\mathbb{V}_{0\to t}(\cdot)\) of a cash-flow are linear functions. If \(X_{t}\) and \(Y_{t}\) are both cash-flows arriving at time \(t\), and \(a\) and \(b\) are real numbers, the present value function satisfies: \[\mathbb{V}_{0\to t}(aX_{t}+bY_{t})=a\mathbb{V}_{0\to t}(X_{t})+b \mathbb{V}_{0\to t}(Y_{t}) \tag{1}\] When a cash-flow arrives further in the future (say, in period 2), with uncertain payoff \(X_{2}\), the present value of the cash-flow at time 1 is denoted \(\mathbb{V}_{1\to 2}(X_{2})\). Viewed from time zero, this value is itself a random variable.
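One concrete (and purely illustrative) way to realize these two functionals, standard in the asset-pricing literature cited above, is to represent the present value as the expectation of the payoff weighted by a stochastic discount factor, \(\mathbb{V}_{0}(X)=\mathbb{E}_{0}(mX)\); linearity (equation 1) is then immediate. A minimal numerical sketch over a three-state sample space, with made-up probabilities and discount factor:

```python
import numpy as np

# Finite sample space: `probs` are state probabilities and `m` is a
# strictly positive stochastic discount factor over the same states.
# Both vectors are invented purely for illustration.
probs = np.array([0.2, 0.5, 0.3])
m = np.array([1.05, 0.95, 0.90])

def expect(X):   # E_0(X): expected value of the payoff
    return float(probs @ X)

def value(X):    # V_0(X) = E_0(m * X), one standard construction
    return float(probs @ (m * X))

X = np.array([1.0, 2.0, 3.0])
Y = np.array([0.5, -1.0, 4.0])
a, b = 2.0, -3.0

# linearity, equation (1): V(aX + bY) = a V(X) + b V(Y)
lhs = value(a * X + b * Y)
rhs = a * value(X) + b * value(Y)
print(abs(lhs - rhs) < 1e-12)   # True
```

Nothing in the paper's results depends on this particular construction; it simply shows that linear expectation and valuation operators of the required kind exist.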
The present value functional satisfies a recursion property: \[\forall 1\leq t\leq T,\ \ \mathbb{V}_{0\to t}(\mathbb{V}_{t\to T}(X_{T}))= \mathbb{V}_{0\to T}(X_{T}) \tag{2}\] For regulatory purposes we often need to determine the present value of an indefinite _stream of cash-flows_ \(X_{1},X_{2},X_{3},\ldots\), arriving at times 1, 2, 3 and so on. The present value of a stream of cash-flows is just the sum of the present values of the individual cash-flows. \[\mathbb{V}_{0}(X_{1},X_{2},X_{3},\ldots)=\sum_{t=1}^{\infty}\mathbb{V}_{0 \to t}(X_{t}) \tag{3}\] Using the recursion property of the present value function, the present value of an indefinite stream of cash-flows is equal to the present value of a finite stream of cash-flows, truncated in year \(T\), say, where we simply add the 'terminal value' of the cash-flow stream to the final individual cash-flow: \[\mathbb{V}_{0}(X_{1},X_{2},X_{3},\ldots) =\sum_{t=1}^{\infty}\mathbb{V}_{0\to t}(X_{t}) \tag{4}\] \[=\sum_{t=1}^{T}\mathbb{V}_{0\to t}(X_{t})+\sum_{t=T+1}^{\infty}\mathbb{V}_{0\to t}(X_{t})\] (5) \[=\sum_{t=1}^{T}\mathbb{V}_{0\to t}(X_{t})+\sum_{t=T+1}^{\infty}\mathbb{V}_{0\to T}(\mathbb{V}_{T\to t}(X_{t}))\] (6) \[=\sum_{t=1}^{T}\mathbb{V}_{0\to t}(X_{t})+\mathbb{V}_{0\to T}(\mathbb{V}_{T}(X_{T+1},X_{T+2},\ldots))\] (7) \[=\mathbb{V}_{0}(X_{1},X_{2},\ldots,X_{T-1},X_{T}+\mathbb{V}_{T}(X_{T+1},X_{T+2},\ldots)) \tag{8}\] In the case where \(T=1\), the present value of a stream of cash-flows is the sum of two terms: the one-period cash-flow \(X_{1}\) and the terminal value at time \(1\), \(\mathbb{V}_{1}=\mathbb{V}_{1}(X_{2},X_{3},X_{4},\ldots)\): \[\mathbb{V}_{0}=\mathbb{V}_{0}(X_{1}+\mathbb{V}_{1}) \tag{9}\]

### Definition of the cost of capital

The **cost of capital** for a cash-flow represented by the random variable \(X_{t}\), denoted \(\mathbb{R}_{0\to t}(X_{t})\), is the ratio of the expected value of the cash-flow to the present value (provided the present value of the cash-flow is not zero, in which case the cost of capital is undefined).6
Footnote 6: It is also common to define the cost of capital as equation 10 minus one. We have a slight preference for the approach set out here as it saves having to add and/or subtract one from different equations. \[\mathbb{R}_{0\to t}(X_{t})\equiv\frac{\mathbb{E}_{0}(X_{t})}{\mathbb{V}_{0 \to t}(X_{t})} \tag{10}\] The present value of a cash-flow arriving at time zero, viewed from time zero, is just the expected value: \[\mathbb{V}_{0\to 0}(X_{0})=\mathbb{E}_{0}(X_{0}) \tag{11}\] It follows that the cost of capital at time zero for any cash-flow arriving at time zero is equal to one.7 Footnote 7: More generally, of course, \(\mathbb{R}_{t\to t}(X_{t})=1\). The present value of a certain cash-flow arriving at time \(t\) in the future is the ratio of the certain cash-flow to a value known as the 'risk-free rate'. Let's suppose that the cash-flow \(X_{t}\) yields the certain value \(a\) at time \(t\), then: \[\mathbb{V}_{0\to t}(X_{t})=\frac{a}{RF_{0\to t}} \tag{12}\] Here \(RF_{0\to t}\) is the discount rate or interest rate for a certain cash-flow at time \(t\), viewed from time zero. It follows that the cost of capital for a cash-flow with a fixed payoff \(a\) arriving at time \(t\) is just the risk-free rate between time zero and time \(t\): \[\mathbb{R}_{0\to t}(a)=\frac{\mathbb{E}_{0}(a)}{\mathbb{V}_{0\to t}(a)}=RF_{0\to t} \tag{13}\] Due to the linearity properties of the expectation and value operators, the cost of capital of any cash-flow is _independent of the scale_ of the cash-flow:8 Footnote 8: It follows that we need only define the cost of capital for uncertain cash-flows with a mean of one. \[\mathbb{R}_{0\to t}(aX_{t})=\frac{\mathbb{E}_{0}(aX_{t})}{\mathbb{V}_{0 \to t}(aX_{t})}=\mathbb{R}_{0\to t}(X_{t})\text{ for any }a\neq 0 \tag{14}\] Now consider the cost of capital for the sum of two cash-flows \(X_{t}\) and \(Y_{t}\).
It follows from the linearity properties of the present value function that the cost of capital for the sum of two cash-flows can be written as the _weighted average_ of the cost of capital of the individual cash-flows. If \(X_{t}\) and \(Y_{t}\) are both cash-flows arriving at time \(t\), the cost of capital for the sum can be expressed as a function of the cost of capital of the components in two ways. In the first formulation, the weighting depends on the share of the present value of each component in the sum: \[\mathbb{R}_{0\to t}(X_{t}+Y_{t})=\alpha\mathbb{R}_{0\to t}(X_{t})+(1- \alpha)\mathbb{R}_{0\to t}(Y_{t}) \tag{15}\] Where: \[\alpha=\frac{\mathbb{V}_{0\to t}(X_{t})}{\mathbb{V}_{0\to t}(X_{t}+Y_{t})} \tag{16}\] In the second formulation, the weighting depends on the share of the expected value of each component in the sum: \[\mathbb{R}_{0\to t}(X_{t}+Y_{t})^{-1}=\alpha\mathbb{R}_{0\to t}(X_{t})^{-1}+(1 -\alpha)\mathbb{R}_{0\to t}(Y_{t})^{-1} \tag{17}\] Where: \[\alpha=\frac{\mathbb{E}_{0}(X_{t})}{\mathbb{E}_{0}(X_{t}+Y_{t})} \tag{18}\] Finally, let's suppose we have a cash-flow \(X_{T}\) arriving at time \(T\). As we have seen, the present value of this cash-flow at an earlier time \(t\) is denoted \(\mathbb{V}_{t\to T}(X_{T})\). This is also a random variable. The cost of capital for this random variable at time zero is the ratio of the expected future price \(\mathbb{E}_{0}(\mathbb{V}_{t\to T}(X_{T}))\) to the current price \(\mathbb{V}_{0\to T}(X_{T})\):9 Footnote 9: There is also a formula for the cost of capital for a cash-flow arriving in two periods, as a function of the one-period cash-flows from time 0 to time 1 and from time 1 to time 2, as follows: \(\mathbb{R}_{0\to 2}(X_{2})=\mathbb{R}_{0\to 1}(\mathbb{V}_{1}(X_{2}))\mathbb{E}_{0}(X_{2}) \mathbb{E}_{0}(\frac{\mathbb{E}_{1}(X_{2})}{\mathbb{R}_{1\to 2}(X_{2})})^{-1}\).
\[\mathbb{R}_{0\to t}(\mathbb{V}_{t\to T}(X_{T}))=\frac{\mathbb{E}_{0}( \mathbb{V}_{t\to T}(X_{T}))}{\mathbb{V}_{0\to t}(\mathbb{V}_{t\to T}(X_{T}))}= \frac{\mathbb{E}_{0}(\mathbb{V}_{t\to T}(X_{T}))}{\mathbb{V}_{0\to T}(X_{T})} \tag{19}\] This approach is independent of the methodology used to value cash-flows. But it may be worth pointing out that, with some additional assumptions, the Capital Asset Pricing Model emerges from this approach. In the special case where the preferences of the individual investors take the mean-variance form, the cost of capital for any cash-flow \(X_{1}\) arriving at time 1 is given by an equation which is analogous to the standard Capital Asset Pricing Model. This result is proved in appendix A. \[\mathbb{R}_{0\to 1}(X_{1})^{-1}=RF_{0\to 1}^{-1}-(RF_{0\to 1}^{-1}-\mathbb{R}_{0\to 1}(M_{1})^{-1}) \beta_{0\to 1}(X_{1}) \tag{20}\] Where \(M_{1}\) is the cash-flow from holding the 'market portfolio' at time 1, and \(\beta_{0\to 1}(X_{1})\) is the 'beta' of the cash-flow \(X_{1}\) defined as: \[\beta_{0\to 1}(X_{1})=\frac{\mathbb{E}_{0}(M_{1})}{\mathbb{E}_{0}(X_{1})} \frac{Cov(X_{1},M_{1})}{Var(M_{1})} \tag{21}\] Equation 20 is the correct formulation of the CAPM in this context. In this article, however, we will present more general results derived from the properties of the value functional, rather than a special case (such as the CAPM).

## 3 One-year regulatory period

Let's now turn to examine questions regarding the estimation of cost of capital in the context of regulatory proceedings. At the outset it is important to be clear that there is no single correct definition of the cost of capital in all regulatory proceedings. There are many ways to regulate that achieve the overall objective of NPV=0. The correct cost of capital for any given regulatory process depends on precisely how the regulatory proceeding is formulated. Therefore we must be clear exactly how the regulatory process operates.
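Before turning to the regulatory setting, the identities of section 2, scale independence (equation 14) and the two weighted-average decompositions (equations 15-18), can be sanity-checked numerically under the same kind of illustrative stochastic-discount-factor valuation; all of the numbers below are invented:

```python
import numpy as np

probs = np.array([0.2, 0.5, 0.3])     # state probabilities (illustrative)
m = np.array([1.05, 0.95, 0.90])      # a made-up stochastic discount factor

def E(X):  return float(probs @ X)          # expected value E_0(X)
def V(X):  return float(probs @ (m * X))    # present value V_0(X)
def R(X):  return E(X) / V(X)               # cost of capital, equation (10)

X = np.array([1.0, 2.0, 3.0])
Y = np.array([4.0, 1.0, 0.5])

# scale independence, equation (14): R(aX) = R(X) for a != 0
assert abs(R(5.0 * X) - R(X)) < 1e-12

# present-value-weighted average, equations (15)-(16)
alpha = V(X) / V(X + Y)
assert abs(R(X + Y) - (alpha * R(X) + (1 - alpha) * R(Y))) < 1e-12

# expected-value-weighted harmonic average, equations (17)-(18)
beta = E(X) / E(X + Y)
assert abs(1 / R(X + Y) - (beta / R(X) + (1 - beta) / R(Y))) < 1e-12
```

Both decompositions are algebraic consequences of linearity, so they hold for any valuation functional with the properties set out above, not just this toy one.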
In order to keep things simple, let's start by considering the issues that arise when the regulatory period (that is, the period between price reviews or 'regulatory resets') lasts exactly one year. This yields the simplest and most familiar form of the regulatory process.

### The standard regulatory process when the regulatory period lasts one year

We will assume that the regulator follows a standard regulatory process, known in Australia as the 'Building Block Model'. In its most general form this is described as follows (and illustrated in figure 1):

1. At the start of each regulatory period (which we will label time zero) the regulator observes the value of the opening asset base \(RAB_{0}\), forecasts opex, capex and sales during the regulatory period, and chooses a set of prices to apply during the regulatory period and the closing asset base \(RAB_{1}\).
2. At the end of the regulatory period (time one), the out-turn values of opex, capex and revenue (and therefore the cash-flow of the firm \(X_{1}\)) are realised, together with the closing asset base. This becomes the opening asset base for the subsequent regulatory period.

This process continues over the life of the firm. At the end of the life of the firm the regulator ensures that the closing regulatory asset base is equal to zero. Formally, the regulatory process operates as follows: At the start of each regulatory period the regulator chooses the cash-flow of the firm \(X_{1}\) and the closing asset base \(RAB_{1}\) in such a way that the present value of the sum of these two is equal to the opening asset base \(RAB_{0}\): \[RAB_{0}=\mathbb{V}_{0}(X_{1}+RAB_{1}) \tag{22}\] Under these assumptions the Fundamental Theorem of Regulation (see Appendix B) shows that, at each point in time, the asset base is equal to the present value of the future stream of cash-flows, \(RAB_{t}=\mathbb{V}_{t}(X_{t+1},X_{t+2},\ldots)\), and the regulated firm achieves NPV=0.
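A deterministic sketch makes this concrete: if each year's cash-flow is chosen so that equation 22 holds with a flat discount rate \(r\), the present value of the whole cash-flow stream telescopes back to the opening asset base, so the firm earns NPV=0. The asset-base path and rate below are made up for illustration:

```python
# Deterministic sketch of the one-year building-block recursion
# (equation 22): each year the regulator picks the cash-flow X_t so that
# RAB_{t-1} = (X_t + RAB_t) / (1 + r), with the final RAB wound down to zero.
r = 0.06
rab = [100.0, 80.0, 45.0, 0.0]        # RAB_0 .. RAB_3 (illustrative path)

# rearranging the recursion: X_t = (1 + r) * RAB_{t-1} - RAB_t
cash_flows = [(1 + r) * rab[t] - rab[t + 1] for t in range(3)]

# present value of the allowed cash-flow stream at time 0
pv = sum(x / (1 + r) ** (t + 1) for t, x in enumerate(cash_flows))
print(round(pv, 10))    # equals RAB_0 = 100.0 (up to rounding), so NPV = PV - RAB_0 = 0
```

The telescoping works for any asset-base path ending at zero; in the stochastic setting the same argument goes through with the valuation functional in place of simple discounting.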
We will refer to \(X_{1}\) as the **one-period cash-flow** of the firm, and the sum \(X_{1}+RAB_{1}\) as the **combined cash-flow**.

Equation 22 may look at first abstract, but we can move a step closer to regulatory practice by expanding the present value using the definition of the cost of capital and re-writing the equation as an expression for the allowed level of the cash-flow:

\[RAB_{0}=\mathbb{V}_{0}(X_{1}+RAB_{1})=\frac{\mathbb{E}_{0}(X_{1}+RAB_{1})}{\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})}\]
\[\implies\mathbb{E}_{0}(X_{1})=\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\times RAB_{0}-\mathbb{E}_{0}(RAB_{1}) \tag{23}\]

This equation can be made to look more familiar by adopting the conventional definition of the cost of capital \(1+r_{0\to t}(X_{t})=\mathbb{R}_{0\to t}(X_{t})\) and expanding the cash-flow allowance into its components:

\[X_{1}=\underbrace{R_{1}}_{\text{Revenue}}-\underbrace{O_{1}}_{\text{Opex}}-\underbrace{K_{1}}_{\text{Capex}} \tag{24}\]

With these definitions, equation 23 can be written in the following way: the expected revenue allowance10 can be written as the normal sum of opex, 'return on capital' and 'return of capital':

\[\mathbb{E}_{0}(R_{1})=\underbrace{r_{0\to 1}(X_{1}+RAB_{1})\times RAB_{0}}_{\text{`Return on capital'}}+\underbrace{\mathbb{E}_{0}(O_{1})}_{\text{Opex}}+\underbrace{\mathbb{E}_{0}(Dep_{0\to 1})}_{\text{`Return of capital'}}. \tag{25}\]

Here:

\[Dep_{0\to 1}=RAB_{0}+K_{1}-RAB_{1} \tag{26}\]

Equations 25 and 26 are the familiar two equations which define the Building Block Model (the standard regulatory process used in Australia and around the world).

Figure 1: Time evolution of a standard one-year regulatory process

### Problems in the estimation of the cost of capital

The previous section noted that equation 23 (or its more familiar variants, equations 25 and 26) captures the standard formulation of the regulatory process. But it is already apparent that there is a problem.
As equation 23 shows, the fundamental task of the regulator is to choose a set of regulated prices so that the value for the expected cash-flow \(X_{1}\) satisfies equation 23. But \(X_{1}\) appears on both sides of equation 23. The regulator cannot obtain a formally correct cash-flow allowance \(\mathbb{E}_{0}(X_{1})\) (or cost of capital \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\)) without knowing how the cost of capital depends on the cash-flow allowance.11

Footnote 11: A similar circularity problem can arise when estimating the cost of capital using a weighted average formula: the weightings on the cost of equity and the cost of debt depend, in part, on the value of the firm, which depends on the cost of capital. See Mohanty (2003).

In addition, the cost of capital \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\) will, in general, depend on the choice of the closing asset base \(RAB_{1}\).12 Because the regulatory asset base varies over the life of the firm, we can expect that the cost of capital \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\) also varies over the life of the firm, even if all other factors in the environment remain constant.

Footnote 12: In cases where the closing asset base is stochastic the cost of capital may also depend on the variation in the closing asset base.

The standard resolution of this problem in regulatory practice is just to assume that the cost of capital \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\) is independent of the cash-flow allowance \(\mathbb{E}_{0}(X_{1})\) and independent of the choice of the closing asset base \(RAB_{1}\). This is a heuristic which is virtually universal in regulatory practice.

Is there a formulation of the regulatory process which does not suffer from this circularity problem? It turns out that there is.
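Before presenting that formulation, the circularity can be made concrete with a toy calculation. In the sketch below (all numbers assumed for illustration), the combined cost of capital is the value-weighted ratio of expected payoff to present value, so equation 23 can only be solved by iterating to a fixed point:

```python
# Concrete illustration of the circularity in equation 23.
# All numbers are assumed for illustration: gross component costs of
# capital R(X1) = 1.20 and R(RAB1) = 1.05, opening base 1000, expected
# closing base 900.
RAB0, E_RAB1 = 1000.0, 900.0
R_X, R_RAB = 1.20, 1.05

def combined_coc(e_x1):
    """Gross combined cost of capital R(X1 + RAB1): expected combined
    payoff divided by the combined present value of the components."""
    value = e_x1 / R_X + E_RAB1 / R_RAB
    return (e_x1 + E_RAB1) / value

# Equation 23 rearranged: E(X1) = R(X1 + RAB1) * RAB0 - E(RAB1).
# The right-hand side depends on E(X1) itself, so iterate to a fixed point.
e_x1 = 100.0                      # arbitrary starting guess
for _ in range(200):
    e_x1 = combined_coc(e_x1) * RAB0 - E_RAB1

print(round(e_x1, 2))                      # converged allowance
print(round(combined_coc(e_x1) - 1, 4))    # combined cost of capital
```

The iteration happens to converge in this example, but the point stands: equation 23 on its own pins down the allowance only implicitly, because the combined cost of capital responds to the allowance itself.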
We can see this by returning to equation 22, and re-writing it as follows:

\[RAB_{0}=\mathbb{V}_{0}(X_{1})+\mathbb{V}_{0}(RAB_{1})=\frac{\mathbb{E}_{0}(X_{1})}{\mathbb{R}_{0\to 1}(X_{1})}+\frac{\mathbb{E}_{0}(RAB_{1})}{\mathbb{R}_{0\to 1}(RAB_{1})} \tag{27}\]

Therefore, we can re-formulate the Building Block Model (equation 23) in a way which resolves the circularity problem:13

Footnote 13: As noted in equation 14 the cost-of-capital for the one-period cash-flow \(\mathbb{R}_{0\to 1}(X_{1})\) is independent of the level of the expected cash-flow \(\mathbb{E}_{0}(X_{1})\).

\[\mathbb{E}_{0}(X_{1})=\mathbb{R}_{0\to 1}(X_{1})\times RAB_{0}-\mathbb{E}_{0}(RAB_{1})\frac{\mathbb{R}_{0\to 1}(X_{1})}{\mathbb{R}_{0\to 1}(RAB_{1})} \tag{28}\]

Comparing equations 28 and 23 we can see that this version of the Building Block Model is similar to the standard approach except for the following:

* The 'return on capital' is defined by multiplying the cost of capital for the one-period cash-flow \(\mathbb{R}_{0\to 1}(X_{1})\) (rather than the cost of capital for the combined cash-flow \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\)) by the opening asset base; and
* In the 'return of capital', the closing asset base is scaled by the ratio of the costs-of-capital of the components (i.e., the ratio \(\mathbb{R}_{0\to 1}(X_{1})/\mathbb{R}_{0\to 1}(RAB_{1})\)).

We are now in a position to answer the first two of the questions set out above. Does the cost of capital for a regulated firm vary with the level of the regulatory asset base or the level of the allowed cash-flow? The answer is as follows: In the conventional historic regulatory practice in Australia, the relevant cost of capital (that is, the multiplier of the asset base in the revenue allowance equation) is the cost of capital for the combined cash-flow \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\).
This will, in general, vary with both the expected level of the one-period cash-flow \(\mathbb{E}_{0}(X_{1})\) and the expected level of the closing asset base \(\mathbb{E}_{0}(RAB_{1})\), even if nothing else in the environment changes. In particular, the cost of capital will depend on the ratio of the level of the one-period cash-flow to the closing asset base. If this ratio is small, the correct cost of capital is closer to the cost of capital for the closing asset base alone, \(\mathbb{R}_{0\to 1}(RAB_{1})\), which would normally be expected to be close to the risk-free rate. If this ratio is large, the correct cost of capital is closer to the cost of capital for the one-period cash-flow \(\mathbb{R}_{0\to 1}(X_{1})\), which could be very large. As the ratio changes the relevant cost of capital will change, even if there are no other changes in the environment. There are examples of this effect in table 1 and figure 2 below.

The dependence of the relevant cost of capital on the level of the cash-flow gives rise to a problem of circularity in the regulatory process. This problem of circularity can be resolved by changing the formulation of the Building Block Model to the formulation set out in equation 28, in which the relevant cost of capital is the cost of capital for the one-period cash-flow alone. This cost of capital does not depend on the level of the cash-flow.

### Do these concerns make a difference in practice?

The sections above have identified possible concerns with the conventional approach to setting the cost of capital. But do these issues make a material difference in practice? To make this assessment let's explore how much the cost of capital of the combined cash-flow \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\) might vary even if the costs of capital on the component cash-flows \(\mathbb{R}_{0\to 1}(X_{1})\) and \(\mathbb{R}_{0\to 1}(RAB_{1})\) remain unchanged.

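A small calculation makes this exploration concrete. The sketch below (using the same assumed parameter values as the worked example that follows) computes the allowance from equation 28 and then the implied combined cost of capital for several asset-base configurations:

```python
# How the combined cost of capital R(X1 + RAB1) moves with the size of
# the components, holding the component costs of capital fixed.
# Parameter values are assumed for illustration.
R_X, R_RAB = 1.20, 1.05      # gross component costs of capital

def allowance(rab0, e_rab1):
    """Cash-flow allowance E(X1) from equation 28."""
    return R_X * rab0 - e_rab1 * R_X / R_RAB

def combined(rab0, e_rab1):
    """Gross combined cost of capital R(X1 + RAB1): expected combined
    payoff over the combined present value of the components."""
    e_x1 = allowance(rab0, e_rab1)
    return (e_x1 + e_rab1) / (e_x1 / R_X + e_rab1 / R_RAB)

for rab0, e_rab1 in [(1000, 900), (500, 400), (1000, 800), (400, 200)]:
    print(rab0, e_rab1,
          round(allowance(rab0, e_rab1), 2),             # E(X1)
          round(100 * (combined(rab0, e_rab1) - 1), 2))  # combined CoC, %
```

Note that when the allowance satisfies equation 28, the combined present value in the denominator is just \(RAB_{0}\), so the combined cost of capital reduces to \((\mathbb{E}_{0}(X_{1})+\mathbb{E}_{0}(RAB_{1}))/RAB_{0}\).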
Let's suppose that the cost of capital for the cash-flow \(X_{1}\) is, say, \(20\%\), and the cost of capital for the closing asset base \(RAB_{1}\) is, say, \(5\%\), so \(\mathbb{R}_{0\to 1}(X_{1})=1.2\) and \(\mathbb{R}_{0\to 1}(RAB_{1})=1.05\). Table 1 sets out the cash-flow allowance and the relevant cost of capital under different assumptions about the level of the opening and closing asset base.14 As can be seen, the cost of capital for the combined cash-flow \(X_{1}+RAB_{1}\) varies widely with the level of the asset base, even though there is no change in the underlying systematic risk faced by the firm (that is, no change in \(\mathbb{R}_{0\to 1}(X_{1})\) and \(\mathbb{R}_{0\to 1}(RAB_{1})\)).

Footnote 14: The fifth column in table 1 is given by equation 28; the sixth column is given by equation 17.

Figure 2 presents another way of looking at this question. The left-hand graph in figure 2 illustrates the impact on the combined cost of capital \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\) of changes in the allowed regulatory cash-flow \(\mathbb{E}_{0}(X_{1})\), holding all other factors constant (here we assume \(\mathbb{E}_{0}(RAB_{1})=\$1000\) and \(\mathbb{R}_{0\to 1}(X_{1})=1.2\) and \(\mathbb{R}_{0\to 1}(RAB_{1})=1.05\) as before). As can be seen, an increase in the allowed regulatory cash-flow (perhaps due to, say, a reduction in forecast expenditure) results in an increase in the combined cost of capital, even if nothing else changes in the environment.

The right-hand graph in figure 2 illustrates how the combined cost of capital varies over the life of a firm with changes in the asset base. This graph illustrates the case of a firm which starts with an opening asset base of $1000, and lasts five years.
The regulated cash-flow allowance is chosen to be constant at \(\mathbb{E}_{t}(X_{t+1})=\$263.97\) each year, ensuring that the closing asset base at the end of the life of the firm is zero (\(RAB_{5}=0\)). The regulatory asset base starts at \(\$1000\) and declines to zero over the five years. As can be seen, the combined cost of capital \(\mathbb{R}_{t\to t+1}(X_{t+1}+RAB_{t+1})\) increases as the regulatory asset base declines. In both of these graphs the cost of capital for the one-period cash-flow is fixed at \(20\%\) (\(\mathbb{R}_{t\to t+1}(X_{t+1})=1.20\)) and the cost of capital for the closing asset base is fixed at \(5\%\) (\(\mathbb{R}_{t\to t+1}(RAB_{t+1})=1.05\)).

\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline
\(RAB_{0}\) & \(\mathbb{E}_{0}(RAB_{1})\) & \(\mathbb{R}_{0\to 1}(X_{1})\) & \(\mathbb{R}_{0\to 1}(RAB_{1})\) & \(\mathbb{E}_{0}(X_{1})\) & \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\) \\
\hline
\(\$1,000\) & \(\$900\) & \(20\%\) & \(5\%\) & \(\$171.43\) & \(7.14\%\) \\
\(\$500\) & \(\$400\) & \(20\%\) & \(5\%\) & \(\$142.86\) & \(8.57\%\) \\
\(\$1,000\) & \(\$800\) & \(20\%\) & \(5\%\) & \(\$285.71\) & \(8.57\%\) \\
\(\$400\) & \(\$200\) & \(20\%\) & \(5\%\) & \(\$251.43\) & \(12.86\%\) \\
\hline
\end{tabular}
\end{table}
Table 1: The combined cost of capital for a regulated firm \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\) varies with the size of the components even if there is no change in the cost of capital for the components

## 4 Multi-year regulatory period

For many years, the standard regulatory practice in Australia and New Zealand has involved the use of a fixed-length (five-year) regulatory period, with a single cost of capital for that period. Perhaps unsurprisingly, it turns out that this practice substantially complicates the question of setting the appropriate cost of capital. As before, the relevant cost of capital depends very heavily on the precise formulation of the regulatory process.
Therefore, as before, we must be precise as to the regulatory process we are using.

Figure 2: Illustration of the dependence of the combined cost of capital \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\) on the level of the allowed cash-flow or the regulatory asset base

Let's suppose that the regulatory period lasts \(T\) years (where \(T>1\)). In the context of a multi-year regulatory period, equation 22 generalises as follows. Given the opening asset base \(RAB_{0}\), the task of the regulator is to choose the cash-flow in each year of the regulatory period \(X_{1}\), \(X_{2}\), ..., \(X_{T}\), and the closing asset base \(RAB_{T}\), to satisfy the following expression:

\[RAB_{0}=\mathbb{V}_{0}(X_{1},X_{2},\ldots,X_{T}+RAB_{T})=\sum_{t=1}^{T}\mathbb{V}_{0}(X_{t})+\mathbb{V}_{0}(RAB_{T}) \tag{29}\]

In common regulatory practice, this is usually implemented as follows. A single parameter, which we will label \(R\), is chosen. In addition, a sequence of values of the asset base \(RAB_{t}\) is chosen. The regulated cash-flow allowance in each year of the regulatory period is then determined using a simple analogy to equation 23:

\[\forall t=1,\ldots,T,\ \ \mathbb{E}_{0}(X_{t})=R\times RAB_{t-1}-\mathbb{E}_{0}(RAB_{t}) \tag{30}\]

But, how should we choose the parameter \(R\)?
Expanding out equation 30 and applying equation 29 we find that, given the expected cash-flows \(\mathbb{E}_{0}(X_{t})\), the opening asset base \(RAB_{0}\) and the expected closing asset base \(\mathbb{E}_{0}(RAB_{T})\), the parameter \(R\) must be chosen to satisfy the following:

\[\frac{\mathbb{E}_{0}(X_{1})}{R}+\frac{\mathbb{E}_{0}(X_{2})}{R^{2}}+\ldots+\frac{\mathbb{E}_{0}(X_{T}+RAB_{T})}{R^{T}}=RAB_{0} \tag{31}\]

In other words, in this approach to regulation, the correct value for the cost of capital parameter \(R\) is the **internal rate of return** of the cash-flow stream consisting of an outlay of \(RAB_{0}\) at time zero, and cash-flows of \(\mathbb{E}_{0}(X_{1}),\mathbb{E}_{0}(X_{2}),\ldots,\mathbb{E}_{0}(X_{T}+RAB_{T})\) in the subsequent years of the regulatory period.

But again, we can see that there is a problem. As equation 31 makes clear, the parameter \(R\) depends on the levels of the individual cash-flows \(\mathbb{E}_{0}(X_{1})\), \(\mathbb{E}_{0}(X_{2})\) and so on, in a complicated manner. But the parameter \(R\) is also a key input into the determination of \(\mathbb{E}_{0}(X_{1})\) and so on, through equation 30. Once again we have a problem of circularity.

As before, this problem can be resolved by changing the regulatory process. Rather than using a single cost of capital parameter \(R\) for the entire regulatory period, we should use different costs of capital for the individual components of the cash-flow of the firm. There are two ways that this might be carried out. In the first approach, the allowed cash-flows are all set in advance, based on the costs of capital prevailing at the start of the regulatory period. In the second approach, the allowed cash-flows are set each year of the regulatory period, on the basis of the costs of capital prevailing at that time.

Let's consider the first approach in which the allowed cash-flows are all set in advance.
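Before doing so, it is worth noting that the parameter \(R\) of equation 31 is just an internal rate of return, so it can be found by a simple one-dimensional search. A minimal bisection sketch (the cash-flow numbers are assumed purely for illustration):

```python
# Equation 31: the single regulatory parameter R is the internal rate of
# return of the stream (-RAB_0, E(X_1), ..., E(X_T + RAB_T)).
# A minimal bisection solver; all cash-flow numbers are assumed.

def npv(rate, outlay, cash_flows):
    """Present value of cash_flows at gross `rate`, net of the outlay."""
    return sum(x / rate ** t for t, x in enumerate(cash_flows, 1)) - outlay

def irr(outlay, cash_flows, lo=1.000001, hi=2.0, tol=1e-10):
    """Bisection for the gross rate R in (lo, hi); assumes the cash
    flows are positive, so npv is decreasing in the rate."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, outlay, cash_flows) > 0:
            lo = mid          # present value too high: rate must rise
        else:
            hi = mid
    return (lo + hi) / 2

RAB0 = 1000.0
# assumed allowances for years 1..5; the final entry includes RAB_5 = 700
flows = [150.0, 150.0, 150.0, 150.0, 150.0 + 700.0]
R = irr(RAB0, flows)
print(round(R - 1, 4))        # the implied single cost-of-capital parameter
```

Within the regulatory process of equation 30, however, \(R\) cannot be computed this way until the allowances themselves are known, which is precisely the circularity just described.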
Specifically, let's suppose that the regulator chooses parameters \(R_{t}\) and \(S_{t}\) for \(t=1,\ldots,T\). The regulator then sets the cash-flow allowance as follows (this equation is the analogy to equation 28):

\[\forall t=1,\dots,T,\ \ \mathbb{E}_{0}(X_{t})=R_{t}\times\mathbb{E}_{0}(RAB_{t-1})-\mathbb{E}_{0}(RAB_{t})\frac{R_{t}}{S_{t}} \tag{32}\]

The cash-flows set in this way satisfy the fundamental requirement of equation 29 provided we choose \(R_{1}=\mathbb{R}_{0\to 1}(X_{1})\), \(S_{1}R_{2}=\mathbb{R}_{0\to 2}(X_{2})\), ..., \(S_{1}S_{2}\dots S_{T}=\mathbb{R}_{0\to T}(RAB_{T})\). These equations can be satisfied in different ways. However, a straightforward approach is to choose:15

Footnote 15: \(S_{t}\) here is the 'forward rate' for the cost of capital for the asset base – specifically, it is the rate at time zero that is forecast to apply between time \(t-1\) and time \(t\).

\[R_{t}=\frac{\mathbb{R}_{0\to t}(X_{t})}{\mathbb{R}_{0\to t-1}(RAB_{t-1})} \tag{33}\]
\[\text{and}\ S_{t}=\frac{\mathbb{R}_{0\to t}(RAB_{t})}{\mathbb{R}_{0\to t-1}(RAB_{t-1})} \tag{34}\]

It is straightforward to check that, with this choice of the parameters, equation 29 is satisfied.

Under the second approach, the allowed cash-flows are not fixed in advance, but are set at the start of each year (within the regulatory period) on the basis of information that is available at the time. In this case, the relevant equation for establishing the cash-flow allowance can be written as follows:

\[\forall t=1,\dots,T,\ \ \mathbb{E}_{t-1}(X_{t})=R_{t}\times RAB_{t-1}-\mathbb{E}_{t-1}(RAB_{t})\frac{R_{t}}{S_{t}} \tag{35}\]

This equation is the generalisation of equation 28. The formulae to calculate the relevant values of the parameters \(R_{t}\) and \(S_{t}\) in this case are derived in appendix C.

To see how the first approach might work in a simple example, let's suppose that we have a regulatory period that lasts five years. At time zero the opening asset base is \(RAB_{0}=\$1,000\).
The regulator chooses a path for the asset base \(RAB_{1}=\$900\), \(RAB_{2}=\$800\), ..., \(RAB_{5}=\$500\). The cost of capital for the cash-flow and for the asset base (which is just equal to the risk-free rate of 5% per annum) are set out in table 2. The cost of capital for the asset base (assuming a flat term structure and a risk-free rate of 5% per annum) is:

\[\mathbb{R}_{0\to t}(RAB_{t})=(1.05)^{t} \tag{36}\]

The cost of capital for the cash-flow is chosen to satisfy the condition:16

Footnote 16: This can be justified using the assumptions that the term structure is flat and no new information about the future cash-flow arrives over time.

\[\mathbb{R}_{0\to t}(X_{t})=(1.05)^{t-1}(1.20) \tag{37}\]

The remainder of the table shows the implied values of \(R_{t}\) and \(S_{t}\) and the resulting cash-flow allowance \(\mathbb{E}_{0}(X_{t})\).17 Using these values we can calculate that the value of the parameter \(R\) (the single cost of capital for the entire regulatory period) is approximately \(7.47\%\). As before, this value is a complicated mix of the different costs of capital for the different cash-flows and timings during the regulatory period. It is not possible to calculate this value until after the regulatory cash-flow allowances have been determined.

The last row of table 2 shows the implied value for the cost of capital that would be required if the regulator followed the naive approach of equation 23. As can be seen, in this case the parameter chosen is a 'mixture' of the cost of capital for the cash-flow \(X_{t}\) and the asset base \(RAB_{t}\) and varies across the regulatory period.

At this point we can answer one of the questions posed at the outset: What is the correct term for the cost of capital used in regulatory proceedings?
From the analysis above we can provide the following answer:

* Where a single cost-of-capital parameter is used to determine all of the cash-flow allowances throughout a regulatory period (as in equation 30), there is no single correct 'term' for this cost-of-capital. Rather, this cost of capital is a mix of a number of different terms (corresponding to the cash-flows within the regulatory period, and the asset base at the end of the regulatory period). The term of each of these individual cash-flows is shorter than or equal to the length of the regulatory period. Formally, it is the internal rate of return associated with a cash-flow stream.
* The circularity problem can be avoided by using different costs of capital for the individual components of the cash-flow of the firm - that is, a separate cost of capital for \(X_{t}\), \(t=1,\ldots,T\), and \(RAB_{T}\).

In the case where all of the cash-flow allowances are set at the beginning of the period (the first approach above), the regulatory process must follow equation 32.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Time & \(t=0\) & \(t=1\) & \(t=2\) & \(t=3\) & \(t=4\) & \(t=5\) \\
\hline
\(RAB_{t}\) & \$1,000 & \$900 & \$800 & \$700 & \$600 & \$500 \\
\(\mathbb{R}_{0\to t}(X_{t})\) & 1.000 & 1.200 & 1.260 & 1.323 & 1.389 & 1.459 \\
\(\mathbb{R}_{0\to t}(RAB_{t})\) & 1.000 & 1.05 & 1.103 & 1.158 & 1.216 & 1.276 \\
\(R_{t}\) & & 1.20 & 1.20 & 1.20 & 1.20 & 1.20 \\
\(S_{t}\) & & 1.05 & 1.05 & 1.05 & 1.05 & 1.05 \\
\(\mathbb{E}_{0}(X_{t})\) & & \$171.43 & \$165.71 & \$160.00 & \$154.29 & \$148.57 \\
Implied CoC & & 7.14\% & 7.30\% & 7.50\% & 7.76\% & 8.10\% \\
\hline
\end{tabular}
\end{table}
Table 2: Illustration of calculating the cash-flow allowance over a five-year regulatory period
Specifically:

* The relevant cost of capital for the purposes of determining the allowed 'return on capital' (that is, the coefficient on the regulatory asset base) must be given by:
\[R_{t}=\frac{\mathbb{R}_{0\to t}(X_{t})}{\mathbb{R}_{0\to t-1}(RAB_{t-1})}\] (38)
* In calculating the 'return of capital' the closing regulatory asset base must be scaled by
\[\frac{R_{t}}{S_{t}}=\frac{\mathbb{R}_{0\to t}(X_{t})}{\mathbb{R}_{0\to t}(RAB_{t})}\] (39)

This reduces to equation 28 in the case of a regulatory period of one year.

As an illustration of the effect of these observations, let's use some typical values for the cash-flow and asset base drawn from an actual regulatory proceeding. In this case we will use distribution businesses in Australia. These are regulated using a five-year regulatory cycle. From the discussion above, this means that there are 6 costs-of-capital we need to estimate: \(\mathbb{R}_{0\to t}(X_{t})\), \(t=1,\ldots,5\) and \(\mathbb{R}_{0\to 5}(RAB_{5})\). We will assume values for these costs of capital and then determine the implications for the value of the parameter \(R\).

As before we will assume that the (annualised) risk-free interest rate for all terms is, say, 1.05 (5%). In other words, the cost of capital for a fixed value in one year is 1.05, in two years is \(1.05^{2}\), and so on. Following standard regulatory practice, we will make the assumption that \(RAB_{5}\) is a fixed number, so it should receive a cost of capital equal to the risk-free rate which, as just noted, is \(1.05^{5}\). We will make the assumption that, at time 0, \(\mathbb{V}_{s\to t}(X_{t})\) is a fixed value for \(0<s<t\).18 Finally, we will assume that \(\mathbb{V}_{t-1\to t}(X_{t})=\mathbb{E}_{t-1}(X_{t})/1.35\).
It follows that the cost of capital for the first cash-flow \(X_{1}\) is 1.35 (as before), and for all the other cash-flows:

Footnote 18: In essence, this assumes that the regulator receives no new information about the future cash-flows until the year in which the cash-flow is received.

\[\mathbb{R}_{0\to t}(X_{t})=(1.05)^{t-1}(1.35) \tag{40}\]

Given these assumed values for the component costs of capital, we can use the actual cash-flow and closing asset base for the 13 electricity network distribution businesses on the east coast of Australia (known as Distribution Network Service Providers, or DNSPs) over the period 2014-2019 to determine the value of \(R\) which satisfies equation 31. The results are set out in table 3. As can be seen, the variation in the relative size of the cash-flows gives rise to a variation of about 40 basis points in the discount factor \(R\), even though all of these DNSPs are assumed to have exactly the same underlying component costs of capital. This variation is roughly the same size as the effect of a variation of 0.5% in, say, the market risk premium.

## 5 The cost of capital for the debt stream

In standard regulatory practice the relevant cost of capital for regulatory purposes is often estimated as a weighted average of the cost of capital for the debt and equity cash-flow streams of the regulated firm.19 In addition, in standard regulatory practice the cost of capital for debt is typically estimated as the current 'yield to maturity' on a corporate bond of the relevant credit rating and term. Usually the term is taken to be the same as the length of the regulatory period (e.g., five years) or longer (say, ten years). In recent years some regulators in Australia have used a cost of debt which is based on a trailing average of historic rates on corporate bonds of the relevant credit rating and term.
\begin{table}
\begin{tabular}{|c|c|}
\hline
DNSP & Value of \(R\) \\
\hline
Energex & 5.543\% \\
Evoenergy & 5.581\% \\
AusNet & 5.622\% \\
Ergon Energy & 5.631\% \\
CitiPower & 5.649\% \\
United Energy & 5.688\% \\
Powercor & 5.735\% \\
Jemena & 5.753\% \\
Ausgrid & 5.819\% \\
Endeavour Energy & 5.796\% \\
TasNetworks & 5.920\% \\
SA Power Networks & 5.922\% \\
Essential Energy & 5.942\% \\
\hline
\end{tabular}
\end{table}
Table 3: Illustration of different values of the discount factor \(R\) for real-world distribution network businesses with the same underlying component costs of capital

What can we say about the theoretically-correct approach to estimating the cost of debt in regulatory proceedings?

### Debt preliminaries

Let's assume that we have a set of debt instruments, distinguished only by their date of maturity \(t\).20 The debt instrument which matures at time \(t\) is assumed to make an uncertain payoff given by the random variable \(I_{t}\) at time \(t\). There are no other payments from each debt instrument (i.e., they are 'zero coupon bonds').21 These instruments are assumed to be actively traded in an efficient market. The current price of this debt instrument in the market is its present value \(\mathbb{V}_{0\to t}(I_{t})\). It follows that the cost of capital to maturity of each of these instruments is the ratio of the expected future payoff \(\mathbb{E}_{0}(I_{t})\) to the current market price \(\mathbb{V}_{0\to t}(I_{t})\):

Footnote 20: All other characteristics of the debt instruments, such as the credit rating, or seniority, are assumed to be chosen so that the traded debt instruments are a perfect substitute for the debt instruments of the firm whose cost of capital we are estimating.

Footnote 21: This is without loss of generality as a regular bond paying coupons can be constructed out of a series of zero-coupon bonds.
\[\mathbb{R}_{0\to t}(I_{t})=\frac{\mathbb{E}_{0}(I_{t})}{\mathbb{V}_{0\to t}(I_{t})} \tag{41}\]

At time zero the regulated firm is assumed to hold a portfolio of debt instruments, with a volume of the instrument maturing at time \(t\) given by the quantity \(D_{0\to t}\) for \(t=1,2,\ldots\). From the linearity of the value function, the value of this portfolio at time zero can be directly derived from the current observed price of each debt instrument:

\[\sum_{t=1}\mathbb{V}_{0\to t}(D_{0\to t}I_{t})=\sum_{t=1}D_{0\to t}\mathbb{V}_{0\to t}(I_{t}) \tag{42}\]

At the end of the first period, the debt instrument maturing at time \(t=1\) matures, making the payoff \(D_{0\to 1}I_{1}\). In addition, at this time the firm is assumed to be able to make changes to its portfolio of debt instruments. We can represent this by the assumption that the firm sells its entire portfolio purchased at time zero, which has the value at time \(1\) of \(\sum_{t=2}\mathbb{V}_{1\to t}(D_{0\to t}I_{t})\), and then purchases a new portfolio of debt instruments, given by the quantity \(D_{1\to t}\) of the instrument \(I_{t}\), \(t=2,3,\ldots\). The cost of purchasing this new portfolio (which is also its current value) is \(\sum_{t=2}\mathbb{V}_{1\to t}(D_{1\to t}I_{t})\). Ignoring transactions costs, the net payoff at time one is therefore as follows:

\[X_{1}^{D}=D_{0\to 1}I_{1}+\sum_{t=2}D_{0\to t}\mathbb{V}_{1\to t}(I_{t})-\sum_{t=2}D_{1\to t}\mathbb{V}_{1\to t}(I_{t}) \tag{43}\]

Writing \(\mathbb{V}_{1}^{D}=\sum_{t=2}D_{1\to t}\mathbb{V}_{1\to t}(I_{t})\) for the value of the new portfolio, the combined payoff is therefore:

\[X_{1}^{D}+\mathbb{V}_{1}^{D}=D_{0\to 1}I_{1}+\sum_{t=2}D_{0\to t}\mathbb{V}_{1\to t}(I_{t}) \tag{44}\]

Importantly, the present value of the stream of debt payments at time zero depends _only_ on the value of the debt portfolio that is held at time zero (all future changes in the debt portfolio are of no consequence). This is demonstrated in appendix D.
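Equations 42-44 can be traced through with a toy portfolio. In the sketch below all prices, quantities and the realised payoff are assumed purely for illustration:

```python
# Toy walk-through of equations 42-44. All prices, quantities and the
# realised payoff I_1 are assumed for illustration.

V0 = {1: 0.90, 2: 0.80, 3: 0.70}   # time-0 prices V_{0->t}(I_t), per unit face
V1 = {2: 0.86, 3: 0.77}            # time-1 prices V_{1->t}(I_t)
I1 = 0.95                          # realised payoff of the maturing instrument

D0 = {1: 100.0, 2: 100.0, 3: 100.0}   # portfolio held from time 0
D1 = {2: 120.0, 3: 90.0}              # portfolio chosen at time 1

# Equation 42: value of the time-0 portfolio from observed prices
value0 = sum(D0[t] * V0[t] for t in D0)

# Equation 43: net debt cash-flow at time 1 = maturing payoff, plus
# proceeds of selling the old portfolio, less the cost of the new one
X1D = D0[1] * I1 + sum(D0[t] * V1[t] for t in V1) \
                 - sum(D1[t] * V1[t] for t in V1)

# Equation 44: combined payoff X1D + V1D; the new-portfolio terms cancel,
# so the combined payoff depends only on the time-0 holdings
V1D = sum(D1[t] * V1[t] for t in V1)
combined = X1D + V1D
print(round(value0, 2), round(X1D, 2), round(combined, 2))
```

The last line illustrates the point made above: the combined payoff, and hence the time-zero value of the debt stream, is unaffected by the rebalancing choice \(D_{1\to t}\).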
### Estimating the cost of debt in a one-year regulatory period

Let's assume that the streams of payments to equityholders and debtholders are denoted \((X_{1}^{E},X_{2}^{E},\ldots)\) and \((X_{1}^{D},X_{2}^{D},\ldots)\), respectively. The total cash-flow of the firm is assumed to be paid out in total to equityholders and debtholders each period:

\[\forall t,\;X_{t}=X_{t}^{E}+X_{t}^{D} \tag{45}\]

As above, the cash-flow stream to debtholders \(X_{t}^{D}\) is determined by the portfolio of debt instruments held by the firm (that is, any payments from maturing debt instruments, plus the proceeds of sales of old debt instruments, less purchases of new debt instruments).

Let's return now to the case of a one-year regulatory period. As we have seen, the standard regulatory practice involves estimating a single cost of capital for the firm as a whole \(X_{1}+RAB_{1}\), which we labelled \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\). It follows immediately from equation 45 that the cost of capital for the combined cash-flow of the firm \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\) can be expressed as a weighted average of the cost of capital for the equity payment stream \(\mathbb{R}_{0\to 1}(X_{1}^{E}+\mathbb{V}_{1}^{E})\) and the cost of capital for the debt payment stream \(\mathbb{R}_{0\to 1}(X_{1}^{D}+\mathbb{V}_{1}^{D})\), using either of the weighted average formulae in equations 15 or 17.

Let's put aside the circularity problems discussed above and explore what we can say about the estimation of the cost of debt as a step toward the estimation of the cost-of-capital for the firm as a whole. The relevant cost of debt is the cost of capital for the combined debt cash-flow \(X_{1}^{D}+\mathbb{V}_{1}^{D}\).
We can express this cost of debt as the weighted average of the cost of capital of the instruments in the portfolio:

\[\mathbb{R}_{0\to 1}(X_{1}^{D}+\mathbb{V}_{1}^{D})=\frac{\mathbb{E}_{0}(X_{1}^{D}+\mathbb{V}_{1}^{D})}{\mathbb{V}_{0\to 1}(X_{1}^{D}+\mathbb{V}_{1}^{D})}=D_{0\to 1}\frac{\mathbb{E}_{0}(I_{1})}{\mathbb{V}_{0}^{D}}+\sum_{t=2}D_{0\to t}\frac{\mathbb{E}_{0}(\mathbb{V}_{1\to t}(I_{t}))}{\mathbb{V}_{0}^{D}}\]
\[=D_{0\to 1}\mathbb{R}_{0\to 1}(I_{1})\frac{\mathbb{V}_{0\to 1}(I_{1})}{\mathbb{V}_{0}^{D}}+\sum_{t=2}D_{0\to t}\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to t}(I_{t}))\frac{\mathbb{V}_{0\to t}(I_{t})}{\mathbb{V}_{0}^{D}} \tag{46}\]

The weighting on each instrument in this expression depends on the ratio of the current price for the debt instrument in the market \(\mathbb{V}_{0\to t}(I_{t})\) to the total value of the portfolio \(\mathbb{V}_{0}^{D}\). In principle both of these can be easily observed. In addition, the relevant cost of capital for the one-year debt instrument \(I_{1}\) is the ratio of the expected payoff in one year \(\mathbb{E}_{0}(I_{1})\) to the current market price \(\mathbb{V}_{0}(I_{1})\):

\[\mathbb{R}_{0\to 1}(I_{1})=\frac{\mathbb{E}_{0}(I_{1})}{\mathbb{V}_{0}(I_{1})} \tag{47}\]

In principle the expected payoff on the debt instrument can be estimated if we can estimate the probability of default and the likely recovery of funds to debtholders in the event of default. As noted above, the current market price can be easily observed. As a consequence, the relevant cost of capital for a one-year debt instrument can, in principle, be estimated.

But what about the relevant cost of capital for the longer-term debt instruments in the portfolio? For longer-term debt instruments the relevant cost of capital for each instrument in equation 46 is the cost of capital associated with holding the long-term debt instrument from time zero to time one: \(\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to t}(I_{t}))\).
This is equal to the expected future (time one) price of the instrument over the current price:

\[\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to t}(I_{t}))=\frac{\mathbb{E}_{0}(\mathbb{V}_{1\to t}(I_{t}))}{\mathbb{V}_{0\to 1}(\mathbb{V}_{1\to t}(I_{t}))}=\frac{\mathbb{E}_{0}(\mathbb{V}_{1\to t}(I_{t}))}{\mathbb{V}_{0\to t}(I_{t})} \tag{48}\]

Unfortunately, the expected future price of the debt instrument \(\mathbb{E}_{0}(\mathbb{V}_{1\to t}(I_{t}))\) cannot be easily observed in the market. This value is related to forecasts of future interest rates and investor tolerance of risk.

These observations are illustrated in figure 3. Let's assume that the regulated firm holds a portfolio of debt instruments maturing in one, two and three years, labelled \(I_{1}\), \(I_{2}\), and \(I_{3}\). Each of these instruments has a 'face value' of, say, $1000. The current price of these three instruments is $900, $800, and $700, say. The regulator can in principle estimate the expected payout on the instrument maturing in one year, \(I_{1}\). Although it has a face value of $1000, the actual expected payout will be somewhat less, reflecting the probability of default and the recovery expected in the event of default. Let's suppose that the expected future payout is, say, $950. In this case the cost of capital for this instrument is \(\$950/\$900-1=5.56\%\).

But in the case of the longer-term instruments \(I_{2}\) and \(I_{3}\) it is not easy to estimate the price of these instruments at time one. As a result, estimating the cost of capital between time 0 and time 1 for these instruments is not straightforward.

We are now in a position to answer one of the questions asked at the outset: Can we estimate the cost of debt for a regulated firm by observing the appropriate yield-to-maturity on a corporate bond of the right term? If so, what is the right term?
Figure 3: It is not, in general, possible to estimate the cost of capital for a portfolio of debt instruments using currently observed market data.

This analysis provides no support for the assertion that, in standard regulatory practice, we can estimate the cost of debt for a regulated firm by observing the yield-to-maturity on a corporate bond of the right term, for the following reasons: * In standard, historic regulatory practice (as summarised in equation 23 or equations 25 and 26), the regulator is interested in estimating the cost of capital of the combined cash-flow for the firm as a whole. If we seek to estimate this cost of capital as a weighted average of the cost of equity and the cost of debt, the relevant cost of capital for debt is the cost of capital for the debt payment stream of the firm, \(\mathbb{R}_{0\to 1}(X_{1}^{D}+\mathbb{V}_{1}^{D})\). This depends on the _portfolio of debt instruments_ held by the regulated firm (and not just the cost of capital for a single bond). * In general, the debt portfolio held by the regulated firm will include instruments with a range of terms. Although it is relatively easy to estimate the cost of capital for a debt instrument maturing in one year, it is not straightforward to estimate the one-year cost of capital for debt instruments maturing in future years. * The current yield-to-maturity of a debt instrument is the ratio of the 'face value' to the current price. But the relevant cost of capital \(\mathbb{R}_{0\to t}(I_{t})\) is the ratio of the expected future payout to the current price: \[\mathbb{R}_{0\to t}(I_{t})=\frac{\mathbb{E}_{0}(I_{t})}{\mathbb{V}_{0\to t}(I_{t})}\] (49) For any debt instrument other than a risk-free instrument, the expected future payout \(\mathbb{E}_{0}(I_{t})\) is less than the face value due to the probability of default. Therefore, the yield-to-maturity on a bond is an over-estimate of the cost of capital for that bond.
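To see the last point concretely, consider a hypothetical one-year bond (the default probability, recovery, and price below are assumptions for illustration): default risk pulls the expected payout below the face value, while the yield-to-maturity is computed from the full face value.

```python
# Hypothetical one-year corporate bond.
face = 1000.0
price = 920.0
p_default = 0.04    # probability of default over the year (assumed)
recovery = 400.0    # payout to holders in the event of default (assumed)

# Expected payout E_0(I_t): the numerator of equation 49
expected_payout = (1 - p_default) * face + p_default * recovery

# Cost of capital (net) vs. yield-to-maturity (net)
cost_of_capital = expected_payout / price - 1
ytm = face / price - 1   # uses the face value, ignoring default risk
```

Because `expected_payout < face` whenever default is possible, `ytm` always exceeds `cost_of_capital` for risky debt.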
If the cost of capital for the debt portfolio of the regulated firm cannot be easily observed (as appears to be normally assumed) it follows that there appears to be little value in separating the cost of capital for the firm as a whole into a weighted average of the cost of equity and the cost of debt. This leaves two possibilities: We could estimate the cost of capital for the firm as a whole (as before) - or, more strictly, the component costs of capital for the firm as a whole, as set out in equation 28. Alternatively, we could implement a version of the regulatory process which only requires estimation of the (components of the) costs of equity. To see how this might be achieved, let's assume that the cash-flow stream to debt can be treated as exogenous - determined outside the regulatory process. The regulatory process then determines the equity cash-flow \(X_{1}^{E}\) and the equity asset base \(RAB_{1}^{E}\). The required version of the Building Block Model equations can be derived as follows. 
First, equation 27 can be expanded as follows: \[RAB_{0} =\mathbb{V}_{0}(X_{1}+RAB_{1})=\mathbb{V}_{0}(X_{1}^{E}+RAB_{1}^{E}+X_{1}^{D}+\mathbb{V}_{1}^{D})\] \[=\frac{\mathbb{E}_{0}(X_{1}^{E})}{\mathbb{R}_{0\to 1}(X_{1}^{E})}+\frac{\mathbb{E}_{0}(RAB_{1}^{E})}{\mathbb{R}_{0\to 1}(RAB_{1}^{E})}+\mathbb{V}_{0\to 1}(X_{1}^{D}+\mathbb{V}_{1}^{D}) \tag{50}\] It follows that the allowed cash-flow stream to equity of the regulated firm should be determined as follows: \[\mathbb{E}_{0}(X_{1}^{E}) =RAB_{0}\times\mathbb{R}_{0\to 1}(X_{1}^{E})-\mathbb{E}_{0}(RAB_{1}^{E})\frac{\mathbb{R}_{0\to 1}(X_{1}^{E})}{\mathbb{R}_{0\to 1}(RAB_{1}^{E})}\] \[+\mathbb{V}_{0\to 1}(X_{1}^{D}+\mathbb{V}_{1}^{D})\mathbb{R}_{0\to 1}(X_{1}^{E})\] \[=RAB_{0}\times\mathbb{R}_{0\to 1}(X_{1}^{E})\] \[-\left(\mathbb{E}_{0}(RAB_{1}^{E})-\mathbb{R}_{0\to 1}(RAB_{1}^{E})\sum_{t=1}D_{0\to t}\mathbb{V}_{0\to t}(I_{t})\right)\frac{\mathbb{R}_{0\to 1}(X_{1}^{E})}{\mathbb{R}_{0\to 1}(RAB_{1}^{E})} \tag{51}\] This is a further variation on the Building Block Model, extending equation 28 by replacing the expected closing asset base \(\mathbb{E}_{0}(RAB_{1}^{E})\) with an expression that subtracts the current value of the debt portfolio: \(\mathbb{E}_{0}(RAB_{1}^{E})-\mathbb{R}_{0\to 1}(RAB_{1}^{E})\sum_{t=1}D_{0\to t}\mathbb{V}_{0\to t}(I_{t})\). This last term (\(\sum_{t=1}D_{0\to t}\mathbb{V}_{0\to t}(I_{t})\)) can be directly observed from market data. We have seen that, in the case of a single-year regulatory period, if we use a single cost of capital for the combined cash-flow of the firm it is possible to write that cost of capital as a weighted average of the (combined) cost of capital for debt and the (combined) cost of capital for equity.
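This single-period weighted-average identity can be checked numerically. In the sketch below all expected payoffs and market values are assumed for illustration, and costs of capital are gross returns \(\mathbb{E}/\mathbb{V}\) as in the paper's notation:

```python
# Single-period check: the cost of capital for the combined cash-flow
# equals the value-weighted average of the equity and debt costs of
# capital.  Hypothetical time-1 expected payoffs and time-0 values.
equity_payoff, equity_value = 60.0, 50.0    # E_0(X_1^E + RAB_1^E), its value
debt_payoff, debt_value = 105.0, 100.0      # E_0(X_1^D + V_1^D), V_0^D

r_equity = equity_payoff / equity_value     # gross cost of equity
r_debt = debt_payoff / debt_value           # gross cost of debt
total_value = equity_value + debt_value

# Combined cost of capital, directly and as a weighted average
r_combined = (equity_payoff + debt_payoff) / total_value
wacc = (r_equity * equity_value + r_debt * debt_value) / total_value
```

The identity is exact in the one-period case because the value weights sum to one; as the next passage shows, no analogous identity holds for the single multi-year parameter.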
If we seek to implement a version of the regulatory process in which we estimate separate costs of capital for the components (as in equation 28) then again we can express the costs of capital for those components as a weighted average of the corresponding component cost of capital for debt and equity separately. What about the case of a multi-year regulatory period? Let's suppose that we seek to estimate a single cost of capital for the entire regulatory period (as in equation 30). Can we express this cost of capital parameter \(R\) as a weighted average of the corresponding parameter for equity and for debt? The answer is no. Let's suppose that the parameter \(R\) satisfies equation 31 and the corresponding parameters \(R^{E}\) and \(R^{D}\) satisfy the corresponding equations for debt and equity. In the case where \(T=2\) this yields the following: \[\frac{\mathbb{E}_{0}(X_{1})}{R}+\frac{\mathbb{E}_{0}(X_{2}+RAB_{2})}{R^{2}}\] \[=\frac{\mathbb{E}_{0}(X_{1}^{E})}{R^{E}}+\frac{\mathbb{E}_{0}(X_{2}^{E}+RAB_{2}^{E})}{R^{E^{2}}}+\frac{\mathbb{E}_{0}(X_{1}^{D})}{R^{D}}+\frac{\mathbb{E}_{0}(X_{2}^{D}+\mathbb{V}_{2}^{D})}{R^{D^{2}}} \tag{52}\] In this case the parameter \(R\) cannot be expressed as a weighted average of \(R^{D}\) and \(R^{E}\). This calls into question the common use of a weighted-average cost of capital in the context of a multi-year regulatory period.

## Conclusion

Cost of capital issues are amongst the most controversial in regulatory practice. In my view, these debates have been made more clouded and confused by the lack of a strong theoretical foundation. In the absence of a clear, strong, theoretical foundation, regulators and courts are not in a position to make reasoned, rational choices between one approach or methodology and another. This tends to prolong and perpetuate disputes. Cost of capital for regulatory purposes tends to draw heavily and uncritically on the corporate finance literature.
But the corporate finance context tends to be quite different from the regulatory context, with different objectives and assumptions. Approaches which are commonplace in the corporate finance literature (such as the estimation of the cost of capital as a weighted average of the cost of equity and the cost of debt) do not necessarily carry over to the regulatory context. This article seeks to clarify and formalise the theory of cost of capital for regulatory purposes. Because the cost of capital for a regulated firm depends strongly on the precise formulation of the regulatory approach, this article has sought to be clear about the standard regulatory approach, and various possible variations. A starting point of this analysis is the assumption that (putting aside incentive concerns), a central objective of all cost-based regulation is the achievement of NPV=0. Errors in the estimation of the cost of capital potentially undermine this objective. There are different ways of setting an allowed revenue stream to achieve an overall NPV of zero. Those different approaches to regulating will, in general, require a different corresponding cost of capital (or potentially, multiple different costs of capital). If there were one cost of capital that was materially easier to accurately estimate than others, we might favour the corresponding approach to regulation. But this is not obviously the case. This analysis has suggested the following problems with, and potential improvements to, standard regulatory practice: * In standard regulatory practice (in the one period case, as summarised in equation 23 or equations 25 and 26), the regulatory process depends on estimates of a single cost of capital for the regulated firm as a whole, which we have referred to as the combined cost of capital, denoted \(\mathbb{R}_{0\to 1}(X_{1}+RAB_{1})\).
This single cost of capital depends on the level of both the one-period cash-flow \(\mathbb{E}_{0}(X_{1})\) and the closing asset base \(\mathbb{E}_{0}(RAB_{1})\). But this cost of capital is itself an input to the determination of the regulated cash-flow allowance, giving rise to a problem of circularity. At a minimum this muddies the problem of estimation of the cost of capital under this regulatory process. The regulatory process could be made clearer and cleaner by changing the regulatory process so that it makes use of a separate cost of capital for \(X_{1}\) and for \(RAB_{1}\) individually (which we have referred to as the component costs of capital). In the context where the closing asset base \(RAB_{1}\) is chosen by the regulator, the cost of capital for this component is just the risk-free rate. But this still leaves the problem of estimating the cost of capital for \(X_{1}\). If this could be done effectively it would result in more effective achievement of the fundamental objective of NPV=0. * In the context of a multi-year regulatory period, the standard regulatory approach is to use a single cost of capital for the entire regulatory period. This single cost of capital parameter is the solution to an 'internal rate of return' calculation which depends on the level of the cash-flow allowance \(E_{0}(X_{t})\) in each year of the regulatory period and the level of the closing asset base \(\mathbb{E}_{0}(RAB_{T})\). This cost of capital parameter suffers from the same problem of circularity mentioned above. Many regulators have argued that the appropriate term of this cost of capital is equal to the length of the regulatory period. This practice is not supported in the theory set out here. The relevant single cost of capital parameter is a complicated mix of costs of capital of various terms, shorter than and equal to the length of the regulatory period. 
The regulatory process could be made clearer and cleaner by changing the regulatory process to use a different cost of capital for each component of the cash-flow \(X_{1},X_{2},\ldots\) over the period. There are different ways of setting the allowed revenue over the regulatory period, but, if the revenues are set annually, the relevant cost of capital for each cash-flow individually is a one-year rate (either the forward rate, or the out-turn rate, depending on the approach used). But, in any case, the relevant term of the cost of capital is not equal to the length of the regulatory period. * The standard regulatory approach estimates the cost of capital as the weighted average of the cost of equity and the cost of debt. The cost of debt is estimated in different ways, but one typical way is to estimate the cost of debt as the yield-to-maturity on a corporate bond of a particular credit rating and term. The analysis set out here does not support that practice. The relevant cost of debt is the cost of capital for the debt portfolio of the regulated firm. Although the cost of capital for debt instruments maturing in one year may (in theory) be estimated drawing on knowledge of the current price of those instruments in the market, the cost of capital for debt instruments maturing in future years cannot be observed using current market data. As a result, it is not, in general, possible to easily estimate the cost of capital for the debt portfolio of the regulated firm. This calls into question the value of separately estimating the cost of equity and the cost of debt and combining them to form an estimate of the cost of capital for the firm as a whole. If neither the cost of equity nor the cost of debt can be easily estimated, it is not clear that this approach offers an improvement over simply estimating the cost of capital for the firm as a whole.
It is possible to construct a regulatory process in which there is no need to estimate a cost of debt (only a cost of equity would be required). This regulatory process would isolate and focus on the cash-flow stream to the equity of the regulated firm (summarised in equation 51). However, it is not clear that this approach offers an improvement over simply estimating the cost of capital for the firm as a whole. * In the context of a multi-year regulatory period, the single cost of capital parameter cannot be expressed as a weighted average of a similar parameter for equity and for debt. Again, this calls into question the value of separately estimating the cost of equity and the cost of debt and combining them to form an estimate of the cost of capital for the firm as a whole. There is, at present, no known mechanism for achieving the fundamental objective of regulation (NPV=0) without estimating some form of the cost of capital. We cannot know how to improve our estimates of the cost of capital for regulatory purposes without a clear understanding of the underlying theory. In my view, clarifying and articulating that theory - as set out in this article - is at least a start in placing regulatory practice on a sound footing going forward.

## Appendix A Derivation of CAPM

This appendix derives the CAPM using the value functional and assumptions on the preferences of the representative investor. Let's assume that the representative investor has mean-variance preferences. In other words, let's assume that the utility of the representative investor from a certain income \(Y\) today (at time 0) and an uncertain income \(X\) at time 1, is given by the following: \[U(X)=Y+\delta(E[X]-\alpha Var[X]) \tag{53}\] Here \(\alpha\) is a parameter which reflects the degree of risk aversion of the investor, and \(\delta\) is a parameter which reflects the rate of time preference between time 0 and time 1.
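A minimal numeric sketch of the mean-variance utility in equation 53 (the parameter values and the payoff distribution below are arbitrary illustrative assumptions):

```python
import statistics

# Equation 53 with illustrative numbers: mean-variance utility of the
# representative investor over an uncertain time-1 payoff X.
Y = 100.0       # certain income today
delta = 0.95    # time-preference parameter (assumed)
alpha = 0.01    # risk-aversion parameter (assumed)

# Equally likely outcomes for the payoff X at time 1
outcomes = [80.0, 100.0, 120.0]
mean_x = statistics.fmean(outcomes)
var_x = statistics.pvariance(outcomes)  # population variance

# U(X) = Y + delta * (E[X] - alpha * Var[X])
utility = Y + delta * (mean_x - alpha * var_x)
```

Note that a mean-preserving spread of the outcomes raises `var_x` and therefore lowers `utility`, which is what drives the risk adjustment in the pricing equation 55.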
Let's suppose the set of all assets in the economy (the so-called 'market portfolio') is represented in the payoff \(M\) at time 1. This is the set of all assets which must be held by the investors in equilibrium. Now consider a small change to the equilibrium that involves the addition of a small amount \(\epsilon\) of an asset \(X\). In equilibrium this asset must be priced in a way such that the purchase of a small amount does not change the utility of the representative investor holding the market portfolio. The utility from purchasing the portfolio \(M+\epsilon X\) at time 0 and holding it to time 1 is as follows: \[U(M+\epsilon X)=\delta(\mathbb{E}[M+\epsilon X]-\alpha Var[M+\epsilon X])-\mathbb{V}(M+\epsilon X) \tag{54}\] The first order condition with respect to \(\epsilon\) (setting \(\epsilon=0\)) is as follows: \[\mathbb{V}(X)=\delta\mathbb{E}[X]-2\alpha\delta Cov(M,X) \tag{55}\] The two parameters \(\alpha\) and \(\delta\) can be determined using the results: (a) when \(X\) has a certain payoff, \(\mathbb{V}(X)=\Delta_{F}\mathbb{E}(X)\), where \(\Delta_{F}=R_{F}^{-1}\) is the inverse of the risk-free cost of capital and (b) in the case where \(X\) is a share of the market portfolio, \(\mathbb{V}(M)=\Delta_{M}\mathbb{E}(M)\), where \(\Delta_{M}=\mathbb{R}_{0\to 1}(M_{1})^{-1}\) is the inverse of the cost of capital for the market portfolio. This yields: \[\Delta_{X}=\Delta_{F}-(\Delta_{F}-\Delta_{M})\beta(X) \tag{56}\] Here \(\Delta_{X}=\mathbb{R}_{0\to 1}(X_{1})^{-1}\) is the inverse of the cost of capital for the cash-flow \(X_{1}\). Equation 56 is the version of the CAPM in this context.

## Appendix B The Fundamental Theorem

This appendix sets out a proof of the Fundamental Theorem of Regulation.
**Theorem 1**.: _Suppose that, at the start of a regulatory period, in year \(t\), the regulator observes \(RAB_{t}\) and chooses \(X_{t+1},X_{t+2},\ldots,X_{t+T}\) and \(RAB_{t+T}\), and the parameters \(A_{t\to t+1},A_{t\to t+2},\ldots,A_{t\to t+T}\) and \(B_{t\to t+T}\) to satisfy the following two conditions:_ \[\mathbb{V}_{t}(X_{t+1},X_{t+2},\ldots,X_{t+T}+RAB_{t+T})\] \[=\frac{\mathbb{E}_{t}(X_{t+1})}{A_{t\to t+1}}+\frac{\mathbb{E}_{t}(X_{t+2})}{A_{t\to t+2}}+\ldots+\frac{\mathbb{E}_{t}(X_{t+T})}{A_{t\to t+T}}+\frac{\mathbb{E}_{t}(RAB_{t+T})}{B_{t\to t+T}}\] \[=RAB_{t} \tag{57}\] _And, in addition, at some point in the future \(s\) (e.g., at the end of the life of the firm), the regulator ensures that \(RAB_{s}=\mathbb{V}_{s}(X_{s+1},X_{s+2},\ldots)\). Then, at time \(t\), the asset base is equal to the present value of the future stream of cash-flows:_ \[\mathbb{V}_{t}(X_{t+1},X_{t+2},\ldots)=RAB_{t} \tag{58}\] _It follows that the firm achieves NPV=0._ Proof.: By backward induction.
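The backward-induction argument can be illustrated with a numeric sketch. For simplicity the sketch assumes deterministic cash-flows and a flat one-year gross cost of capital (all values below are hypothetical): applying the one-period condition repeatedly recovers the present value of the whole stream.

```python
# Numeric sketch of Theorem 1 with one-year steps, deterministic
# cash-flows and a flat discount rate.  All numbers are hypothetical.
rate = 1.06                       # one-year gross cost of capital (6% p.a.)
cash_flows = [30.0, 30.0, 30.0]   # X_1, X_2, X_3
rab_final = 80.0                  # terminal asset base RAB_3

# Backward induction: RAB_{t-1} = (X_t + RAB_t) / rate
rab = rab_final
for x in reversed(cash_flows):
    rab = (x + rab) / rate
rab_0 = rab

# Direct present value of the cash-flow stream plus the terminal asset base
pv = sum(x / rate ** (t + 1) for t, x in enumerate(cash_flows)) \
     + rab_final / rate ** len(cash_flows)
```

Unwinding the recursion term by term shows algebraically that `rab_0` and `pv` are the same expression, which is the content of equation 58 in this special case.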
## Appendix C Five year regulatory period

Let's suppose that the regulator follows the following practice: At the start of each year of the regulatory period, that is at time \(t-1\), (\(t=1,\ldots T\)), the values of the parameters \(A_{t-1\to t}\) and \(B_{t-1\to t}\) are determined and the regulated cash-flow allowance \(X_{t}\) and the closing asset base \(RAB_{t}\) are set to satisfy the following equation: \[\mathbb{E}_{t-1}(X_{t})=A_{t-1\to t}RAB_{t-1}-\mathbb{E}_{t-1}(RAB_{t})\frac{A_{t-1\to t}}{B_{t-1\to t}} \tag{59}\] This can, of course, be re-written as the requirement that, at the start of each year of the regulatory period, the regulated cash-flow allowance \(X_{t}\) and the closing asset base \(RAB_{t}\) are set to satisfy the following: \[RAB_{t-1}=\frac{\mathbb{E}_{t-1}(X_{t})}{A_{t-1\to t}}+\frac{\mathbb{E}_{t-1}(RAB_{t})}{B_{t-1\to t}} \tag{60}\] Expanding this equation over the five-year (say) regulatory period starting at time \(t=1\) we have the following: \[RAB_{0} =\frac{\mathbb{E}_{0}(X_{1})}{A_{0\to 1}}+\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\frac{\mathbb{E}_{1}(X_{2})}{A_{1\to 2}}\right]\] \[+\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\frac{\mathbb{E}_{1}}{B_{1\to 2}}\left[\frac{\mathbb{E}_{2}(X_{3})}{A_{2\to 3}}\right]\right]\] \[+\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\frac{\mathbb{E}_{1}}{B_{1\to 2}}\left[\frac{\mathbb{E}_{2}}{B_{2\to 3}}\left[\frac{\mathbb{E}_{3}(X_{4})}{A_{3\to 4}}\right]\right]\right]\] \[+\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\frac{\mathbb{E}_{1}}{B_{1\to 2}}\left[\frac{\mathbb{E}_{2}}{B_{2\to 3}}\left[\frac{\mathbb{E}_{3}}{B_{3\to 4}}\left[\frac{\mathbb{E}_{4}(X_{5})}{A_{4\to 5}}\right]\right]\right]\right]\] \[+\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\frac{\mathbb{E}_{1}}{B_{1\to 2}}\left[\frac{\mathbb{E}_{2}}{B_{2\to 3}}\left[\frac{\mathbb{E}_{3}}{B_{3\to 4}}\left[\frac{\mathbb{E}_{4}(RAB_{5})}{B_{4\to 5}}\right]\right]\right]\right] \tag{61}\] The question is how to choose the values of the cost-of-capital parameters \(A_{0\to 1},A_{1\to
2},\ldots,A_{T-1\to T}\) (and similarly for \(B\)) in order to satisfy equation 57. To begin, we will choose \(A_{t-1\to t}=\mathbb{R}_{t-1\to t}(X_{t})\), \(t=1,\ldots T\). The first term in equation 61 then becomes: \[\frac{\mathbb{E}_{0}(X_{1})}{A_{0\to 1}}=\frac{\mathbb{E}_{0}(X_{1})}{\mathbb{R}_{0\to 1}(X_{1})}=\mathbb{V}_{0\to 1}(X_{1}) \tag{62}\] As required. For the second term we choose: \[B_{0\to 1}=\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to 2}(X_{2})) \tag{63}\] Then the second term becomes: \[\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\frac{\mathbb{E}_{1}(X_{2})}{A_{1\to 2}}\right]=\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\mathbb{V}_{1\to 2}(X_{2})\right]=\mathbb{V}_{0\to 2}(X_{2}) \tag{64}\] As required. For the third term we choose: \[B_{1\to 2}=\frac{\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to 3}(X_{3}))\mathbb{R}_{1\to 2}(\mathbb{V}_{2\to 3}(X_{3}))}{B_{0\to 1}} \tag{65}\] The third term then becomes: \[\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\frac{\mathbb{E}_{1}}{B_{1\to 2}}\left[\mathbb{V}_{2\to 3}(X_{3})\right]\right]\] \[=\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\frac{B_{0\to 1}}{\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to 3}(X_{3}))}\frac{\mathbb{E}_{1}(\mathbb{V}_{2\to 3}(X_{3}))}{\mathbb{R}_{1\to 2}(\mathbb{V}_{2\to 3}(X_{3}))}\right]\] \[=\frac{\mathbb{E}_{0}}{\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to 3}(X_{3}))}\left[\mathbb{V}_{1\to 2}(\mathbb{V}_{2\to 3}(X_{3}))\right]\] \[=\mathbb{V}_{0\to 3}(X_{3}) \tag{66}\] For the fourth term we choose: \[B_{2\to 3}=\frac{\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to 4}(X_{4}))\mathbb{R}_{1\to 2}(\mathbb{V}_{2\to 4}(X_{4}))\mathbb{R}_{2\to 3}(\mathbb{V}_{3\to 4}(X_{4}))}{B_{0\to 1}B_{1\to 2}} \tag{67}\] (We will omit the algebra in this case for brevity).
Similarly, for the fifth term we choose: \[B_{3\to 4}=\frac{\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to 5}(X_{5}))\mathbb{R}_{1\to 2}(\mathbb{V}_{2\to 5}(X_{5}))\mathbb{R}_{2\to 3}(\mathbb{V}_{3\to 5}(X_{5}))\mathbb{R}_{3\to 4}(\mathbb{V}_{4\to 5}(X_{5}))}{B_{0\to 1}B_{1\to 2}B_{2\to 3}} \tag{68}\] The fifth term then becomes: \[\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\frac{\mathbb{E}_{1}}{B_{1\to 2}}\left[\frac{\mathbb{E}_{2}}{B_{2\to 3}}\left[\frac{\mathbb{E}_{3}}{B_{3\to 4}}\left[\mathbb{V}_{4\to 5}(X_{5})\right]\right]\right]\right]\] \[=\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\frac{\mathbb{E}_{1}}{B_{1\to 2}}\left[\frac{B_{0\to 1}B_{1\to 2}}{\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to 5}(X_{5}))\mathbb{R}_{1\to 2}(\mathbb{V}_{2\to 5}(X_{5}))}\left[\frac{\mathbb{E}_{2}[\mathbb{V}_{3\to 5}(X_{5})]}{\mathbb{R}_{2\to 3}(\mathbb{V}_{3\to 5}(X_{5}))}\right]\right]\right]\] \[=\frac{\mathbb{E}_{0}}{B_{0\to 1}}\left[\frac{B_{0\to 1}}{\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to 5}(X_{5}))}\left[\frac{\mathbb{E}_{1}[\mathbb{V}_{2\to 5}(X_{5})]}{\mathbb{R}_{1\to 2}(\mathbb{V}_{2\to 5}(X_{5}))}\right]\right]\] \[=\frac{\mathbb{E}_{0}[\mathbb{V}_{1\to 5}(X_{5})]}{\mathbb{R}_{0\to 1}(\mathbb{V}_{1\to 5}(X_{5}))}\] \[=\mathbb{V}_{0\to 5}(X_{5}) \tag{69}\] The choice of \(B_{4\to 5}\) follows similarly from the algebra.

## Appendix D Changes in the debt portfolio

The total payoff at time one from the debt portfolio \(X_{1}^{D}+\mathbb{V}_{1}^{D}\) is independent of any future changes in the debt portfolio. This can be easily demonstrated in a simple case: Let's suppose that, at time zero, the firm holds amount \(D_{0\to 1}\) of instrument \(I_{1}\) maturing at time \(1\), amount \(D_{0\to 2}\) of \(I_{2}\) maturing at time \(2\), and amount \(D_{0\to 3}\) of \(I_{3}\) maturing at time \(3\).
The value of this portfolio at time zero is: \[\mathbb{V}_{0}^{D}=D_{0\to 1}\mathbb{V}_{0\to 1}(I_{1})+D_{0\to 2} \mathbb{V}_{0\to 2}(I_{2})+D_{0\to 3}\mathbb{V}_{0\to 3}(I_{3}) \tag{70}\] At time \(t=1\) the debt instrument \(I_{1}\) matures paying the amount \(D_{0\to 1}I_{1}\). In addition, the firm can adjust its portfolio of the other debt instruments. It can sell its existing portfolio \(D_{0\to 2}\) of \(I_{2}\) and \(D_{0\to 3}\) of \(I_{3}\) and purchase the amount \(D_{1\to 2}\) of \(I_{2}\) and amount \(D_{1\to 3}\) of \(I_{3}\). The net cash-flow at time \(t=1\) is therefore: \[X_{1}^{D}=D_{0\to 1}I_{1}+(D_{0\to 2}-D_{1\to 2})\mathbb{V}_{1\to 2}(I_{2})+(D_{0\to 3}-D_{1\to 3})\mathbb{V}_{1\to 3}(I_{3}) \tag{71}\] Assuming there are no further changes in the portfolio, the firm receives a payout in the amount of \(D_{1\to 2}\) of \(I_{2}\) at time \(t=2\) and amount \(D_{1\to 3}\) of \(I_{3}\) at time \(t=3\). This payment stream has value at time \(t=1\) of \(D_{1\to 2}\mathbb{V}_{1\to 2}(I_{2})+D_{1\to 3}\mathbb{V}_{1\to 3}(I_{3})\). 
The total value of the debt payment stream at time zero is therefore independent of any subsequent changes in the portfolio: \[\mathbb{V}_{0\to 1}(X_{1}^{D}+\mathbb{V}_{1}^{D}) =D_{0\to 1}\mathbb{V}_{0\to 1}(I_{1})\] \[+(D_{0\to 2}-D_{1\to 2})\mathbb{V}_{0\to 1}(\mathbb{V}_{1\to 2}(I_{2}))\] \[+(D_{0\to 3}-D_{1\to 3})\mathbb{V}_{0\to 1}(\mathbb{V}_{1\to 3}(I_{3}))\] \[+D_{1\to 2}\mathbb{V}_{0\to 1}(\mathbb{V}_{1\to 2}(I_{2}))+D_{1\to 3} \mathbb{V}_{0\to 1}(\mathbb{V}_{1\to 3}(I_{3}))\] \[=D_{0\to 1}\mathbb{V}_{0\to 1}(I_{1})+D_{0\to 2} \mathbb{V}_{0\to 2}(I_{2})+D_{0\to 3}\mathbb{V}_{0\to 3}(I_{3})\] \[=\mathbb{V}_{0}^{D} \tag{72}\] The general proof is as follows: \[\mathbb{V}_{0\to 1}(X_{1}^{D}+\mathbb{V}_{1}^{D}) =\mathbb{V}_{0\to 1}(X_{1}^{D}+\sum_{t=2}\mathbb{V}_{1\to t}(D_{1\to t}I_{t}))\] \[=D_{0\to 1}\mathbb{V}_{0\to 1}(I_{1})+\sum_{t=2}D_{0\to t} \mathbb{V}_{0\to 1}(\mathbb{V}_{1\to t}(I_{t})))\] \[=\sum_{t=1}D_{0\to t}\mathbb{V}_{0\to t}(I_{t})=\mathbb{V}_{0}^{D} \tag{73}\] Although the present value of the combined cash-flow \(X_{1}^{D}+\mathbb{V}_{1}^{D}\) is independent of the future changes in the debt portfolio, this is not true for the cash-flows \(X_{1}^{D}\) and \(\mathbb{V}_{1}^{D}\) separately - these depend on the details of the changes in the debt portfolio that occur at time \(1\).
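The invariance in equation 72 can be checked with a small numeric example (holdings and time-1 prices below are assumed for illustration; the holding of \(I_{1}\) is one unit):

```python
# Check of equations 70-72: the combined time-1 payoff X_1^D + V_1^D is
# unchanged by rebalancing the portfolio at time 1, because whatever is
# bought appears in V_1^D and is exactly offset in the cash-flow X_1^D.
payoff_I1 = 1000.0      # I_1 matures at time 1 (one unit held; assumed)
price_I2_at_1 = 940.0   # V_{1->2}(I_2), assumed time-1 price
price_I3_at_1 = 870.0   # V_{1->3}(I_3), assumed time-1 price

d_old = {"I2": 2.0, "I3": 3.0}   # holdings carried into time 1
d_new = {"I2": 5.0, "I3": 1.0}   # holdings after rebalancing

def time1_total(d_after):
    """Net cash-flow X_1^D plus value V_1^D of the post-trade portfolio."""
    x1 = (payoff_I1
          + (d_old["I2"] - d_after["I2"]) * price_I2_at_1
          + (d_old["I3"] - d_after["I3"]) * price_I3_at_1)
    v1 = d_after["I2"] * price_I2_at_1 + d_after["I3"] * price_I3_at_1
    return x1 + v1

no_trade = time1_total(d_old)      # portfolio left unchanged
rebalanced = time1_total(d_new)    # portfolio rebalanced at time 1
```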
2310.10920
NuclearQA: A Human-Made Benchmark for Language Models for the Nuclear Domain
As LLMs have become increasingly popular, they have been used in almost every field. But as the application for LLMs expands from generic fields to narrow, focused science domains, there exists an ever-increasing gap in ways to evaluate their efficacy in those fields. For the benchmarks that do exist, a lot of them focus on questions that don't require proper understanding of the subject in question. In this paper, we present NuclearQA, a human-made benchmark of 100 questions to evaluate language models in the nuclear domain, consisting of a varying collection of questions that have been specifically designed by experts to test the abilities of language models. We detail our approach and show how the mix of several types of questions makes our benchmark uniquely capable of evaluating models in the nuclear domain. We also present our own evaluation metric for assessing LLM's performances due to the limitations of existing ones. Our experiments on state-of-the-art models suggest that even the best LLMs perform less than satisfactorily on our benchmark, demonstrating the scientific knowledge gap of existing LLMs.
Anurag Acharya, Sai Munikoti, Aaron Hellinger, Sara Smith, Sridevi Wagle, Sameera Horawalavithana
2023-10-17T01:27:20Z
http://arxiv.org/abs/2310.10920v1
# NuclearQA: A Human-Made Benchmark for Language Models for the Nuclear Domain

###### Abstract

As LLMs have become increasingly popular, they have been used in almost every field. But as the application for LLMs expands from generic fields to narrow, focused science domains, there exists an ever-increasing gap in ways to evaluate their efficacy in those fields. For the benchmarks that do exist, a lot of them focus on questions that don't require proper understanding of the subject in question. In this paper, we present NuclearQA, a human-made benchmark of 100 questions to evaluate language models in the nuclear domain, consisting of a varying collection of questions that have been specifically designed by experts to test the abilities of language models. We detail our approach and show how the mix of several types of questions makes our benchmark uniquely capable of evaluating models in the nuclear domain. We also present our own evaluation metric for assessing LLM's performances due to the limitations of existing ones. Our experiments on state-of-the-art models suggest that even the best LLMs perform less than satisfactorily on our benchmark, demonstrating the scientific knowledge gap of existing LLMs.

## 1 Introduction

With the current rapid advancement in the field of Large Language Models (LLMs), they have been increasingly used for a wide variety of tasks across several domains. Among them, one of the more popular domains in recent times has been the scientific domain Taylor et al. (2022); Cohan et al. (2020); Beltagy et al. (2019). There have been several models that have aimed to tackle the difficult task of scientific reasoning and understanding, and the results have been mixed, with these models performing well in some cases but not in others. Unfortunately, our ability to evaluate these models has been less than ideal due to lack of proper benchmarks.
While there exist numerous benchmarks for the fields of general question answering, commonsense reasoning, and so on, most of these draw from resources that already exist, like popular trivia show questions, high school and college notebooks and text, online news, and so on. But even then, the focus is mostly on generic and broad topics that can be used by all types of models, creating a dearth of such benchmarks for narrow, specific yet highly important sub-fields. Additionally, even when such benchmarks are created, they are often sourced from existing material that was meant to test humans, with not enough effort put into curating custom benchmarks that can accurately judge a model's abilities. Finally, in addition to creating and publishing benchmarks for others to use, we believe it is also essential to iron out in detail the entire process of how to create such benchmarks so that it will be easier for future researchers to replicate the process for other domains. The lack of proper benchmarks, of course, is not without reason. Creating a benchmark is a complicated and time-consuming process, and in fields like science, care needs to be taken to verify the benchmarks are properly balanced across a variety of competing criteria. They need to be balanced for difficulty, usefulness, and accuracy, with the benchmark needing to be challenging enough for current models while also being achievable in the near future, and be a good mix of questions that can truly assess the capabilities while staying within the range of the limits of current systems. In this paper, we introduce NuclearQA: a novel, expert-crafted benchmark for evaluating the scientific understanding of large language models in the nuclear domain, encompassing fields like physics, material science, chemistry, etc.
Unlike a lot of other benchmarks that use tests made for humans and adapt them for the models, we built our benchmark from scratch exclusively to test scientific understanding of LLMs. We not only present and describe the NuclearQA benchmark, but also lay out in full detail our approach of creating a high quality benchmark that can properly evaluate a model's scientific understanding. We show how we created a balanced benchmark to be a true test of understanding of nuclear-related science for LLMs. Additionally, we evaluate some of the state-of-the-art models with our questions and observe that even the best LLMs lack the scientific knowledge required to excel in our benchmark.

## 2 Related Works

There have been numerous works in the field of question answering for quite some time. While some of them focus on general question-answering abilities of models, others have focused on question answering (QA) of a particular domain.

### General QA Benchmarks

There have been numerous benchmarks that deal with the general question-answering abilities of models. Perhaps the most famous is the Stanford Question Answering Dataset (SQuAD) Rajpurkar et al. (2016), consisting of 100,000+ questions and a reading comprehension dataset. They contrast three types of tasks: reading comprehension (RC; read a passage, select a span that answers); Open-domain QA (answer a question from a large set of documents); and Cloze datasets (predict a missing word in a passage). Another pivotal work is the AI2 Reasoning Challenge (ARC) Clark et al. (2018). ARC consisted of a dataset of almost \(8,000\) science questions in English, and also included a set of questions that neither a retrieval-based algorithm nor a word co-occurrence algorithm was able to answer correctly. Likewise, the MCTest dataset Richardson et al. (2013) consists of a total of \(500\) stories and \(2000\) multiple-choice reading comprehension questions that were targeted at 7 year olds.
Additionally, there are several other datasets, like CommonsenseQA - 12K multiple-choice questions Talmor et al. (2018), NewsQA: 10K news articles Trischler et al. (2016), Search QA: 140K QA pairs Dunn et al. (2017), TriviaQA: 650K QA pairs with evidence Joshi et al. (2017), the ARC2 Bhakthavatsalam et al. (2021), Big Bench Ghazal et al. (2013), GLUE Wang et al. (2018), and many more that focus on general question-answering abilities.

### Scientific and Academic Benchmarks

More recently, there have been several works that focus on using AI models for the scientific domain. As a result, there have been several benchmarks that pertain to this field. Science Questions: 1K multiple choice questions in AI2R Talmor et al. (2018) and SciQ Dataset: Welbl et al. (2017) 13,679 multiple choice science questions are two key and pioneering benchmarks in the scientific domain. Other important works include SciQA Auer et al. (2023), a benchmark for scientific question answering that was created by using knowledge graphs of academic articles and with the help of human-made templates, and SciRepEval Singh et al. (2022), a collection of several scientific document tasks across four types: classification, regression, proximity, and searching. Finally, perhaps one of the most widely used science benchmarks is the science-specific portions of the MMLU Hendrycks et al. (2020) benchmark, which include high school and college-level questions for a wide variety of scientific fields, like Physics, Chemistry, Biology, Computer Science, and many more. Similarly, some of the other most recent works include QASA Lee et al. (2023), a QA benchmark of \(\sim\)1800 questions to test reasoning on scientific articles, specifically in AI and ML domains, and SciBench Wang et al. (2023), a benchmark of \(\sim\)700 questions sourced from textbooks for college-level science problems. Another recent work in the field is the scientific dataset released by Galactica Taylor et al. (2022) alongside their model.
There are also benchmarks that address specific fields, with TheoremQA Chen et al. (2023) for mathematics, emrQA Pampari et al. (2018) for medicine, and BioRead Pappas et al. (2018) and BioMRC Pappas et al. (2020) for biology. BigBio Fries et al. (2022) presents a framework with more than 126 biomedical NLP datasets, along with guidelines for task schema, data auditing, etc. The closest thing to a nuclear benchmark is the NQuAD dataset that was released together with the NukeBERT model Jain et al. (2020). However, the questions in NQuAD are selected from pre-sampled paragraphs and contain answers within those specific selections of text. This limits the need for a model to actually understand the nuclear domain: the ability to comprehend a small passage of text is sufficient to perform well on that benchmark. In contrast, NuclearQA includes questions that do not have a specific text containing the answer, but instead need an understanding of the science to be answered correctly. Furthermore, our benchmark has questions across a number of different dimensions. These differences make the benchmark presented here a clear advancement over prior work.

## 3 The NuclearQA Benchmark

The NuclearQA benchmark presented in this work is a first-of-its-kind benchmark. It has not been adapted from tests originally meant for humans, but is crafted by subject matter experts (SMEs) specifically to assure that the questions are well suited to judge a language model's ability to solve nuclear-related questions. While creating this benchmark, we have put every effort into assuring that it consists of high-quality questions from across disciplines that relate to the nuclear domain, including physics, chemistry, material sciences, and so on. When creating any benchmark, it is important to make sure that it contains a variety of different types of questions so that it can test different types of abilities.
As such, NuclearQA has been designed to be balanced across a number of dimensions. We describe the distribution of the questions across these dimensions in detail below.

### Difficulty

One of the most natural and important ways to classify the questions is by difficulty. Our benchmark consists of questions of three difficulty levels: **Easy, Medium,** and **Hard**, with the questions divided more or less evenly across the categories. These difficulty levels were defined by SMEs based on the difficulty from a nuclear domain point of view, rather than on a computational model's perceived difficulty in solving the questions.

### Question Format

The benchmark consists of short-answer questions (**Short QA**), which are more factoid-like in nature, and open-ended long-answer questions (**Open QA**), which require additional reasoning abilities to answer. Short QA questions are trivia-style questions that can be answered with a few words. The benchmark purposefully favors Short QA, with only a quarter of the questions being Open QA.

### Answer Format

This dimension is based on whether the question has a single or a composite correct answer. If the question has a clear single answer, it is classified as **single correct**. If multiple correct answers make up the full correct answer, it is classified as **multiple correct**. For example, for the question _What are the three main subatomic particles?_ the full correct answer contains three components, i.e., proton, neutron, and electron. Finally, there are some questions that cannot be put into either of these bins; these are typically the open-ended questions whose answers are open to interpretation. We denote these **uncategorizable** questions as **N/A** in the dataset.

### Answer Type

This classification has to do with the type of response that would be the correct answer. The dimension is named to be closer to the meaning of _type_ in a more programming sense of the word.
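In that programming sense, a benchmark item could be modeled as a small record over these dimensions. A hypothetical sketch — the field names and label strings below are ours for illustration, not the released file format of NuclearQA:

```python
from dataclasses import dataclass

# Hypothetical schema mirroring the dimensions described in the text;
# the actual NuclearQA release may label these differently.
@dataclass
class NuclearQAItem:
    question: str
    difficulty: str       # "easy" | "medium" | "hard"
    question_format: str  # "short_qa" | "open_qa"
    answer_format: str    # "single_correct" | "multiple_correct" | "n_a"
    answer_type: str      # "numerical" | "scientific" | "numerical_scientific" | "general"

# The subatomic-particle example from the text: a short factoid question
# whose full answer has multiple required components.
item = NuclearQAItem(
    question="What are the three main subatomic particles?",
    difficulty="easy",  # assumed label for illustration
    question_format="short_qa",
    answer_format="multiple_correct",
    answer_type="scientific",
)
```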
We have defined four main types: Numerical, Scientific, Numerical + Scientific, and General. As the name suggests, if the answer is a number, that is classified as **Numerical**. Questions whose answers have something specifically scientific as a response, such as an element symbol or specific quark name, etc., are classified as **Scientific**. When the answer contains a combination of both, it is classified as **Numerical + Scientific**. These are answers that require a quantitative and qualitative response. Examples include answers such as _10 protons + 12 neutrons_ or _12 moles of Hydrogen_, and so on. Any other question that cannot be categorized as previously described is classified as **General**. It is important to note that answers to general questions might still include scientific or numerical components, but are not limited to those classifications.

\begin{table} \begin{tabular}{|c|c|} \hline **Difficulty** & **\% of Questions** \\ \hline Easy & 31 \\ \hline Medium & 33 \\ \hline Hard & 36 \\ \hline \end{tabular} \end{table} Table 1: Proportion of the questions for each level of difficulty

\begin{table} \begin{tabular}{|c|c|} \hline **Question Format** & **\% of Questions** \\ \hline Short QA & 75 \\ \hline Open QA & 25 \\ \hline \end{tabular} \end{table} Table 2: Proportion of the questions based on question format

\begin{table} \begin{tabular}{|c|c|} \hline **Answer Format** & **\% of Questions** \\ \hline Single Correct & 60 \\ \hline Multiple Correct & 30 \\ \hline N/A & 10 \\ \hline \end{tabular} \end{table} Table 3: Proportion of questions based on the answer format

## 4 Creating the NuclearQA Benchmark

### Subject Matter Experts as Question Creators

One of the first decisions to make when creating the dataset is how to go about creating the questions.
While a handful of tools exist that can automatically extract questions from text (Cui et al., 2021; Heilman, 2011), we found that none of these questions were of sufficient quality to be used for evaluating models. We also hesitated to use questions that a model can automatically extract as the means to test similar models: we felt this would not be a true test. Using automatic methods would be considerably more economical in both time and money, but would compromise the quality of the dataset. Thus, we decided that the questions should be curated by humans. The standard approach to collecting human-written questions for a dataset, when existing resources are unavailable, is to use some form of crowdsourcing platform (Sap et al., 2019; Acharya et al., 2020). However, given the technical nature of the field, we did not think it advisable to have the general public create these questions. We decided that subject matter experts themselves needed to create the questions manually to assure quality. One side effect of this was that the total number of questions that could be included in the benchmark would be significantly lower than what a crowdsourced approach could achieve; on the other hand, the questions themselves would be of the highest possible quality. We decided to pursue quality over quantity.

### Deciding on Different Types of Questions

Once we decided on the approach to benchmark creation, we needed to decide on the different types of questions we wanted to include. The goal was to assure we covered a wide breadth of the nuclear domain with some level of depth, while also ensuring the result was a useful test for LLMs. The first thing we wanted in the benchmark was questions of varying levels of difficulty, so that it could quickly show how models perform compared to each other. We eventually decided on three levels of difficulty.
Second, we also wanted to make sure we could test the model with both short-answer questions and open-ended questions. But unlike the difficulty levels, which we wanted to distribute more or less evenly, we wanted to assure that we had more short-answer questions than open-ended ones. Additionally, we wanted to include questions that needed specific scientific answers to be true to the field, as well as some questions with numerical answers. Eventually, we decided on questions with four different answer types. Furthermore, because we wanted to see how the models would perform in a format similar to that of a human pupil taking a nuclear sciences exam, we had different types of answers: some had only a single correct answer, some needed multiple correct answers to form a full composite correct response, and some needed reasoning to get to the correct answer. After we decided on these dimensions, we set about creating the questions. We did not set a hard boundary of having a fixed number of questions in each of these categories. Rather, we focused on creating a well-rounded nuclear test with these categories in mind, and made sure to balance them out to reasonable proportions in the end. Through an exhaustive process of checks and edits, we created a benchmark that balanced these categories across several dimensions to the required proportions, as shown in Tables 1, 2, 3, and 4.

\begin{table} \begin{tabular}{|c|c|} \hline **Type** & **\% of Questions** \\ \hline Numerical & 17 \\ \hline Scientific & 26 \\ \hline Numerical + Scientific & 20 \\ \hline General & 37 \\ \hline \end{tabular} \end{table} Table 4: Proportion of questions that have answers of a certain type

## 5 Human-in-the-loop Evaluation

### Failure of Traditional Metrics

Due to the nature of our benchmark, traditional methods of evaluation are not suited to judge the success of models on our questions.
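Two toy checks make this failure concrete: token-overlap F1 rewards an incomplete multi-part answer, and a plain numeric tolerance accepts a physically impossible fractional atomic number. This is only an illustration, not the evaluation code used for NuclearQA:

```python
import math

def token_f1(prediction, gold):
    """Token-overlap F1, the kind of metric that misleads on this benchmark."""
    pred, ref = prediction.lower().split(), gold.lower().split()
    common = sum(min(pred.count(t), ref.count(t)) for t in set(pred))
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

# An incomplete answer to "What are the three main subatomic particles?"
# still scores 0.8, even though an SME would mark it only partially correct.
score = token_f1("proton neutron", "proton neutron electron")  # 0.8

# A numeric tolerance check accepts 7.99 for oxygen's atomic number,
# even though a fractional atomic number is physically meaningless.
accepts = math.isclose(7.99, 8, abs_tol=0.05)  # True
```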
Selecting good evaluation criteria requires considering several factors in advance. Existing metrics such as partial/exact match accuracy and F1 would not portray an accurate picture of a system's performance on NuclearQA. For example, if the question asked for the symbol for helium, the answer "H" would be marked a 50% match by traditional methods, which would of course be completely wrong from a nuclear point of view. We also experimented with different automatic metrics for different answer types (e.g., numerical, text). However, we noticed that the scale of error is significantly different for atomic numbers and the masses of subatomic particles. For example, an answer of 7 for the atomic number of oxygen is clearly incorrect, while 7.99, which would be essentially 8 from a computational standpoint, is also incorrect because oxygen cannot have a fractional atomic number. Having individual automated metrics to evaluate certain sub-components of the benchmark would introduce a large number of composite metrics, which would be meaningless in terms of the overall performance of the systems.

### Evaluation Metric and Method

To alleviate the issues explained in the previous section, we put a human-in-the-loop evaluation system in place for this benchmark. The first challenge was to come up with a judging criterion with the right scale of evaluation. For example, we did not want a simple correct/incorrect categorization, but a scale that is truly reflective of the abilities of LLMs. Thus, we came up with the scale shown in Table 6. We chose different evaluation criteria for short and open question answering (QA). For Short QA, the corresponding question has only a single correct answer, although answers can be partially correct and require an SME's evaluation.
For Open QA, an interpretation of the answer is needed, as there may be more than one correct answer to the question. For Short QA, additional interpretation is required depending on the number of answer components expected. A Short QA evaluation of "5" means that the answer was correct and no interpretation was needed; for multiple-answer questions, a "5" is given only if the criteria of the question were met with all correct answers, and an answer that was required but not given rules out a "5." An Open QA evaluation of "5" means that the model provided a correct answer that met the criteria of the question, even if other answers exist. An evaluation of "4" for both types of questions means that the model provided an answer that was partially correct; for multiple-answer Short QA questions, this means that some required answers were correct but others were not given (e.g., two correct answers out of six required). When the provided answer is related to the topic of the question but incorrect, it is evaluated as a "3." For answers that are unrelated to the question but still in the general domain of nuclear, the answer is evaluated as a "2." An evaluation of "1" is given to answers that are out of domain or nonsensical. These answers often involve the model providing an answer in the form of a question, or hallucinating strange text that does not make sense in the context of the question.

\begin{table} \begin{tabular}{|c|c|} \hline Score & Meaning \\ \hline 5 & Correct \\ \hline 4 & Partially Correct \\ \hline 3 & Incorrect but related \\ \hline 2 & Unrelated but in-domain \\ \hline 1 & Out-domain and/or nonsensical \\ \hline \end{tabular} \end{table} Table 6: Evaluation scale for our human-in-the-loop evaluation

\begin{table} \begin{tabular}{|c|l|} \hline **Type** & **Example question** \\ \hline **Numerical** & How many neutrons are inside a U-238 atom? \\ \hline **Scientific** & What two particles are emitted after a pair production absorption of a gamma-ray? \\ \hline **Numerical + Scientific** & How many Uranium-235 atoms per cubic centimeter are there in natural uranium? \\ \hline **General** & Why are poison rods included in some nuclear reactor designs? \\ \hline \end{tabular} \end{table} Table 5: Random examples of questions of different answer types from the NuclearQA dataset

### Baseline Models Evaluation

When selecting the LLMs to test NuclearQA, we wanted to assure that we selected not just the most popular LLMs, but also the most representative models. We tested the NuclearQA benchmark with four different state-of-the-art LLMs, shown in Table 8.

1. **UnifiedQA** Khashabi et al. (2020) is fine-tuned on question-answering datasets, including sets of scientific questions, over the T5 base model.
2. **Flan T5** Chung et al. (2022) is an instruction-tuned model over the T5 base model.
3. **Llama 2** Touvron et al. (2023) is one of the best-performing decoder-style models, excelling in multiple academic benchmarks.
4. **Galactica** Taylor et al. (2022) is trained with scientific data, including research publications across multiple scientific disciplines.

### Model Performance

We used the standard prompting method for all four models with the same configuration setting across all types of questions. We increased the response length to assure full answer generation for Open QA; due to this setting, the models were not penalized for generating repetitive but correct answers to short questions. An SME reviewed the responses for all of these models with no prior knowledge or expectation of which model was expected to perform better or worse, to avoid bias. While we also calculated the average score for all the models, this does not properly represent their overall performance. This is due to the unique nature of our benchmark, where many related but incorrect answers could overshadow several completely correct ones.
Instead, we used an Olympic medal tally style evaluation, i.e., we treated the model that got the most correct answers as the best model, regardless of the overall average score. However, we also reported the average score for all the models. The full results for the models are shown in Table 7. We can see that the Llama 2 model outperformed the other models by quite a fair distance, getting 27% of the questions completely correct, with the next best being Galactica with just 16 correct answers. On the other hand, we see that Llama, along with UnifiedQA, also produced the highest number of nonsensical answers. The Flan T5 model managed to produce the highest number of responses that were at least related to the question regardless of correctness, with 76% of the answers achieving a score of 3 or higher. Flan T5 also produced the fewest number of nonsensical responses, with just 6% of the responses being nonsensical, less than a third of the next best model, Galactica. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline & **Correct** & **Partially Correct** & **Incorrect, related** & **Unrelated, in-domain** & **Nonsense** & **Average Score** \\ \hline **Llama 2** & **27** & 10 & 21 & 10 & **32** & 2.90 \\ \hline **Galactica** & 16 & **13** & 29 & 23 & 19 & 2.84 \\ \hline **FlanT5** & 13 & **13** & **50** & 18 & 6 & **3.09** \\ \hline **UnifiedQA** & 5 & 4 & 11 & 48 & **32** & 2.02 \\ \hline \end{tabular} \end{table} Table 7: Olympics-style ratings of the various models’ performance on NuclearQA, i.e., models with the highest number of correct answers are shown at the top, regardless of the average score overall, which may be inflated by a lot of relevant but incorrect answers. The best score(s) for each score category are shown in bold. Total number of questions = 100. 
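The medal-tally ranking differs from ranking by mean score, which can be seen directly with the counts from Table 7. A sketch — the dictionary literals below are transcribed from the table, not loaded from the benchmark release:

```python
# Score counts per model, transcribed from Table 7 (100 questions each).
# Keys are the rubric scores: 5 = correct ... 1 = nonsensical.
results = {
    "Llama 2":   {5: 27, 4: 10, 3: 21, 2: 10, 1: 32},
    "Galactica": {5: 16, 4: 13, 3: 29, 2: 23, 1: 19},
    "Flan T5":   {5: 13, 4: 13, 3: 50, 2: 18, 1: 6},
    "UnifiedQA": {5: 5,  4: 4,  3: 11, 2: 48, 1: 32},
}

def medal_rank(results):
    """Olympic-style ordering: most fully correct (score-5) answers first,
    ignoring the average score."""
    return sorted(results, key=lambda m: results[m][5], reverse=True)

def mean_score(counts):
    return sum(score * n for score, n in counts.items()) / sum(counts.values())

ranking = medal_rank(results)   # Llama 2 first, then Galactica, Flan T5, UnifiedQA
averages = {m: mean_score(c) for m, c in results.items()}
# Flan T5 has the highest mean (3.09) yet ranks only third by medal tally,
# because many "incorrect but related" 3s inflate its average.
```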
\begin{table} \begin{tabular}{|c|c|c|} \hline Model Type & Model & \# of Parameters \\ \hline \multirow{2}{*}{Encoder-Decoder} & UnifiedQA & 770M \\ \cline{2-3} & Flan T5 & 770M \\ \hline \multirow{2}{*}{Decoder} & Galactica & 1.3B \\ \cline{2-3} & Llama 2 & 7B \\ \hline \end{tabular} \end{table} Table 8: The models and the number of parameters used for evaluation against NuclearQA.

## 6 Error Analysis

While most of the responses do not need much further analysis and are simply incorrect answers, we saw some unique responses from some of these models that warrant a closer look. We have seen in the past that large language models are prone to hallucinations (Rawte et al., 2023; Ji et al., 2023), and there have been several efforts to detect and mitigate these hallucinations (Manakul et al., 2023; Li et al., 2023; Zhang et al., 2023). In our evaluation, the Llama model seemed the most likely to hallucinate information in its responses, constantly making up its own questions in the response and answering those questions. There were also several instances of it generating responses in the form of a chat between two or more people. These instances could either be direct copies from the training data, suggesting the model is memorizing the training data, or hallucinations. Either way, we found that the usernames Llama used for these responses were actual usernames of real people on Twitter, and so we have chosen not to disclose those responses verbatim in this paper. A sample of an anonymized version of such a response is shown in Figure 1. Furthermore, the Llama model also had the habit of hallucinating its own multiple-choice answers for the prompted question and selecting one of them as the answer, with several instances of all its manufactured options being incorrect, and sometimes all the manufactured options being the same one repeated multiple times.
Additionally, the Galactica model sometimes had issues of creating its own question unrelated to the prompt question and then going on to solve its own questions instead. It also had the issue of hallucinating its own multiple-choice answers like the Llama model. With the Flan T5 model, there were a couple of cases of the model producing an empty response. The UnifiedQA model had the fewest such issues, but there were a couple of instances where the model simply extracted keywords from the questions that happened to be close enough to the correct answer.

Figure 1: An example of a response where the model hallucinates a conversation with real people to answer the prompt question. The response has been formatted for clarity and truncated for space. The username has been removed for privacy.

## 7 Limitations and Future Work

While our work in this paper has achieved the goal of creating a novel and comprehensive benchmark, there is still room for further development and refinement. Our main limitation is that this approach requires an extensive time commitment from an SME and is therefore costly for building large datasets. If we are to scale this work to thousands of questions, there would need to be an automated step to speed up question creation without compromising quality. Similarly, another limitation is the lack of relevant automated evaluation metrics in the literature for us to use. This is a big gap in the field that needs to be filled if we want true measures of success for large language models moving forward. One obvious way this work could be expanded is by adding more questions across several other domains. Another potential direction for the future is to create queries of other types and not be limited to just the question-answer format.

## 8 Conclusion

In this work, we presented a novel benchmark that is able to accurately evaluate a large language model's understanding of the nuclear domain.
In addition, we laid out in detail the methodology of creating a scientific benchmark, which can serve future researchers well when creating similar benchmarks in other scientific domains. Our results suggest that while current state-of-the-art LLMs perform best in the general domain, as expected, there is a lot of room for improvement when it comes to demonstrating truly good performance in cross-disciplinary science domains. Thus, due to its unique nature and the quality and variety of its questions, NuclearQA is an appropriate measure of a model's understanding of the nuclear domain and therefore a true test for any such models in the future.

## Acknowledgments

This work was supported by the NNSA Office of Defense Nuclear Nonproliferation Research and Development, U.S. Department of Energy, and Pacific Northwest National Laboratory, which is operated by Battelle Memorial Institute for the U.S. Department of Energy under Contract DE-AC05-76RLO1830. This article has been cleared by PNNL for public release as PNNL-SA-190898.
2308.05801
The Galactic Interstellar Object Population: A Framework for Prediction and Inference
The Milky Way is thought to host a huge population of interstellar objects (ISOs), numbering approximately $10^{15}\mathrm{pc}^{-3}$ around the Sun, which are formed and shaped by a diverse set of processes ranging from planet formation to galactic dynamics. We define a novel framework: firstly to predict the properties of this Galactic ISO population by combining models of processes across planetary and galactic scales, and secondly to make inferences about the processes modelled, by comparing the predicted population to what is observed. We predict the spatial and compositional distribution of the Galaxy's population of ISOs by modelling the Galactic stellar population with data from the APOGEE survey and combining this with a protoplanetary disk chemistry model. Selecting ISO water mass fraction as an example observable quantity, we evaluate its distribution both at the position of the Sun and averaged over the Galactic disk; our prediction for the Solar neighbourhood is compatible with the inferred water mass fraction of 2I/Borisov. We show that the well-studied Galactic stellar metallicity gradient has a corresponding ISO compositional gradient. We also demonstrate the inference part of the framework by using the current observed ISO composition distribution to constrain the parent star metallicity dependence of the ISO production rate. This constraint, and other inferences made with this framework, will improve dramatically as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) progresses and more ISOs are observed. Finally, we explore generalisations of this framework to other Galactic populations, such as that of exoplanets.
Matthew J. Hopkins, Chris Lintott, Michele T. Bannister, J. Ted Mackereth, John C. Forbes
2023-08-10T18:00:06Z
http://arxiv.org/abs/2308.05801v2
# The Galactic Interstellar Object Population: A Framework for Prediction and Inference

###### Abstract

The Milky Way is thought to host a huge population of interstellar objects (ISOs), numbering approximately \(10^{15}\,\mathrm{pc}^{-3}\) around the Sun, which are formed and shaped by a diverse set of processes ranging from planet formation to galactic dynamics. We define a novel framework: firstly to predict the properties of this Galactic ISO population by combining models of processes across planetary and galactic scales, and secondly to make inferences about the processes modelled, by comparing the predicted population to what is observed. We predict the spatial and compositional distribution of the Galaxy's population of ISOs by modelling the Galactic stellar population with data from the APOGEE survey and combining this with a protoplanetary disk chemistry model. Selecting ISO water mass fraction as an example observable quantity, we evaluate its distribution both at the position of the Sun and averaged over the Galactic disk; our prediction for the Solar neighbourhood is compatible with the inferred water mass fraction of 2I/Borisov. We show that the well-studied Galactic stellar metallicity gradient has a corresponding ISO compositional gradient. We also demonstrate the inference part of the framework by using the current observed ISO composition distribution to constrain the parent star metallicity dependence of the ISO production rate. This constraint, and other inferences made with this framework, will improve dramatically as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) progresses and more ISOs are observed. Finally, we explore generalizations of this framework to other Galactic populations, such as that of exoplanets.

Interstellar objects (52), Small Solar System bodies (1469), Galaxy Evolution (594)

Matthew J. Hopkins, Chris Lintott, Michele T. Bannister, J. Ted Mackereth, John C. Forbes

## 1 Introduction

1I/'Oumuamua (Meech et al., 2017) and 2I/Borisov are the first two observed samples from a highly numerous population: interstellar objects (ISOs). Estimated to number \(\sim 10^{15}\,\mathrm{pc}^{-3}\) around the Sun (Engelhardt et al., 2017; Do et al., 2018), they are implied to have a spatial distribution spanning the entire Galaxy. This population has been predicted to exist for decades (McGlynn and Chapman, 1989), based on models of the accretion and migration of the giant planets, which predict that 75-85% of cometary bodies initially in the Solar System must have been scattered into interstellar space (Fernandez and Ip, 1984; Brasser et al., 2006). Modern exoplanet surveys consistently find that giant planets are common across the Galaxy around stars with a range of spectral types (Fulton et al., 2021; Sabotta et al., 2021). This makes planetesimal scattering common across the Galaxy. A significant number of planetesimals can also be ejected by close stellar flybys early in a planetary system's life (e.g. Pfalzner et al., 2021). The protoplanetary disks of other stars are therefore expected to be a source of ISOs (Stern, 1990; Moro-Martin, 2022). Initially it was expected that interstellar objects would display cometary characteristics (e.g. Jewitt, 2003). The population's dominant dynamical formation mechanisms would preferentially harvest more distant, ice-rich planetesimals from the disks of the source systems. More of the cometary ISO population passing through the Solar System could be detected than rocky ISOs, as comae make cometary ISOs brighter down to smaller diameters (Engelhardt et al., 2017). 2I/Borisov appeared relatively similar in size and composition to Solar System comets (Jewitt and Seligman, 2022).
Its distinctive features were an exceptionally high CO and NH\({}_{2}\) content, implying it formed on the edge of its home system's protoplanetary disk, beyond the hypervolatile CO ice line (Bodewits et al., 2020; Cordiner et al., 2020). However, 1I/'Oumuamua had a mix of observed characteristics. A 160-m scale object, it lacked a coma in deep imaging (Jewitt et al., 2017) or detectable outgassing in CO, CO\({}_{2}\) (Trilling et al., 2018) or CN (Ye et al., 2017). Despite this, it still underwent non-gravitational acceleration similar to that experienced by Solar System comets (Micheli et al., 2018). The large amplitude of its light curve implied a high-aspect-ratio shape (Mashchenko, 2019). At the time, this combination of factors seemed unusual, although the surface reflectance properties were consistent with outer Solar System bodies (e.g. Bannister et al., 2017). Hypotheses for the composition and formation of 1I/'Oumuamua that match the limited data remain varied. It may be a planetesimal ('Oumuamua ISSI Team et al., 2019); a fragment of a comet devolatized by passages close to its parent star before ejection (Raymond et al., 2018); an icy fractal aggregate (Moro-Martin, 2019); a hydrogen iceberg formed in a molecular cloud (Seligman and Laughlin, 2020); or a nitrogen ice fragment from the surface of a Pluto-like dwarf planet (Jackson and Desch, 2021). Recently, observations of similarly small near-Earth asteroids have identified objects with the lightcurve amplitudes seen in 1I. Additionally, Farnocchia et al. (2023) and Seligman et al. (2023) report six asteroids with significant non-gravitational acceleration and no coma. While from the two ISOs found so far the compositions of interstellar objects are clearly varied, 1I may be less extreme than it was first considered.
The composition of a protoplanetary disk correlates with the elemental abundances of its central star, due to their formation from the same gas and dust in a molecular cloud core (Oberg and Bergin, 2021). We can thus expect stars of different metallicities to produce ISOs of different compositions. The composition of this gas and dust varies in both space and time as the Galaxy chemically evolves due to stellar nucleosynthesis, making the current Galactic ISO population sensitive to the entire history and evolution of the Milky Way over cosmic time (Tinsley and Cameron, 1974; Lintott et al., 2022). Since the occurrence of planetesimal-scattering giant planets also has a metallicity dependence (Fischer and Valenti, 2005), the relative occurrence of ISOs of different compositions will therefore carry information about the Galaxy's distribution of planetary architectures. Finally, we expect many ISOs to outlive their parent stars, as they occupy a similar environment to Gyr-old Oort cloud comets. At minor-planet sizes, ISOs are not subject to any known destructive forces in the ISM (Guilbert-Lepoutre et al., 2015), other than minor erosion by dust, in their frequent passages through molecular clouds (Pfalzner et al., 2020). They could also be disrupted or devolatilised in rare close encounters with stars (Raymond et al., 2018; Forbes and Loeb, 2019). ISOs thus present the possibility of studying long-lost planetary systems. The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) (Ivezic et al., 2019) is predicted to provide a sample of tens of 1I/'Oumuamua-like ISOs (e.g. Levine et al., 2021), as well as any more 2I/Borisov-like cometary ISOs that enter our Solar System. This is in addition to the continuing contributions of the NEO surveys and other observatories that found 1I and 2I in the first place (Meech et al., 2017). These will provide a large and varied sample of interstellar objects, for study and comparison to predictions. 
The many dependencies over planetary and Galactic scales make the observed ISO population a fascinating tool: it has the potential to test models in both Galaxy and planetary physics, with an entirely different set of biases to traditional methods. Lintott et al. (2022) introduced the concept of predicting the composition of a Galaxy's population of ISOs from the Galaxy's stellar distribution. Using simulated Galaxies from EAGLE (Schaye et al., 2015), they showed that the water content of ISOs was sensitive to Galactic star formation history. In this work, we develop this method and apply it to the stellar population of the Milky Way, estimated with data from the APOGEE survey, to predict a broader set of properties of our own Galaxy's population of interstellar objects. We predict the distribution of ISOs in both their spatial position in the Galaxy and their water mass fraction. By evaluating this distribution at the current position of the Solar System, we predict the properties of the population of ISOs from which the observed sample of the 2020s will be drawn, and we compare this to the whole-Galaxy distribution. We then detail a Bayesian method of comparing the predicted and observed distributions to make inferences about the planetary and Galactic processes modelled, and demonstrate this method by constraining the metallicity dependence of the ISO production rate.

## 2 APOGEE and stellar density modelling

To predict the distribution of ISOs in the Milky Way, we first obtain the distribution of all stars throughout Galactic history over a large swath of the Galactic disk, which we model by fitting simple density profiles to debiased data from the APOGEE survey. While APOGEE's main sample is not representative of all extant stars, we can extrapolate from it to recover the total stellar population of the Milky Way. By design, the APOGEE main sample mainly contains red giants: stars in a relatively short-lived phase towards the end of their lives.
Since each stellar generation forms stars with a range of masses (Chabrier, 2003) and a corresponding range of lifespans, all but the newest stellar populations will have some stars currently in the red giant stage. This means multiple generations are represented in the APOGEE sample. We then extrapolate to the entire stellar population. This reconstruction is detailed in § 2.3.

### Observational Data: APOGEE

We use data from the APOGEE SDSS-IV Data Release 16 (Jonsson et al., 2020). APOGEE is a near-infrared, high-resolution (\(R\sim 22\,500\)) spectroscopic stellar survey used to estimate high-precision chemical abundances for a sample of over 200 000 Milky Way stars (Majewski et al., 2017). The survey's simple and well-characterised selection function, based on apparent magnitude and dereddened colour, is optimised to select red giants (Zasowski et al., 2013, 2017). As infrared bands suffer less from dust extinction, the red giants selected are visible across the Galactic disk. The high-precision abundance measurements mean that we can identify monoabundance populations with very low levels of contamination by binning the APOGEE stars in [Fe/H] and [\(\alpha\)/Fe] (Bovy et al., 2016). Additionally, the well-characterised nature of the selection function means that it can easily be accounted for using the method of Bovy et al. (2016). This makes APOGEE an ideal choice for modelling the spatial and chemical distribution of the Milky Way's red giant population. We obtain the chemical abundances, heliocentric distance and age of stars in APOGEE DR16. To obtain each star's abundances, we use the calibrated ASPCAP pipeline's (Garcia Perez et al., 2016) abundances of iron [Fe/H] and alpha elements [\(\alpha\)/Fe], calculated from an average of the abundances of the elements O, Mg, Si, S, and Ca, after Bovy et al. (2016).
For each star's heliocentric distance and age, we use the weighted_dist and age_lowess_correct estimate of AstroNN (Leung and Bovy, 2019; Mackereth et al., 2019). weighted_dist is a weighted average of the distance estimate from the _Gaia_ parallax measurement of the star (Gaia Collaboration et al., 2016) and a spectro-photometric distance estimate of the star from a neural network trained on 265 761 stars surveyed in common between APOGEE DR14 (Abolfathi et al., 2018; Holtzman et al., 2018) and _Gaia_ DR2 (Gaia Collaboration et al., 2018). age_lowess_correct is a measurement of the stellar age from a neural network trained on 6676 stars with both spectroscopic measurement by APOGEE DR14 and asteroseismic age measurement by the _Kepler_ mission (Borucki et al., 2010), corrected for biases from the neural network as described in Mackereth et al. (2017). Due to a lack of stars with low metallicity in the training data, reliable ages are not available for stars with [Fe/H] \(<-0.5\), so for these stars we must assume an age distribution as described below. The APOGEE DR16 "statistical sample" (the sample of stars for which the selection function can be reconstructed) contains 165 768 stars. However, this is partly made up of dwarfs, which have higher uncertainties in their atmospheric parameters and abundances. We thus select a subset of the statistical sample with a calibrated ASPCAP surface gravity of \(1\leq\log g<3\). We additionally select only stars with fractional uncertainty in heliocentric distance \(D\) of less than 0.5. To restrict our sample to the Milky Way's disk, we select stars with Galactocentric radii \(R\) between 4 kpc and 12 kpc and height above the disk \(z\) of \(-5\) kpc to 5 kpc. This gives us a sample of 80 958 red giants.

### Density Modelling of Red Giants across the Galaxy

To calculate the distribution of red giant stars in the Milky Way from the APOGEE data we use the method of Bovy et al.
(2016), as this accounts for both dust and the survey selection function simultaneously. In brief, assuming the stars observed in a survey are distributed independently in a space of some observables \(O\) (for example position, colour and magnitude, chemical abundances), then the positions of \(N\) stars in the space of observables \(O_{1},\ldots,O_{N}\) are a realisation of an inhomogeneous Poisson point process. This process is a random distribution of points defined by a rate function \(\lambda(O)\), such that the number of points in a given volume \(V\) in the space of the observables is a Poisson random variable with mean and variance equal to the integral of the rate function over that volume, \(\int_{V}\lambda(O)\,\mathrm{d}O\). It follows that the probability of finding a point (i.e. an observed star) with observables in the infinitesimal volume \(\delta O\) is given by \(\lambda(O)\,\delta O\), and the total number of points (i.e. stars observed) is a Poisson random variable with mean and variance \(\Lambda=\int\lambda(O)\,\mathrm{d}O\). Since the rate function is equal to the rate of occurrence of observed stars, it can account for both the underlying true density of stars, as well as the effect of the survey selection function and dust which prevent all extant stars from being observed. If the rate function is modelled with \(\lambda(O\mid\theta)\), where \(\theta\) parameterises the model, then the likelihood of the model given \(N\) observed stars \(O_{1},\ldots,O_{N}\) is given by \[\ln\mathcal{L}(\theta)=\sum_{i}\ln\lambda(O_{i}\mid\theta)-\int\lambda(O\mid \theta)\,\mathrm{d}O\,. \tag{1}\] APOGEE has a selection function based on bins in dereddened colour and apparent magnitude, so Bovy et al. (2016) define the effective selection function \(\mathfrak{S}\), a convenient quantity equal to the fraction of a population's stars at each heliocentric distance \(D\) that will be spectroscopically observed in each of APOGEE's fields.
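The likelihood of Eq. 1 can be sketched numerically. The following is a minimal illustration (not the paper's code): the integral over the observables is approximated by a weighted grid sum, and the toy constant rate function and all variable names are assumptions for demonstration.

```python
import numpy as np

def poisson_process_loglike(rate_fn, observations, grid, grid_weights):
    """Eq. (1): sum of log-rates at the observed points, minus the expected
    total count, here approximated by a weighted sum over a grid."""
    term_obs = np.sum(np.log(rate_fn(observations)))
    term_int = np.sum(rate_fn(grid) * grid_weights)  # approximates the integral of lambda(O) dO
    return term_obs - term_int

# Toy check on the interval [0, 1] with a constant rate lambda(O) = 10:
# the integral term is exactly 10, and each observation contributes log(10).
rate = lambda O: np.full(np.shape(O), 10.0)
obs = np.array([0.2, 0.5, 0.9])
grid = np.linspace(0.0, 1.0, 1001)
weights = np.full(grid.size, 1.0 / grid.size)  # uniform quadrature weights summing to 1
ll = poisson_process_loglike(rate, obs, grid, weights)
```

With the constant rate, `ll` equals \(3\ln 10 - 10\), which makes the two terms of Eq. 1 easy to check independently.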
This is calculated for each field by placing a tracer sample of stars in the field at that distance, and calculating the fraction that would be observed, given a dust map and the survey selection function in dereddened colour and apparent magnitude. We evaluate this for each monoabundance population in each field at a range of heliocentric distances \(D\) and ages \(\tau\). This allows us to treat stars as only having three observables: the field they appear in, their distance from the Sun \(D\), and their age \(\tau\). We used the effective selection function implementation in the apogee package, described in Bovy et al. (2016). We obtained the tracer population by sampling PARSEC stellar model isochrones (Bressan et al., 2012; Marigo et al., 2017) at a range of ages with a Kroupa initial mass function with a minimum mass of \(0.08M_{\odot}\) (Kroupa, 2001), cut to \(1\leq\log g<3\) to match the APOGEE red giant sample to which it was being fit. For a dust map we used a combination of Drimmel et al. (2003), Marshall et al. (2006) and Green et al. (2019), combined with the mwdust package, also described in Bovy et al. (2016). Footnote 2: [https://github.com/jobovy/apogee](https://github.com/jobovy/apogee) Footnote 3: [https://github.com/jobovy/mwdust](https://github.com/jobovy/mwdust) Following Bovy et al. (2016), we separate our red giant sample into monoabundance populations by binning the stars in [Fe/H] and [\(\alpha\)/Fe], then fit a separate number density model \(n_{\mathrm{giants}}\) to each monoabundance population. The density model we chose to fit to each monoabundance population was a simple axisymmetric exponential in Galactocentric radius \(R\) and height above the plane of the disk \(z\), \[n_{\mathrm{giants}}(R,z\mid\mathrm{logA},a_{R},a_{z})=\exp(\mathrm{logA}-a_{R} (R-R_{0})-a_{z}|z|)\,\mathrm{kpc}^{-3}\,, \tag{2}\] parameterised by an amplitude logA and two scale parameters \(a_{R}\) and \(a_{z}\).
\(R_{0}=8.1\,\mathrm{kpc}\) is the radial distance of the Sun from the Galactic centre (GRAVITY Collaboration et al., 2018). Note that both \(R\) and \(z\) are functions of our observables, the angular coordinates of the field pointing and distance from the Sun \(D\). We simultaneously fit the age distribution of each monoabundance population, assumed to be a normal distribution with mean \(\tau_{0}\) and variance \(1/\omega\), \[g(\tau\mid\tau_{0},\omega)=\sqrt{\frac{\omega}{2\pi}}\exp\left(-\frac{\omega}{ 2}(\tau-\tau_{0})^{2}\right)\,. \tag{3}\] This form is justified by the simple narrow, monomodal age distributions for monoabundance populations found by Lian et al. (2022), and the fact that the effective selection function and IMF are sufficiently well-behaved that small changes in the age distribution will not significantly change our results. To calculate the rate function \(\lambda(\mathrm{field},D,\tau)\) for each monoabundance population, the number density of red giants needs to be multiplied by a Jacobian factor \(|J(\mathrm{field},D)|=\Omega_{\mathrm{field}}D^{2}\) to convert it from a density in Cartesian coordinates to a density in \(D\). Multiplying by the age distribution gives the combined underlying distribution in field, distance and age. The observed distribution is then found by multiplying this underlying distribution by the effective selection function \(\mathfrak{S}(\mathrm{field},D,\tau)\).
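Eqs. 2 and 3 translate directly into code. This is a minimal sketch; the parameter values passed in below are illustrative only (the fitted values per monoabundance population are shown in Figure 1).

```python
import numpy as np

R0 = 8.1  # kpc, radial distance of the Sun from the Galactic centre

def n_giants(R, z, logA, a_R, a_z):
    """Eq. (2): axisymmetric exponential number density of red giants [kpc^-3]."""
    return np.exp(logA - a_R * (R - R0) - a_z * np.abs(z))

def age_dist(tau, tau0, omega):
    """Eq. (3): normal age distribution with mean tau0 and variance 1/omega."""
    return np.sqrt(omega / (2.0 * np.pi)) * np.exp(-0.5 * omega * (tau - tau0) ** 2)

# At R = R0 in the midplane, the density reduces to exp(logA):
density_at_sun = n_giants(8.1, 0.0, 2.0, 0.3, 1.0)  # -> e^2, about 7.389
```

Both functions accept numpy arrays, so the rate function of Eq. 4 can be evaluated on a grid of distances and ages in one vectorised call.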
Thus the rate function for each monoabundance population is given by \[\begin{split}\lambda(\mathrm{field},D,\tau\mid\mathrm{logA},a_{R},a_{z},\tau_{0},\omega)&=n_{\mathrm{giants}}(R,z\mid\mathrm{ logA},a_{R},a_{z})\cdot g(\tau\mid\tau_{0},\omega)\\ &\cdot|J(\mathrm{field},D)|\cdot\mathfrak{S}(\mathrm{field},D, \tau)\,.\end{split} \tag{4}\] This particular form for the density profile has the advantage that the Poisson point process likelihood takes the tractable form \[\begin{split}\ln\mathcal{L}(\text{logA},a_{R},a_{z},\tau_{0},\omega )=\text{const}&+N\left(\text{logA}-a_{R}\langle R-R_{0}\rangle-a_ {z}\langle|z|\rangle+\tfrac{1}{2}\ln\omega-\frac{\omega}{2}\left(\langle\tau^{ 2}\rangle-2\tau_{0}\langle\tau\rangle+\tau_{0}^{2}\right)\right)\\ &-\sum_{\text{field}}\int\int\lambda(\text{field},D\mid\text{ logA},a_{R},a_{z},\tau_{0},\omega)\,\text{d}D\,\text{d}\tau\,,\end{split} \tag{5}\] where the sum over every star in the monoabundance population \(\sum_{i}\ln\lambda(O_{i}\mid\theta)\) in Eq. 1 is reduced to a linear combination of the parameters with aggregates of the data in the monoabundance population being fitted: \(N\) as the number of stars observed, \(\langle R\rangle\) and \(\langle|z|\rangle\) as the mean coordinate values, and \(\langle\tau\rangle\) and \(\langle\tau^{2}\rangle\) as the mean age and mean squared age. To build our model of the Milky Way disk between \(R=4\,\text{kpc}-12\,\text{kpc}\) and \(|z|=0-5\,\text{kpc}\), we found a best-fit model for each monoabundance population by maximising this likelihood with respect to logA, \(a_{R}\), \(a_{z}\), \(\tau_{0}\), and \(\omega\), visually checking each fit to ensure the global maximum had been found. The results are plotted in Figure 1. Reliable ages are not available for stars with [Fe/H] \(<-0.5\) in AstroNN, so for these populations we fit only for logA, \(a_{R}\) and \(a_{z}\). We use an effective selection function dependent only on field and \(D\), calculated assuming a uniform age distribution.
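The reduction of the per-star sum in Eq. 5 to aggregates of the data can be verified numerically. In this sketch, the stars are random stand-ins, and the Jacobian and selection-function terms (constant in the parameters, so absorbed into the const of Eq. 5) are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
R0 = 8.1
R = rng.uniform(4.0, 12.0, 500)    # stand-in Galactocentric radii [kpc]
z = rng.uniform(-2.0, 2.0, 500)    # stand-in heights [kpc]
tau = rng.uniform(1.0, 12.0, 500)  # stand-in ages [Gyr]
logA, a_R, a_z, tau0, omega = 2.0, 0.3, 1.2, 7.0, 0.5

# Direct per-star sum of ln n_giants + ln g over the stars (Eqs. 2 and 3).
direct = np.sum(logA - a_R * (R - R0) - a_z * np.abs(z)
                + 0.5 * np.log(omega / (2.0 * np.pi))
                - 0.5 * omega * (tau - tau0) ** 2)

# The same quantity from aggregates of the data, as in Eq. 5.
N = R.size
agg = N * (logA - a_R * (np.mean(R) - R0) - a_z * np.mean(np.abs(z))
           + 0.5 * np.log(omega) - 0.5 * np.log(2.0 * np.pi)
           - 0.5 * omega * (np.mean(tau**2) - 2.0 * tau0 * np.mean(tau) + tau0**2))
```

`direct` and `agg` agree to floating-point precision, which is why the fit only needs the five aggregates \(N\), \(\langle R\rangle\), \(\langle|z|\rangle\), \(\langle\tau\rangle\), and \(\langle\tau^{2}\rangle\) rather than the full star list.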
While a uniform age distribution is an inaccurate description of a monoabundance population, it is a non-informative assumption that ensures that the effective selection function is never zero where a star may actually be observed. On testing, we found that the effective selection function did not vary strongly with the age distribution assumed. Even then, as discussed in the section below, we do not expect stars with metallicity [Fe/H] \(<-0.5\) to contribute a significant number of ISOs, making our conclusions independent of the age distribution assumed for these stars. The two main chemodynamical populations of the Milky Way are clearly shown in our model (Fig. 1). The young, high [Fe/H], low [\(\alpha\)/Fe] monoabundance populations have the low vertical scale lengths and high radial scale lengths which form the thin disk, whereas the old low [Fe/H], high [\(\alpha\)/Fe] monoabundance populations have the opposite trend in scale lengths, forming the thick disk (Mashonkina et al., 2019). This approach gives us simple but accurate models of the trends of the Milky Way disk's stellar population. These results agree with the results of Bovy et al. (2016), which also models monoabundance populations in APOGEE, fitting more complex models to only stars in the red clump, using their highly consistent absolute magnitudes as an accurate distance measurement.

### The Sine Morte Stellar Population

Having obtained a model for the distribution of red giants in the Milky Way, we then infer the distribution of all stars throughout Galactic history. As described in § 3.2, ISOs form mostly at the start of their parent star's life and then outlive their parent star. Under this constraint, the population of currently living stars is too limited to use to predict the ISO population.
Instead, we must consider what the stellar population would be at the present time if stars did not die, but instead continued to orbit around the Milky Way indefinitely -- while being subjected to the same dynamical effects that affected both their longer-lived companion stars and the ISOs they had released on similar orbits. We introduce this as the _sine morte_ stellar population. Footnote 4: Latin for “without death”; pronounced ‘seen-ay mort-ay’. First, we calculate the sine morte number density of stars in each monoabundance population, from the red giant number density. For this, we use the same PARSEC stellar model isochrones and Kroupa initial mass function with a minimum mass of \(0.08M_{\odot}\) as in § 2. Using the isochrones with metallicities corresponding to each monoabundance population, weighted by the fitted age distribution \(g(\tau\mid\tau_{0},\omega)\) for that monoabundance population, we calculate \(N_{\text{giants}}/N_{\text{sm}}\): the fraction of all stars ever created that are currently in the red giant phase, which is again defined by \(1\leq\log g<3\). Dividing the number density of giants by this fraction gives us the sine morte stellar number density. This assumes that the age distribution of each monoabundance population does not vary significantly across the Milky Way disk, which is reasonable for the range of \(R\) and \(z\) we are considering (Lian et al., 2022). Next, we calculate the sine morte stellar mass density by multiplying the sine morte stellar number density by the average star's initial mass, \(\langle M_{\text{int}}\rangle\). By definition, the average mass of stars in the sine morte population is the average initial mass, which is only dependent on the initial mass function. Thus we calculate \(\langle M_{\text{int}}\rangle\) from the same Kroupa IMF.
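The conversion just described (dividing the giant number density by the giant fraction, then multiplying by the IMF-averaged initial mass) can be sketched in one line; the numbers below are purely illustrative, not the isochrone-derived values.

```python
def rho_sine_morte(n_giants, giant_fraction, mean_initial_mass):
    """Scale the red giant number density up to the mass density of all stars
    ever formed ('sine morte'), using the fraction of stars currently in the
    giant phase (N_giants / N_sm) and the IMF-averaged initial mass <M_int>."""
    return mean_initial_mass / giant_fraction * n_giants

# Illustrative stand-in numbers: if 2% of all stars ever formed are currently
# red giants and the IMF-averaged initial mass is 0.5 Msun, a giant number
# density of 100 kpc^-3 implies a sine morte mass density of 2500 Msun kpc^-3.
rho = rho_sine_morte(n_giants=100.0, giant_fraction=0.02, mean_initial_mass=0.5)
```

The same scaling is applied per monoabundance population, since both the giant fraction and the age distribution it depends on differ between populations.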
All combined, the sine morte stellar mass density is given by \[\rho_{\rm sm}({\bf x})=\frac{\langle M_{\rm int}\rangle}{N_{\rm giants}/N_{\rm sm }}\cdot n_{\rm giants}({\bf x})\,. \tag{6}\] The difference between the two populations at the position of the Sun is illustrated in Fig. 2. The ratio between the sine morte stellar mass density and the red giant number density is plotted in the top right panel in Fig. 1. For the monoabundance populations with \(\rm[Fe/H]<-0.5\), without reliable age measurements, we calculate an upper limit for the sine morte mass density. We assume an old age distribution, with mean \(\tau_{0}=12\) Gyr and standard deviation \(\omega^{-\frac{1}{2}}=1\,\)Gyr, which minimises \(N_{\rm giants}/N_{\rm sm}\). Even with this upper limit, below we find that stars with \(\rm[Fe/H]<-0.5\) contribute a very small number of ISOs, making our conclusions independent of this approximation.

Figure 1: The best-fit values of the density modelling parameters for each monoabundance population with 20 or more observed stars, and \(\rho_{\rm sm}/n_{\rm giants}\), the ratio between the sine morte stellar mass density and the red giant number density, explained in section 2.3.

The chemical model we use in § 3.1 to link the composition of ISOs to the composition of stars depends only on stellar metallicity [Fe/H]: so when we evaluate the model we sum the sine morte distribution over the bins in [\(\alpha\)/Fe] to get a distribution in only [Fe/H] bins. To ensure accurately fitted models, we include only monoabundance populations with 20 or more observed stars. We then smooth this binned distribution by taking the derivative of a spline fit to the cumulative distribution, knotted at the edges of the bins. Plotted in Fig. 3 is the sine morte metallicity distribution \(\rho_{\rm sm}(\rm[Fe/H])\) evaluated at the position of the Sun (\(R=8.1\,\)kpc, \(z=0.021\,\)kpc, GRAVITY Collaboration et al.
(2018); Bennett and Bovy (2019)), and integrated over the broader \(R=4\,\)kpc\(-12\,\)kpc and \(|z|=0-5\,\)kpc range of the Galactic disk we are modelling.

## 3 Predicting the Interstellar Object Distribution

In the previous section, we calculated the sine morte stellar population of the Milky Way from the APOGEE survey. In this section we describe how to predict an example physical property of the Galactic ISO population -- the ISO water mass fraction distribution -- from this stellar population.

### Protoplanetary Disk Model

We make the foundational assertion that all ISOs we consider form as planetesimals in a protoplanetary disk ('Oumuamua ISSI Team et al., 2019). A protoplanetary disk has to first order the same composition as the star it forms around, since they both form from the same molecular cloud core. Under this assumption, Bitsch and Battistini (2020) predict the composition of planetesimals formed around stars of different metallicities. They do this for stars with metallicities in the range \(-0.4\leq\rm[Fe/H]\leq 0.4\), using the average elemental composition of stars at each value of [Fe/H] in the GALAH DR2 (Buder et al., 2018), and for planetesimals that form both interior and exterior to the water ice line: the inner edge of the region of the protoplanetary disk where it is cool enough to form water ice. We assume that each star in the sine morte population produces only ISOs with their composition set by the Bitsch & Battistini (2020) formula for that metallicity, exterior to the water ice line.

Figure 2: Comparison of the [Fe/H] distribution of red giants and the [Fe/H] distribution of the sine morte stellar population around the Sun, both normalised. Though similar, there are subtle differences largely caused by the fact that the distribution of ages changes as a function of [Fe/H], as shown in Fig. 1.
While in reality, stars will each produce a distribution of ISOs that formed at different positions in their protoplanetary disk and thus have a range of compositions, this simplification of only modelling planetesimals which form exterior to the water ice line is justified by the proportionally greater reservoir of snowline-exterior planetesimals, and the higher efficiencies of formation mechanisms dynamically stripping them into the interstellar population (Fitzsimmons et al., 2023). In our Solar System, the vast majority of Oort cloud objects are ice-rich (Meech et al., 2016); therefore both these and the majority of ISOs produced by the Solar System must have formed outside of the water ice line (Filacchione et al., 2022). Additionally, planetesimals beyond the water ice line are more loosely bound to their parent stars, so will be more easily ejected (e.g. Moro-Martin (2018)). We focus on the mass fraction of water, \(f_{\rm H_{2}O}\), as this varies significantly and decreases monotonically with [Fe/H] in the models of Bitsch & Battistini (2020). To obtain a smooth map from [Fe/H] to \(f_{\rm H_{2}O}\), we fit a third-order polynomial to the water ice mass fraction data points in figure 10 of Bitsch & Battistini (2020). For metallicities outside the range of \(-0.4\leq\rm[Fe/H]\leq 0.4\) we assume that the relationship between \(f_{\rm H_{2}O}\) and [Fe/H] continues to be monotonic, with \(f_{\rm H_{2}O}\) remaining high beyond the low [Fe/H] limit and remaining low beyond the high [Fe/H] limit: allowing us to track the fraction of ISOs with high, low, and intermediate water mass fraction. This range corresponds to \(0.07\leq f_{\rm H_{2}O}\leq 0.51\) in water mass fraction.

Figure 3: The normalised mass-weighted sine morte stellar metallicity distribution, evaluated at the position of the Sun and integrated over the range of the Milky Way disk we are modelling. Vertical lines show the range in [Fe/H] for which we model variations in the water mass fraction \(f_{\rm H_{2}O}\) of ISOs.

During the writing of this paper, Cabral et al. (2023) was published, containing an updated version of the Bitsch & Battistini (2020) chemical model which included data from GALAH DR3 (Buder et al., 2021) and APOGEE DR17 (Abdurro'uf et al., 2022), and allowed for variation in [\(\alpha\)/Fe] as well as [Fe/H]. They found that the water mass fraction of planetesimals, consistent with the Bitsch & Battistini (2020) assumptions we use, is most dependent on [Fe/H]. They also confirmed the trend between [Fe/H] and \(f_{\rm H_{2}O}\) found in Bitsch & Battistini (2020). However, they also found that for the APOGEE survey there was a smaller variation in \(f_{\rm H_{2}O}\) over the same range in [Fe/H] compared to the GALAH data used in their work. Cabral et al. (2023) note that this means that the trends seen are robust. The findings of Cabral et al. (2023) do mean that the predictions of the ISO distributions in our work may overestimate the width of the distribution in \(f_{\rm H_{2}O}\). We assume that every star produces ISOs, and the number of ISOs produced by each star depends on its mass and metallicity. Lu et al. (2020) argue that the mass of planet-forming material in a protoplanetary disk is proportional to both the mass of the host star \(M_{*}\) and its metal mass fraction \(Z\) -- well approximated by \(Z_{\odot}10^{\rm[Fe/H]}\) for small values of \(Z\) (\(Z_{\odot}=0.0153\), Caffau et al. (2011)). In the absence of confirmed and comprehensive knowledge of ISO formation mechanisms, we use this as a reasonable proxy for the number of ISOs produced by each star. However, the number of ISOs produced by a star may not be simply proportional to the mass of planet-forming material, because ISO production also requires the ejection of planetesimals -- which is dependent on system dynamical architecture.
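The mapping from [Fe/H] to \(f_{\rm H_{2}O}\) described in this section can be sketched as below. The actual map is a third-order polynomial fit to figure 10 of Bitsch & Battistini (2020); here a linear stand-in with the same endpoints, and the same monotone clamping outside \(-0.4\leq\rm[Fe/H]\leq 0.4\), is assumed purely for illustration.

```python
import numpy as np

def f_h2o(feh):
    """Stand-in map from [Fe/H] to ISO water mass fraction f_H2O.
    Decreasing over -0.4 <= [Fe/H] <= 0.4, spanning 0.51 down to 0.07,
    and held flat (clamped) outside the modelled metallicity range."""
    feh = np.clip(feh, -0.4, 0.4)
    return 0.29 - (0.51 - 0.07) / 0.8 * feh

# Low-metallicity stars map to water-rich ISOs, high-metallicity to water-poor.
water_rich = f_h2o(-0.4)   # -> 0.51
water_poor = f_h2o(0.4)    # -> 0.07
```

The clamping implements the monotonicity assumption of the text: any star below the low [Fe/H] limit contributes ISOs at the high end of the \(f_{\rm H_{2}O}\) range, and vice versa.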
One major ISO ejection pathway is scattering by giant planets, the occurrence of which has its own metallicity dependence (Osborn & Bayliss, 2020). Additionally, scattering by giant planets may form ISOs with extra fragmentation, reweighting the number distribution towards lower masses and increasing the number of ISOs produced from the same mass of planet-forming material (Raymond et al., 2018). We therefore assume the number of ISOs produced by each star is proportional to its mass, while incorporating into the model a power law dependence on metallicity mass fraction: thus the number of ISOs produced is proportional to \(M_{*}\cdot 10^{\beta[\rm Fe/H]}\). Here \(\beta=1\) corresponds to the simple assumption that the number of ISOs produced is proportional to the mass of planet-forming material. \(\beta=0\) corresponds to no metallicity dependence at all. However, we expect \(\beta>0\): planetesimals require some fraction of dust and ice in order to exist, i.e. at the elemental level, C/N/O, Al/Si, &c. must be present, so a minimum metallicity constraint must exist (Johnson & Li, 2012). We do not model the constant of proportionality here, as this depends on the size distribution of ISOs, which remains observationally unconstrained with only two ISOs ('Oumuamua and Borisov) known. The number density of ISOs at position \(\mathbf{x}\) with water mass fraction \(f_{\mathrm{H_{2}O}}\) is then given by \[n_{\mathrm{ISO}}(\mathbf{x},f_{\mathrm{H_{2}O}}\mid\beta)\propto 10^{\beta[\mathrm{Fe/H}]}\cdot\frac{\mathrm{d[Fe/H]}}{\mathrm{d}f_{\mathrm{H_{2}O}}}\cdot\rho(\mathbf{x},[\mathrm{Fe/H}])\,, \tag{7}\] where \(\rho(\mathbf{x},[\mathrm{Fe/H}])\) is the mass density distribution of stars at position \(\mathbf{x}\) with the metallicity [Fe/H] corresponding to the ISO water mass fraction \(f_{\mathrm{H_{2}O}}\), and \(\mathrm{d[Fe/H]/d}f_{\mathrm{H_{2}O}}\) is the gradient of the relationship between [Fe/H] and \(f_{\mathrm{H_{2}O}}\) described in § 3.1. The fact that stars do in fact die is then corrected for by replacing \(\rho(\mathbf{x},[\mathrm{Fe/H}])\) in this equation with \(\rho_{\mathrm{sm}}(\mathbf{x},[\mathrm{Fe/H}])\), the sine morte stellar mass density introduced in § 2.3.
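The ISO production weighting stated above reduces to a one-line function; this is a sketch of that assumption, up to the unmodelled constant of proportionality.

```python
def iso_production_weight(m_star, feh, beta=1.0):
    """Relative number of ISOs produced by a star of mass m_star [Msun] and
    metallicity feh = [Fe/H]: proportional to M_* * 10^(beta * [Fe/H])."""
    return m_star * 10.0 ** (beta * feh)

# beta = 1 recovers 'proportional to the mass of planet-forming material',
# since Z is approximately Z_sun * 10^[Fe/H]; beta = 0 removes the
# metallicity dependence entirely.
w_solar = iso_production_weight(1.0, 0.0)        # -> 1.0
w_metal_poor = iso_production_weight(1.0, -1.0)  # -> 0.1
```

A solar-mass star at [Fe/H] = \(-1\) therefore contributes ten times fewer ISOs than a solar twin under \(\beta=1\), but equally many under \(\beta=0\).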
Correcting for the fact that ISOs disperse from near their parent stars is more complex, so we proceed with some simplifying assumptions. Once ejected, unless its resultant velocity exceeds the Galactic escape velocity, an ISO will orbit the Galactic centre, as stars do. We assume ISOs are ejected relatively slowly from their parent planetary systems compared to the stellar velocity dispersion. This is justified assuming ejection velocities \(<10\,\mathrm{km\,s^{-1}}\), the maximum ejection velocity from a planetary system under an expected suite of scattering mechanisms (Pfalzner and Bannister, 2019; Fitzsimmons et al., 2023), and the velocity dispersions \(\gtrsim 20\,\mathrm{km\,s^{-1}}\) measured in the Solar Neighbourhood (Anguiano et al., 2020). Stars form on near-circular orbits around the Galactic centre (Frankel et al., 2020), so a cloud of recently ejected ISOs will all have similar orbits to their parent star: nearly circular with similar ranges of oscillation in Galactocentric \(R\) and \(z\). However, the slight differences in their orbits will give the ISOs different orbital periods around the Galactic centre, meaning they will disperse along their similar near-circular orbital paths. Therefore, though ISOs do not stay near their parent star, we assume here that they only disperse in the azimuthal direction, and remain in the same \(R\) and \(z\) range as their parent star. Under this assumption equation 7 still holds if the stellar density model is axisymmetric, depending only on \(R\) and \(z\), as does ours: \[n_{\mathrm{ISO}}(R,z,f_{\mathrm{H_{2}O}}\mid\beta)\propto 10^{\beta[\mathrm{Fe/H }]}\cdot\frac{\mathrm{d[Fe/H]}}{\mathrm{d}f_{\mathrm{H_{2}O}}}\cdot\rho_{ \mathrm{sm}}(R,z,[\mathrm{Fe/H}]) \tag{8}\] Orbits around the Galactic centre can evolve with time, due to the influence of perturbing potentials such as spiral arms, the bar and molecular clouds. 
These effects can cause both dynamical 'heating', an increase in the size of radial and vertical excursions from a circular orbit, and 'migration', changes to the radius of an orbit while it remains nearly circular (Sellwood and Binney, 2002). We introduced Eq. 8 with the assumption that an ISO will stay in the same range of \(R\) and \(z\) as its parent star. Due to their azimuthal separation, the star and ISOs will experience slightly different perturbing potentials, causing their orbits to evolve adjacently but independently. However, if the Galactic stellar and ISO distributions are sufficiently axisymmetric, perturbations will consistently change together both the orbits of stars and the orbits of a corresponding number of ISOs. Thus, Eq. 8 holds for our model. In this work we predict the distribution of ISOs in \(f_{\mathrm{H_{2}O}}\) both at particular values of \(R\) and \(z\) and integrated over the whole Milky Way. Since we do not model the total number of ISOs, we remove the need for the constant of proportionality in Eq. 8 by normalising each \(n_{\mathrm{ISO}}\) distribution we calculate with \[p(f_{\mathrm{H_{2}O}}\mid\beta)=\frac{n_{\mathrm{ISO}}(f_{\mathrm{H_{2}O}}\mid \beta)}{\int n_{\mathrm{ISO}}(f_{\mathrm{H_{2}O}}\mid\beta)\,\mathrm{d}f_{ \mathrm{H_{2}O}}}\,. \tag{9}\] This gives us the distribution of ISOs within the bounds of the protoplanetary disk chemical model: \(0.07\leq f_{\mathrm{H_{2}O}}\leq 0.51\). Outside of this range, we can calculate the fraction of ISOs with \(f_{\mathrm{H_{2}O}}\leq 0.07\) and \(f_{\mathrm{H_{2}O}}\geq 0.51\), by assuming that the relation between [Fe/H] and \(f_{\mathrm{H_{2}O}}\) remains monotonic. Thus, all stars with [Fe/H] \(\leq-0.4\) contribute ISOs with \(f_{\mathrm{H_{2}O}}\geq 0.51\), and all stars with [Fe/H] \(\geq 0.4\) contribute ISOs with \(f_{\mathrm{H_{2}O}}\leq 0.07\).

## 4 Results

In this section we demonstrate the prediction framework of section 3 by making two different predictions.
We demonstrate two different example values for the stellar metallicity dependence of ISO production, \(\beta\), with \(\beta=1\) for our principal prediction and \(\beta=0\) as an alternate prediction.

### Principal Prediction: \(\beta=1\)

First, we predict the distribution of ISOs assuming that the number produced by each star is proportional to the star's metal mass fraction \(Z\), by setting \(\beta=1\) in equation 8. As described in section 3.1, in the absence of concrete knowledge of ISO formation mechanisms this is a reasonable value to assume, and thus we consider this our principal prediction. Table 1 lists the fraction of ISOs within and outside either end of the Bitsch and Battistini (2020) protoplanetary disk chemical model \(f_{\mathrm{H_{2}O}}\) range, \(0.07\leq f_{\mathrm{H_{2}O}}\leq 0.51\). We assess both the distribution of ISOs at the position of the Sun, at \(R=8.1\,\mathrm{kpc}\), \(z=0.021\,\mathrm{kpc}\) (GRAVITY Collaboration et al., 2018; Bennett and Bovy, 2019), and the distribution of ISOs integrated over the region of the Milky Way disk we are modelling. In both cases, the vast majority of ISOs lie within the range of the model. The significant mass of stars beyond the lower [Fe/H] limit of the \(\rho_{\rm sm}(\rm[Fe/H])\) distribution in Fig. 3, both at the position of the Sun and over the whole Milky Way disk, does not contribute significantly to the fractions of ISOs in the high \(f_{\rm H_{2}O}\) range. This is because of the exponential dependence of the number of ISOs produced by each star on [Fe/H] in Eq. 8. Within the range of the chemical model, we can plot the distribution function of the ISO water mass fractions. Figure 4 shows both the population of ISOs around the Sun and over the whole Milky Way. The different shapes of the two metallicity distribution functions in Figure 3 are also apparent here, as the wider Milky Way metallicity distribution function results in a wider ISO water mass fraction distribution.
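A toy version of this prediction pipeline, from Eq. 8 through the normalisation of Eq. 9 to a Table-1-style in-range fraction, might look as follows. The metallicity distribution and the linear [Fe/H]-to-\(f_{\rm H_{2}O}\) map here are simplified stand-ins, so the resulting numbers are not the paper's.

```python
import numpy as np

# Grid over the modelled water mass fraction range.
f_grid = np.linspace(0.07, 0.51, 400)
df = f_grid[1] - f_grid[0]

# Stand-in linear map between f_H2O in [0.07, 0.51] and [Fe/H] in [0.4, -0.4],
# so d[Fe/H]/df_H2O is a negative constant.
dfeh_dfh2o = -0.8 / (0.51 - 0.07)
feh = 0.4 + dfeh_dfh2o * (f_grid - 0.07)

# Gaussian stand-in for the sine morte metallicity distribution rho_sm([Fe/H]).
rho_sm = np.exp(-0.5 * (feh / 0.2) ** 2)

beta = 1.0
n_iso = 10.0 ** (beta * feh) * np.abs(dfeh_dfh2o) * rho_sm  # Eq. 8, up to a constant
p = n_iso / (np.sum(n_iso) * df)                            # Eq. 9, normalised pdf

# Table-1-style summary: by construction, all of this stand-in distribution
# lies inside the modelled range, so the in-range fraction is 1.
frac_in_range = np.sum(p) * df
```

With the real \(\rho_{\rm sm}\), the mass outside \(-0.4\leq\rm[Fe/H]\leq 0.4\) would instead be assigned to the \(f_{\rm H_{2}O}<0.07\) and \(f_{\rm H_{2}O}>0.51\) bins, as in Table 1.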
\begin{table} \begin{tabular}{c c c} \hline ISO \(f_{\rm H_{2}O}\) range & Fraction of ISOs around Sun & Fraction of ISOs in Milky Way Disk \\ \hline \(f_{\rm H_{2}O}<0.07\) & 0.017 & 0.024 \\ \(0.07\leq f_{\rm H_{2}O}\leq 0.51\) & 0.955 & 0.915 \\ \(0.51<f_{\rm H_{2}O}\) & 0.027 & 0.061 \\ \hline \end{tabular} \end{table} Table 1: Primary prediction for the fraction of ISOs in each \(f_{\rm H_{2}O}\) range, evaluated at the position of the Sun and integrated over the whole Milky Way, with \(\beta=1\).

Figure 4: Primary prediction for the distribution of ISO water mass fractions, evaluated at the position of the Sun and integrated over the Milky Way disk, for \(\beta=1\).

The distributions of ISOs around the Sun and averaged over the Milky Way disk are remarkably similar. We explore this in Figure 5, through the sine morte stellar mass [Fe/H] distribution and the ISO \(f_{\rm H_{2}O}\) distribution within the range of the chemical model at a range of values of \(R\), integrated over \(z\). Also plotted are the median values of [Fe/H] and \(f_{\rm H_{2}O}\) at each value of \(R\). Clear in the left-hand panel of Fig. 5 is the well-studied Galactic metallicity gradient (Cheng et al., 2012) -- but additionally in the right-hand panel is a corresponding gradient in ISO water mass fraction. Since the composition of ISOs depends on the chemical makeup of the stars that they form around, we expect trends in the chemical abundances of stars to be accompanied by equivalent trends in the compositions of ISOs. Figure 5 also shows why the Solar neighbourhood distributions are similar to the whole-Galaxy integrated distributions in Figures 3 and 4: the Solar neighbourhood happens to be at an intermediate value of \(R\) (8.1 kpc), where both the stellar [Fe/H] distribution and therefore the ISO \(f_{\rm H_{2}O}\) distributions are approximately midway between the high and low extremes.
In these plots, we have normalised the distributions in [Fe/H] and \(f_{\rm H_{2}O}\) such that at each value of \(R\) the integral over [Fe/H] or \(f_{\rm H_{2}O}\) is unity. However, it is worth noting that our model implies the densities of both the stellar and ISO populations will decrease with distance from the Galactic centre. The exponential stellar density profile is well established (Juric et al., 2008). Here we predict that the Galactic ISO population density profile will decrease faster than that of the stars: due to the metallicity dependence of the number of ISOs produced by each star, the higher-metallicity stars in the inner disk will produce more ISOs per unit of stellar mass than the lower-metallicity stars of the outer disk.

### Alternate Prediction: \(\beta=0\)

Setting \(\beta=1\) is just a choice in a basic and observationally unconstrained model. Therefore, we explore how changing its value affects the predicted ISO distribution. Lintott et al. (2022) predicted the ISO populations of galaxies from the EAGLE hydrodynamical simulation, using the same protoplanetary disk chemical model to map stellar metallicities [Fe/H] to ISO water mass fractions \(f_{\rm H_{2}O}\). However, Lintott et al. (2022) assumed that the number of ISOs produced by each star was independent of the star's metallicity. This is equivalent to setting \(\beta=0\) in Eq. 8 in this work, and we use this to make our alternate prediction. The resulting water mass fraction distributions for ISOs at the position of the Sun and integrated over the whole Milky Way are plotted in Figure 6 and tabulated in Table 2. Assuming \(\beta=0\) causes the ISO distribution to be more weighted towards a higher water mass fraction. These high water mass fraction ISOs come from low metallicity stars, which in this scenario no longer have their contributions to the ISO population suppressed by the exponential dependence on [Fe/H].
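The direction of this shift can be checked with a minimal numerical sketch, assuming a hypothetical Gaussian [Fe/H] distribution and a linear stand-in for the chemical model (neither is the paper's fitted input):

```python
import numpy as np

# Compare beta = 1 and beta = 0 ISO-number weightings on a toy population.
# The [Fe/H] distribution and the f_H2O([Fe/H]) relation below are
# illustrative stand-ins, not the paper's inputs.
rng = np.random.default_rng(1)
feh = rng.normal(0.0, 0.25, 200_000)
f = np.clip(0.31 - 0.4 * feh, 0.07, 0.51)     # stand-in chemical model

def mean_f(beta):
    # weight each star's ISO contribution by 10**(beta * [Fe/H])
    w = 10.0 ** (beta * feh)
    return np.average(f, weights=w)

# With beta = 0 every star counts equally, so metal-poor (water-rich)
# stars are no longer down-weighted and the mean f_H2O rises.
assert mean_f(0.0) > mean_f(1.0)
```

The assertion encodes the claim in the text: removing the metallicity weighting shifts the population towards higher water mass fractions.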
This means that the low metallicity tails of the \(\rho_{\rm sm}\)([Fe/H]) distributions in Figure 3 now contribute a significant number of ISOs with \(0.51<f_{\rm H_{2}O}\): almost 20% when averaged over the Milky Way disk. A comparison of these results to those of Lintott et al. (2022) is discussed in section 6.1.

\begin{table} \begin{tabular}{c c c} \hline ISO \(f_{\rm H_{2}O}\) range & Fraction of ISOs around Sun & Fraction of ISOs in Milky Way Disk \\ \hline \(f_{\rm H_{2}O}<0.07\) & 0.007 & 0.009 \\ \(0.07\leq f_{\rm H_{2}O}\leq 0.51\) & 0.903 & 0.798 \\ \(0.51<f_{\rm H_{2}O}\) & 0.090 & 0.194 \\ \hline \end{tabular} \end{table} Table 2: Alternate prediction for the fraction of ISOs in each \(f_{\rm H_{2}O}\) range, evaluated at the position of the Sun and integrated over the whole Milky Way with \(\beta=0\): assuming the number of ISOs produced by each star is independent of the star’s metallicity.

Figure 5: Sine morte stellar mass distributions (left) and ISO water mass fraction distributions (right), integrated over \(z\), at each distance from the Galactic Centre \(R\). Both distributions are normalised such that at each value of \(R\) the integral over [Fe/H] or \(f_{\rm H_{2}O}\) is unity. Dashed lines show the median value of each distribution at each \(R\).

## 5 Inference Framework

### Bayesian Inference

In this work so far we have detailed a method of predicting the distribution of ISOs around the Sun, and demonstrated that this method can be used with varying models to get different predictions. These predictions can be compared to an observed sample of ISOs in order to make inferences about the models which went into producing them. In this section, we detail how to do this in a Bayesian manner. The predictions we make in this work are normalised ISO water mass fraction distributions, \(p(f_{\rm H_{2}O}\mid\beta)\), parameterised by the ISO production power law slope \(\beta\).
This, however, could be generalised to a distribution in any set of observable properties \(\mathbf{O}\), and parameterised by any set of parameters \(\theta\). These distributions are also probability density functions for the value of \(\mathbf{O}\) of a randomly selected ISO, so \(p(\mathbf{O}\mid\theta)\,\delta\mathbf{O}\) is the probability of an ISO having properties in the infinitesimal volume \(\delta\mathbf{O}\). Extending this to multiple ISOs, a sample of ISOs with properties \(\mathbf{O}_{1},\ldots,\mathbf{O}_{N}\) will have likelihood \[p(\mathbf{O}_{1},\ldots,\mathbf{O}_{N}\mid\theta)=\prod_{i}p(\mathbf{O}_{i} \mid\theta)\] assuming ISOs arrive independently with independent compositions. This is a valid assumption if the ISOs we observe are samples from a large Galactic population with contributions from stars across the Galaxy and cosmic time. This is because the probability that two randomly selected ISOs have the same parent star is approximately equal to one over the total number of stars which could contribute ISOs to the population around the Sun. The ISO sample likelihood can then be combined with priors to form a posterior distribution on the model parameters: \[p(\theta\mid\mathbf{O}_{1},\ldots,\mathbf{O}_{N})\propto p(\theta)\cdot\prod_{i} p(\mathbf{O}_{i}\mid\theta) \tag{10}\] This posterior can be used to calculate estimates and confidence intervals for the values of the parameters \(\theta\). An example of this calculation is given in section 6.3.

Figure 6: Alternate prediction for the distribution of ISO water mass fractions, evaluated at the position of the Sun and integrated over the Milky Way disk with \(\beta=0\): assuming the number of ISOs produced by each star is independent of the star’s metallicity.

### Additional Properties of ISOs

As with all minor planets, there are many observable quantities for ISOs that could be used in this framework in the set of observable properties \(\mathbf{O}\).
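As a concrete illustration of the posterior update in Eq. (10), the posterior for a single parameter can be evaluated on a grid. The predicted density used here is a hypothetical Gaussian stand-in whose mean shifts with \(\beta\), not the Galactic model itself:

```python
import numpy as np

# Grid sketch of Eq. (10): posterior ∝ prior × product of per-ISO
# likelihoods. p(f_H2O | beta) is a hypothetical Gaussian stand-in.
def p_f_given_beta(f, betas):
    mu = 0.35 - 0.05 * betas                  # assumed beta-dependence
    return np.exp(-0.5 * ((f - mu) / 0.1) ** 2) / (0.1 * np.sqrt(2 * np.pi))

observed = [0.30, 0.28, 0.41]                 # hypothetical ISO sample
betas = np.linspace(-2.0, 8.0, 1001)

log_post = sum(np.log(p_f_given_beta(f, betas)) for f in observed)
# a flat prior adds only a constant in log space, so it is omitted
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, betas)                 # normalise on the grid

beta_map = betas[np.argmax(post)]             # maximum a posteriori value
```

Working in log space before exponentiating avoids underflow when the sample grows to many ISOs.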
If the distribution of ISOs in these properties can be predicted, the inference method of § 5.1 can be used to compare the predicted distribution to the observed distribution in these properties, to make inferences about the models used. If multiple properties are included in \(\mathbf{O}\), then the joint distribution of ISOs in these properties can be predicted and used in inference. Including different ISO properties in the framework will allow inferences to be made about different processes affecting the ISO population. We briefly consider several such properties here. The Milky Way stellar population is broadly divided between different chemo-dynamical populations: the thin disk (high [Fe/H], low velocity dispersion), thick disk (low [Fe/H], high velocity dispersion), and the halo (very low [Fe/H], radial orbits) (Recio-Blanco et al., 2014; Horta et al., 2023). Since the composition of an ISO depends on the metallicity of the star it formed around, each chemo-dynamical stellar population will contribute ISOs to the Galactic population with a distinct joint distribution in both composition and velocity. Therefore, including the velocities of ISOs in this framework could be used to tie ISOs to the chemo-dynamical stellar populations their parent star belongs to (Eubanks et al., 2021). As an alternative measurement of composition to water mass fraction, the carbon-to-oxygen ratio of a cometary ISO can be estimated from its coma. This contains information about an ISO's formation location in the protoplanetary disk relative to the H\({}_{2}\)O, CO\({}_{2}\) and CO ice lines; modelling suggests it would be a useful measure for future ISOs (Seligman et al., 2022). Further additions to our framework could include predictions of the size distribution and aspect ratio distribution of ISOs, which would be especially interesting due to 1I/'Oumuamua's extreme shape.
The prediction of these distributions would depend on the stellar population via models of planetesimal formation which could link the size and shape distribution of planetesimals to the properties of their natal star. Finally, including the binarity rate of ISOs would test models of the applicability of planetesimal formation mechanisms to other protoplanetary disks, as the compact binarity of trans-Neptunian objects does for the Solar System protoplanetary disk (e.g. Nimmo et al., 2018). This could also constrain the ejection mechanisms of ISOs: loosely-bound wide planetesimal binaries would not survive scattering by a giant planet, but would survive a more gentle gravitational interaction with a stellar flyby.

### Generalisation to Other Galactic Populations

Our method could easily be generalised to any Galaxy-wide population with a dependence on the properties of stars, simply by replacing the "ISO recipe" of section 3. For example, the distribution of planets through the Galaxy could be predicted by substituting a model of the occurrence rate of planets as a function of their host star's metallicity. This is based on the planet-metallicity correlation (Fischer and Valenti, 2005; Osborn and Bayliss, 2020): as noted earlier, dust is necessary for planet formation, so stars with higher metallicity are more likely to host planets. Here, the Milky Way metallicity gradient means that planets are more common towards the Galactic centre. This is a testable prediction with microlensing surveys such as OGLE (Udalski et al., 2015), which continue to find exoplanets between the Solar System and the Galactic centre (Tsapras, 2018) with distances estimated from followup observations (Vandorou et al., 2023). If ISOs seed planet formation as hypothesised by Pfalzner and Bannister (2019), the ISO gradient we predict in § 4.1 could also produce a signature in the planetary population.

## 6 Discussion

As described in section 3.1, Cabral et al.
(2023) update the protoplanetary disk chemical model of Bitsch and Battistini (2020) that we use, and note that the trend of planetesimal water mass fraction decreasing with stellar metallicity is robust. They do, however, find that for the APOGEE survey, there was a smaller variation in \(f_{\rm H_{2}O}\) over the same range in [Fe/H] compared to the GALAH data used in Bitsch and Battistini (2020). If the true variation in \(f_{\rm H_{2}O}\) is smaller than in the model of this work, our predictions may overestimate the width of the ISO water mass fraction distribution. Lintott et al. (2022) made a prediction of the ISO population of a simulated Milky Way Galaxy from the EAGLE hydrodynamical cosmological simulation (Schaye et al., 2015), using a model equivalent to that of this work with \(\beta=0\). Whereas we predict a single-peaked \(f_{\rm H_{2}O}\) distribution, they predicted an ISO distribution with a significant number of ISOs with water mass fraction both below and above the range of the protoplanetary disk model, which they interpret as a bimodal distribution in ISO composition. There are expected reasons for the difference between the prediction here, based on the observed Milky Way stellar population, and the prediction by Lintott et al. (2022) based on the simulated EAGLE Galaxy. The EAGLE Galaxy has a much wider [Fe/H] distribution than the Milky Way, with many more stars outside of the [Fe/H] range of the protoplanetary disk model. As a smoothed particle hydrodynamics simulation, EAGLE is susceptible to producing galaxies with [Fe/H] distributions wider than those observed in nature, due to underestimating metal mixing between particles (Wiersma et al., 2009). Therefore the results of this work, based on the observed stellar population of the Milky Way, should give a much more accurate prediction for the Milky Way's population of ISOs.
### Distinguishing Local and Galactic Populations of ISOs

The results of § 4.1 show that the well-studied metallicity gradient of the Milky Way has a corresponding ISO composition gradient, with ISOs generally having a higher water mass fraction at larger Galactocentric radii. Although we can only observe the compositions of ISOs which pass through the inner Solar System, it is still instructive to model how the distribution of ISOs varies across a wider portion of the Galactic disk. This is because we have made assumptions about the Galactic dynamics of ISOs, under which the distribution of ISOs at a point in the Galaxy corresponds to the distribution of stars at that same point. If these assumptions break down, then the particular way in which they break down will affect the population of ISOs detectable in the Solar System in a related, calculable way. For example, radial migration, caused by the non-axisymmetric potential of spiral arms, flattens the Milky Way metallicity gradient by blurring the metallicity distribution in the radial direction (Vickers et al., 2021). This widens the stellar metallicity distribution around the Sun as stars migrate in from the metal-poor outer disk and out from the metal-rich inner disk. However, ISOs may undergo less radial migration than stars, due to the random motion given to them by their ejection from their home planetary system (Daniel and Wyse, 2015). The stars currently in the Solar neighbourhood may thus have a wider range of Galactocentric radii of origin than the ISOs. This would make the distribution of observable ISO compositions narrower than would be predicted from the distribution of stars. Additionally, the low velocity of 1I/'Oumuamua relative to the local standard of rest implies that it -- and therefore a large fraction of observable ISOs -- could come from local star-forming regions (e.g. Hallatt and Wiegert, 2020).
This is also testable: their compositions will match those predicted from the metallicities of the nearest star forming regions. In addition, if the velocity distribution of ISOs is included in future work as described in § 5.2, it may be possible to trace individual ISOs back to the star forming regions they came from by matching them up with both composition and velocity. This would be hugely advantageous for studying planetesimal formation, as it would allow us to pair the properties of detected ISOs directly with observations of their parent planetary systems.

### An Estimation of the ISO Production Metallicity Dependence

The two different predictions made in § 4 demonstrate that the Galactic population of ISOs is sensitive to small changes to the processes that affect their formation and evolution. This means that if models of these processes can be combined to make accurate predictions of the ISO population, then the framework described in § 5 can be used to make inferences about those processes. In particular, the predictions of sections 4.1 and 4.2 show that the ISO distribution around the Sun is sensitive to the metallicity dependence of the number of ISOs produced by each star, \(\beta\). As outlined in section 5.1, these two predictions can then be compared to a sample of ISOs. For our example physical property, this is done by calculating the likelihood of each of these predictions producing the observed distribution of water mass fractions, such as for the ISOs expected to be found by the Legacy Survey of Space and Time (LSST) of the Vera C. Rubin Observatory. For a more general result, this likelihood can be combined with a prior on \(\beta\) and Bayes' theorem to calculate a posterior distribution for \(\beta\). Though the work of this paper has been carried out in expectation of a larger sample of ISOs being known in the future, we do already have a preliminary estimate of the ISO water mass fraction distribution.
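Concretely, once the posterior has been evaluated on a grid of \(\beta\), the MAP value and a symmetric 90% interval follow by numerical integration. A minimal sketch, using a hypothetical Gaussian stand-in for the posterior shape (the paper's actual posterior comes from the model \(f_{\rm H_{2}O}\) distribution):

```python
import numpy as np

# Extract a MAP value and a symmetric 90% interval from a posterior
# evaluated on a grid of beta. The posterior shape is a stand-in Gaussian,
# chosen only to illustrate the mechanics.
betas = np.linspace(-5.0, 10.0, 3001)
post = np.exp(-0.5 * ((betas - 1.33) / 2.6) ** 2)
post /= np.trapz(post, betas)                 # normalise on the grid

# cumulative distribution by trapezoidal accumulation
cdf = np.concatenate(
    [[0.0], np.cumsum(0.5 * (post[1:] + post[:-1]) * np.diff(betas))]
)
lo, hi = np.interp([0.05, 0.95], cdf, betas)  # symmetric 90% interval
beta_map = betas[np.argmax(post)]
```

Because `cdf` is strictly increasing, `np.interp` inverts it directly to give the 5% and 95% quantiles.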
Due to the unknown composition of 1I/'Oumuamua, we can only estimate the water mass fraction of 2I/Borisov. We adopt a value of \(f_{\rm H_{2}O}=0.3\), after that of Seligman et al. (2022) inferred from the production rates of 2I's coma. However, this value is more useful for demonstrative purposes than as a finely constrained estimate; estimating a comet's bulk composition from the composition of its coma is challenging. For instance, Seligman et al. (2022) note that the compositions of interstellar comets calculated from production rates are affected by preferential desorption of CO and CO\({}_{2}\) relative to H\({}_{2}\)O. With this observed distribution of \(f_{\rm H_{2}O}\) we can then use the Bayesian framework of § 5.1 to calculate a posterior distribution for \(\beta\). We use our model for the distribution of ISOs around the Sun, and taking a uniform prior on \(\beta\) means that the posterior is simply proportional to the likelihood, equal to the value of the \(f_{\rm H_{2}O}\) distribution. For \(f_{\rm H_{2}O}=0.3\) this posterior is maximised by a value of \(\beta=1.33\). The distribution of ISOs at this value is listed and plotted in Table 3 and Figure 7. The symmetric 90% confidence limit for this estimation of \(\beta\) is \(\beta\in(-1.3,7.2)\). This is of course a wide interval containing physically implausible values. The physically implausible values could be removed from the confidence interval by using a physically motivated prior on \(\beta\) -- but with one known value of \(f_{\rm H_{2}O}\) this would make the posterior dominated by the prior. Using the sample of ISOs that the LSST is expected to find will much better constrain the posterior for \(\beta\) and other parameters used in models with this framework.

Figure 7: Distribution of ISO water mass fractions, evaluated at the position of the Sun and integrated over the Milky Way disk, with maximum _a posteriori_ estimate of \(\beta=1.33\).
At \(f_{\rm H_{2}O}=0.3\) a vertical line marks the measured water mass fraction of 2I/Borisov.

### Selection Effects

It should be noted that the predictions in this work ignore some specific effects which will influence the population of ISOs we expect to observe from the Earth. The predictions in § 4 are the distribution of ISOs in a smooth Galaxy-wide distribution, evaluated at the location of the Solar System. Gravitational focussing by the Sun will increase the density of ISOs in the inner solar system in a velocity-dependent manner (e.g. Engelhardt et al., 2017; Forbes and Loeb, 2019; Dehnen and Hands, 2022), and therefore also in a composition-dependent manner. This is because we expect compositionally and dynamically distinct populations of ISOs to come from the chemo-dynamically distinct stellar populations in the Milky Way thin and thick disks (c.f. Eubanks et al., 2021). This could handily be modelled when incorporating ISO velocities into the predictions of this framework. Additionally, the Pan-STARRS near-Earth object survey (Chambers et al., 2016) that detected 1I/'Oumuamua, the observations of amateur astronomers such as Gennadiy Borisov who discovered 2I/Borisov, and the LSST which will discover tens more ISOs: all have highly non-trivial selection functions. Since in order to be discovered an ISO needs to be detected in multiple observations which can be linked as the same object (e.g. Meech et al., 2017; Schwamb et al., 2023), these selection functions are dependent on ISO size, approach velocity, perihelion, and composition. Composition has a direct link to the detectability of ISOs: 2I-sized and cometary ISOs will be more likely to be detected, since they will form a coma as they approach the Sun. These selection effects will need to be accurately accounted for, in order for the Bayesian framework to produce accurate inferences about the processes affecting the Galactic ISO population.

## 7 Conclusion

In advance of the Vera C.
Rubin Observatory Legacy Survey of Space and Time (LSST), we lay out a framework to predict the Galactic distribution of ISOs, using the stellar population of the Milky Way. Using the method of Bovy et al. (2016), we fit simple density models to a sample of red giants in APOGEE binned in [Fe/H] and [\(\alpha\)/Fe], and use these to evaluate the "sine morte" metallicity distribution of stars throughout the Galaxy's integrated history, across the Galactic disk. Under the assumption that the spatial distribution of a population of ISOs will be the same as the sine morte distribution of stars which formed them, we use the protoplanetary disk model of Bitsch and Battistini (2020) to map the metallicity distribution of stars to the distribution of ISO water mass fractions. Localising our model to the Solar neighbourhood, we predict that 95% of ISOs around the Sun have water mass fraction \(f_{\rm H_{2}O}\) between 0.07 and 0.51, with a peak around 0.35. By considering the distribution of ISOs over the Galactic disk, we show that the well-studied Milky Way metallicity gradient has an equivalent gradient in ISO composition, with the median ISO water mass fraction increasing with distance from the Galactic Centre as the median stellar metallicity decreases. This causes the ISO water mass fraction distribution averaged over the Milky Way disk to be wider than the distribution around the Sun. Since we also predict higher-metallicity stars produce more ISOs than lower-metallicity stars, the Milky Way metallicity gradient implies that the radial ISO density profile is steeper than the exponential radial stellar density profile, making ISOs much more common in the inner Galactic disk than in the outer disk. We also set out a Bayesian inference framework, which can compare predictions of the ISO distribution to the sample observed by the LSST, in order to make inferences about the many astrophysical processes which influence the ISO population.
To demonstrate its use, we use the composition measurement of 2I/Borisov to calculate a maximum _a posteriori_ estimate for the power law slope of the ISO production metallicity dependence, \(\beta\), to be 1.33, with a symmetric 90% confidence interval of \((-1.3,7.2)\). MJH acknowledges support from the Science and Technology Facilities Council through grant ST/W507726/1. MJH also thanks the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF Cybertraining Grant #1829740, the Brinson Foundation, and the Moore Foundation; his participation in the program has benefitted from this work. MTB appreciates support by the Rutherford Discovery Fellowships from New Zealand Government funding, administered by the Royal Society Te Aparangi. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org. 
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics -- Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. NumPy (Harris et al., 2020); SciPy (Virtanen et al., 2020); Astropy (Astropy Collaboration et al., 2013, 2018, 2022); Matplotlib (Hunter, 2007)
2306.14278
Non-Abelian Factors for Actions of $\mathbb{Z}$ and Other Non-$C^*$-Simple Groups
Let $\Gamma$ be a countable group and $(X, \Gamma)$ a compact topological dynamical system. We study the question of the existence of an intermediate $C^*$-subalgebra $\mathcal{A}$ $$C^{*}_{r}(\Gamma)<\mathcal{A}<C(X)\rtimes_r\Gamma,$$ which is not of the form $\mathcal{A} = C(Y) \rtimes_r \Gamma$, corresponding to a factor map $(X,\Gamma) \to (Y,\Gamma)$. Here $ C^{*}_{r} (\Gamma)$ and $C(X) \rtimes_r \Gamma$ are the reduced $C^*$-algebras of $\Gamma$ and $(X,\Gamma)$ respectively. Our main results are (1) For $\Gamma$, which is not $C^*$-simple, if $(X,\Gamma)$ admits a $\Gamma$-invariant probability measure, then such a sub-algebra always exists. (2) For $\Gamma = \mathbb{Z}$ and $(X, \Gamma)$ an irrational rotation of the circle $X = S^1$, we give a full description of all these non-crossed-product subalgebras.
Tattwamasi Amrutam, Eli Glasner, Yair Glasner
2023-06-25T15:59:08Z
http://arxiv.org/abs/2306.14278v2
# Intermediate algebras of crossed product \(C^{*}\)-algebras ###### Abstract. Let \(\Gamma\) be a countable group and \((X,\Gamma)\) a compact topological dynamical system. We study the question of the existence of an intermediate \(C^{*}\)-subalgebra \(\mathcal{A}\) \[C^{*}_{r}(\Gamma)<\mathcal{A}<C(X)\rtimes_{r}\Gamma,\] which is not of the form \(\mathcal{A}=C(Y)\rtimes_{r}\Gamma\), corresponding to a factor map \((X,\Gamma)\to(Y,\Gamma)\). Here \(C^{*}_{r}(\Gamma)\) and \(C(X)\rtimes_{r}\Gamma\) are the reduced \(C^{*}\)-algebras of \(\Gamma\) and \((X,\Gamma)\) respectively. Our main results are: (1) For \(\Gamma\) which is not \(C^{*}\)-simple, if \((X,\Gamma)\) admits a \(\Gamma\)-invariant probability measure then such a sub-algebra always exists. (2) For \(\Gamma=\mathbb{Z}\) and \((X,\Gamma)\) an irrational rotation of the circle \(X=S^{1}\), we give a full description of all these non-crossed-product subalgebras. Key words and phrases:\(C^{*}\)-crossed products, intermediate subalgebras, irrational rotation, crossed product, \(C^{*}\)-simple groups 2020 Mathematics Subject Classification: Primary 37A55, 37B05; Secondary 46L55 This research was supported by grants of the Israel Science Foundation: ISF 1175/18 for the first author, ISF 1194/19 for the second, and ISF 2919/19 for the third. ###### Contents * 1 Introduction * 2 Preliminaries [MISSING_PAGE_POST] main theorem gives a complete classification of all the intermediate subalgebras of \(\mathcal{C}\). This classification, in turn, yields structural information about these subalgebras. To state our results, we first introduce some terminology. Let \(\mathcal{I}\) denote the collection of all closed two-sided ideals in \(C^{*}_{r}(\mathbb{Z})\). 
We denote by \[\mathfrak{A}=\{\mathcal{A}\ |\ C^{*}_{r}(\mathbb{Z})<\mathcal{A}<\mathcal{C}\}, \qquad\mathfrak{D}=\{\mathfrak{c}:\mathbb{Z}\to\mathcal{I}\ |\ \mathfrak{c}(0)=C^{*}_{r}(\mathbb{Z})\},\] the collections of all intermediate subalgebras, and of all \(\mathcal{I}\) valued functions on \(\mathbb{Z}\) such that \(\mathfrak{c}(0)=C^{*}_{r}(\mathbb{Z})\) respectively. We will refer to elements of \(\mathfrak{D}\) as _ideal functions_. The _support_ of an ideal function is defined as \[\operatorname{Supp}(\mathfrak{c})=\{n\in\mathbb{Z}\ |\ \mathfrak{c}(n)\neq(0)\}. \tag{1}\] For an intermediate algebra \(\mathcal{A}\in\mathfrak{A}\) and for an ideal function \(\mathfrak{c}\in\mathfrak{D}\) we set \[\mathfrak{I}_{\mathcal{A}}=\{I\ |\ I\lhd\mathcal{A}\}\qquad\mathfrak{D}_{ \mathfrak{c}}=\{\mathrm{j}:\mathbb{Z}\to\mathcal{I}\ |\ \mathrm{j}(n)<\mathfrak{c}(n),\forall n\in\mathbb{Z}\}.\] \(\mathfrak{I}_{\mathcal{A}}\) is the collection of all closed two-sided ideals in \(\mathcal{A}\). Note that such an ideal is automatically *-invariant. In the sequel, when we say 'ideal,' we mean a closed two-sided one. We refer to the elements of \(\mathfrak{D}_{\mathfrak{c}}\) as \(\mathfrak{c}\)_-ideal functions_. Note that \(\mathfrak{D}_{\mathfrak{c}}\subsetneq\mathfrak{D}\) because we do not require \(\mathrm{j}(0)=C^{*}_{r}(\mathbb{Z})\) for \(\mathrm{j}\in\mathfrak{D}_{\mathfrak{c}}\). Again the _support_ of a \(\mathfrak{c}\)-ideal function is defined by an equation identical to Equation (1). Since by definition \(\mathrm{j}(n)<\mathfrak{c}(n),\forall n\in\mathbb{Z}\), we have \(\operatorname{Supp}(\mathrm{j})\subset\operatorname{Supp}(\mathfrak{c}), \forall\mathrm{j}\in\mathfrak{D}_{\mathfrak{c}}\). All of these collections admit a natural lattice structure. The partial order on \(\mathfrak{A}\) is given by inclusion. 
Similarly, on \(\mathfrak{D}\) we write \(\mathfrak{c}_{1}\preceq\mathfrak{c}_{2}\) when \(\mathfrak{c}_{1}(n)\leq\mathfrak{c}_{2}(n),\ \forall n\in\mathbb{Z}\). The lattice operations are given by: \[\begin{array}{ll}\mathcal{A}_{1}\wedge\mathcal{A}_{2}=\mathcal{A}_{1}\cap \mathcal{A}_{2},&\mathcal{A}_{1}\vee\mathcal{A}_{2}=\overline{\langle\mathcal{ A}_{1},\mathcal{A}_{2}\rangle},&\forall\mathcal{A}_{1},\mathcal{A}_{2}\in \mathfrak{A}\\ (\mathfrak{c}_{1}\wedge\mathfrak{c}_{2})(n)=\mathfrak{c}_{1}(n)\cap\mathfrak{c }_{2}(n),&(\mathfrak{c}_{1}\vee\mathfrak{c}_{2})(n)=\overline{\mathfrak{c}_{ 1}(n)+\mathfrak{c}_{2}(n)},&\forall\mathfrak{c}_{1},\mathfrak{c}_{2}\in \mathfrak{D}\end{array}\] For given \(\mathcal{A}\in\mathfrak{A}\) and \(\mathfrak{c}\in\mathfrak{D}\) the lattice structure on \(\mathfrak{I}_{\mathcal{A}}\) and on \(\mathfrak{D}_{\mathfrak{c}}\) is defined similarly. Via Fourier transform \(C^{*}_{r}(\mathbb{Z})\cong C(S^{1})\), and hence \(\mathcal{I}\) can be identified with the collection of closed subsets of the circle \(\operatorname{Cl}(S^{1})\). Under this identification, with every \(\mathfrak{c}\in\mathfrak{D}\), we associate a _set function_\(\tilde{\mathfrak{c}}:\mathbb{Z}\to\operatorname{Cl}(S^{1})\), such that \(\tilde{\mathfrak{c}}(0)=\emptyset\). By \(\tilde{\mathfrak{D}}\), we denote the collection of all such functions. Similarly for \(\mathfrak{c}\in\mathfrak{D}\) we define the collection \(\tilde{\mathfrak{D}}_{\mathfrak{c}}\). It is crucial to distinguish between the given dynamical system \((X,T)\) and the circle \(S^{1}=\hat{\mathbb{Z}}\) introduced in the above paragraph (see Section 2.2). Still, perhaps surprisingly, we will often use the irrational rotation given by \(\tau(t)=t+\alpha\mod 2\pi\). Thus \(\tau\) represents the irrational rotation on \(S^{1}\) by the same angle \(\alpha\) appearing in the given irrational rotation \(T:X\to X\). 
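The lattice operations above can be made tangible in a small toy model. Under the identification \(C^{*}_{r}(\mathbb{Z})\cong C(S^{1})\), an ideal corresponds to its zero set, the meet of two ideals to the union of their zero sets, and the join to the intersection. The sketch below uses Python frozensets of sample points as stand-ins for closed subsets of \(S^{1}\) and checks the absorption laws pointwise; all data are illustrative:

```python
# Toy model of the lattice of ideal functions. An ideal of C*_r(Z) ≅ C(S^1)
# is represented by its zero set; a frozenset of sample points stands in
# for a closed subset of S^1. An ideal function Z -> ideals becomes a dict
# n -> frozenset, with c(0) = C*_r(Z) encoded as the empty zero set.
def meet(c1, c2):
    # (c1 ∧ c2)(n) = c1(n) ∩ c2(n); zero sets combine by union
    return {n: c1[n] | c2[n] for n in c1}

def join(c1, c2):
    # (c1 ∨ c2)(n) = closure(c1(n) + c2(n)); zero sets combine by intersection
    return {n: c1[n] & c2[n] for n in c1}

c1 = {0: frozenset(), 1: frozenset({1, 2}), 2: frozenset({2, 3})}
c2 = {0: frozenset(), 1: frozenset({2, 5}), 2: frozenset({7})}

# absorption laws of a lattice, checked pointwise
assert meet(c1, join(c1, c2)) == c1
assert join(c1, meet(c1, c2)) == c1
```

The finite stand-in loses the topology of \(S^{1}\), but the pointwise nature of the lattice operations on \(\mathfrak{D}\) carries over unchanged.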
**Definition 1.1**.: An ideal function \(\mathfrak{c}\in\mathfrak{D}\) is called _closed_ if the following properties are satisfied:

* Cif1 \(\tilde{\mathfrak{c}}(-n)=\tau^{n}\tilde{\mathfrak{c}}(n),\ \forall n\in\mathbb{Z}\),
* Cif2 \(\tilde{\mathfrak{c}}(m+n)\subset\tau^{-m}\tilde{\mathfrak{c}}(n)\cup\tilde{\mathfrak{c}}(m)\) for every \(m,n\in\operatorname{Supp}(\mathfrak{c})\).

By \(\mathfrak{C}\subset\mathfrak{D}\), we denote the sublattice of closed ideal functions. We define functions \(\Phi:\mathfrak{D}\to\mathfrak{A}\) and \(\Psi:\mathfrak{A}\to\mathfrak{D}\) by: \[\Phi(\mathfrak{c})=\overline{\langle e^{inx}\eta\ |\ n\in\mathbb{Z},\eta\in \mathfrak{c}(n)\rangle},\quad\Psi(\mathcal{A})(n)=\{\eta\in C^{*}_{r}(\mathbb{Z})\ |\ e^{inx}\eta\in\mathcal{A}\}.\] (See Section 2 for more details concerning these notations.) With all these definitions in place, we can state:

**Theorem 1.2**.: _(Main theorem) Let \(\Phi:\mathfrak{D}\to\mathfrak{A}\) and \(\Psi:\mathfrak{A}\to\mathfrak{D}\) be as above, then:_

1. _The maps_ \(\Phi,\Psi\) _form a (monotone) Galois connection between the two lattices_ \(\mathfrak{D},\mathfrak{A}\)_. Namely, for_ \(\mathfrak{c}\in\mathfrak{D},\mathcal{A}\in\mathfrak{A}\) _we have_ \(\Phi(\mathfrak{c})<\mathcal{A}\) _if and only if_ \(\mathfrak{c}<\Psi(\mathcal{A})\)_._
2. _The connection is perfect on the algebra side. Namely_ \(\mathcal{A}=\Phi\circ\Psi(\mathcal{A})\) _for every_ \(\mathcal{A}\in\mathfrak{A}\)_._
3. \(\mathfrak{c}=\Psi\circ\Phi(\mathfrak{c})\) _if and only if it is closed in the sense of Definition_ 1.1_._
4. \(\Phi\upharpoonright_{\mathfrak{C}}\colon\mathfrak{C}\to\mathfrak{A}\) _is an isomorphism of lattices, with_ \(\Psi\) _as its inverse._

It is worthwhile noting what is easy and what requires new ideas in the above theorem. Once all the definitions are in place, the fact that \(\Phi,\Psi\) form a Galois connection is clear. We invite the readers to verify that before proceeding.
From this, it follows, by standard properties of Galois connections, that \(\Phi,\Psi\) form an isomorphism between the sublattices of closed elements on both sides. The deeper part lies in showing that this Galois connection does not degenerate, in the sense that closed elements are ubiquitous on both sides. In particular, the connection is perfect in the sense that all elements are closed on the algebra side. Indeed, it is not clear at all why a given intermediate algebra \(\mathcal{A}\in\mathfrak{A}\) should contain any element of the form \(e^{inx}\eta\ (n\neq 0,\ 0\neq\eta\in C^{*}_{r}(\mathbb{Z}))\). For any given algebra \(\mathcal{A}\in\mathfrak{A}\), with its associated ideal function \(\mathfrak{c}=\Psi(\mathcal{A})\in\mathfrak{C}\), we can state a similar classification for all closed two-sided ideals of \(\mathcal{A}\). **Definition 1.3**.: Given \(\mathfrak{c}\in\mathfrak{C}\), a \(\mathfrak{c}\)-ideal function \(\mathfrak{j}\in\mathfrak{D}_{\mathfrak{c}}\) will be called _closed_ if it satisfies the following two conditions: \(\mathtt{Cif1}(\mathfrak{c})\ \tilde{\mathfrak{j}}(-n)=\tau^{n}\tilde{\mathfrak{j}}(n),\ \forall n\in\mathbb{Z}\), \(\mathtt{Cif2}(\mathfrak{c})\ \tilde{\mathfrak{j}}(m+n)\subset\tau^{-m}\tilde{\mathfrak{c}}(n)\cup\tilde{\mathfrak{j}}(m)\) for every \(n\in\operatorname{Supp}(\mathfrak{c}),m\in\operatorname{Supp}(\mathfrak{j})\).
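The standard mechanism invoked above, namely that a monotone Galois connection restricts to mutually inverse bijections between the closed elements on both sides, can be illustrated on a toy pair of powerset lattices. The example below is entirely ours and has nothing to do with the specific maps \(\Phi,\Psi\) of the paper; it only demonstrates the abstract lattice-theoretic fact:

```python
from itertools import combinations

# A monotone Galois connection between subsets of {0,1,2} and subsets of
# {'a','b','c'}, induced by a relation R:
#   Phi(X) = union of R[x] over x in X,   Psi(Y) = {x : R[x] <= Y}.
R = {0: {'a'}, 1: {'a', 'b'}, 2: {'c'}}

def Phi(X):
    return frozenset().union(*(R[x] for x in X)) if X else frozenset()

def Psi(Y):
    return frozenset(x for x in R if R[x] <= Y)

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

for X in powerset(R):
    for Y in powerset({'a', 'b', 'c'}):
        # The adjunction:  Phi(X) <= Y  iff  X <= Psi(Y).
        assert (Phi(X) <= Y) == (X <= Psi(Y))

# Standard consequence: Phi∘Psi∘Phi = Phi, so Phi and Psi restrict to
# mutually inverse bijections between the closed elements on both sides.
for X in powerset(R):
    assert Phi(Psi(Phi(X))) == Phi(X)
    assert Psi(Phi(Psi(Phi(X)))) == Psi(Phi(X))
```

In this toy model not every element is closed (for instance \(\{b\}\) on the right-hand side is not); the substance of Theorem 1.2 is precisely that, for the pair \((\Phi,\Psi)\) of the paper, every element on the algebra side is closed.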
For \(\mathfrak{c}\in\mathfrak{D}\) and \(\mathcal{A}\in\mathfrak{A}\) we define \(\Phi_{\mathfrak{c}}:\mathfrak{D}_{\mathfrak{c}}\to\mathfrak{I}_{\Phi(\mathfrak{c})}\) and \(\Psi_{\mathcal{A}}:\mathfrak{I}_{\mathcal{A}}\to\mathfrak{D}_{\Psi(\mathcal{A})}\) by: \[\Phi_{\mathfrak{c}}(\mathfrak{j})=\overline{\langle e^{inx}\eta\ |\ n\in\mathbb{Z},\eta\in\mathfrak{j}(n)\rangle},\quad\Psi_{\mathcal{A}}(I)(n)=\{\eta\in C^{*}_{r}(\mathbb{Z})\ |\ e^{inx}\eta\in I\}.\] **Theorem 1.4**.: _Fix \(\mathfrak{c}\in\mathfrak{D}\) and set \(\mathcal{A}=\Phi(\mathfrak{c})\)._ 1. _The maps_ \(\Phi_{\mathfrak{c}},\Psi_{\mathcal{A}}\) _form a (monotone) Galois connection between the two lattices_ \(\mathfrak{D}_{\mathfrak{c}},\mathfrak{I}_{\mathcal{A}}\)_._ 2. _The connection is perfect at_ \(\mathfrak{I}_{\mathcal{A}}\)_._ 3. \(\mathfrak{j}=\Psi_{\mathcal{A}}\circ\Phi_{\mathfrak{c}}(\mathfrak{j})\) _if and only if it is closed in the sense of Definition_ 1.3_._ 4. \(\Phi_{\mathfrak{c}}\upharpoonright_{\mathfrak{C}_{\mathfrak{c}}}\colon\mathfrak{C}_{\mathfrak{c}}\to\mathfrak{I}_{\mathcal{A}}\) _is an isomorphism of lattices, with_ \(\Psi_{\mathcal{A}}\) _as its inverse._ Now let us turn to some applications of this structure theory. **Definition 1.5**.: We say that an invariant algebra \(\mathcal{A}\in\mathfrak{A}\) is _residual_ if \(\widetilde{\Psi(\mathcal{A})}(n)\in\operatorname{Cl}(S^{1})\) is nowhere dense for every \(n\in\mathbb{N}\). An ideal \(J\lhd\mathcal{A}\) in a residual algebra is called a _residual ideal_ if \(\widetilde{\Psi_{\mathcal{A}}(J)}(n)\) is nowhere dense for every \(n\in\mathbb{Z}\). Here are some properties of these algebras following the classification (see Subsection 4.1). **Theorem 1.6**.: _Let \(\mathcal{A},\{\mathcal{A}_{i}\}_{i\in I}\in\mathfrak{A}\) be residual algebras and \((0)\neq J\lhd\mathcal{A}\) a nontrivial closed two-sided ideal. Let \(\mathfrak{c},\{\mathfrak{c}_{i}\}_{i\in I},\mathfrak{j}\) be the associated ideal functions.
Set_ \[\Omega=\bigcup_{m,n\in\operatorname{Supp}(\mathfrak{c})}\tau^{-m}\tilde{ \mathfrak{c}}(n)\] _Then_ 1. \(\bigvee_{i\in I}\mathcal{A}_{i}\) _is residual for any set of indices_ \(I\)_._ 2. \(\mathcal{A}_{1}\wedge\mathcal{A}_{2}\) _is residual._ 3. \(\operatorname{Supp}(\mathfrak{c})=\{n\in\mathbb{Z}\mid\mathfrak{c}(n)\neq(0)\}\) _is a subgroup of_ \(\mathbb{Z}\)_._ 4. \(\operatorname{Supp}(\mathfrak{c})=\operatorname{Supp}(\mathfrak{j})\)_._ 5. \(\tilde{\mathfrak{j}}(k)\subset\Omega\) _for every_ \(k\in\mathbb{Z}\)_, and in particular every non-trivial ideal_ \(J\lhd\mathcal{A}\) _is residual._ 6. \(\mathcal{A}\) _is center free._ _Property (6) holds more generally: it suffices to assume that \(\tilde{\mathfrak{c}}(n)\) is nowhere dense for a single \(0\neq n\in\operatorname{Supp}(\mathfrak{c})\)._ Residual algebras should be thought of as large intermediate subalgebras. At the other extreme stand the algebras highlighted in the following result, which we will refer to as _small_ intermediate subalgebras (see Subsection 4.2). **Proposition 1.7**.: _The following conditions are equivalent for an intermediate subalgebra \(\mathcal{A}\in\mathfrak{A}\)._ * \(\operatorname{Supp}(\Psi(\mathcal{A}))\) _is finite._ * \(\mathcal{A}\) _is finite dimensional as a module over_ \(C^{\ast}_{r}(\mathbb{Z})\)_._ _Many small algebras admit a non-trivial center._ In Subsection 4.3, we use our machinery to study the simplicity of intermediate algebras. We give examples of some algebras that are simple and others that are not. A partial characterization of simplicity is given in Propositions 4.5 and 4.11. _Remark 1.8_.: **The universal property of \(\mathcal{C}\):** Contrary to what the notation \(\mathcal{C}=C(X)\rtimes_{r}\mathbb{Z}\) seems to imply, there is a symmetry between the two factors of this crossed product, realized by an involutive automorphism \(\iota:\mathcal{C}\to\mathcal{C}\) with the property that \(\iota C(X)=C^{\ast}_{r}(\mathbb{Z})\).
When \(\mathcal{C}_{\alpha}\) is viewed as a crossed product over the algebra \(C^{\ast}_{r}(\mathbb{Z})\) it is isomorphic to \(\mathcal{C}_{-\alpha}\). It follows that all our results concerning the intermediate subalgebras \(C^{\ast}_{r}(\mathbb{Z})<\mathcal{A}<\mathcal{C}\) are also valid for the analogous intermediate subalgebras \(C(X)<\mathcal{A}<\mathcal{C}\), when they are reformulated with respect to the rotation \(R_{-\alpha}\). See Remark 2.1 below. To sum up, we have: **Theorem 1.9**.: _There are uncountably many intermediate subalgebras \(C^{\ast}_{r}(\mathbb{Z})<\mathcal{A}<\mathcal{C}\) which are not of the form \(\mathcal{A}=C(Y)\rtimes_{r}\mathbb{Z}\) with \(X\to Y\) a dynamical factor of the system \((X,R_{\alpha})\). They are indexed by the system of ideal functions \(\mathfrak{C}\) as described in Theorem 1.2._ The fact that the crossed product \(C(X)\rtimes_{r}\mathbb{Z}\) admits subalgebras which are not themselves crossed products is not at all special to \(\mathbb{Z}\) dynamical systems. Let \((X,\Gamma)\) be a dynamical system that admits an invariant probability measure of full support. For such a dynamical system \((X,\Gamma)\), we establish in Section 5 that to every non-trivial ideal \(I<C_{r}^{*}(\Gamma)\) there corresponds an intermediate subalgebra \(\mathcal{B}_{I}\), \[C_{r}^{*}(\Gamma)\subset\mathcal{B}_{I}\subset C(X)\rtimes_{r}\Gamma,\] such that \(\mathcal{B}_{I}\cap C(X)=\mathbb{C}\). Thus the existence of a non-trivial ideal is automatic when \(\Gamma\) is not \(C^{*}\)-simple. Moreover, this construction is also valid for general \(\Gamma\)-\(C^{*}\)-algebras \(\mathcal{A}\), as long as \(\mathcal{A}\) admits a faithful \(\Gamma\)-invariant state. However, to conclude that \(\mathcal{B}_{I}\) is not a crossed product \(C^{*}\)-subalgebra, we need additional assumptions on the group \(\Gamma\) or the \(C^{*}\)-algebra \(\mathcal{A}\).
In particular, we have: **Theorem 1.10**.: _Let \(\Gamma\) be a discrete group that is not \(C^{*}\)-simple. Let \(\Gamma_{f}\) denote the FC-center of the group \(\Gamma\). Assume that either \(\Gamma_{f}=\{e\}\) or \(|\Gamma_{f}|>2\). Let \(\mathbb{C}\neq\mathcal{A}\) be a \(\Gamma\)-\(C^{*}\)-algebra admitting a faithful \(\Gamma\)-invariant state. Then, there is an intermediate \(C^{*}\)-algebra \(C_{r}^{*}(\Gamma)\subset\mathcal{D}\subset\mathcal{A}\rtimes_{r}\Gamma\) which is not of the form \(\mathcal{D}=\mathcal{B}\rtimes_{r}\Gamma\) with \(\mathcal{B}\subset\mathcal{A}\) a \(\Gamma\)-invariant subalgebra._ We would like to point out that when \(\Gamma_{f}=\{e\}\) (or, equivalently, \(\Gamma\) is an i.c.c. group), every non-trivial ideal \(I<C_{r}^{*}(\Gamma)\) corresponds to an intermediate subalgebra \(\mathcal{B}_{I}\) which is not a crossed product \(C^{*}\)-algebra (see Theorem 5.9). On the other hand, if \(\Gamma_{f}\neq\{e\}\) (or, equivalently, \(\Gamma\) is a non-i.c.c. group), the assumption on \(\Gamma_{f}\) can be removed whenever \(\mathcal{A}=C(X)\) for a compact Hausdorff space \(X\) with more than two elements (see Corollary 5.13). When \(X\) has exactly two points and \(\Gamma=\mathbb{Z}/2\mathbb{Z}\), we show that there is no intermediate subalgebra \(\mathcal{B}\) of the form \(C_{r}^{*}(\mathbb{Z}/2\mathbb{Z})\subset\mathcal{B}\subset C(X)\rtimes_{r} \mathbb{Z}/2\mathbb{Z}\) (see Example 5.7). Using similar methods, we also show that, for a \(\Gamma\)-\(C^{*}\)-algebra \(\mathcal{A}\), every non-trivial two-sided closed \(\Gamma\)-invariant ideal \(I\leq\mathcal{A}\) corresponds to an intermediate subalgebra \(\mathcal{A}\subset\mathcal{B}\subset\mathcal{A}\rtimes_{r}\Gamma\) such that \(\mathcal{B}\) is not a crossed product of the form \(\mathcal{A}\rtimes_{r}\Lambda\) for any normal subgroup \(\Lambda\lhd\Gamma\) (see Theorem 5.16).
In two subsequent works in progress we pursue these ideas further. In the first, we attempt to generalize our classification to the more general setting of \(C(X)\rtimes_{r}\Gamma\), with \(X\) a compact Abelian group and \(\Gamma\) a countable dense subgroup of \(X\). In the second, we treat analogous questions regarding intermediate von Neumann algebras \(\mathcal{N}\) of the form \(L(\Gamma)\subset\mathcal{N}\subset\mathcal{M}\rtimes\Gamma\), where \(\mathcal{M}\) is a von Neumann algebra. In particular, we study the case where \(\mathcal{M}\) is the commutative \(L^{\infty}(X,\mu)\) equipped with the automorphism corresponding to the irrational rotation \(R_{\alpha}\). When we showed a draft of our work to Ilan Hirshberg, he pointed out that our classification results are closely related to, and in fact partly overlap with, several existing works in the literature. More specifically, the works of Ruy Exel (see [1]), where the author shows the existence, and investigates some of the properties, of our ideal functions in a much more general setup, as well as his extensive book on "Fell Bundles" [1], in particular the results in Chapter 23 of that book. The second source of examples arises as a result of the dual nature of \(\mathcal{C}\). Our construction of ideal functions turns out to be a generalization (within \(\mathcal{C}\)) of Putnam's well-known '\(Y\)-orbit breaking' subalgebras, as described, for example, in Phillips' contribution to the book [11] (see Subsection 2.8). Nonetheless, our approach and methods seem to be new, and so is our complete classification of intermediate subalgebras. ### Acknowledgements We thank Ilan Hirshberg, Hanfeng Li, Mehrdad Kalantar, Sven Raum, and Yongle Jiang for their helpful remarks, discussions, and corrections. ## 2. Some notations and preliminaries ### The crossed product \(C(X)\rtimes_{r}\mathbb{Z}\) Let \((X,\mu,T)\) be a metric minimal uniquely ergodic cascade (i.e. a \(\mathbb{Z}\)-action, \(n\mapsto T^{n}\), where \(T\) is a homeomorphism of \(X\)).
Let \(\mathcal{C}=C(X)\rtimes_{r}\mathbb{Z}\) be the corresponding reduced crossed product \(C^{*}\)-algebra and, as usual, we denote by \(C^{*}_{r}(\mathbb{Z})\) the reduced group \(C^{*}\)-algebra of \(\mathbb{Z}\). Via the Fourier transform \(\mathcal{F}:\ell^{2}(\mathbb{Z})\to L^{2}(S^{1},\frac{dt}{2\pi})\), \((a_{n})\mapsto\sum_{n\in\mathbb{Z}}a_{n}e^{int}\) (with \(S^{1}=\mathbb{R}/2\pi\mathbb{Z}\)), we can view \(C^{*}_{r}(\mathbb{Z})\) as the commutative algebra \(C(S^{1})\), where for \(\varphi\in C(S^{1})\) the corresponding operator in \(C^{*}_{r}(\mathbb{Z})\) is given by multiplication \(v\mapsto\varphi v,\ v\in L^{2}(S^{1})\). Under this correspondence, we identify \(\lambda_{1}\) (which corresponds to the shift operator on \(\ell^{2}(\mathbb{Z})\)) with the function \(e^{it}\in C(S^{1})\), and \(\lambda_{n}\) with \(e^{int}\), for every \(n\in\mathbb{Z}\). We will denote the inverse isomorphism \(C(S^{1})\to C^{*}_{r}(\mathbb{Z})\) by \(\varphi\mapsto\hat{\varphi}\). For trigonometric polynomials (and other nice functions), this is explicitly given by the regular Fourier series \(\hat{\varphi}:=\sum_{n\in\mathbb{Z}}\hat{\varphi}(n)\lambda_{n}\) with \(\hat{\varphi}(n)=\frac{1}{2\pi}\int_{S^{1}}\varphi(t)e^{-int}dt\), and then extended by continuity to the whole of \(C(S^{1})\). We then identify \(\mathcal{C}\) with the \(C^{*}\)-subalgebra of \(\mathbb{B}(L^{2}(X,\mu))\) generated by the (unitary) Koopman operator \(U_{T}\), which we also denote as \(\lambda_{1}\), and the commutative \(C^{*}\)-algebra of multiplication operators \(M_{f}:v\mapsto fv,\ v\in L^{2}(X,\mu),\ f\in C(X)\). We then let \(\lambda_{n}=U_{T}^{n}\), so in particular \(\lambda_{0}=I\) and \(\lambda_{-1}=U_{T}^{-1}=\lambda_{1}^{*}\).
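For trigonometric polynomials the formula \(\hat{\varphi}(n)=\frac{1}{2\pi}\int_{S^{1}}\varphi(t)e^{-int}dt\) can be checked numerically. The sketch below is our own: it recovers the coefficients via a Riemann sum on a uniform grid, which is exact for trigonometric polynomials of degree smaller than the grid size:

```python
import numpy as np

N = 2048
t = np.arange(N) * 2 * np.pi / N            # uniform grid on S^1 = R/2πZ

coeffs = {-2: 0.5j, 0: 1.0, 3: -2.0}        # a trig polynomial, coefficients a_k
phi = sum(a * np.exp(1j * k * t) for k, a in coeffs.items())

def fourier_coeff(values, n):
    """hat(phi)(n) = (1/2π) ∫ phi(t) e^{-int} dt, as a Riemann sum."""
    return np.mean(values * np.exp(-1j * n * t))

for k, a in coeffs.items():
    assert abs(fourier_coeff(phi, k) - a) < 1e-10
assert abs(fourier_coeff(phi, 1)) < 1e-10   # frequencies not present vanish
```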
A general element \(a\) of the group ring over \(C(X)\) (which is dense in \(\mathcal{C}\)) has the form \[a=\sum_{n=-N}^{N}M_{f_{n}}U_{T}^{n}\] which, when no confusion can arise, we also write as \[a=\sum_{n=-N}^{N}f_{n}\lambda_{n}.\] With these notations the group ring that generates \(C^{*}_{r}(\mathbb{Z})\) consists of elements of the form \[a=\sum_{n=-N}^{N}a_{n}\lambda_{n},\] with \(a_{n}\in\mathbb{C}\). Recall that the \(T\)-action on \(C(X)\) is implemented by conjugation by \(\lambda_{1}\): \[\lambda_{1}M_{f}\lambda_{1}^{*}=M_{f\circ T},\] or simply \[\lambda_{1}f\lambda_{1}^{*}=f\circ T.\] The reduced crossed product \(C(X)\rtimes_{r}\mathbb{Z}\) comes equipped with a \(\mathbb{Z}\)-equivariant canonical conditional expectation \(\mathbb{E}:C(X)\rtimes_{r}\mathbb{Z}\to C(X)\) defined on the algebraic crossed product by \[\mathbb{E}(a)=\mathbb{E}\left(\sum_{n=-N}^{N}f_{n}\lambda_{n}\right)=f_{0}.\] It follows from [1, Proposition 4.1.9] that \(\mathbb{E}\) extends to a faithful conditional expectation from \(C(X)\rtimes_{r}\mathbb{Z}\) onto \(C(X)\). For a general element \(a\in\mathcal{C}=C(X)\rtimes_{r}\mathbb{Z}\), its _Fourier coefficient of order \(n\)_ is the function \[\hat{a}(n)=\mathbb{E}(a\lambda_{n}^{*})\in C(X).\] ### Two circles From here until the end of Section 4, unless we say otherwise, we take \((X,\mu,T)\) to be the circle \(X\), equipped with normalized Lebesgue measure \(\mu\), and with \(T=R_{\alpha}\) an irrational rotation. Thus the set of characters \(\{e^{inx}|\ n\in\mathbb{Z}\}\) forms a basis for \(L^{2}(X,\mu)\) and their linear combinations form a dense \(*\)-subalgebra of \(C(X)\). When \((X,T)\) is an irrational rotation on the circle, it is important not to confuse the two circles. We will always refer to the Gelfand dual of \(C_{r}^{*}(\mathbb{Z})\) as \(S^{1}\) and denote functions \(\varphi(t),\psi(t)\in C(S^{1})\) by lowercase Greek letters, with a variable \(t\).
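The algebraic rules of the crossed product can be exercised in a small computational model (our own sketch, with an arbitrary irrational angle). We store \(f\in C(X)\) by its Fourier coefficients, so that \(f\circ T^{n}\) is the twist \(c_{k}\mapsto c_{k}e^{ikn\alpha}\), and an element of the group ring as a map \(n\mapsto f_{n}\); multiplication is then forced by the covariance relation \((f\lambda_{n})(g\lambda_{m})=f\,(g\circ T^{n})\lambda_{n+m}\):

```python
import numpy as np

ALPHA = np.sqrt(2.0)   # stand-in irrational rotation angle

def compose_T(f, n):
    """Fourier coefficients of f∘T^n: twist c_k by e^{ikn·ALPHA}."""
    return {k: c * np.exp(1j * k * n * ALPHA) for k, c in f.items()}

def fmul(f, g):
    """Pointwise product on X = convolution of coefficient dicts."""
    out = {}
    for k1, c1 in f.items():
        for k2, c2 in g.items():
            out[k1 + k2] = out.get(k1 + k2, 0) + c1 * c2
    return out

def mul(a, b):
    """(f λ_n)(g λ_m) = f·(g∘T^n) λ_{n+m}, extended bilinearly."""
    out = {}
    for n, f in a.items():
        for m, g in b.items():
            cur = out.setdefault(n + m, {})
            for k, c in fmul(f, compose_T(g, n)).items():
                cur[k] = cur.get(k, 0) + c
    return out

def E(a):
    """The canonical conditional expectation: keep the λ_0 coefficient."""
    return a.get(0, {})

f = {0: {1: 1.0, -2: 0.5}}                  # f = e^{ix} + 0.5 e^{-2ix}
lam, lam_inv = {1: {0: 1.0}}, {-1: {0: 1.0}}

lhs = mul(mul(lam, f), lam_inv)             # λ_1 f λ_1^*
for k, c in compose_T(f[0], 1).items():     # ... equals f∘T
    assert abs(lhs[0][k] - c) < 1e-12
```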
The phase space of the dynamical system will always be denoted by \(X\), and functions on \(X\) will be denoted by Roman letters, \(f(x),g(x)\in C(X)\) with a variable \(x\). The action of \(\mathbb{Z}\) on \(X\) is by the rotation \(T:X\to X\), \(Tx=x+\alpha\pmod{2\pi}\). We will also consider the rotation, by the same angle \(\alpha\), on the other circle and denote it by \(\tau:S^{1}\to S^{1}\), \(\tau t=t+\alpha\pmod{2\pi}\). The rotation \(\tau\) yields an action on \(C(S^{1})\) by \(\varphi\mapsto\varphi\circ\tau\). Via the standard identification of \(C_{r}^{*}(\mathbb{Z})\cong C(S^{1})\) we obtain an action \(\tau:C_{r}^{*}(\mathbb{Z})\to C_{r}^{*}(\mathbb{Z})\), which is explicitly given in terms of Fourier coefficients by \(\widehat{\tau\varphi}(n)=\hat{\varphi}(n)e^{in\alpha}\). We denote the normalised Lebesgue measure on \(S^{1}\) by \(\nu\). It is the unique \(\tau\)-invariant Borel probability measure on \(S^{1}\). Note that \(\mathbb{E}:\mathcal{C}\to C(X)\), our canonical conditional expectation, is given by \[\mathbb{E}\left(\sum_{-N}^{N}f_{n}\lambda_{n}\right)=\mathbb{E}\left(\sum_{-N }^{N}f_{n}\widehat{e^{int}}\right)=\sum_{-N}^{N}f_{n}\int_{S^{1}}e^{int}\ d \nu(t)=f_{0},\] so that \(\mathbb{E}=\mathbb{E}_{\nu}\) (see Sub-section 2.3 below). The \(\tau\)-action naturally extends to an action on the space of closed ideals \(\mathcal{I}\) in \(C_{r}^{*}(\mathbb{Z})\), identified with the closed subsets of the circle \(\mathrm{Cl}(S^{1})\). By abuse of notation, we will denote all of these by the same letter \(\tau\). The following discussion yields the involutive automorphism alluded to in Remark 1.8 _Remark 2.1_.: There exists an involutive automorphism \(\iota:\mathcal{C}\to\mathcal{C}\) with \(\iota C(X)=C_{r}^{*}(\mathbb{Z})\). As is shown e.g. 
in [1], the \(C^{*}\)-algebra \(\mathcal{C}=\mathcal{C}_{\alpha},\ \alpha\in[0,1)\) irrational, is isomorphic to the universal \(C^{*}\)-algebra \(C^{*}(U,V)\), with \(U,V\) two unitaries satisfying the relation \[UV=e^{2\pi i\alpha}VU. \tag{2}\] It is also shown that \(K_{0}(\mathcal{C}_{\alpha})=\mathbb{Z}+\mathbb{Z}\alpha\), and it follows that two such algebras \(\mathcal{C}_{\alpha}\) and \(\mathcal{C}_{\beta}\) are isomorphic if and only if \(\beta=\pm\alpha\). In our setup we have \(U=M_{e^{ix}}\in C(X)\) and \(V=\lambda_{1}\in C^{*}_{r}(\mathbb{Z})\cong C(S^{1})\). If we take \(U=\lambda_{1}=M_{e^{it}}\in C(S^{1})\) and \(V=M_{e^{ix}}\in C(X)\) we obtain exactly Equation (2) by Lemma 2.5 below (with \(\tau^{-1}\) replacing \(T\)). ### Generalized Fourier coefficients In addition to the standard Fourier coefficients of functions \(\varphi\in C(S^{1})\) defined above, we use a \(C^{\ast}_{r}(\mathbb{Z})\)-valued Fourier expansion on \(X\). Using \(\mu\), the Lebesgue probability measure on \(X\), we define a \(\mathbb{Z}\)-equivariant conditional expectation \(\mathbb{E}_{\mu}:\mathcal{C}\to C^{\ast}_{r}(\mathbb{Z})\), by setting \(\mathbb{E}_{\mu}(f\lambda_{n})=\mu(f)\lambda_{n}\) and extending to the whole of \(\mathcal{C}\) using linearity and continuity (see [1, Exercise 4.1.4]). Be careful not to confuse this with the standard conditional expectation \(\mathbb{E}:\mathcal{C}\to C(X)\). **Definition 2.2**.: Let \(a\in\mathcal{C}\); we define its \(C^{\ast}_{r}(\mathbb{Z})\)-valued Fourier coefficients by the following formula \[\tilde{a}(n)=\mathbb{E}_{\mu}\left(e^{-inx}a\right)\in C^{\ast}_{r}(\mathbb{Z}),\qquad n\in\mathbb{Z}.\]
### A basic lemma **Lemma 2.5**.: _Let \(m,n\in\mathbb{Z}\) and \(\hat{\varphi},\hat{\psi}\in C^{*}_{r}(\mathbb{Z})\) then:_ 1. \(\hat{\varphi}e^{inx}=e^{inx}\widehat{\varphi\circ\tau^{n}},\quad\forall n\in \mathbb{Z},\hat{\varphi}\in C^{*}_{r}(\mathbb{Z})\)_,_ 2. \(\big{(}e^{inx}\hat{\varphi}\big{)}^{*}=e^{-inx}\widehat{\overline{\varphi}\circ\tau^{-n}}\)_,_ 3. \(e^{inx}\hat{\varphi}e^{imx}\hat{\psi}=e^{i(n+m)x}\Big{[}(\varphi\circ\tau^{m})\psi\Big{]}^{\wedge}\)_._ Proof.: The first property directly implies the other two. Also, it is enough to prove the first property when \(\hat{\varphi}\) is a finite sum of the form \(\hat{\varphi}=\sum_{k=-K}^{K}a_{k}\lambda_{k}\), because such functions are uniformly dense in \(C(S^{1})\). In this case, a direct computation gives: \[\sum_{k=-K}^{K}a_{k}\lambda_{k}e^{inx} = \sum_{k}a_{k}\lambda_{k}e^{inx}\lambda_{-k}\lambda_{k}=\sum_{k}a_{k}e^{in(x+k\alpha)}\lambda_{k}= e^{inx}\sum_{k}a_{k}e^{ikn\alpha}\lambda_{k}=e^{inx}\widehat{\varphi\circ\tau^{n}},\] where the last equality is a standard property of Fourier coefficients. 
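In coefficient form, property (1) of the lemma is exactly the twist \(\widehat{\varphi\circ\tau^{n}}(k)=\hat{\varphi}(k)e^{ikn\alpha}\). A quick numerical sanity check (our own sketch): in the character basis \(e^{imx}\) of \(L^{2}(X)\), the operator \(\hat{\varphi}\) acts on the \(m\)-th character by the scalar \(\varphi(m\alpha)\), while \(e^{inx}\) shifts \(m\) to \(m+n\), so the two sides of (1) send \(e^{imx}\) to \(\varphi((m+n)\alpha)e^{i(m+n)x}\) and \((\varphi\circ\tau^{n})(m\alpha)e^{i(m+n)x}\) respectively:

```python
import numpy as np

ALPHA = np.sqrt(2.0)                       # stand-in irrational angle

a = {-1: 2.0, 0: 1.0, 2: -0.5j}            # coefficients of phi
def phi(t):
    return sum(c * np.exp(1j * k * t) for k, c in a.items())

for n in range(-3, 4):
    # coefficients of phi∘τ^n: the twist a_k ↦ a_k e^{ikn·ALPHA}
    a_tau = {k: c * np.exp(1j * k * n * ALPHA) for k, c in a.items()}
    for m in range(-5, 6):
        lhs = phi((m + n) * ALPHA)                      # hat(phi)·e^{inx}
        rhs = sum(c * np.exp(1j * k * m * ALPHA)        # e^{inx}·hat(phi∘τ^n)
                  for k, c in a_tau.items())
        assert abs(lhs - rhs) < 1e-10
```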
### Ideal functions Given an intermediate algebra \(\mathcal{A}\in\mathfrak{A}\), with any function \(f\in C(X)\) we associate a closed ideal of \(C^{*}_{r}(\mathbb{Z})\): \[I_{\mathcal{A}}(f) = \{\eta\in C^{*}_{r}(\mathbb{Z})\ |\ f\eta\in\mathcal{A}\}\in \mathcal{I}\,.\] Recall that \(\mathcal{I}\) was defined in the introduction as the set of all closed ideals in \(C^{*}_{r}(\mathbb{Z})\). **Definition 2.6**.: Given an intermediate algebra \(C^{*}_{r}(\mathbb{Z})<\mathcal{A}<\mathcal{C}\) we define _the ideal function \(\Psi(\mathcal{A})=\mathfrak{c}_{\mathcal{A}}:\mathbb{Z}\to\mathcal{I}\) associated with \(\mathcal{A}\)_ by \[\mathfrak{c}_{\mathcal{A}}(n)=I_{\mathcal{A}}(e^{inx}).\] Note that \(\mathfrak{c}_{\mathcal{A}}(0)=C^{*}_{r}(\mathbb{Z})\) since by assumption \(C^{*}_{r}(\mathbb{Z})<\mathcal{A}\), so that \(\mathfrak{c}_{\mathcal{A}}\in\mathfrak{D}\). The _support_ of \(\mathfrak{c}_{\mathcal{A}}\) is the set \[\mathrm{Supp}(\mathfrak{c}_{\mathcal{A}})=\{n\in\mathbb{Z}\ |\ \mathfrak{c}_{ \mathcal{A}}(n)\neq(0)\}=\{n\in\mathbb{Z}\ |\ \tilde{\mathfrak{c}}(n)\neq S^{1}\},\] and we say that \(\mathfrak{c}_{\mathcal{A}}\) is trivial if \(\mathrm{Supp}(\mathfrak{c}_{\mathcal{A}})=\{0\}\). It is not at all clear, a priori, that \(\mathfrak{c}_{\mathcal{A}}\) should be nontrivial for a given \(C^{*}_{r}(\mathbb{Z})\neq\mathcal{A}\in\mathfrak{A}\). Still, this turns out to be the case, and our main Theorem 1.2 completely classifies intermediate algebras in terms of their ideal functions. With this in mind, we study the basic properties of an ideal function that comes from an algebra. The basic Lemma 2.5 is our main tool. **Proposition 2.7**.: _Let \(\mathcal{A}\in\mathfrak{A}\) and \(\mathfrak{c}=\Psi(\mathcal{A})\). 
The following properties are satisfied:_ * \(\mathtt{Cif1}\ \tilde{\mathfrak{c}}(-n)=\tau^{n}\tilde{\mathfrak{c}}(n),\ \forall n\in\mathbb{Z}\)_,_ * \(\mathtt{Cif2}\ \tilde{\mathfrak{c}}(m+n)\subset\tau^{-m}\tilde{ \mathfrak{c}}(n)\cup\tilde{\mathfrak{c}}(m)\) _for every_ \(m,n\in\mathrm{Supp}(\mathfrak{c})\)_._ Proof.: Let \(\varphi,\psi\in C(S^{1})\) be functions with \(\mathcal{Z}(\varphi):=\{t\in S^{1}\ |\ \varphi(t)=0\}=\tilde{\mathfrak{c}}(n)\) and \(\mathcal{Z}(\psi)=\tilde{\mathfrak{c}}(m)\). Namely, \(\hat{\varphi},\hat{\psi}\) are generators of \(\mathfrak{c}(n),\mathfrak{c}(m)\) respectively. Now \(\mathcal{Z}((\varphi\circ\tau^{m})\psi)=\tau^{-m}\tilde{\mathfrak{c}}(n)\cup \tilde{\mathfrak{c}}(m)\), and \(\mathtt{Cif2}\) follows directly from Equation (3) of Lemma 2.5. Similarly, applying Equation (2) of the same lemma to both \(\pm n\) yields property \(\mathtt{Cif1}\). **Definition 2.8**.: An ideal function \(\mathfrak{c}\in\mathfrak{D}\) is called _closed_ if it satisfies properties \(\mathtt{Cif1},\mathtt{Cif2}\). By \(\mathfrak{C}\subset\mathfrak{D}\), we denote the collection of closed ideal functions. The collection \(\mathfrak{D}\) (resp. \(\tilde{\mathfrak{D}}\)) admits a lattice structure, and \(\mathfrak{C}\) (resp. \(\tilde{\mathfrak{C}}\)) is a sub-lattice: * We define a partial order on the collection of ideal functions by inclusion, setting \(\mathfrak{c}\preceq\mathfrak{c}^{\prime}\) if \(\mathfrak{c}(n)\leqslant\mathfrak{c}^{\prime}(n),\forall n\in\mathbb{Z}\) (resp. \(\tilde{\mathfrak{c}}(n)\supset\tilde{\mathfrak{c}}^{\prime}(n),\forall n\in \mathbb{Z}\)). * For \(\mathfrak{c},\mathfrak{c}^{\prime}\in\mathfrak{D}\) define \(\mathfrak{c}\wedge\mathfrak{c}^{\prime}\in\mathfrak{D}\) by \((\mathfrak{c}\wedge\mathfrak{c}^{\prime})(n)=\mathfrak{c}(n)\cap\mathfrak{c} ^{\prime}(n)\) (resp. 
\((\widetilde{\mathfrak{c}\wedge\mathfrak{c}^{\prime}})(n)=\tilde{\mathfrak{c}}(n)\cup \tilde{\mathfrak{c}}^{\prime}(n)\)), and similarly for intersections of any finite collection of ideal functions. * For \(\mathfrak{c},\mathfrak{c}^{\prime}\in\mathfrak{D}\) define \(\mathfrak{c}\vee\mathfrak{c}^{\prime}\in\mathfrak{D}\) by \((\mathfrak{c}\vee\mathfrak{c}^{\prime})(n)=\overline{\langle\mathfrak{c}(n), \mathfrak{c}^{\prime}(n)\rangle}\) (resp. \((\widetilde{\mathfrak{c}\vee\mathfrak{c}^{\prime}})(n)=\tilde{\mathfrak{c}}(n)\cap\tilde {\mathfrak{c}}^{\prime}(n)\)), and similarly for any collection of ideal functions. **Note 2.9**.: _Condition \(\mathtt{Cif2}\) can be rephrased more symmetrically:_ \[\tilde{\mathfrak{c}}(m+n)\subset\left(\tau^{-m}\tilde{\mathfrak{c}}(n)\cup \tilde{\mathfrak{c}}(m)\right)\cap\left(\tau^{-n}\tilde{\mathfrak{c}}(m)\cup \tilde{\mathfrak{c}}(n)\right),\ \forall m,n\in\mathrm{Supp}(\mathfrak{c})\] **Note 2.10**.: _Condition \(\mathtt{Cif1}\) implies that every closed ideal function is completely determined by its values on \(\mathbb{N}\). 
Thus every function \(\mathfrak{c}:\mathbb{N}\to\mathcal{I}\) subject to the following three conditions admits a unique extension to a closed ideal function defined on all of \(\mathbb{Z}\)._ \[\begin{array}{ll}\mathtt{Cif2}a&\tilde{\mathfrak{c}}(m+n)\subset\tau^{-m} \tilde{\mathfrak{c}}(n)\cup\tilde{\mathfrak{c}}(m),\quad\forall m,n\in \mathbb{N}\cap\mathrm{Supp}(\mathfrak{c})\\ \mathtt{Cif2}b&\tilde{\mathfrak{c}}(m-n)\subset\tau^{n-m}\tilde{\mathfrak{c}}( n)\cup\tilde{\mathfrak{c}}(m),\quad\forall 1\leqslant n<m,\quad m,n\in\mathrm{Supp}( \mathfrak{c})\\ \mathtt{Cif2}c&\tilde{\mathfrak{c}}(-n+m)\subset\tau^{n}(\tilde{\mathfrak{c}}( n)\cup\tilde{\mathfrak{c}}(m)),\quad\forall 1\leqslant n<m,\quad m,n\in\mathrm{Supp}( \mathfrak{c})\end{array}\] Proof.: We extend \(\tilde{\mathfrak{c}}\) to \(\mathbb{Z}\) by setting \(\tilde{\mathfrak{c}}(0)=\emptyset,\tilde{\mathfrak{c}}(-n)=\tau^{n}\tilde{ \mathfrak{c}}(n),\forall n\in\mathbb{N}\). With these definitions, condition \(\mathtt{Cif2}\) holds when \(m=n\). Hence, without loss of generality, we assume that \(m>n\). This leaves us with the task of verifying \(\mathtt{Cif2}\) in six different cases, three of which are the given equations. 
Let us verify that, assuming \(\mathtt{Cif2}a\), \(\mathtt{Cif2}b\), \(\mathtt{Cif2}c\), the other three hold as well: \[\begin{array}{rcl}\tilde{\mathfrak{c}}(n-m)&=&\tau^{m-n}\tilde{\mathfrak{c }}(m-n)\subset\tau^{m-n}\left(\tau^{-m}\tau^{n}\tilde{\mathfrak{c}}(n)\cup \tilde{\mathfrak{c}}(m)\right)=\tilde{\mathfrak{c}}(n)\cup\tau^{-n}\tilde{ \mathfrak{c}}(-m)\\ \tilde{\mathfrak{c}}(-m+n)&=&\tau^{m-n}\tilde{\mathfrak{c}}(-n+m)\subset\tau ^{m-n}\tau^{n}\left(\tilde{\mathfrak{c}}(n)\cup\tilde{\mathfrak{c}}(m)\right) =\tau^{m}\left(\tilde{\mathfrak{c}}(n)\cup\tilde{\mathfrak{c}}(m)\right)\\ \tilde{\mathfrak{c}}(-n-m)&=&\tau^{m+n}\tilde{\mathfrak{c}}(m+n)\subset\tau ^{m+n}\left(\tau^{-m}\tilde{\mathfrak{c}}(n)\cup\tilde{\mathfrak{c}}(m) \right)=\tilde{\mathfrak{c}}(-n)\cup\tau^{n}\tilde{\mathfrak{c}}(-m)\end{array}\] This is exactly what we had to show. **Example 2.11**.: Given \(q\in\mathbb{N}\) and a closed subset \(P\in\mathrm{Cl}(S^{1})\), we define a closed ideal function \(\mathfrak{b}_{q,P}\in\mathfrak{C}\) by \[\tilde{\mathfrak{b}}_{q,P}(m)=\left\{\begin{array}{ll}\tau^{q(1-n)}P\cup \ldots\cup\tau^{-q}P\cup P&m=nq,\ n>0\\ \emptyset&m=0\\ \tau^{q}P\cup\tau^{2q}P\cup\ldots\cup\tau^{nq}P&m=-nq,\ n>0\\ S^{1}&q\nmid m\end{array}\right.\] We refer to this as the _basic ideal function with the data \((q,P)\)_. It is easiest to verify that this is indeed a closed ideal function using the criterion given in Note 2.10. In fact, it is even enough to verify the case \(q=1\), by restricting our attention to the subalgebra \(\mathcal{A}<\mathcal{C}^{q}<\mathcal{C}\). If \(J\lhd C^{*}_{r}(\mathbb{Z})\) is the closed ideal associated with the closed subset \(P\in\mathrm{Cl}(S^{1})\), we also denote \(\mathfrak{b}_{q,J}=\mathfrak{b}_{q,P}\). Directly from the definition of a closed ideal function, it follows that \(\mathfrak{b}_{q,J}\) is minimal among all closed ideal functions with \(J<\mathfrak{c}(q)\). Alternatively, this can be verified using Note 2.10. 
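Since \(\alpha\) is irrational, the points \(\tau^{j}p\) of a single \(\tau\)-orbit are pairwise distinct, so for a finite set \(P\) the conditions \(\mathtt{Cif1},\mathtt{Cif2}\) can be checked exactly by bookkeeping orbit indices instead of floating-point angles. A small Python sketch of ours doing this for \(\mathfrak{b}_{q,P}\):

```python
FULL = None   # token for all of S^1 (i.e. the zero ideal)

def tau(s, j):
    """Apply τ^j: a point is stored exactly as (p, k), meaning τ^k p."""
    return FULL if s is FULL else frozenset((p, k + j) for p, k in s)

def union(s1, s2):
    return FULL if s1 is FULL or s2 is FULL else s1 | s2

def subset(s1, s2):
    return s2 is FULL or (s1 is not FULL and s1 <= s2)

def b_tilde(q, P, m):
    """The set function of the basic ideal function b_{q,P} (Example 2.11)."""
    if m % q:
        return FULL
    n = m // q
    if n >= 0:
        return frozenset((p, -q * j) for p in P for j in range(n))
    return frozenset((p, q * j) for p in P for j in range(1, 1 - n))

q, P = 3, {'p0', 'p1'}
supp = [q * n for n in range(-4, 5)]
for m in supp:
    assert b_tilde(q, P, -m) == tau(b_tilde(q, P, m), m)              # Cif1
    for n in supp:
        assert subset(b_tilde(q, P, m + n),                           # Cif2
                      union(tau(b_tilde(q, P, n), -m), b_tilde(q, P, m)))
```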
In particular for every \(\mathfrak{c}\in\mathfrak{C}\) and every \(n\in\mathbb{N}\) we have \(\mathfrak{b}_{n,\mathfrak{c}(n)}\leq\mathfrak{c}\). Taking the join over successive values of \(n\) yields better and better approximations for \(\mathfrak{c}\) from below: **Definition 2.12**.: For \(\mathfrak{c}\in\mathfrak{C}\) we define the \(n^{\mathrm{th}}\) _approximation of \(\mathfrak{c}\) from below_ by \[\mathfrak{c}^{n}=\bigvee_{\begin{subarray}{c}m\in\mathrm{Supp}\,\mathfrak{c }\\ 0<m\leqslant n\end{subarray}}\mathfrak{b}_{m,\mathfrak{c}(m)}=\bigvee_{ \begin{subarray}{c}m\in\mathrm{Crit}\,\mathfrak{c}\\ 0<m\leqslant n\end{subarray}}\mathfrak{b}_{m,\mathfrak{c}(m)}.\] When \(\mathfrak{c}(m+1)=\mathfrak{c}^{m}(m+1)\), the term \(\mathfrak{b}_{m+1,\mathfrak{c}(m+1)}\) can be omitted from the above join without changing anything. We thus define _the critical points for \(\mathfrak{c}\)_ by \[\mathrm{Crit}(\mathfrak{c})=\{m\in\mathrm{Supp}(\mathfrak{c})\ |\ \mathfrak{c}^{m-1}(m) \lneqq\mathfrak{c}(m)\}.\] This explains the notation in the rightmost expression of the above definition. **Theorem 2.13**.: _(The canonical decomposition theorem) For every \(\mathfrak{c}\in\mathfrak{C}\) we have_ \[\mathfrak{c}=\bigvee_{n\in\mathrm{Crit}(\mathfrak{c})}\mathfrak{b}_{n, \mathfrak{c}(n)}=\bigvee_{n\in\mathrm{Crit}(\mathfrak{c})}\mathfrak{c}^{n}.\] _We refer to this join as the canonical generation of \(\mathfrak{c}\)._ Proof.: Since \(\mathfrak{b}_{n,\mathfrak{c}(n)}\preceq\mathfrak{c}\) for every \(n\), the right-hand side is dominated by the left-hand side. Conversely, for every \(n\) we have \(\mathfrak{c}(n)=\mathfrak{b}_{n,\mathfrak{c}(n)}(n)\), which accounts for the converse inclusion. **Example 2.14**.: For a fixed \(q\in\mathbb{N}\) note that \[\mathfrak{b}_{q,C^{\ast}_{r}(\mathbb{Z})}(m)=\left\{\begin{array}{ll}C^{ \ast}_{r}(\mathbb{Z})&m=nq,\\ (0)&q\nmid m\end{array}\right.\] is exactly the ideal function coming from the "dynamical" intermediate algebra \(\mathcal{C}^{q}=C(Y)\rtimes_{r}\mathbb{Z}\). 
Namely the one coming from the \(q\)-fold factor \(X\xrightarrow{\times q}Y\) of the circle. Note that by Suzuki's theorem [13] mentioned at the beginning of the introduction, any larger ideal function is of the form \(\mathfrak{b}_{r,C^{\ast}_{r}(\mathbb{Z})}\) for some \(r|q\). ### The algebra \(\mathcal{A}_{\mathfrak{c}}\) We now turn to define a functor in the other direction, \(\Phi:\mathfrak{D}\to\mathfrak{A}\). **Definition 2.15**.: For \(\mathfrak{c}\in\mathfrak{D}\) we define \[\mathcal{A}^{\prime}_{\mathfrak{c}}=\left\langle e^{inx}\eta\ |\ n\in\mathbb{Z},\eta\in\mathfrak{c}(n)\right\rangle\] as the abstract \(\ast\)-algebra generated by these elements, and set \(\Phi(\mathfrak{c})=\mathcal{A}_{\mathfrak{c}}=\overline{\mathcal{A}^{\prime }_{\mathfrak{c}}}\) to be its closure. For \(\mathfrak{c}\in\mathfrak{C}\), the algebra \(\mathcal{A}^{\prime}_{\mathfrak{c}}\) assumes a much more explicit form. In fact, it follows directly from the conditions Cif1, Cif2 that \[\mathcal{A}^{\prime}_{\mathfrak{c}}=\left\{\sum_{n=-N}^{N}e^{inx}\eta_{n}\ |\ N\in\mathbb{N},\eta_{n}\in\mathfrak{c}(n)\right\}, \tag{4}\] namely, that the collection of all such generalized trigonometric polynomials is closed under \(\ast\) and under multiplication. For this reason, it is much easier to work with closed ideal functions \(\mathfrak{c}\in\mathfrak{C}\). Moreover, limiting our attention only to closed ideal functions does not limit the generality, as our main theorem asserts in particular that \(\Phi(\mathfrak{C})=\Phi(\mathfrak{D})\). **Proposition 2.16**.: _For \(\mathfrak{c}\in\mathfrak{C}\), \(\mathcal{A}_{\mathfrak{c}}=\Phi(\mathfrak{c})\) is an intermediate \(C^{*}\)-subalgebra \(C^{*}_{r}(\mathbb{Z})<\mathcal{A}_{\mathfrak{c}}<\mathcal{C}\). Moreover, the following conditions are equivalent:_ 1. \(\mathfrak{c}(q)=C^{*}_{r}(\mathbb{Z})\) _for some_ \(0\neq q\in\mathbb{Z}\)_,_ 2. 
\(\mathcal{A}_{\mathfrak{c}}=\mathcal{C}^{q}\) _comes from a dynamical factor_ \((X,T)\stackrel{{\times q}}{{\longrightarrow}}(X,T^{q})\)_,_ 3. \(\mathcal{A}_{\mathfrak{c}}=\mathcal{B}\rtimes_{r}\mathbb{Z}\) _for some subalgebra_ \(\mathcal{B}\) _of_ \(C(X)\)_,_ 4. \(\mathcal{A}_{\mathfrak{c}}\cap C(X)\neq\mathbb{C}\)_._ Proof.: \(\mathcal{A}_{\mathfrak{c}}=\Phi(\mathfrak{c})\) is by definition a closed \(*\)-algebra and since \(\mathfrak{c}(0)=C^{*}_{r}(\mathbb{Z})\), we have \(C^{*}_{r}(\mathbb{Z})<\mathcal{A}_{\mathfrak{c}}\). The implications (2) \(\Rightarrow\) (3) \(\Rightarrow\) (4) are all obvious. To prove (1) \(\Rightarrow\) (2), note that if \(\mathfrak{c}(q)=C^{*}_{r}(\mathbb{Z})\) for some \(q\neq 0\) then \(\mathfrak{b}_{q,C^{*}_{r}(\mathbb{Z})}\preceq\mathfrak{c}\). By Example 2.14, we know that \(\mathcal{A}_{\mathfrak{c}}\) is necessarily an intermediate dynamical algebra of the form \(\mathcal{C}^{r}\) for some \(r|q\). If moreover \(q\) is the minimal positive value such that \(\mathfrak{c}(q)=C^{*}_{r}(\mathbb{Z})\), then \(r=q\). To show that (4) \(\Rightarrow\) (1), let \(f\in\mathcal{A}_{\mathfrak{c}}\cap C(X)\) be a non-constant function, with a non-trivial Fourier coefficient \(\mu(fe^{-iqx})\neq 0\) for some \(0\neq q\in\mathbb{Z}\). Since \(f\in\mathcal{A}_{\mathfrak{c}}\), we can find an element \(\sum_{n=-N}^{N}e^{inx}\eta_{n}\in\mathcal{A}^{\prime}_{\mathfrak{c}}\) such that \[\left\|f-\sum_{n=-N}^{N}e^{inx}\eta_{n}\right\|<\epsilon,\] where the precise value of \(\epsilon\) will be determined later. By possibly adding zero coefficients, we may assume that \(q\in\{-N,-N+1,\ldots,N-1,N\}\). Multiplying the expression inside the norm from the left by \(e^{-iqx}\) and applying the conditional expectation \(\mathbb{E}_{\mu}\), we obtain: 
\[\left\|\mu(fe^{-iqx})-\mathbb{E}_{\mu}\left(\sum_{n=-N}^{N}e^{i(n-q)x}\eta_{n}\right)\right\|=\left\|\mu(fe^{-iqx})-\eta_{q}\right\|<\epsilon,\] where we used the fact that \(\mathbb{E}_{\mu}\left(\sum_{n\neq q}e^{i(n-q)x}\eta_{n}\right)=0.\) Dividing out by \(\mu(fe^{-iqx})\) we obtain \[\left\|1-\frac{\eta_{q}}{\mu(fe^{-iqx})}\right\|<\frac{\epsilon}{|\mu(fe^{-iqx})|}.\] But if \(\epsilon<\left|\mu(fe^{-iqx})\right|\) this latter expression guarantees that \(\eta_{q}\) is an invertible (and in particular non-zero) element of the algebra. Since by definition \(\eta_{q}\in\mathfrak{c}(q)\), we conclude that \(\mathfrak{c}(q)=C^{*}_{r}(\mathbb{Z})\), proving (1). ### Some examples #### 2.7.1. Basic Subalgebras We denote the algebra associated with a basic ideal function \(\mathfrak{b}_{q,P}\) by \(\mathcal{A}_{q,P}\). It is easy to find a generator for this subalgebra. **Proposition 2.17**.: _If \(\varphi\in C(S^{1})\) is any function with \(\mathcal{Z}\varphi:=\{t\in S^{1}\mid\varphi(t)=0\}=P\), namely any generator for the ideal of functions vanishing on \(P\), then \(\mathcal{A}_{q,P}=\overline{\langle C^{*}_{r}(\mathbb{Z}),e^{iqx}\hat{\varphi}\rangle}\), the smallest closed subalgebra of \(\mathcal{C}\) containing the element \(a=e^{iqx}\hat{\varphi}\) and \(C^{*}_{r}(\mathbb{Z})\)._ Proof.: Set \(\mathcal{B}=\overline{\langle C^{*}_{r}(\mathbb{Z}),e^{iqx}\hat{\varphi}\rangle}\). By definition \(C^{*}_{r}(\mathbb{Z})\) and \(e^{iqx}\hat{\varphi}\) are both contained in \(\mathcal{A}_{q,P}\), and hence \(\mathcal{B}<\mathcal{A}_{q,P}\). Conversely, for any \(m\in\mathbb{N}\), applying Equation (3) from Lemma 2.5 successively \(m\) times we obtain \[a^{m}=e^{imqx}\left[(\varphi\circ\tau^{(m-1)q})(\varphi\circ\tau^{(m-2)q})\ldots\varphi\right]^{\wedge}.\] If we denote the function inside the square brackets by \(\varphi(m)\), we see immediately that \(\mathcal{Z}\varphi(m)=P\cup\tau^{-q}P\cup\ldots\cup\tau^{-q(m-1)}P=\ddot{\mathfrak{b}}_{q,P}(mq)\).
Consequently, \(e^{imqx}\widehat{\varphi(m)}\,C^{*}_{r}(\mathbb{Z})=e^{imqx}\mathfrak{b}_{q,P}(mq)\subset\mathcal{B}\). Similarly, using Equation (2) from the same lemma, we obtain a similar inclusion for the negative values of \(m\). Hence \(\mathcal{A}^{\prime}_{q,P}\subset\mathcal{B}\), and we get the desired inclusion by passing to the closure. #### 2.7.2. Ian Putnam's \(Y\)-orbit breaking subalgebras Following [1], we recall the following definition: **Definition 2.18**.: Let \(X\) be a compact metric space and \(T:X\to X\) be a homeomorphism. Consider the transformation group \(C^{*}\)-algebra \(\mathcal{C}=C(X)\rtimes_{r}\mathbb{Z}\). For a closed subset \(Y\subset X\), we define the \(C^{*}\)-subalgebra \(\mathcal{C}_{Y}\) to be the \(C^{*}\)-subalgebra of \(\mathcal{C}\) generated by \(C(X)\) and \(\lambda_{1}C_{0}(X\backslash Y)\). We say that \(\mathcal{C}_{Y}\) is the \(Y\)_-orbit breaking subalgebra_ of \(\mathcal{C}\). Applying the duality involution interchanging the unitary operators \(U=M_{e^{it}}\) with \(V=\lambda_{1}\) and \(T=R_{\alpha}\) with \(\tau=R_{-\alpha}\), as in Remark 2.1 above, we can interpret Putnam's \(Y\)-orbit breaking subalgebras as follows: **Definition 2.19**.: Let \(X\) be a compact metric space and \(T:X\to X\) be a homeomorphism. Consider the transformation group \(C^{*}\)-algebra \(\mathcal{C}=C(X)\rtimes_{r}\mathbb{Z}\). For a closed subset \(P\subset S^{1}\), we define the \(C^{*}\)-subalgebra \(\mathcal{C}_{P}\) to be the \(C^{*}\)-subalgebra of \(\mathcal{C}\) generated by \(C(S^{1})\) and \(e^{ix}C_{0}(S^{1}\backslash P)\). Now comparing this definition with our definition of the basic subalgebra \(\mathcal{A}_{1,P}\), we conclude that \(\mathcal{C}_{P}=\mathcal{A}_{1,P}\), so that in the particular case of the irrational crossed product \(C^{*}\)-algebra, the basic algebra \(\mathcal{A}_{1,P}\) and Putnam's \(Y\)-orbit breaking subalgebra are the same object.
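The zero-set identity \(\mathcal{Z}\varphi(m)=P\cup\tau^{-q}P\cup\ldots\cup\tau^{-q(m-1)}P\) from the proof of Proposition 2.17 can be observed in a scalar model. The following sketch is illustrative only: the angle \(\alpha\), the parameters \(q,m\), the choice \(P=\{0\}\), the distance function, and the sign convention used for \(\tau\) are all assumptions made for the demonstration, not data from the paper.

```python
import numpy as np

# Scalar sketch: if phi vanishes exactly on P, then the product
# phi(m) = (phi o tau^{(m-1)q}) ... (phi o tau^q) phi vanishes on every
# translate tau^{-jq}P, j = 0, ..., m-1, and is nonzero elsewhere.
# Here tau is taken to be rotation by alpha and P = {0} is a single point.

alpha = 2.0 * np.pi * (np.sqrt(5.0) - 1.0) / 2.0  # an irrational angle
q, m = 2, 4

def phi(s):
    # circle distance to 0, so phi vanishes exactly on P = {0}
    r = np.mod(s, 2.0 * np.pi)
    return np.minimum(r, 2.0 * np.pi - r)

def phi_m(s):
    # the product (phi o tau^{(m-1)q}) ... (phi o tau^q) phi at the point s
    out = 1.0
    for j in range(m):
        out = out * phi(s + j * q * alpha)
    return out

# phi_m vanishes at every point of tau^{-jq}P, j = 0, ..., m-1 ...
zeros = [phi_m(-j * q * alpha) for j in range(m)]
# ... and is nonzero away from that finite union of translates:
generic = phi_m(1.0)
```

The point is only the set identity: a product vanishes exactly where one of its factors does, which is what turns powers of \(a=e^{iqx}\hat{\varphi}\) into generators of the larger ideals \(\mathfrak{b}_{q,P}(mq)\).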
### Ideals in intermediate subalgebras Let \(\mathcal{A}_{\mathfrak{c}}=\Phi(\mathfrak{c})\in\mathfrak{A}\), and \(J\lhd\mathcal{A}_{\mathfrak{c}}\) be a closed two-sided ideal. Just as we did in the case of algebras, we can associate with \(J\) an ideal-valued function \(\Psi_{\mathcal{A}_{\mathfrak{c}}}(J)=\mathfrak{j}:\mathbb{Z}\to\mathcal{I}\) by setting \[\mathfrak{j}(n)=\{\eta\in\mathfrak{c}(n)\ |\ e^{inx}\eta\in J\}.\] A calculation analogous to the one carried out in Lemma 2.5 and Proposition 2.7 yields the following properties for this function. **Definition 2.20**.: Given \(\mathfrak{c}\in\mathfrak{C}\), a function \(\mathfrak{j}:\mathbb{Z}\to\mathcal{I}\) will be called a \(\mathfrak{c}\)_-ideal function_ if \(\ddot{\mathfrak{j}}(n)\supset\ddot{\mathfrak{c}}(n),\forall n\in\mathbb{Z}\). The _support_ of \(\mathfrak{j}\) is the set \[\mathrm{Supp}(\mathfrak{j})=\{n\in\mathbb{Z}\ |\ \mathfrak{j}(n)\neq(0)\}=\{n\in\mathbb{Z}\ |\ \ddot{\mathfrak{j}}(n)\neq S^{1}\}.\] A \(\mathfrak{c}\)-ideal function will be called _closed_ if in addition it satisfies the following two conditions: \(\text{\sc Cif1}(\mathfrak{c})\)\(\ \ddot{\mathfrak{j}}(-n)=\tau^{n}\ddot{\mathfrak{j}}(n),\ \forall n\in\mathbb{Z}\), \(\text{\sc Cif2}(\mathfrak{c})\)\(\ \ddot{\mathfrak{j}}(m+n)\subset\tau^{-m}\ddot{\mathfrak{j}}(n)\cup\ddot{\mathfrak{c}}(m)\) for every \(m\in\text{Supp}(\mathfrak{c}),n\in\text{Supp}(\mathfrak{j})\). _Remark 2.21_.: Note that the second condition above is stated in a highly non-symmetric form.
A longer but more explicit form, in the case where \(m,n\in\text{Supp}(\mathfrak{j})\), would read: \[\ddot{\mathfrak{j}}(m+n)\subset\left(\tau^{-m}\ddot{\mathfrak{j}}(n)\cup\ddot{\mathfrak{c}}(m)\right)\cap\left(\tau^{-n}\ddot{\mathfrak{j}}(m)\cup\ddot{\mathfrak{c}}(n)\right)\cap\left(\tau^{-m}\ddot{\mathfrak{c}}(n)\cup\ddot{\mathfrak{j}}(m)\right)\cap\left(\tau^{-n}\ddot{\mathfrak{c}}(m)\cup\ddot{\mathfrak{j}}(n)\right).\] _Remark 2.22_.: Similar to the case of ideal functions, \(\mathfrak{c}\)-ideal functions are completely determined by their values on \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). For a function \(\ddot{\mathfrak{j}}:\mathbb{N}_{0}\to\text{Cl}(S^{1})\) satisfying \(\ddot{\mathfrak{j}}(n)\supset\ddot{\mathfrak{c}}(n),\forall n\in\mathbb{N}_{0}\), it is enough to verify the following conditions for \(m,n\in\mathbb{N}_{0}\) with \(m\in\text{Supp}(\mathfrak{c}),n\in\text{Supp}(\mathfrak{j})\) in order to guarantee that the unique extension to \(\mathbb{Z}\) forms a valid closed \(\mathfrak{c}\)-ideal function: \[\ddot{\mathfrak{j}}(m+n)\subset\left[\tau^{-m}\ddot{\mathfrak{j}}(n)\cup\ddot{\mathfrak{c}}(m)\right]\cap\left[\tau^{-n}\ddot{\mathfrak{c}}(m)\cup\ddot{\mathfrak{j}}(n)\right],\] \[\ddot{\mathfrak{j}}(m-n)\subset\left[\tau^{n-m}\ddot{\mathfrak{j}}(n)\cup\ddot{\mathfrak{c}}(m)\right]\cap\left[\tau^{n}\left(\ddot{\mathfrak{c}}(m)\cup\ddot{\mathfrak{j}}(n)\right)\right],\quad\forall 0\leqslant n\leqslant m,\] \[\ddot{\mathfrak{j}}(n-m)\subset\left[\tau^{m-n}\ddot{\mathfrak{c}}(m)\cup\ddot{\mathfrak{j}}(n)\right]\cap\left[\tau^{m}\left(\ddot{\mathfrak{c}}(m)\cup\ddot{\mathfrak{j}}(n)\right)\right],\quad\forall 0\leqslant m\leqslant n.\] Proof.: We extend the function \(\mathfrak{j}\) to \(\mathbb{Z}\) using \(\text{\sc Cif1}(\mathfrak{c})\). We must verify condition \(\text{\sc Cif2}(\mathfrak{c})\) in the three missing cases. In all cases we assume \(m,n\in\mathbb{N}_{0}\) with \(m\in\text{Supp}(\mathfrak{c}),n\in\text{Supp}(\mathfrak{j})\).
\[\ddot{\mathfrak{j}}(-m-n)=\tau^{m+n}\ddot{\mathfrak{j}}(m+n)\subset\tau^{m+n}\left(\tau^{-n}\ddot{\mathfrak{c}}(m)\cup\ddot{\mathfrak{j}}(n)\right)=\tau^{m}\ddot{\mathfrak{j}}(-n)\cup\ddot{\mathfrak{c}}(-m),\] \[\ddot{\mathfrak{j}}(-m+n)=\tau^{m-n}\ddot{\mathfrak{j}}(m-n)\subset\tau^{m-n}\tau^{n}\left(\ddot{\mathfrak{c}}(m)\cup\ddot{\mathfrak{j}}(n)\right)=\tau^{m}\left(\ddot{\mathfrak{c}}(m)\cup\ddot{\mathfrak{j}}(n)\right),\quad 0\leqslant n\leqslant m,\] \[\ddot{\mathfrak{j}}(m-n)=\tau^{n-m}\ddot{\mathfrak{j}}(n-m)\subset\tau^{n-m}\left(\tau^{m-n}\ddot{\mathfrak{c}}(m)\cup\ddot{\mathfrak{j}}(n)\right)=\tau^{-m}\ddot{\mathfrak{j}}(-n)\cup\ddot{\mathfrak{c}}(m),\quad 0\leqslant m\leqslant n,\] as required. Conversely, with any \(\mathfrak{c}\)-ideal function we can associate an ideal. **Definition 2.23**.: With any \(\mathfrak{c}\)-ideal function \(\mathfrak{j}\) as above, we may associate an ideal \[J_{\mathfrak{j}}=\Phi_{\mathfrak{c}}(\mathfrak{j})=\overline{\langle e^{inx}\eta_{n}|\,n\in\mathbb{Z},\ \eta_{n}\in\mathfrak{j}(n)\rangle}\lhd\Phi(\mathfrak{c}).\] **Proposition 2.24** (Properties of \(\mathfrak{c}\)-ideal functions).: _Let \(J=\Phi_{\mathfrak{c}}(\mathfrak{j})\) for some closed \(\mathfrak{c}\)-ideal function as above. Then_ * \(\mathfrak{j}(0)=\mathbb{E}_{\mu}(J)=J\cap C^{*}_{r}(\mathbb{Z})\)_._ * \(J=\mathcal{A}_{\mathfrak{c}}\) _if and only if_ \(\mathfrak{j}(0)=C^{\ast}_{r}(\mathbb{Z})\)_, if and only if_ \(\ddot{\mathfrak{j}}(0)=\emptyset\)_,_ * \(J=(0)\) _if and only if_ \(J\cap C^{\ast}_{r}(\mathbb{Z})=(0)\)_, if and only if_ \(\ddot{\mathfrak{j}}(0)=S^{1}\)_._ * \(\ddot{\mathfrak{j}}(0)\subset\ddot{\mathfrak{j}}(n),\forall n\in\mathbb{Z}\)_._ _In particular, \(J\) is nontrivial in the sense that \((0)\lneqq J\lneqq\mathcal{A}\) if and only if \(\emptyset\subsetneq\ddot{\mathfrak{j}}(0)\subsetneq S^{1}\)._ Proof.: The first three items are clear; for the last item, it is enough to consider \(n\in\operatorname{Supp}(\mathfrak{j})\).
Using \(\text{\sc Cif2}(\mathfrak{c})\) we obtain for such \(n\) \[\ddot{\mathfrak{j}}(0)=\ddot{\mathfrak{j}}(n+(-n))\subset\tau^{-n}\tau^{n}\ddot{\mathfrak{j}}(n)\cup\ddot{\mathfrak{c}}(n)=\ddot{\mathfrak{j}}(n)\cup\ddot{\mathfrak{c}}(n)=\ddot{\mathfrak{j}}(n).\] ## 3. The main theorem ### Extraction of generalized Fourier coefficients Let \(V_{n}=\operatorname{Span}\{e^{imx}\ |\ \left|m\right|\leq n\}<C(X)\) be the space of trigonometric polynomials of degree at most \(n\). With respect to the standard \(L^{2}\)-metric, let \(Q_{n}:C(X)\to V_{n}\) be the orthogonal projection and \(V_{n}^{\perp}=\ker(Q_{n})\). The projection \(Q_{n}\) is explicitly given by the formula \(Q_{n}f=\sum_{m=-n}^{n}\hat{f}(m)e^{imx}.\) It is easy to verify that \(Q_{n}\) is a \(T\)-equivariant map. It turns out that \(Q_{n}\) can be extended to all of \(\mathcal{C}\). By abuse of notation, we denote the extended operator by the same name. **Proposition 3.1**.: _The map \(Q_{n}\) extends naturally to a bounded linear map, \(Q_{n}:\mathcal{C}\to\mathcal{C}\), that restricts to the identity on \(C_{r}^{*}(\mathbb{Z})\)._ Proof.: Clearly \(Q_{n}\upharpoonright_{C(X)}:C(X)\to C(X)\) is a bounded linear operator. Using [10, Proposition 1.10], we see that \(Q_{n}\) is completely bounded. Now, it follows from [10, Theorem 3.5] that \(Q_{n}\) extends uniquely to a completely bounded map, denoted again by \(Q_{n}\), from \(\mathcal{C}\) to \(\mathcal{C}\) such that for every \(N\) \[Q_{n}\left(\sum_{i=-N}^{N}f_{i}\lambda_{i}\right)=\sum_{i=-N}^{N}Q_{n}(f_{i})\lambda_{i}.\] **Definition 3.2**.: For every element \(a\in\mathcal{C}\), let \(q(a)\) be the smallest number \(q\) such that \(Q_{q}(a)\notin C_{r}^{*}(\mathbb{Z})\), with the convention that \(q(a)=\infty\) whenever \(a\in C_{r}^{*}(\mathbb{Z})\) and \(Q_{\infty}=\operatorname{id}\).
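For scalar-valued \(f\), the projection \(Q_{n}\) is simply truncation of the Fourier series, and the Cesàro averages of the \(Q_{n}f\) recover \(f\) (the content of Fejér's theorem, invoked below as Theorem 2.3). The following numerical sketch illustrates both facts under assumed choices (grid size, test polynomial); it is not part of the paper.

```python
import numpy as np

# Sketch: Q_n f = sum_{|m| <= n} \hat f(m) e^{imx} computed via Fourier
# coefficients on a uniform grid; the quadrature below is exact for
# trigonometric polynomials of degree < Ngrid / 2.

Ngrid = 4096
x = np.linspace(0.0, 2.0 * np.pi, Ngrid, endpoint=False)

def coeff(f_vals, m):
    # \hat f(m) = (1 / 2pi) \int_0^{2pi} f(x) e^{-imx} dx
    return np.mean(f_vals * np.exp(-1j * m * x))

def Q(n, f_vals):
    # truncation of the Fourier series at degree n
    return sum(coeff(f_vals, m) * np.exp(1j * m * x) for m in range(-n, n + 1))

f = 2.0 * np.exp(1j * x) + 5.0 * np.exp(3j * x)   # a degree-3 polynomial
err_proj = np.max(np.abs(Q(2, f) - 2.0 * np.exp(1j * x)))  # Q_2 kills e^{3ix}

# Fejer: the Cesaro averages (1/N) sum_{n<N} Q_n f converge back to f.
# Here Q_n f = f for n >= 3, so the average collapses to four terms:
N = 1000
fejer = (Q(0, f) + Q(1, f) + Q(2, f) + (N - 3) * f) / N
err_fejer = np.max(np.abs(fejer - f))             # of size O(1/N)
```

The same truncation, applied coefficient-wise, is what Proposition 3.1 extends to the crossed product \(\mathcal{C}\).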
Explicitly, for \(f\in C(X)\) we have \(f\in V_{q(f)-1}^{\perp}\) and \[f=Q_{q(f)}(f)+g=e^{-iq(f)x}\overset{\bullet}{f}(-q(f))+\overset{\bullet}{f}(0)+e^{iq(f)x}\overset{\bullet}{f}(q(f))+g \tag{5}\] with \(g\in V_{q(f)}^{\perp}\). A similar decomposition holds for every \(a\in\mathcal{C}\). Using Proposition 3.1 above, Equation (5) can be applied to each Fourier coefficient \(f_{i}=\mathbb{E}(a\lambda_{i}^{*})\) of \(a\) separately, and \(q(a)\) is just the minimum over all \(q(f_{i})\). The following two propositions are crucial to our proof. **Proposition 3.3**.: _Let \(\mathcal{A}\in\mathfrak{A}\). Then for every \(a\in\mathcal{A}\) with \(q=q(a)\) we have:_ \[Q_{q(a)}(a)=e^{-iqx}\overset{\bullet}{a}(-q)+\overset{\bullet}{a}(0)+e^{iqx}\overset{\bullet}{a}(q)\in\mathcal{A}.\] _If in addition \(a\in J\lhd\mathcal{A}\), with \(J\) a closed two-sided ideal, then \(Q_{q(a)}(a)\in J\)._ Proof.: We focus first on functions \(f\in C(X)\) to approximate the operator \(Q=Q_{q}:C(X)\to C(X)\), using operations that can be realized within the algebra. Define an operator \(L_{K,q}:C(X)\to C(X)\) as follows \[(L_{K,q}f)(x)=\frac{q\pi}{2K+1}\sum_{\begin{subarray}{c}k\in[-K,K]\\ k\alpha\in[-\frac{\pi}{2q},\frac{\pi}{2q}]\end{subarray}}f(x+k\alpha).\] Asymptotically, \(\|L_{K,q}\|_{\infty}\leqslant 1\).
For \(f(x)=e^{irx}\), \(r\neq 0\), we obtain \[L_{K,q}e^{irx}=e^{irx}\frac{q\pi}{2K+1}\sum_{\begin{subarray}{c}k\in[-K,K]\\ k\alpha\in[-\frac{\pi}{2q},\frac{\pi}{2q}]\end{subarray}}e^{irk\alpha}\stackrel{{ K\to\infty}}{{\longrightarrow}}q\pi e^{irx}\int_{-\frac{\pi}{2q}}^{\frac{\pi}{2q}}e^{irs}\frac{ds}{2\pi}=\frac{q}{r}\sin\left(\frac{\pi r}{2q}\right)e^{irx},\] where the convergence follows from the ergodic theorem for uniquely ergodic systems, applied to the (Riemann integrable) function \(h(s)=\mathbf{1}_{[-\frac{\pi}{2q},\frac{\pi}{2q}]}e^{irs}\). The rate of convergence that we obtain is independent of the value of \(x\). Let us define \(L:C(X)\to C(X)\) using the above limiting expression. Namely, on the space of trigonometric polynomials, we define \[L\left(\sum_{r=-N}^{N}a_{r}e^{irx}\right)=\sum_{r=-N}^{N}\frac{q}{r}\sin\left(\frac{\pi r}{2q}\right)a_{r}e^{irx}\] and extend this to the whole of \(C(X)\) by uniform continuity. We conclude that for every \(f\in C(X)\) we have \(\lim_{K\to\infty}\left\|L_{K,q}f-Lf\right\|_{\infty}=0\). The operators \(L_{K,q}\) and the limiting operator \(L\) can be extended to \(\mathcal{A}\). Furthermore, \(L_{K,q}\) can be realized within the algebra \(\mathcal{A}\) by setting \[L_{K,q}(a)=\frac{q\pi}{2K+1}\sum_{\begin{subarray}{c}k\in[-K,K]\\ k\alpha\in[-\frac{\pi}{2q},\frac{\pi}{2q}]\end{subarray}}\lambda_{k}a\lambda_{-k}. \tag{6}\] By this explicit expression, it is immediate that \(L_{K,q}(\mathcal{A})\subset\mathcal{A}\), and since \(\mathcal{A}\) is norm closed, we also have \(L(\mathcal{A})\subset\mathcal{A}\). Now let us consider \(L^{m}\) with \(m\to\infty\). For \(|r|=q\) we have \[L\left(e^{\pm iqx}\right)=\lim_{K\to\infty}L_{K,q}\left(e^{\pm iqx}\right)=e^{\pm iqx},\] and the same holds, of course, for \(L^{m}\) for any \(m\).
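The limiting multiplier \(\frac{q}{r}\sin\left(\frac{\pi r}{2q}\right)\) can be observed numerically. In the sketch below, the angle \(\alpha\), the parameters \(q,r\), and the cutoff \(K\) are illustrative assumptions; the finite average is compared against the limit, and the last line previews why iterating \(L\) works: the multiplier has modulus at most \(q/(q+1)<1\) for \(|r|>q\), so high powers annihilate those frequencies.

```python
import numpy as np

# Sketch: the finite averages L_{K,q} applied to e^{irx} approach the
# multiplier (q/r) sin(pi r / (2q)) as K grows, by unique ergodicity of
# the irrational rotation s -> s + alpha on the circle.

alpha = 2.0 * np.pi * (np.sqrt(5.0) - 1.0) / 2.0   # badly approximable angle
q, r = 2, 3                                         # a frequency with |r| > q
K = 200_000

k = np.arange(-K, K + 1)
s = np.mod(k * alpha + np.pi, 2.0 * np.pi) - np.pi  # k*alpha reduced mod 2pi
window = np.abs(s) <= np.pi / (2.0 * q)             # k*alpha in [-pi/2q, pi/2q]
finite_avg = (q * np.pi / (2 * K + 1)) * np.sum(np.exp(1j * r * s[window]))

limit = (q / r) * np.sin(np.pi * r / (2.0 * q))     # the limiting eigenvalue

# |limit| <= q/(q+1) < 1 for |r| > q, so the iterates L^m kill e^{irx}:
decay = limit ** 2000                               # essentially zero
```

The numerical error between `finite_avg` and `limit` is governed by the discrepancy of the rotation orbit, consistent with the remark that the rate of convergence is independent of \(x\).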
For \(|r|>q\) we have \[\|L\left(e^{irx}\right)\|=\lim_{K\to\infty}\|L_{K,q}\left(e^{irx}\right)\|_{\infty}\leqslant\frac{q}{q+1}\|e^{irx}\|,\] so that \(\lim_{m\to\infty}\left\|L^{m}(e^{irx})\right\|=0,\ \forall\,|r|>q\). Thus for every trigonometric polynomial of the form \(h=\sum_{q\leqslant|r|\leqslant N}a_{r}e^{irx}\in V_{q-1}^{\perp}\) we have \[\lim_{m\to\infty}L^{m}h=Q_{q}h.\] This is extended by uniform continuity, first to \(V_{q-1}^{\perp}\) and then to the whole space \(\ker(Q_{q-1})<\mathcal{A}\). Now let \(a\in\mathcal{A}\) be given. Fix \(q=q(a)\) and \(Q=Q_{q}\). Since \(Q_{q-1}(a-\overset{\bullet}{a}(0))=0\), so that \(a-\overset{\bullet}{a}(0)\in\ker(Q_{q-1})\cap\mathcal{A}\), we can apply the conclusion of the previous paragraph to deduce that \[Q(a)=\overset{\bullet}{a}(0)+Q(a-\overset{\bullet}{a}(0))=\overset{\bullet}{a}(0)+\lim_{m\to\infty}L^{m}(a-\overset{\bullet}{a}(0))=\lim_{m\to\infty}L^{m}(a)\in\mathcal{A},\] which is our desired conclusion. Now if \(a\in J\lhd\mathcal{A}\), the explicit expression given in Equation (6) shows that \(Q_{q}(a)\in J\), so that the same proof applies, as \(J\) is invariant under conjugation by \(\lambda_{1}\). We go one step further and extract individual generalized Fourier coefficients. **Proposition 3.4**.: _Let \(\mathcal{A}\in\mathfrak{A}\) and \(a\in\mathcal{A}\). Then for every \(q\in\mathbb{Z}\) we have \(e^{iqx}\overset{\bullet}{a}(q)\in\mathcal{A}\). If in addition \(a\in J\lhd\mathcal{A}\), with \(J\) a closed two-sided ideal, then \(e^{iqx}\overset{\bullet}{a}(q)\in J\)._ Proof.: First, we claim it is enough to prove the proposition for the special case \(q=\pm q(a)\). Indeed, set \(q_{0}=q(a)\); Proposition 3.3 implies that \(a_{1}:=a-Q_{q_{0}}(a)\in\mathcal{A}\). Moreover, for \(q_{1}=q(a_{1})>q_{0}\) we have \(\overset{\bullet}{a_{1}}(q_{1})=\overset{\bullet}{a}(q_{1})\).
Continuing inductively in this form, we can extract all Fourier coefficients of \(a\). Thus, we will assume that \(q=q(a)\). Since \(C_{r}^{\ast}(\mathbb{Z})<\mathcal{A}\) by assumption, we may assume that \(\overset{\bullet}{a}(0)=0\). By Proposition 3.3 we can replace \(a\) by \(Q_{q}(a)\) and thus assume from now on that \[a=e^{iqx}\overset{\bullet}{a}(q)+e^{-iqx}\overset{\bullet}{a}(-q).\] Let us define the derivative of \(a\) to be the element \[a^{\prime}=iqe^{iqx}\overset{\bullet}{a}(q)-iqe^{-iqx}\overset{\bullet}{a}(-q).\] We claim that \(a^{\prime}\in\mathcal{A}\). Indeed, since \(\alpha\) is assumed to be irrational, we can find a sequence such that \(\lim_{j\to\infty}\overline{n_{j}\alpha}=0\), where \(\overline{n_{j}\alpha}\) denotes the representative of \(n_{j}\alpha+2\pi\mathbb{Z}\) in the interval \((-\pi,\pi]\). Now \[a^{\prime}=\lim_{j\to\infty}\left(\frac{e^{iq(x+\overline{n_{j}\alpha})}-e^{iqx}}{\overline{n_{j}\alpha}}\overset{\bullet}{a}(q)+\frac{e^{-iq(x+\overline{n_{j}\alpha})}-e^{-iqx}}{\overline{n_{j}\alpha}}\overset{\bullet}{a}(-q)\right)=\lim_{j\to\infty}\frac{\lambda_{n_{j}}a\lambda_{-n_{j}}-a}{\overline{n_{j}\alpha}},\] and this convergence is uniform in \(x\) because the exponential functions involved are twice continuously differentiable. Thus \(a^{\prime}\in\mathcal{A}\) as claimed. Finally, \[e^{iqx}\overset{\bullet}{a}(q)=\frac{a}{2}+\frac{a^{\prime}}{2iq}\in\mathcal{A},\qquad\qquad e^{-iqx}\overset{\bullet}{a}(-q)=\frac{a}{2}-\frac{a^{\prime}}{2iq}\in\mathcal{A},\] which completes the proof of the proposition. Again, the same proof works in the case \(a\in J\lhd\mathcal{A}\). ### Proof of Theorems 1.2 and 1.4 Assume that \(\Phi(\mathfrak{c})<\mathcal{A}\) for \(\mathfrak{c}\in\mathfrak{D},\mathcal{A}\in\mathfrak{A}\).
By definition this means that \(e^{inx}\varphi\in\mathcal{A}\) for every \(n\in\mathbb{Z},\varphi\in\mathfrak{c}(n)\), which in turn shows that \(\varphi\in\Psi(\mathcal{A})(n)\). Thus \(\mathfrak{c}\preceq\Psi(\mathcal{A})\). Conversely, suppose \(\mathfrak{c}\preceq\Psi(\mathcal{A})\) and assume that \(\varphi\in\mathfrak{c}(n)<\Psi(\mathcal{A})(n)\) for some \(n\in\mathbb{Z}\). By definition of \(\Psi(\mathcal{A})\) this means that \(e^{inx}\varphi\in\mathcal{A}\). But by definition \(\Phi(\mathfrak{c})=\overline{\langle e^{inx}\varphi\mid\varphi\in\mathfrak{c}(n)\rangle}\), and the desired inclusion \(\Phi(\mathfrak{c})<\mathcal{A}\) follows. This verifies (1) from the main theorem, namely that \(\Phi\) and \(\Psi\) form a Galois connection. Unlike the one in Galois theory, this is a monotone (i.e., order-preserving) Galois connection. However, one naturally obtains an order-reversing Galois connection upon composing with the natural identification \(\mathfrak{D}\cong\tilde{\mathfrak{D}}\). The images \(\Phi(\mathfrak{D})\subset\mathfrak{A}\) and \(\Psi(\mathfrak{A})\subset\mathfrak{D}\) are now called closed intermediate algebras and closed ideal functions, respectively. Statement (4) of the main theorem now follows from standard results about Galois connections, which we recall here: **Proposition 3.5**.: _For a monotone Galois connection as above, the following hold:_ 1. \(\mathfrak{c}\preceq\Psi\Phi\mathfrak{c},\ \forall\mathfrak{c}\in\mathfrak{D}\) _and_ \(\mathcal{A}\succcurlyeq\Phi\Psi\mathcal{A},\forall\mathcal{A}\in\mathfrak{A}\)_._ 2. \(\Phi(\mathfrak{c}_{1})\leq\Phi(\mathfrak{c}_{2})\) _whenever_ \(\mathfrak{c}_{1}\preceq\mathfrak{c}_{2}\) _and similarly_ \(\Psi(\mathcal{A}_{1})\preceq\Psi(\mathcal{A}_{2})\) _whenever_ \(\mathcal{A}_{1}\leq\mathcal{A}_{2}\)_._ 3. \(\Phi\Psi\Phi\mathfrak{c}=\Phi\mathfrak{c},\ \forall\mathfrak{c}\in\mathfrak{D}\) _and_ \(\Psi\Phi\Psi\mathcal{A}=\Psi\mathcal{A},\ \forall\mathcal{A}\in\mathfrak{A}\)_._ 4.
\(\Phi|_{\Psi\mathfrak{A}}:\Psi\mathfrak{A}\to\Phi\mathfrak{D}\) _is an isomorphism of lattices between the closed elements on both sides, and_ \(\Psi\) _is its inverse._ Proof.: Applying the Galois connection property to \(\Phi(\mathfrak{c})\leq\Phi(\mathfrak{c})\) and to \(\Psi\mathcal{A}\preceq\Psi\mathcal{A}\) yields 1. Assume that \(\mathfrak{c}_{1}\preceq\mathfrak{c}_{2}\). By 1 we have \(\mathfrak{c}_{1}\preceq\mathfrak{c}_{2}\preceq\Psi\Phi\mathfrak{c}_{2}\), and the defining property of the Galois connection yields \(\Phi(\mathfrak{c}_{1})\leq\Phi(\mathfrak{c}_{2})\). The dual assertion in 2 is proved similarly. Now, applying 1 to \(\Phi(\mathfrak{c})\in\mathfrak{A}\) immediately implies that \(\Phi\mathfrak{c}\geq\Phi\Psi\Phi\mathfrak{c},\ \forall\mathfrak{c}\in\mathfrak{D}\). Similarly, 1 also implies that \(\mathfrak{c}\preceq\Psi\Phi\mathfrak{c}\), and by 2, \(\Phi\mathfrak{c}\leq\Phi\Psi\Phi\mathfrak{c}\). The equality \(\Phi\mathfrak{c}=\Phi\Psi\Phi\mathfrak{c},\ \forall\mathfrak{c}\in\mathfrak{D}\) follows, which is one clause in 3. The dual clause is proved identically. The last property 4 is a direct consequence of 3. Property 2 above is referred to as the monotonicity property, for obvious reasons. In our case, it is evident from the definitions of \(\Phi,\Psi\), but as we saw above, it also follows formally from the defining property of a monotone Galois connection. We now turn to the less formal statements of the main theorem. Let \(\mathcal{A}\in\mathfrak{A}\) be an intermediate algebra and denote \(\mathfrak{c}=\Psi\mathcal{A}\). By the above proposition, \(\mathcal{A}\geq\Phi\mathfrak{c}\). We must establish the opposite inclusion to prove 2. Thus let \(a\in\mathcal{A}\); by Proposition 3.4 we know that \(e^{inx}\overset{\bullet}{a}(n)\in\mathcal{A},\ \forall n\in\mathbb{Z}\), which by definition means that \(\overset{\bullet}{a}(n)\in\mathfrak{c}(n),\ \forall n\in\mathbb{Z}\).
By definition of \(\Phi\) this means that \(e^{inx}\overset{\bullet}{a}(n)\in\Phi(\mathfrak{c}),\ \forall n\in\mathbb{Z}\). Thus \(Q_{n}(a)=\sum_{k=-n}^{n}e^{ikx}\overset{\bullet}{a}(k)\in\Phi(\mathfrak{c})\), and by Fejér's Theorem 2.3 we deduce that \[a=\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}Q_{n}(a)\in\Phi(\mathfrak{c}),\] showing that the Galois connection is perfect on the algebra side, and finishing the proof of 2. On the other side of the Galois connection, Proposition 3.5 implies \(\mathfrak{c}\preceq\Psi\Phi\mathfrak{c}\) for every \(\mathfrak{c}\in\mathfrak{D}\). To prove 3, we must show that the ideal functions satisfying the opposite inclusion are exactly those in \(\mathfrak{C}\). From Proposition 2.7, we already know that the conditions 5, 6, defining the class \(\mathfrak{C}\), are necessary for an ideal function to be closed. It remains to show that these conditions are also sufficient. Let \(\mathfrak{c}\in\mathfrak{C}\), set \(\mathcal{A}=\Phi\mathfrak{c}\) and assume, towards a contradiction, that \(\eta\in\Psi\mathcal{A}(n)\backslash\mathfrak{c}(n)\) for some \(n\). We may assume that \(n\in\mathbb{N}\) is the minimal such index. By definition, \(\eta\in\Psi(\mathcal{A})(n)\) means that \(e^{inx}\eta\in\mathcal{A}\). Since \(\mathfrak{c}\in\mathfrak{C}\), \[\mathcal{A}^{\prime}_{\mathfrak{c}}=\left\{\sum_{k=-K}^{K}e^{ikx}\eta_{k}\ \mid K\in\mathbb{N},\eta_{k}\in\mathfrak{c}(k)\right\}\] is an algebra, closed under the \(*\)-operation, and by definition \(\mathcal{A}=\Phi(\mathfrak{c})\) is its closure in \(\mathcal{C}\).
Thus for every \(\epsilon>0\) we can find an approximation \[\left\|e^{inx}\eta-\sum_{k=-K}^{K}e^{ikx}\eta_{k}\right\|=\left\|(\eta-\eta_{n})-\sum_{\begin{subarray}{c}k=-K\\ k\neq n\end{subarray}}^{K}e^{i(k-n)x}\eta_{k}\right\|<\epsilon.\] Applying the conditional expectation \(\mathbb{E}_{\mu}\) on both sides we obtain that \[\left\|\mathbb{E}_{\mu}(\eta-\eta_{n})-\mathbb{E}_{\mu}\left(\sum_{\begin{subarray}{c}k=-K\\ k\neq n\end{subarray}}^{K}e^{i(k-n)x}\eta_{k}\right)\right\|<\epsilon.\] Since \(\eta_{k}\) falls in the multiplicative domain of \(\mathbb{E}_{\mu}\) and \(\mathbb{E}_{\mu}(e^{i(k-n)x})=0\) for \(k\neq n\), we obtain that \(\|\eta-\eta_{n}\|<\epsilon\). Since \(\epsilon>0\) was arbitrary and the ideal \(\mathfrak{c}(n)\lhd C_{r}^{*}(\mathbb{Z})\) is closed, we obtain the desired contradiction. This completes the proof of Theorem 1.2. We now turn to the proof of Theorem 1.4. It is quite similar to the proof of Theorem 1.2, but we include all the details. Fix \(\mathfrak{c}\in\mathfrak{C}\) and let \(\mathcal{A}=\Phi(\mathfrak{c})\). That \(\Phi_{\mathfrak{c}}:\mathfrak{D}_{\mathfrak{c}}\to\mathfrak{J}_{\mathcal{A}}\) and \(\Psi_{\mathcal{A}}:\mathfrak{J}_{\mathcal{A}}\to\mathfrak{D}_{\mathfrak{c}}\) form a monotone Galois connection follows as before. This proves (1), as well as (4), using Proposition 3.5. To prove (2), namely that the connection is perfect on the ideal side, we have to show that \(J\leqslant\Phi_{\mathfrak{c}}(\mathfrak{j})\) for every \(J\in\mathfrak{J}_{\mathcal{A}}\) and \(\mathfrak{j}=\Psi_{\mathcal{A}}(J)\). Indeed, by Proposition 3.4, together with every \(a\in J\) we have \(e^{inx}\overset{\bullet}{a}(n)\in J\) for every \(n\in\mathbb{Z}\). Hence by the definition of \(\mathfrak{j}=\Psi_{\mathcal{A}}(J)\) we know that \(\overset{\bullet}{a}(n)\in\mathfrak{j}(n),\ \forall n\in\mathbb{Z}\).
We conclude using Fejér approximations: \[a=\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}Q_{n}(a)\in\Phi_{\mathfrak{c}}(\mathfrak{j}).\] To prove (3) we must show that the \(\mathfrak{c}\)-ideal functions satisfying \(\Psi_{\mathcal{A}}\Phi_{\mathfrak{c}}\mathfrak{j}=\mathfrak{j}\) are exactly those satisfying conditions \(\texttt{Cif1}(\mathfrak{c}),\texttt{Cif2}(\mathfrak{c})\) of Definition 2.20. Set \(J=\Phi_{\mathfrak{c}}(\mathfrak{j})\). In view of Lemma 2.5, the conditions \(\texttt{Cif1}(\mathfrak{c})\) and \(\texttt{Cif2}(\mathfrak{c})\) follow directly from the respective requirements that \((e^{inx}\hat{\varphi})^{*}\in J\) and \(e^{inx}\hat{\varphi}e^{imx}\hat{\psi}\in J\), whenever \(e^{inx}\hat{\varphi}\in J,e^{imx}\hat{\psi}\in\mathcal{A}\). Namely, they follow from \(J\) being a \(*\)-closed right ideal. It remains to show that if \(\mathfrak{j}\) satisfies the conditions \(\texttt{Cif1}(\mathfrak{c}),\texttt{Cif2}(\mathfrak{c})\), then \(\Psi_{\mathcal{A}}\Phi_{\mathfrak{c}}(\mathfrak{j})\preceq\mathfrak{j}\), where by property 1 of Galois connections the other inclusion is automatic for every \(\mathfrak{j}\). So let us assume, by way of contradiction, that \(\eta\in\Psi_{\mathcal{A}}(J)(n)\backslash\mathfrak{j}(n)\) for some \(n\in\mathbb{N}\). By conditions \(\texttt{Cif1}(\mathfrak{c}),\texttt{Cif2}(\mathfrak{c})\), \[J^{\prime}=\left\{\sum_{k=-K}^{K}e^{ikx}\eta_{k}\ |\ K\in\mathbb{N},\eta_{k}\in\mathfrak{j}(k)\right\}\] is a \(*\)-closed right ideal of \(\mathcal{A}\). Hence it is automatically also a left ideal, and \(J=\Phi_{\mathfrak{c}}(\mathfrak{j})=\overline{J^{\prime}}\) is by definition its closure.
Thus for every \(\epsilon>0\) we can find an approximation \[\left\|e^{inx}\eta-\sum_{k=-K}^{K}e^{ikx}\eta_{k}\right\|=\left\|(\eta-\eta_{n})-\sum_{\begin{subarray}{c}k=-K\\ k\neq n\end{subarray}}^{K}e^{i(k-n)x}\eta_{k}\right\|<\epsilon.\] Applying the conditional expectation \(\mathbb{E}_{\mu}\) on both sides we obtain that \[\left\|\mathbb{E}_{\mu}(\eta-\eta_{n})-\mathbb{E}_{\mu}\left(\sum_{\begin{subarray}{c}k=-K\\ k\neq n\end{subarray}}^{K}e^{i(k-n)x}\eta_{k}\right)\right\|<\epsilon.\] Since \(\eta_{k}\) falls in the multiplicative domain of \(\mathbb{E}_{\mu}\) and \(\mathbb{E}_{\mu}(e^{i(k-n)x})=0\) for \(k\neq n\), we obtain that \(\|\eta-\eta_{n}\|<\epsilon\). Since \(\epsilon>0\) was arbitrary and the ideal \(\mathfrak{j}(n)\) is closed, we obtain the desired contradiction and conclude the proof of Theorem 1.4. ## 4. Structural results about subalgebras ### Residual algebras The special class of residual intermediate subalgebras was singled out in the introduction as algebras admitting especially nice properties. Recall that an ideal function \(\mathfrak{c}\in\mathfrak{C}\) (resp. a \(\mathfrak{c}\)-ideal function \(\mathfrak{j}\)) is called _residual_ if \(\ddot{\mathfrak{c}}(n)\) (resp. \(\ddot{\mathfrak{j}}(n)\)) has an empty interior for every \(n\) in the support. We also refer to the corresponding intermediate algebra (resp. ideal) as residual. We denote by \(\mathfrak{C}^{r}\) (resp. \(\mathfrak{J}_{\mathfrak{c}}^{r}\)) the sub-lattices of residual objects. **Note 4.1**.: _The notation could be confusing. Residual algebras are large, but we are talking about small sets at the level of closed subsets of the circle.
In fact, \(\mathfrak{c}\) is residual exactly when the complement \(S^{1}\backslash\ddot{\mathfrak{c}}(n)\) is residual (in the standard Baire sense of the word) for every \(n\in\operatorname{Supp}(\mathfrak{c})\)._ Proof of Theorem 1.6.: It is clear that the class of residual ideal functions \(\mathfrak{C}^{r}\) is closed under arbitrary joins. The fact that it is closed under finite intersections follows from the (finite) Baire category theorem. This establishes Properties (1) and (2). Any ideal function's support \(\operatorname{Supp}(\mathfrak{c})\) is symmetric by property \(\mathtt{Cif1}\). In the residual case, it is closed under addition by \(\mathtt{Cif2}\), because the union of two closed nowhere dense sets is still nowhere dense. This establishes Property (3). Now let \(J\lhd\mathcal{A}\in\mathfrak{A}\) be a nontrivial closed two-sided ideal with associated \(\mathfrak{c}\)-ideal function \(\mathfrak{j}\), and set \(M:=\ddot{\mathfrak{j}}(0)\subset S^{1}\). When we say that \(J\) is nontrivial, we mean that \((0)\lneqq J\lneqq\mathcal{A}\), which means that \(\emptyset\subsetneq M\subsetneq S^{1}\), as in Proposition 2.24. As \(\ddot{\mathfrak{c}}(n)\subset\ddot{\mathfrak{j}}(n)\) for every \(n\in\mathbb{Z}\), it is clear that \(\operatorname{Supp}(\mathfrak{j})\subset\operatorname{Supp}(\mathfrak{c})\). The reverse inclusion also holds. Indeed, by \(\mathtt{Cif2}(\mathfrak{c})\) we have, for \(k\in\operatorname{Supp}(\mathfrak{c})\), that \(\ddot{\mathfrak{j}}(k)=\ddot{\mathfrak{j}}(0+k)\subset\ddot{\mathfrak{c}}(k)\cup\ddot{\mathfrak{j}}(0)\subsetneq S^{1}\), where the proper containment follows since \(\ddot{\mathfrak{j}}(0)\) is a proper closed subset of \(S^{1}\) and \(\ddot{\mathfrak{c}}(k)\) is closed with an empty interior. Thus \(\operatorname{Supp}(\mathfrak{c})=\operatorname{Supp}(\mathfrak{j})\), establishing Property (4). To prove Property (5) we first argue that \(M=\ddot{\mathfrak{j}}(0)\) is nowhere dense.
By (3) and (4), \(\operatorname{Supp}(\mathfrak{c})=\operatorname{Supp}(\mathfrak{j})=r\mathbb{Z}\) for some \(r\geq 1\) is a subgroup of \(\mathbb{Z}\). Recall that by Proposition 2.24, \(M\subset\ddot{\mathfrak{j}}(k),\forall k\in\mathbb{Z}\). Now applying \(\mathtt{Cif2}(\mathfrak{c})\) we have, for every \(k,n\in\operatorname{Supp}(\mathfrak{c})\), \[M\subset\ddot{\mathfrak{j}}(k)=\ddot{\mathfrak{j}}(n+k-n)\subset\tau^{-(n+k)}\ddot{\mathfrak{j}}(-n)\cup\ddot{\mathfrak{c}}(n+k)=\tau^{-k}\ddot{\mathfrak{j}}(n)\cup\ddot{\mathfrak{c}}(n+k)\subset\tau^{-k}\ddot{\mathfrak{j}}(n)\cup\tau^{-k}\ddot{\mathfrak{c}}(n)\cup\ddot{\mathfrak{c}}(k)=\tau^{-k}\ddot{\mathfrak{j}}(n)\cup\ddot{\mathfrak{c}}(k), \tag{7}\] where the last equality uses, again, the fact that \(\ddot{\mathfrak{j}}(n)\supset\ddot{\mathfrak{c}}(n)\). Taking the intersection over all \(n\in\operatorname{Supp}(\mathfrak{c})\) and using the fact that \(\bigcap_{n\in\mathbb{Z}}\ddot{\mathfrak{j}}(n)=M\), by Proposition 2.24, we obtain \[M\subset\ddot{\mathfrak{j}}(k)\subset\tau^{-k}\left(\bigcap_{n\in\mathbb{Z}}\ddot{\mathfrak{j}}(n)\right)\cup\ddot{\mathfrak{c}}(k)=\tau^{-k}M\cup\ddot{\mathfrak{c}}(k). \tag{8}\] Let us denote \(\Omega=\bigcup_{m,n\in\operatorname{Supp}(\mathfrak{c})}\tau^{m}\ddot{\mathfrak{c}}(n)\). This is a meager \(\tau^{r}\)-invariant subset of \(S^{1}\). Now, using the last step, we have \[M\backslash\Omega\subset\tau^{-r}\left(M\backslash\Omega\right).\] Namely, \(M\backslash\Omega\) is \(\tau^{r}\)-invariant. It cannot be dense, because \(M\) is a proper closed subset of \(S^{1}\). Hence, it has to be empty by the minimality of \(\tau^{r}\). Thus \(M\subset\Omega\); in particular, \(M\) is meager. Now Equation (8) implies that \(\ddot{\mathfrak{j}}(k)\subset\tau^{-k}M\cup\ddot{\mathfrak{c}}(k)\subset\Omega\) for every \(k\in\operatorname{Supp}(\mathfrak{j})=\operatorname{Supp}(\mathfrak{c})\).
In particular, \(\ddot{\mathfrak{j}}(k)\) is closed and nowhere dense for every \(k\in\operatorname{Supp}(\mathfrak{j})\). This completes the proof of Property (5). Finally, towards a proof of Property (6), suppose that \(a\) is in the center of \(\mathcal{A}\). We first show that \(a\in C^{*}_{r}(\mathbb{Z})\). For every \(n\) we have \(a\lambda_{n}=\lambda_{1}(a\lambda_{n})\lambda_{1}^{*}\), hence \[\mathbb{E}(a\lambda_{n})=\mathbb{E}(\lambda_{1}(a\lambda_{n})\lambda_{1}^{*})=\lambda_{1}\mathbb{E}(a\lambda_{n})\lambda_{1}^{*}=\mathbb{E}(a\lambda_{n})\circ T.\] By ergodicity, each Fourier coefficient \(\mathbb{E}(a\lambda_{n})\) is a constant, and we conclude that \(a\in C^{*}_{r}(\mathbb{Z})\). Thus \(a=\hat{\xi}\) for some \(\xi\in C(S^{1})\). Let \(n\in\operatorname{Supp}(\mathfrak{c})\) be such that \(\ddot{\mathfrak{c}}(n)\) is nowhere dense and let \(\varphi\in C(S^{1})\) be such that \(\hat{\varphi}\) is a generator of the ideal \(\mathfrak{c}(n)\); in other words, \(\mathcal{Z}\varphi=\{t\in S^{1}\mid\varphi(t)=0\}=\ddot{\mathfrak{c}}(n)\). We then have, using Lemma 2.5(1), \[e^{inx}\widehat{\varphi\xi}=e^{inx}\hat{\varphi}\hat{\xi}=\hat{\xi}e^{inx}\hat{\varphi}=[(\widehat{\xi\circ\tau^{n}})\varphi]e^{inx},\] so on the open dense set \(S^{1}\backslash\ddot{\mathfrak{c}}(n)\) we get \(\xi\circ\tau^{n}=\xi\). Using the ergodicity of \(\tau^{n}\), we conclude that \(\xi\) is a constant. This establishes Property (6) and completes the proof of Theorem 1.6. **Note 4.2**.: _According to Property (2) in the above proof, the intersection of any finite collection of residual subalgebras is still residual. This does not generalize to countable intersections, though. Consider, for example, the basic algebras \(\mathcal{A}_{i}:=\mathcal{A}_{1,\{p_{i}\}}\) with \(\{p_{i}\ |\ i\in\mathbb{N}\}\subset S^{1}\) a dense countable set.
Finite intersections of these algebras are residual, and in fact, they are still basic:_ \[\Psi\left(\bigcap_{i=1}^{n}\mathcal{A}_{i}\right)=\mathfrak{b}_{1,\{p_{1},p_{ 2},\ldots,p_{n}\}}.\] _But clearly_ \[\bigcap_{i\in\mathbb{N}}\mathcal{A}_{i}=C(X).\] ### Small algebras **Definition 4.3**.: We call an abstract ideal function \(\mathfrak{c}\)_small_ if \(\operatorname{Supp}(\mathfrak{c})\) is finite. An algebra will be called small if the associated ideal function is small. Proof of Proposition 1.7.: It is clear from its definition in Example 2.11 and the minimality of the irrational rotation \(\tau:S^{1}\to S^{1}\), that a basic ideal function \(\mathfrak{b}_{q,P}\) is small if and only if it is nonresidual. Namely, if and only if \(P\) has a nonempty interior. By Theorem 2.13, it follows that an abstract ideal function is small if and only if it is the intersection of finitely many small basic ideal functions. When \(P\in\operatorname{Cl}(S^{1})\) has nonempty interior there is, by minimality of \(\tau^{q}\), a positive integer \(N\) such that \(S^{1}=\bigcup_{j=0}^{N-1}\tau^{-jq}P\). It then follows that (for every \(q\geq 1\)) we have \(\vec{\mathfrak{b}}_{q,P}(n)=S^{1}\) for \(|n|\geq qN\), whence the algebra \(\mathcal{A}_{\mathfrak{b}_{q,P}}\) is a finite dimensional module over \(C^{*}_{r}(\mathbb{Z})\). **Example 4.4**.: Let \(J=S^{1}\backslash P=(-\pi/10,\pi/10)\) and suppose that \(\pi/10<\alpha<2\pi/10\). Then any function \(\varphi\in C(S^{1})\) such that \(\varphi(t)=\varphi(t-\alpha)\) for \(t\in J\) and \(\varphi(t)=\varphi(t+\alpha)\) for \(t\in\tau J\), is a central element of the algebra \(\mathcal{A}_{1,P}\). Proof.: Clearly \(P\cup\tau P=S^{1}\) so that every element \(a\) of \(\mathcal{A}_{1,P}\) has the form \[a=e^{-ix}\hat{\psi}_{-1}+\hat{\psi}_{0}+e^{ix}\hat{\psi}_{1},\] with \(\psi_{-1}\upharpoonright\tau P=0=\psi_{1}\upharpoonright P\). 
In order for an element \(\hat{\varphi}\in C^{*}(\mathbb{Z})\) to be central in \(\mathcal{A}_{1,P}\) it should satisfy the equation \(\hat{\varphi}a=a\hat{\varphi}\) for every \(a\in\mathcal{A}_{1,P}\). Writing this equation more explicitly, we get \[e^{-ix}(\widehat{\varphi\circ\tau})\hat{\psi}_{-1}+\hat{\varphi}\hat{\psi}_{0}+e^{ix}(\widehat{\varphi\circ\tau^{-1}})\hat{\psi}_{1}=e^{-ix}\hat{\psi}_{-1}\hat{\varphi}+\hat{\psi}_{0}\hat{\varphi}+e^{ix}\hat{\psi}_{1}\hat{\varphi}.\] From this we deduce that on \(J\) we have \(\varphi=\varphi\circ\tau^{-1}\) and on \(\tau J\) we have \(\varphi=\varphi\circ\tau\). By our assumption the sets \(J\), \(\tau^{-1}J\), \(\tau J\) and \(\tau^{2}J\) are pairwise disjoint, and it follows that any \(\varphi\) which satisfies these conditions will be a central element of \(\mathcal{A}_{1,P}\). ### About the simplicity of intermediate algebras It is well known that the algebras \(\mathcal{C}^{q}=C(X_{q})\rtimes_{r}\mathbb{Z}\) are simple (each \((X,R_{q\alpha})\) being minimal). We would like to determine which of the intermediate algebras are simple. To approach this question, consider a general intermediate algebra \(\mathcal{A}_{\mathfrak{c}}\). The collection of closed sets \(\{\vec{\mathfrak{c}}(n):|n|\geq 1\}\) either has the finite intersection property, with \[Q=Q(\mathfrak{c}):=\bigcap_{|n|\geq 1}\vec{\mathfrak{c}}(n)\neq\emptyset,\] or it does not. In the latter case there is an \(N\geq 1\) such that \(\bigcap_{1\leq|n|\leq N}\vec{\mathfrak{c}}(n)=\emptyset\). **Proposition 4.5**.: _Let \(\mathfrak{c}\) be an ideal function such that \(Q=\bigcap_{|n|\geq 1}\vec{\mathfrak{c}}(n)\neq\emptyset\); then the algebra \(\mathcal{A}_{\mathfrak{c}}\) is not simple.
In particular, for the algebra \(\mathcal{A}=\mathcal{A}_{\mathfrak{b}_{q,P}}\) corresponding to the basic ideal functions \(\mathfrak{b}_{q,P}\), we have \(Q=P\cap\tau^{q}P\), so that \(\mathcal{A}\) is not simple whenever this set is not empty._ Proof.: \(Q\neq\emptyset\Rightarrow\mathcal{A}=\mathcal{A}_{\mathfrak{c}}\) **is not simple**. Let \[I=\{a\in\mathcal{A}:\overrightarrow{a}(0)\upharpoonright Q=0\}. \tag{9}\] It is easy to check that this is a proper ideal: Given \(d\in I\) and \(a\in\mathcal{A}_{\mathfrak{c}}\), we need to show that both \(\overrightarrow{ad}(0)=\mathbb{E}_{\mu}(ad)\) and \(\overrightarrow{da}(0)=\mathbb{E}_{\mu}(da)\) correspond to functions that vanish on \(Q\). Let \(a^{\prime}=\sum_{|n|\leq N}e^{inx}\hat{\varphi}_{n}\) and \(d^{\prime}=\sum_{|k|\leq N}e^{ikx}\hat{\psi}_{k}\) be good Fejér approximations to \(a\) and \(d\) respectively. Applying the conditional expectation, we get the following: \[\mathbb{E}_{\mu}(ad)\approx\mathbb{E}_{\mu}\left((\sum_{|n|\leq N}e^{inx}\hat{\varphi}_{n})(\sum_{|k|\leq N}e^{ikx}\hat{\psi}_{k})\right)=\sum_{0<|n|\leq N}\widehat{(\varphi_{n}\circ\tau^{-n})}\hat{\psi}_{-n}+\widehat{\varphi_{0}\cdot\psi_{0}}. \tag{10}\] Now the first summand corresponds to a function in \(C(S^{1})\) which vanishes on \(Q\), as, for \(n\neq 0\), we have that \(\psi_{-n}\) vanishes on \(\check{\mathfrak{c}}(-n)\), \(\varphi_{n}\circ\tau^{-n}\) also vanishes on \(\tau^{-n}\check{\mathfrak{c}}(n)=\check{\mathfrak{c}}(-n)\), and \(Q\subset\check{\mathfrak{c}}(-n)\). The second summand corresponds to the function \(\varphi_{0}\psi_{0}\), which vanishes on \(Q\) (since \(\psi_{0}\) does). Since \(I\) is closed, we conclude that indeed \(ad\in I\). A similar computation shows that also \(da\in I\). For the last assertion, we note that for \(\mathfrak{b}_{q,P}\) we have \(Q=P\cap\tau^{q}P\). Is the condition \(Q(\mathfrak{c})=\emptyset\) also a sufficient condition for \(\mathcal{A}_{\mathfrak{c}}\) to be simple?
We only have a partial answer to this question in Proposition 4.11 below. **Lemma 4.6**.: _If \(I<\mathcal{A}_{\mathfrak{c}}\) is a proper ideal, then \(\mathcal{B}=\overline{C^{*}(\mathbb{Z})+I}\) is a \(C^{*}\)-subalgebra: \(C^{*}(\mathbb{Z})<\mathcal{B}<\mathcal{A}_{\mathfrak{c}}\), and \(I\) is a proper ideal in \(\mathcal{B}\)._ Proof.: Easy to check. **Definition 4.7**.: We say that an ideal \(I<\mathcal{A}_{\mathfrak{c}}\) is _generating_ when \(\mathcal{A}_{\mathfrak{c}}=\overline{C^{*}(\mathbb{Z})+I}\). See also [11, Corollary 1.5.8] for the following lemma. **Lemma 4.8**.: _If \(I\) is a generating ideal for \(\mathcal{A}_{\mathfrak{c}}\) then we actually have:_ \[\mathcal{A}_{\mathfrak{c}}=C^{*}(\mathbb{Z})+I.\] Proof.: By Theorem 1.4 we have \(I=I_{\mathfrak{j}}\) for some ideal function \(\mathfrak{j}\). Let \(I_{0}=I\cap C^{*}(\mathbb{Z})\). It then follows that \(\mathbb{E}_{\mu}(I)\subset I_{0}\). Let \(c\in\mathcal{A}_{\mathfrak{c}}\), then we have \[c=\lim(a_{n}+b_{n}),\] with \(a_{n}\in C^{*}(\mathbb{Z})\) and \(b_{n}\in I\). Now \[\mathbb{E}_{\mu}(c)=\lim\ \mathbb{E}_{\mu}(a_{n}+b_{n})=\lim\ (a_{n}+\mathbb{E}_{\mu}(b_{n})).\] It follows that \(c-\mathbb{E}_{\mu}(c)=\lim(b_{n}-\mathbb{E}_{\mu}(b_{n}))\). As, for each \(n\), both \(b_{n}\) and \(\mathbb{E}_{\mu}(b_{n})\) are in \(I\) and \(I\) is closed, we conclude that \(c-\mathbb{E}_{\mu}(c)\in I\), whence \(c\in C^{*}(\mathbb{Z})+I\). _Remark 4.9_.: We note that the ideal \(I<\mathcal{A}_{\mathfrak{c}}\) (defined in Equation (9) of Proposition 4.5) is of the form \(I=I_{\mathfrak{j}}\) with \(\mathfrak{\check{j}}(0)=Q\) and \(\mathfrak{\check{j}}(n)=\mathfrak{\check{c}}(n)\) for \(n\neq 0\), hence it is generating. Let us denote such an ideal by \(I_{Q}^{\mathcal{A}}\). If \(Q_{1}\subset Q\) is a smaller closed subset, then Formula (10) shows that \(I_{Q_{1}}^{\mathcal{A}}\triangleleft\mathcal{A}\) as well. 
It is easy to check that if \(Q_{1}\) is taken to be a point, we obtain a maximal ideal (of codimension 1) inside \(\mathcal{A}\). **Proposition 4.10**.: _Let \(\mathcal{A}_{\mathfrak{c}}\) be an intermediate algebra with a generating proper ideal \(I<\mathcal{A}_{\mathfrak{c}}\); then \(Q(\mathfrak{c})=\bigcap_{|n|\geq 1}\check{\mathfrak{c}}(n)\neq\emptyset\). Conversely, if \(Q(\mathfrak{c})\neq\emptyset\), then \(I=\{a\in\mathcal{A}:\overrightarrow{a}(0)\upharpoonright Q=0\}\) is a generating proper ideal of \(\mathcal{A}_{\mathfrak{c}}\)._ Proof.: Let \(I_{0}=I\cap C^{*}(\mathbb{Z})\) and let \(I=I_{\mathfrak{j}}\) as in Theorem 1.4. It then follows that \(\mathbb{E}_{\mu}(I)\subset I_{0}\). Now suppose to the contrary that \(Q=\bigcap_{|n|\geq 1}\check{\mathfrak{c}}(n)=\emptyset\). As we observed above, there is an \(N\geq 1\) such that \(\bigcap_{1\leq|n|\leq N}\check{\mathfrak{c}}(n)=\emptyset\). Denoting \(U_{n}=S^{1}\backslash\check{\mathfrak{c}}(n)\), we have \(S^{1}=\bigcup_{1\leq|n|\leq N}U_{n}\). Let \(\{\xi_{n}\}_{1\leq|n|\leq N}\) be a partition of unity subordinate to the open cover \(\{U_{n}\}_{1\leq|n|\leq N}\); i.e. \(0\leq\xi_{n}\leq 1\), \(\sum_{1\leq|n|\leq N}\xi_{n}^{2}=1\), and \(\operatorname{Supp}\xi_{n}\subset U_{n}\). Let \(c=\sum_{1\leq|n|\leq N}e^{inx}\hat{\xi}_{n}\). By Lemma 4.8 we have \(c=\hat{\varphi}_{0}+d\), with \(\hat{\varphi}_{0}\in C^{*}(\mathbb{Z})\) and \(d\in I\).
Now \(0=\mathbb{E}_{\mu}(c)=\mathbb{E}_{\mu}(\hat{\varphi}_{0}+d)=\hat{\varphi}_{0}+\mathbb{E}_{\mu}(d)\), hence \(\hat{\varphi}_{0}=-\mathbb{E}_{\mu}(d)\in I_{0}\subset I\) and therefore also \(c=\hat{\varphi}_{0}+d\in I\). We now conclude that \(\mathbb{E}_{\mu}(c^{*}c)=\sum_{1\leq|n|\leq N}\hat{\xi}_{n}^{2}\in I\). But \(\sum_{1\leq|n|\leq N}\xi_{n}^{2}=1_{S^{1}}\), contradicting our assumption that \(I\) is a proper ideal. The other direction follows from Proposition 4.5 and Remark 4.9. **Proposition 4.11**.: _Suppose \(\mathcal{A}_{\mathfrak{c}}\) is not simple. Let \(I<\mathcal{A}_{\mathfrak{c}}\) be a proper ideal (which we can assume to be maximal) and let \(\mathcal{B}=C^{*}(\mathbb{Z})+I\subset\mathcal{A}_{\mathfrak{c}}\). Then \(\mathcal{B}\) is a sub-\(C^{*}\)-algebra, and \(I\) is a generating ideal for \(\mathcal{B}\). Moreover, \(\mathcal{B}=\mathcal{A}_{\mathfrak{d}}\) for an ideal function \(\mathfrak{d}\) with \(\check{\mathfrak{d}}(n)\supset\check{\mathfrak{c}}(n)\) for every \(n\in\mathbb{Z}\), and such that \(Q(\mathfrak{d})=\bigcap_{1\leq|n|}\check{\mathfrak{d}}(n)\neq\emptyset\)._ As was mentioned above, the algebras \(\mathcal{C}^{q}=C(X_{q})\rtimes_{r}\mathbb{Z}\) are simple. We next describe additional examples of simple intermediate subalgebras. These include the basic subalgebras corresponding to the ideal functions \(\mathfrak{b}_{q,p}=\mathfrak{b}_{q,\{p\}}\), with \(0\neq q\in\mathbb{N},p\in S^{1}\), thus generalizing (in the context of \(\mathcal{C}\)) [1, Proposition 11.3.21] (in view of Subsection 2.8). **Proposition 4.12**.: _Assume that \(\mathcal{A}\in\mathfrak{A}^{r}\) is a residual intermediate algebra with \(\mathfrak{c}=\Psi(\mathcal{A})\) such that \(\check{\mathfrak{c}}(q)=\{p\}\) is a singleton for some \(0\neq q\in\mathbb{Z}\). Then \(\mathcal{A}\) is simple._ Proof.: Assume towards a contradiction that \(J\lhd\mathcal{A}\) is a non-trivial ideal and let \(\mathfrak{j}\) be the corresponding \(\mathfrak{c}\)-ideal function.
Since \(\mathcal{A}\) is residual, everything from the proof of Theorem 1.6 in Subsection 4.1 applies. We thus adopt the notation from that proof and, in particular, set \(M=\check{\mathfrak{j}}(0)\). As in that proof, the non-triviality of the ideal implies \(M\neq\emptyset\) and \(M\neq S^{1}\). Using Equation (8) from the proof of Theorem 1.6, we conclude that \[\emptyset\subsetneq M\cap\check{\mathfrak{c}}(k)\subsetneq\check{\mathfrak{c}}(k),\qquad\forall k\neq 0. \tag{11}\] Indeed, if \(M\cap\check{\mathfrak{c}}(k)=\emptyset\) then Equation (8) would read \(M\subset\tau^{-k}M\). By minimality of \(\tau\), the closed subset \(M\subset S^{1}\) is either empty or everything, which is not the case since \(J\) is a nontrivial ideal. Similarly, if \(\check{\mathfrak{c}}(k)\subset M\) we can apply Equation (8) with \(-k\) instead of \(k\), to obtain \[M\subset\tau^{k}M\cup\check{\mathfrak{c}}(-k)=\tau^{k}\left(M\cup\check{\mathfrak{c}}(k)\right)=\tau^{k}M,\] leading to the same contradiction. Taking \(k=q\), we obtain a contradiction to the assumption that \(\check{\mathfrak{c}}(q)=\{p\}\) is a singleton. _Remark 4.13_.: By [1, Theorem 12.2.5], when a closed subset \(P\subset S^{1}\) is such that \(\tau^{n}P\cap P=\emptyset\) for all \(n\neq 0\), the algebra \(\mathcal{A}_{1,P}\) is _centrally large_ (hence large). By [1, Theorem 12.2.6], it follows that it is simple and infinite dimensional. Presently we don't know how to prove this result using our methods. ## 5. Crossed products of non \(C^{*}\)-simple groups Many of the methods that we used to obtain a complete classification of intermediate algebras in Theorem 1.2 are pretty specific to the system we considered there. This section is dedicated to the proof of Theorem 1.10, where we show that the mere existence of intermediate algebras that do not come from dynamical factors holds for a much more general class of crossed products of the form \(\mathcal{A}\rtimes_{r}\Gamma\).
For example, this is the case whenever \(\mathcal{A}\) admits a faithful \(\Gamma\)-invariant state, \(\Gamma\) is not \(C^{*}\)-simple and does not admit a normal subgroup isomorphic to \(\mathbb{Z}/2\mathbb{Z}\). ### Crossed products We briefly describe the construction of the crossed product for unital \(C^{*}\)-algebras in general and refer the reader to [1] for more details. Let \(\Gamma\) be a discrete group and \(\mathcal{A}\) be a unital \(C^{*}\)-algebra. An action of \(\Gamma\) on \(\mathcal{A}\) is a group homomorphism \(\alpha\) from \(\Gamma\) into the group of \(*\)-automorphisms on \(\mathcal{A}\). A \(C^{*}\)-algebra equipped with a \(\Gamma\)-action is called a \(\Gamma\)-\(C^{*}\)-algebra. Suppose that \(\mathcal{A}\) is a unital \(\Gamma\)-\(C^{*}\)-algebra. Let \(\pi:\mathcal{A}\to\mathbb{B}(\mathcal{H})\) be a faithful \(*\)-representation. Let \(\ell^{2}(\Gamma,\mathcal{H})\) be the space of square summable \(\mathcal{H}\)-valued functions on \(\Gamma\), i.e., \[\ell^{2}(\Gamma,\mathcal{H})=\left\{\xi:\Gamma\to\mathcal{H}\text{ such that }\sum_{t\in\Gamma}\|\xi(t)\|_{\mathcal{H}}^{2}<\infty\right\}.\] There is an action \(\Gamma\curvearrowright\ell^{2}(\Gamma,\mathcal{H})\) by left translation: \[\lambda_{s}\xi(t):=\xi(s^{-1}t),\qquad\xi\in\ell^{2}(\Gamma,\mathcal{H}),\ s,t\in\Gamma.\] Let \(\sigma\) be the \(*\)-representation \[\sigma:\mathcal{A}\to\mathbb{B}(\ell^{2}(\Gamma,\mathcal{H}))\] defined by \[\sigma(a)(\xi)(t):=\pi(t^{-1}.a)\xi(t),\qquad a\in\mathcal{A},\] where \(\xi\in\ell^{2}(\Gamma,\mathcal{H})\), \(t\in\Gamma\). The reduced crossed product \(C^{*}\)-algebra \(\mathcal{A}\rtimes_{r}\Gamma\) is the closure in \(\mathbb{B}(\ell^{2}(\Gamma,\mathcal{H}))\) of the subalgebra generated by the operators \(\sigma(a)\) and \(\lambda_{s}\). Note that \(\lambda_{s}\sigma(a)\lambda_{s^{-1}}=\sigma(s.a)\) for all \(s\in\Gamma\) and \(a\in\mathcal{A}\).
In particular, \[\lambda_{s}\sigma(\mathbf{1}_{\mathcal{A}})=\sigma(s.\mathbf{1}_{\mathcal{A}})\lambda_{s}.\] It follows from the construction that \(\mathcal{A}\rtimes_{r}\Gamma\) contains \(C^{*}_{\lambda}(\Gamma)\) as a \(C^{*}\)-subalgebra. The reduced crossed product \(\mathcal{A}\rtimes_{r}\Gamma\) comes equipped with a \(\Gamma\)-equivariant canonical conditional expectation \(\mathbb{E}:\mathcal{A}\rtimes_{r}\Gamma\to\mathcal{A}\) defined by \[\mathbb{E}\left(\sigma(a_{s})\lambda_{s}\right)=\left\{\begin{array}{ll}0&\text{if }s\neq e\\ \sigma(a_{e})&\text{otherwise}\end{array}\right\}\] It follows from [1, Proposition 4.1.9] that \(\mathbb{E}\) extends to a faithful conditional expectation from \(\mathcal{A}\rtimes_{r}\Gamma\) onto \(\mathcal{A}\). Moreover, for a subgroup \(H\leq\Gamma\), there is a faithful conditional expectation \(\mathbb{E}_{H}:\mathcal{A}\rtimes_{r}\Gamma\to\mathcal{A}\rtimes_{r}H\) (see [1, Remark 3.2]) defined by \[\mathbb{E}_{H}\left(\sigma(a_{s})\lambda_{s}\right)=\left\{\begin{array}{ll}\sigma(a_{s})\lambda_{s}&\text{if }s\in H\\ 0&\text{otherwise}\end{array}\right\}\] Explicit examples of intermediate algebras not coming from any factor \(Z\) of the form \(Y\to Z\to X\), associated with an inclusion \(C(X)\subset C(Y)\), were presented in [11, Proposition 2.6]. Ideals have also been used to create intermediate algebras for tensor product inclusions which do not split (see, e.g., [22, Corollary 3.4]). This section demonstrates that ideals can obstruct intermediate algebras from being crossed products. Given a unital \(\Gamma\)-\(C^{*}\)-algebra \(\mathcal{A}\), we show that, under some assumptions on \(\mathcal{A}\), we can canonically construct an intermediate algebra that is not a crossed product whenever the reduced group \(C^{*}\)-algebra is not simple.
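The covariance relation and the conditional expectation above can be checked concretely in a toy finite-dimensional model. The following sketch (our illustration; the group, space, and action are chosen by us and are not taken from the text) realizes \(\Gamma=\mathbb{Z}/2\mathbb{Z}\) acting on a two-point space \(X\) by the flip, builds \(\sigma(a)\) and \(\lambda_{s}\) on \(\ell^{2}(\Gamma,\mathcal{H})\) with \(\mathcal{H}=\mathbb{C}^{X}\), and verifies numerically that \(\lambda_{s}\sigma(a)\lambda_{s}^{*}=\sigma(s.a)\) and that \(\mathbb{E}(\sigma(a)\lambda_{s})=0\) for \(s\neq e\).

```python
import numpy as np

# Toy model: Gamma = Z/2Z = {0, 1} (written additively, e = 0) acting on
# X = {0, 1} by the flip x -> 1 - x.  We realize A = C(X) and the crossed
# product on l^2(Gamma, H), H = C^X, so operators are 4x4 matrices.

GROUP = [0, 1]
X = [0, 1]

def act(t, x):
    """Action of the group element t on the point x (the flip for t = 1)."""
    return x if t == 0 else 1 - x

def idx(t, x):
    """Basis index of e_{t,x} in l^2(Gamma, H)."""
    return 2 * t + x

def sigma(a):
    """(sigma(a) xi)(t) = pi(t^{-1}.a) xi(t): multiply by a(t.x) in block t."""
    S = np.zeros((4, 4))
    for t in GROUP:
        for x in X:
            S[idx(t, x), idx(t, x)] = a[act(t, x)]   # (t^{-1}.a)(x) = a(t.x)
    return S

def lam(s):
    """Left translation (lambda_s xi)(t) = xi(s^{-1} t); here s^{-1} = s."""
    L = np.zeros((4, 4))
    for t in GROUP:
        for x in X:
            L[idx((s + t) % 2, x), idx(t, x)] = 1.0   # lambda_s e_{t,x} = e_{st,x}
    return L

def E(T):
    """Conditional expectation onto A: keep only the diagonal Gamma-blocks."""
    out = np.zeros_like(T)
    for t in GROUP:
        out[2 * t:2 * t + 2, 2 * t:2 * t + 2] = T[2 * t:2 * t + 2, 2 * t:2 * t + 2]
    return out

a = np.array([2.0, 5.0])                 # a function a in C(X)
s_dot_a = np.array([a[1], a[0]])         # (s.a)(x) = a(s^{-1}.x) = a(1 - x)
s = 1

# Covariance relation: lambda_s sigma(a) lambda_s^* = sigma(s.a)
assert np.allclose(lam(s) @ sigma(a) @ lam(s).T, sigma(s_dot_a))

# E(sigma(a) lambda_s) = 0 for s != e, while E(sigma(a)) = sigma(a)
assert np.allclose(E(sigma(a) @ lam(s)), 0)
assert np.allclose(E(sigma(a)), sigma(a))
print("covariance relation and conditional expectation verified")
```

In this model \(\mathbb{E}\) is just the compression to the diagonal \(\Gamma\)-blocks, which makes its faithfulness and \(\Gamma\)-equivariance transparent in finite dimensions.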
### Intermediate \(C\)*-algebra associated to an ideal Let \(\Gamma\) be a discrete group acting continuously on a unital \(C^{*}\)-algebra \(\mathcal{A}\) by \(*\)-automorphisms. Assume that there exists a normal subgroup \(N\lhd\Gamma\) such that \(C^{*}_{r}(N)\) is not \(\Gamma\)-simple, i.e., \(C^{*}_{r}(N)\) has a non-trivial \(\Gamma\)-invariant closed two-sided ideal. Let \(J\) be a two-sided, non-trivial, closed \(\Gamma\)-invariant ideal in \(C^{*}_{r}(N)\). We observe that if \(N\lhd\Gamma\) is such that \(C^{*}_{r}(N)\) is not \(\Gamma\)-simple, then \(\Gamma\) is not a \(C^{*}\)-simple group (see [1, Theorem 1.1]). Using \(J\), we can build a non-trivial closed two-sided ideal \(I\) inside \(C^{*}_{r}(\Gamma)\). **Lemma 5.1**.: _Let \(N\lhd\Gamma\) be such that \(C^{*}_{r}(N)\) is not \(\Gamma\)-simple. Let \(J\) be a two-sided, non-trivial, closed \(\Gamma\)-invariant ideal in \(C^{*}_{r}(N)\). Then,_ \[I=I_{J}=\overline{\mathit{Span}\left\{\eta a:\eta\in J,a\in C^{*}_{r}(\Gamma)\right\}}\] _is a non-trivial closed two-sided ideal of \(C^{*}_{r}(\Gamma)\) which contains \(J\)._ Proof.: Clearly, \(I\) is closed in \(\|\cdot\|\) and closed under addition. For any \(a\in C^{*}_{r}(\Gamma)\), let \(\{a_{i}\}_{i\in\Lambda}\in\mathbb{C}[\Gamma]\) be a net approximating \(a\) in \(\|\cdot\|\). Let us write \(a_{i}=\sum_{s\in F_{i}}c_{s}\lambda(s)\), where \(F_{i}\subset\Gamma\) is a finite set and \(c_{s}\in\mathbb{C}\) for all \(s\in F_{i}\). We now observe that for \(\eta\in J\), \[a_{i}\eta=\sum_{s\in F_{i}}c_{s}\lambda(s)\eta=\sum_{s\in F_{i}}c_{s}(\lambda(s)\eta\lambda(s)^{*})\lambda(s).\] Since \(J\) is \(\Gamma\)-invariant, \(\lambda(s)\eta\lambda(s)^{*}=\eta_{s}\in J\) for all \(s\in F_{i}\). Therefore, \[a_{i}\eta=\sum_{s\in F_{i}}c_{s}\eta_{s}\lambda(s)\in I,\qquad\forall i\in\Lambda.\] Therefore, \(a\eta=\lim_{i}a_{i}\eta\in I\) for every \(a\in C^{*}_{r}(\Gamma)\) and \(\eta\in J\). Since \(J\) itself is \(*\)-closed, it follows that \(I\) is closed under the \(*\)-operation.
Now, let \(\eta_{1},\eta_{2}\in J\) and \(a_{1},a_{2}\in C^{*}_{r}(\Gamma)\) be given. We show that \((\eta_{1}a_{1})(\eta_{2}a_{2})\in I\). Let \(\{a_{1,i}\}_{i\in\Lambda}\in\mathbb{C}[\Gamma]\) be a net approximating \(a_{1}\) in \(\|\cdot\|\). Again, using the fact that \(J\) is \(\Gamma\)-invariant, we see that \[(\eta_{1}a_{1})(\eta_{2}a_{2})=\lim_{i}\sum_{s\in F_{i}}c_{s}\eta_{1}(\lambda(s)\eta_{2}\lambda(s)^{*})\lambda(s)a_{2}=\lim_{i}\sum_{s\in F_{i}}c_{s}\eta_{1}\eta_{2,s}\lambda(s)a_{2}\in I,\] where \(\eta_{2,s}=\lambda(s)\eta_{2}\lambda(s)^{*}=s.\eta_{2}\in J\). Now, given arbitrary elements \(\tilde{a}\), \(\tilde{b}\in I\), we can find nets \(\{a_{l}\}_{l\in L}\) and \(\{b_{m}\}_{m\in M}\) in \(\mathrm{Span}\left\{\eta a:\eta\in J,a\in C^{*}_{r}(\Gamma)\right\}\) such that \(\lim_{l\in L}a_{l}=\tilde{a}\) and \(\lim_{m\in M}b_{m}=\tilde{b}\). Consequently, we see that \[\tilde{a}\tilde{b}=\lim_{l}a_{l}\lim_{m}b_{m}=\lim_{l,m}a_{l}b_{m}\in I.\] By construction, \(I\) is closed under right multiplication. To show that \(I\) is closed under left multiplication, it is enough to show that \(\lambda(s)\eta a\in I\) for any \(\eta\in J\) and \(a\in C^{*}_{r}(\Gamma)\). Again writing \(\lambda(s)\eta=\eta_{s}\lambda(s)\) and using the \(\Gamma\)-invariance of \(J\), \[\lambda(s)\eta a=\left(\lambda(s)\eta\lambda(s)^{*}\right)\lambda(s)a=\eta_{s}\lambda(s)a\in I.\] All that needs to be established now is that \(I\) is non-trivial. Since \(J\neq 0\), \(I\neq 0\). Let us show that \(I\neq C^{*}_{r}(\Gamma)\). Towards a contradiction, let us assume otherwise. Then, given \(0<\epsilon<1\), we can find \(\eta_{1},\eta_{2},\ldots,\eta_{k}\in J\) and \(a_{1},a_{2},\ldots,a_{k}\in C^{*}_{r}(\Gamma)\) such that \[\left\|\lambda(e)-\sum_{i=1}^{k}\eta_{i}a_{i}\right\|<\epsilon.\] Let \(\mathbb{E}_{N}:C^{*}_{r}(\Gamma)\to C^{*}_{r}(N)\) denote the canonical conditional expectation from \(C^{*}_{r}(\Gamma)\) onto \(C^{*}_{r}(N)\).
Note that \(\mathbb{E}_{N}\) sends \(\lambda(s)\) to itself for \(s\in N\) and to zero for \(s\in\Gamma\backslash N\). Applying \(\mathbb{E}_{N}\) to the above inequality and using the fact that \(J\) falls in the multiplicative domain of \(\mathbb{E}_{N}\), we see that \[\left\|\lambda(e)-\sum_{i=1}^{k}\eta_{i}\mathbb{E}_{N}(a_{i})\right\|<\epsilon<1.\] This forces \(\sum_{i=1}^{k}\eta_{i}\mathbb{E}_{N}(a_{i})\in J\) to be an invertible operator. However, the proper ideal \(J\vartriangleleft C^{*}_{r}(N)\) contains no invertible operators. This contradicts our assumption, and therefore \(I\neq C^{*}_{r}(\Gamma)\). Given an ideal \(I\vartriangleleft C^{*}_{r}(\Gamma)\), let us associate to \(I\) the operator space \(\mathcal{A}_{I}\) defined by \[\mathcal{A}_{I}=\operatorname{Span}\left\{\eta\tilde{a}\eta^{\prime}:\eta,\eta^{\prime}\in I,\ \tilde{a}\in\mathcal{A}\rtimes_{r}\Gamma\right\}\] Since \(I\) is a closed two-sided ideal, \(\mathcal{A}_{I}\) is closed under the \(*\)-operation. We claim that \(\overline{\mathcal{A}_{I}}\) is a \(C^{*}\)-algebra. It is enough to show that \(\overline{\mathcal{A}_{I}}\) is closed under multiplication. Let us denote by \(\mathcal{A}\rtimes_{r,\mathrm{alg}}\Gamma\) the collection of finite sums of the form \(\sum_{i=1}^{n}a_{s_{i}}\lambda(s_{i})\), or more formally \[\mathcal{A}\rtimes_{r,\mathrm{alg}}\Gamma=\left\{\sum_{s\in F}a_{s}\lambda(s)\ |\ F\subset\Gamma,|F|<\infty,\ a_{s}\in\mathcal{A}\right\}\] Also, recall that \(\mathcal{A}\rtimes_{r,\mathrm{alg}}\Gamma\) is norm dense inside \(\mathcal{A}\rtimes_{r}\Gamma\). If there exists a \(\Gamma\)-invariant state \(\varphi\) on \(\mathcal{A}\), then the map \(\mathbb{E}_{\varphi}:\mathcal{A}\rtimes_{r,\mathrm{alg}}\Gamma\to C^{*}_{r}(\Gamma),a\lambda(s)\mapsto\varphi(a)\lambda(s)\) extends to a well-defined map at the level of \(\mathcal{A}\rtimes_{r}\Gamma\) (see e.g., [1, Exercise 4.1.4]).
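As a sanity check on the ideal construction of Lemma 5.1 in the smallest possible case (our toy computation, not part of the text): take \(\Gamma=N=\mathbb{Z}/2\mathbb{Z}\), realize \(C^{*}_{r}(\mathbb{Z}/2\mathbb{Z})\) as \(2\times 2\) matrices via the left regular representation, and let \(J\) be the augmentation ideal that reappears in Example 5.6 below. One can then verify numerically that \(J\) is a proper two-sided ideal.

```python
import numpy as np

# C*_r(Z/2Z) in the left regular representation: lambda(e) = I, lambda(s) = S.
I2 = np.eye(2)
S = np.array([[0.0, 1.0], [1.0, 0.0]])

def elem(a_e, a_s):
    """A generic element a_e lambda(e) + a_s lambda(s) of C*_r(Z/2Z)."""
    return a_e * I2 + a_s * S

def in_J(T, tol=1e-10):
    """Membership in the augmentation ideal: the coefficients sum to zero."""
    a_e, a_s = T[0, 0], T[0, 1]          # recover coefficients from row 0
    return abs(a_e + a_s) < tol

eta = elem(1.0, -1.0)                    # a generator of J: lambda(e) - lambda(s)
rng = np.random.default_rng(0)
for _ in range(100):
    a = elem(*rng.normal(size=2))
    # J is a two-sided ideal: products with arbitrary elements stay in J
    assert in_J(a @ eta) and in_J(eta @ a)

# J is proper: its elements are singular, hence never invertible
assert abs(np.linalg.det(eta)) < 1e-10
print("augmentation ideal check passed")
```

This mirrors the non-triviality argument in the proof of Lemma 5.1: an element of a proper ideal cannot be invertible, so the ideal cannot contain anything within distance less than \(1\) of the identity.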
**Proposition 5.2**.: \(\overline{\mathcal{A}_{I}}\) _is a \(C^{*}\)-algebra. Moreover, \(\mathcal{B}=C^{*}_{r}(\Gamma)+\overline{\mathcal{A}_{I}}\) is an intermediate \(C^{*}\)-algebra._ Proof.: To show that \(\overline{\mathcal{A}_{I}}\) is a \(C^{*}\)-algebra, it is enough to show that \(\overline{\mathcal{A}_{I}}\) is closed under multiplication. Towards that end, let us choose elements of the form \(\eta_{1}\tilde{a_{1}}\eta^{\prime}_{1},\eta_{2}\tilde{a_{2}}\eta^{\prime}_{2}\in\mathcal{A}_{I}\) with \(\eta_{1},\eta^{\prime}_{1},\eta_{2},\eta^{\prime}_{2}\in I\) and \(\tilde{a_{1}},\tilde{a_{2}}\in\mathcal{A}\rtimes_{r}\Gamma\). Clearly, \[\eta_{1}\tilde{a_{1}}\eta^{\prime}_{1}\eta_{2}\tilde{a_{2}}\eta^{\prime}_{2}=\eta_{1}\left(\tilde{a_{1}}\eta^{\prime}_{1}\eta_{2}\tilde{a_{2}}\right)\eta^{\prime}_{2}\in\mathcal{A}_{I}.\] In particular, for elements \(x=\left(\sum_{i}\eta_{i}\tilde{a_{i}}\eta^{\prime}_{i}\right)\in\mathcal{A}_{I}\) and \(y=\left(\sum_{j}\eta_{j}\tilde{a_{j}}\eta^{\prime}_{j}\right)\in\mathcal{A}_{I}\), we have \[xy=\left(\sum_{i}\eta_{i}\tilde{a_{i}}\eta^{\prime}_{i}\right)\left(\sum_{j}\eta_{j}\tilde{a_{j}}\eta^{\prime}_{j}\right)=\sum_{i,j}\eta_{i}\left(\tilde{a_{i}}\eta^{\prime}_{i}\eta_{j}\tilde{a_{j}}\right)\eta^{\prime}_{j}\in\overline{\mathcal{A}_{I}}\] Let us first check that \(\mathcal{B}\) is norm closed. If \(\{a_{j}+b_{j}\}_{j}\subset\mathcal{B}\) is such that \(a_{j}\in C_{r}^{*}(\Gamma)\), \(b_{j}\in\overline{\mathcal{A}_{I}}\) and \(a_{j}+b_{j}\to c\), then \(\mathbb{E}_{\varphi}(a_{j}+b_{j})\to\mathbb{E}_{\varphi}(c)\).
Therefore, \(a_{j}+\mathbb{E}_{\varphi}(b_{j})\to\mathbb{E}_{\varphi}(c)\). As a result, we see that \(b_{j}-\mathbb{E}_{\varphi}(b_{j})\to c-\mathbb{E}_{\varphi}(c)\). Since \(b_{j}\in\overline{\mathcal{A}_{I}}\) and \(\mathbb{E}_{\varphi}\left(\overline{\mathcal{A}_{I}}\right)\subset\overline{\mathcal{A}_{I}}\), we see that \(b_{j}-\mathbb{E}_{\varphi}(b_{j})\in\overline{\mathcal{A}_{I}}\) and therefore, \(c-\mathbb{E}_{\varphi}(c)\in\overline{\mathcal{A}_{I}}\). Hence, \(c=\mathbb{E}_{\varphi}(c)+(c-\mathbb{E}_{\varphi}(c))\in\mathcal{B}\). Since \(I\) is an ideal of \(C_{r}^{*}(\Gamma)\), it follows that \(\lambda(s)\mathcal{A}_{I}\subset\mathcal{A}_{I}\) and \(\mathcal{A}_{I}\lambda(s)\subset\mathcal{A}_{I}\) for all \(s\in\Gamma\). Hence, \(\mathcal{B}\) is closed under multiplication. _Remark 5.3_.: It is also true that \[\overline{\mathcal{A}_{I}}=\overline{\operatorname{Span}\left\{\eta a\eta^{\prime}:\eta,\eta^{\prime}\in I,\ a\in\mathcal{A}\right\}}\] It is clear that \[\overline{\operatorname{Span}\left\{\eta a\eta^{\prime}:\eta,\eta^{\prime}\in I,\ a\in\mathcal{A}\right\}}\subseteq\overline{\operatorname{Span}\left\{\eta\tilde{a}\eta^{\prime}:\eta,\eta^{\prime}\in I,\ \tilde{a}\in\mathcal{A}\rtimes_{r}\Gamma\right\}}\] We now show the reverse inclusion. For an element \(\tilde{a}\in\mathcal{A}\rtimes_{r}\Gamma\), let \(\tilde{a_{i}}\in\mathcal{A}\rtimes_{r,\operatorname{alg}}\Gamma\) be a net approximating \(\tilde{a}\) in \(\|\cdot\|\). Let us write \(\tilde{a_{i}}=\sum_{s\in F_{i}}a_{s}\lambda(s)\), where \(a_{s}\in\mathcal{A}\) for all \(s\in F_{i}\). Since \(I\lhd C_{r}^{*}(\Gamma)\) and \(\eta^{\prime}\in I\), we see that \(\lambda(s)\eta^{\prime}=\eta^{\prime}_{s}\in I\) for all \(s\in\Gamma\).
In particular, for \(\eta,\eta^{\prime}\in I\), \[\eta\tilde{a_{i}}\eta^{\prime}=\sum_{s\in F_{i}}\eta a_{s}(\lambda(s)\eta^{\prime})=\sum_{s\in F_{i}}\eta a_{s}\eta^{\prime}_{s}\in\operatorname{Span}\left\{\eta a\eta^{\prime}:\eta,\eta^{\prime}\in I,\ a\in\mathcal{A}\right\}.\] Therefore, \(\eta\tilde{a}\eta^{\prime}=\lim_{i}\eta\tilde{a_{i}}\eta^{\prime}\in\overline{\operatorname{Span}\left\{\eta a\eta^{\prime}:\eta,\eta^{\prime}\in I,\ a\in\mathcal{A}\right\}}\) for any \(\tilde{a}\in\mathcal{A}\rtimes_{r}\Gamma\) and \(\eta,\eta^{\prime}\in I\). Consequently, the reverse inclusion follows. _Remark 5.4_.: We can also show that \(\mathcal{B}\) is closed in \(\|\cdot\|\) without the help of \(\mathbb{E}_{\varphi}\). Let us denote \(\mathcal{D}=\overline{\mathcal{B}}\). Since \(\mathcal{B}\) is closed under multiplication, so is \(\mathcal{D}\). Consequently, \(\mathcal{D}\) is a \(C^{*}\)-algebra. Moreover, \(\overline{\mathcal{A}_{I}}\) is a closed two-sided ideal of \(\mathcal{D}\). Using [10, Theorem 3.1.7], we see that \(\overline{\mathcal{A}_{I}}+C_{r}^{*}(\Gamma)\) is a \(C^{*}\)-subalgebra. **Proposition 5.5**.: _Let \(\Gamma\) be a discrete group and \(\mathcal{A}\), a \(\Gamma\)-\(C^{*}\)-algebra. Let \(I\) be a non-trivial closed \(\Gamma\)-invariant two-sided ideal in \(C_{r}^{*}(\Gamma)\). Let \(\varphi\) be a faithful \(\Gamma\)-invariant state on \(\mathcal{A}\). Let \(\mathcal{B}=C_{r}^{*}(\Gamma)+\overline{\mathcal{A}_{I}}\). Then, \(\mathcal{B}\cap\mathcal{A}=\mathbb{C}\)._ Proof.: It follows from Proposition 5.2 that \(\mathcal{B}\) is an intermediate \(C^{*}\)-algebra. Let \(\mathbb{E}_{\varphi}\) be the conditional expectation onto \(C_{r}^{*}(\Gamma)\) associated to the given faithful \(\Gamma\)-invariant state \(\varphi\). We first claim that \(\mathbb{E}_{\varphi}\left(\overline{\mathcal{A}_{I}}\right)\subset\overline{\mathcal{A}_{I}}\).
For this to hold, it is enough to show that \(\mathbb{E}_{\varphi}\left(\mathcal{A}_{I}\right)\subset\mathcal{A}_{I}\), after which the claim would follow by the density of \(\mathcal{A}_{I}\) inside \(\overline{\mathcal{A}_{I}}\) and the continuity of \(\mathbb{E}_{\varphi}\). Towards that end, for an element of the form \(\sum_{i=1}^{n}\eta_{i}\tilde{a_{i}}\eta^{\prime}_{i}\in\mathcal{A}_{I}\), we see that \[\mathbb{E}_{\varphi}\left(\sum_{i=1}^{n}\eta_{i}\tilde{a_{i}}\eta^{\prime}_{i}\right)=\sum_{i=1}^{n}\eta_{i}\mathbb{E}_{\varphi}(\tilde{a_{i}})\eta^{\prime}_{i}.\] Since \(\eta^{\prime}_{i}\in I\) and \(\mathbb{E}_{\varphi}(\tilde{a_{i}})\in C_{r}^{*}(\Gamma)\), we see that \(\mathbb{E}_{\varphi}(\tilde{a_{i}})\eta^{\prime}_{i}\in I\) and hence, \[\sum_{i=1}^{n}\eta_{i}\mathbb{E}_{\varphi}(\tilde{a_{i}})\eta^{\prime}_{i}=\sum_{i=1}^{n}\eta_{i}\mathbf{1}_{\mathcal{A}}\mathbb{E}_{\varphi}(\tilde{a_{i}})\eta^{\prime}_{i}\in\mathcal{A}_{I}.\] We now show that \(\overline{\mathcal{A}_{I}}\cap\mathcal{A}=\{0\}\). Assume that \(0\neq a\in\overline{\mathcal{A}_{I}}\cap\mathcal{A}\). By replacing \(a\) with \(a^{*}a\), we can assume that \(a\geq 0\). In particular, \(\varphi(a)>0\). Let \(0<\epsilon<1\) be given. By the density of \(\mathcal{A}_{I}\) inside \(\overline{\mathcal{A}_{I}}\), we can find an element of the form \(\sum_{i=1}^{n}\eta_{i}\tilde{a_{i}}\eta_{i}^{\prime}\in\mathcal{A}_{I}\) such that \[\left\|a-\sum_{i=1}^{n}\eta_{i}\tilde{a_{i}}\eta_{i}^{\prime}\right\|<\varphi(a)\epsilon.\] Applying the conditional expectation \(\mathbb{E}_{\varphi}\), and noting that \(\mathbb{E}_{\varphi}(a)=\varphi(a)\lambda(e)\) for \(a\in\mathcal{A}\), we see that \[\left\|\varphi(a)-\sum_{i=1}^{n}\eta_{i}\mathbb{E}_{\varphi}(\tilde{a_{i}})\eta_{i}^{\prime}\right\|<\varphi(a)\epsilon.\] Let us observe that \[\sum_{i=1}^{n}\eta_{i}\mathbb{E}_{\varphi}(\tilde{a_{i}})\eta_{i}^{\prime}\in I,\] where \(I\) is a non-trivial ideal in \(C_{r}^{*}(\Gamma)\).
Hence, \[\left\|1-\frac{\sum_{i=1}^{n}\eta_{i}\mathbb{E}_{\varphi}(\tilde{a_{i}})\eta_{i}^{\prime}}{\varphi(a)}\right\|<\epsilon<1.\] Since \(\sum_{i=1}^{n}\eta_{i}\mathbb{E}_{\varphi}(\tilde{a_{i}})\eta_{i}^{\prime}\in I\), it follows that \(\frac{\sum_{i=1}^{n}\eta_{i}\mathbb{E}_{\varphi}(\tilde{a_{i}})\eta_{i}^{\prime}}{\varphi(a)}\in I\). On the other hand, the above inequality shows that this element is invertible in \(C_{r}^{*}(\Gamma)\), so that \(I=C_{r}^{*}(\Gamma)\). This contradicts the non-triviality of \(I\), and therefore, \(\overline{\mathcal{A}_{I}}\cap\mathcal{A}=\{0\}\). Finally, let \(\tilde{a}\in\mathcal{B}\cap\mathcal{A}\) and write \(\tilde{a}=a+b\) with \(a\in C_{r}^{*}(\Gamma)\) and \(b\in\overline{\mathcal{A}_{I}}\). Applying the canonical conditional expectation \(\mathbb{E}\) to \(\tilde{a}=a+b\) gives \(\tilde{a}=\tau_{0}(a)+\mathbb{E}(b)\), so that \(b=\mathbb{E}(b)+\tau_{0}(a)-a\). Since \(b\in\overline{\mathcal{A}_{I}}\) and \(\mathbb{E}_{\varphi}\left(\overline{\mathcal{A}_{I}}\right)\subset\overline{\mathcal{A}_{I}}\), we see that \(b-\mathbb{E}_{\varphi}(b)\in\overline{\mathcal{A}_{I}}\). Now, \[b-\mathbb{E}_{\varphi}(b) =\mathbb{E}(b)+\tau_{0}(a)-a-\mathbb{E}_{\varphi}\left(\mathbb{E}(b)+\tau_{0}(a)-a\right) =\mathbb{E}(b)-\varphi\left(\mathbb{E}(b)\right)\in\mathcal{A}.\] Since \(\overline{\mathcal{A}_{I}}\cap\mathcal{A}=\{0\}\) and \(b-\mathbb{E}_{\varphi}(b)\in\overline{\mathcal{A}_{I}}\), it follows that \(b=\mathbb{E}_{\varphi}(b)\). As a consequence, we see that \(\tilde{a}=a+b=a+\mathbb{E}_{\varphi}(b)\in C_{r}^{*}(\Gamma)\). Applying the canonical conditional expectation on both sides, we see that \[\tilde{a}=\mathbb{E}(\tilde{a})=\mathbb{E}(a+\mathbb{E}_{\varphi}(b))=\tau_{0}\left(a+\mathbb{E}_{\varphi}(b)\right)\in\mathbb{C}.\] For the group \(\mathbb{Z}/2\mathbb{Z}\) with two elements, it is still possible to construct an intermediate algebra that is not a crossed product in a canonical way, using the augmentation ideal of \(C_{r}^{*}(\mathbb{Z}/2\mathbb{Z})\). **Example 5.6** (Intermediate algebra for \(\mathbb{Z}/2\mathbb{Z}\)).: Let \(\Gamma=\mathbb{Z}/2\mathbb{Z}=\{e,s\}\) and \(X\), a compact Hausdorff \(\Gamma\)-space. Assume that \(X\) has more than two points. Let \(\mathcal{A}=C(X)\). Let \[I=\{a_{e}\lambda(e)+a_{s}\lambda(s):a_{e},a_{s}\in\mathbb{C},\ a_{e}+a_{s}=0\}\] be the non-trivial augmentation ideal of \(C_{r}^{*}(\Gamma)\). Let \(\mathcal{B}=C_{r}^{*}(\Gamma)+\overline{\mathcal{A}_{I}}\).
In the light of Proposition 5.2 and Proposition 5.5, it is enough to show that \(\mathbb{E}(\mathcal{B})\neq\mathbb{C}\). Since \(|X|>2\), there exist \(x_{1}\) and \(x_{2}\in X\) such that \(x_{1}\notin\Gamma x_{2}\). Let \(U\) and \(V\) be two open neighborhoods containing \(x_{1}\) and \(\Gamma x_{2}\) respectively such that \(U\cap V=\emptyset\). Let \(f\in C(X)\) be such that \(0\leq f\leq 1\), \(f(x_{1})=1\) and \(\operatorname{Supp}(f)\subset U\). Then, \(a=(\lambda(e)-\lambda(s))f(\lambda(e)-\lambda(s))=f-f\lambda(s)-s.f\lambda(s)+s.f\). So, applying the canonical conditional expectation \(\mathbb{E}\) on it, we see that \(\mathbb{E}(a)=f+s.f\). Let us now observe that \(\mathbb{E}(a)(x_{1})=f(x_{1})+f(s^{-1}x_{1})\geq 1\). On the other hand, \(\mathbb{E}(a)(x_{2})=f(x_{2})+f(sx_{2})=0\) since \(x_{2},sx_{2}\in V\) and \(V\cap U=\emptyset\). Hence, \(\mathbb{E}(a)\notin\mathbb{C}\). The following example shows that the assumption on \(X\) in the above example is necessary as long as the action is non-trivial. **Example 5.7**.: Let \(X=\{x_{1},x_{2}\}\) be a two point space and \(\Gamma=\{e,s\}\) denote \(\mathbb{Z}/2\mathbb{Z}\). In this case, \(C(X)\rtimes_{r}\Gamma\) can be identified with \(\mathbb{M}_{2}(\mathbb{C})\) via the map \[u\lambda(e)+v\lambda(s)\mapsto\begin{bmatrix}u(x_{1})&v(x_{1})\\ v(x_{2})&u(x_{2})\end{bmatrix}.\] Moreover, under this identification, we see that: \[C^{*}_{r}(\Gamma)\mapsto\left\{\begin{bmatrix}z&w\\ w&z\end{bmatrix}:z,w\in\mathbb{C}\right\},\qquad C(X)\mapsto\left\{\begin{bmatrix}z&0\\ 0&w\end{bmatrix}:z,w\in\mathbb{C}\right\}.\] It can now be easily verified that there is no intermediate algebra between \(C^{*}_{r}(\Gamma)\) and \(C(X)\rtimes_{r}\Gamma\). We now proceed to deal with all the other non-\(C^{*}\)-simple groups. We start with the infinite i.c.c. groups. ### Intermediate algebras for i.c.c. group actions Let us recall that an infinite group \(\Gamma\) is i.c.c. if the conjugacy class of every non-trivial group element is infinite.
**Lemma 5.8**.: _Let \(\Gamma\) be an i.c.c. group. Let \(F\subset\Gamma\backslash\{e\}\) be a finite subset. Then, there exists an element \(t\in\Gamma\) such that \(tFt^{-1}\cap F=\emptyset\)._ Proof.: Write \(F=\{s_{i}:1\leq i\leq n\}\). Let \(\Gamma(s_{i},s_{j})=\{t\in\Gamma:ts_{i}t^{-1}=s_{j}\}\). If the claim does not hold, then \(\Gamma=\cup_{i,j}\Gamma(s_{i},s_{j})\). Since \(\Gamma(s_{i},s_{j})\) is a left coset of \(\Gamma(s_{i},s_{i})\), it follows from a result of Neumann [10] that at least one of the subgroups \(\Gamma(s_{i},s_{i})\) is of finite index, and consequently \(s_{i}\) has a finite conjugacy class, contradicting the i.c.c. assumption. **Theorem 5.9**.: _Let \(\Gamma\) be an i.c.c. group and \(I\lhd C^{*}_{r}(\Gamma)\), a non-trivial closed two-sided ideal. Let \(\mathcal{A}\neq\mathbb{C}\) be a unital \(\Gamma\)-\(C^{*}\)-algebra with a faithful \(\Gamma\)-invariant state \(\varphi\). Then, \(\mathcal{B}=C^{*}_{r}(\Gamma)+\overline{\mathcal{A}_{I}}\) is not a crossed product in a canonical way._ Proof.: The fact that \(\mathcal{B}\) is an intermediate algebra is a consequence of Proposition 5.2. It also follows from Proposition 5.5 that \(\mathcal{B}\cap\mathcal{A}=\mathbb{C}\), so that \(\mathcal{B}\lneq\mathcal{A}\rtimes_{r}\Gamma\). We show that \(\overline{\mathbb{E}\left(\mathcal{A}_{I}\right)}=\mathcal{A}\). This will show that \(C^{*}_{r}(\Gamma)\lneq\mathcal{B}\) and complete the proof. Let \(a\in\mathcal{A}\) and \(\eta\in I\) be fixed. Without any loss of generality, assume that \(\|a\|=\|\eta\|=1\). Moreover, replacing \(\eta\) by \(\eta^{*}\eta\) if required, we can assume that \(\tau_{0}(\eta)\neq 0\). Let \(0<\epsilon<1\).
We can find a finite subset \(F\subset\Gamma\backslash\{e\}\) such that \[\left\|\eta-\sum_{s\in F}c_{s}\lambda(s)-\tau_{0}(\eta)\right\|<\frac{\epsilon }{4}.\] This, in particular, implies that \[\left\|\sum_{s\in F}c_{s}\lambda(s)+\tau_{0}(\eta)\right\|\leq\left\|\sum_{s\in F }c_{s}\lambda(s)+\tau_{0}(\eta)-\eta\right\|+\|\eta\|<2.\] Let \(t=t(F,\epsilon)\in\Gamma\) (guaranteed by Lemma 5.8) be such that \(tFt^{-1}\cap F=\emptyset\). Then, \[\left\|\lambda(t)\eta^{*}\lambda(t^{-1})-\sum_{s\in F}\overline{c_{s}} \lambda(ts^{-1}t^{-1})-\overline{\tau_{0}(\eta)}\right\|<\frac{\epsilon}{4}.\] Let us also observe that \[\mathbb{E}\left(\left(\sum_{s\in F}c_{s}\lambda(s)+\tau_{0}(\eta) \right)a\left(\sum_{s\in F}\overline{c_{s}}\lambda(ts^{-1}t^{-1})+\overline{ \tau_{0}(\eta)}\right)\right)\] \[=\mathbb{E}\left(\sum_{s,u\in F}c_{s}\overline{c_{u}}(s.a) \lambda(stu^{-1}t^{-1})+\sum_{s\in F}c_{s}\overline{\tau_{0}(\eta)}(s.a) \lambda(s)+\sum_{u\in F}\tau_{0}(\eta)\overline{c_{u}}a\lambda(tu^{-1}t^{-1})\right)\] \[+|\tau_{0}(\eta)|^{2}\mathbb{E}(a)\] If \(stu^{-1}t^{-1}=e\) for some \(s,u\in F\), then it would follow that \(s=tut^{-1}\) for \(s,u\in F\) and this would contradict the choice of \(t\). 
Therefore, we see that \[\mathbb{E}\left(\left(\sum_{s\in F}c_{s}\lambda(s)+\tau_{0}(\eta)\right)a\left(\sum_{s\in F}\overline{c_{s}}\lambda(ts^{-1}t^{-1})+\overline{\tau_{0}(\eta)}\right)\right)=|\tau_{0}(\eta)|^{2}\mathbb{E}(a).\] Now, \[\left\|\eta a\left(\lambda(t)\eta^{*}\lambda(t^{-1})\right)-\left(\sum_{s\in F}c_{s}\lambda(s)+\tau_{0}(\eta)\right)a\left(\sum_{s\in F}\overline{c_{s}}\lambda(ts^{-1}t^{-1})+\overline{\tau_{0}(\eta)}\right)\right\|\] \[\leq\left\|\eta a\left(\lambda(t)\eta^{*}\lambda(t^{-1})\right)-\left(\sum_{s\in F}c_{s}\lambda(s)+\tau_{0}(\eta)\right)a\left(\lambda(t)\eta^{*}\lambda(t^{-1})\right)\right\|\] \[+\left\|\left(\sum_{s\in F}c_{s}\lambda(s)+\tau_{0}(\eta)\right)a\left(\lambda(t)\eta^{*}\lambda(t^{-1})\right)-\left(\sum_{s\in F}c_{s}\lambda(s)+\tau_{0}(\eta)\right)a\left(\sum_{s\in F}\overline{c_{s}}\lambda(ts^{-1}t^{-1})+\overline{\tau_{0}(\eta)}\right)\right\|\] \[\leq\left\|\eta-\sum_{s\in F}c_{s}\lambda(s)-\tau_{0}(\eta)\right\|\|a\|\|\eta\|\] \[+\left\|\sum_{s\in F}c_{s}\lambda(s)+\tau_{0}(\eta)\right\|\|a\|\left\|\lambda(t)\eta^{*}\lambda(t^{-1})-\sum_{s\in F}\overline{c_{s}}\lambda(ts^{-1}t^{-1})-\overline{\tau_{0}(\eta)}\right\|\] \[\leq\frac{\epsilon}{4}+\frac{2\epsilon}{4}<\epsilon.\] Therefore, \[\left\|\mathbb{E}\left(\eta a\left(\lambda(t)\eta^{*}\lambda(t^{-1})\right)\right)-|\tau_{0}(\eta)|^{2}\mathbb{E}(a)\right\|\] \[=\left\|\mathbb{E}\left(\eta a\left(\lambda(t)\eta^{*}\lambda(t^{-1})\right)-\left(\sum_{s\in F}c_{s}\lambda(s)+\tau_{0}(\eta)\right)a\left(\sum_{s\in F}\overline{c_{s}}\lambda(ts^{-1}t^{-1})+\overline{\tau_{0}(\eta)}\right)\right)\right\|\] \[\leq\left\|\eta a\left(\lambda(t)\eta^{*}\lambda(t^{-1})\right)-\left(\sum_{s\in F}c_{s}\lambda(s)+\tau_{0}(\eta)\right)a\left(\sum_{s\in F}\overline{c_{s}}\lambda(ts^{-1}t^{-1})+\overline{\tau_{0}(\eta)}\right)\right\|<\epsilon.\] Since \(a\) and \(\eta\) are fixed in the beginning and \(\epsilon>0\) is arbitrary, we see that
\(|\tau_{0}(\eta)|^{2}\mathbb{E}(a)\in\overline{\mathbb{E}(\mathcal{A}_{I})}\). The claim follows. ### Intermediate algebras for non-i.c.c. group actions Let \(\Gamma\) be a non-i.c.c. group. Let \(\Gamma_{f}\) be the union of all finite conjugacy classes of \(\Gamma\). \(\Gamma_{f}\) is known as the \(FC\)-center of the group \(\Gamma\). It is well known that \(\Gamma_{f}\lhd\Gamma\) is a normal amenable subgroup (see [1, Section X]). We include a proof nonetheless for the sake of completeness. **Lemma 5.10**.: \(\Gamma_{f}\) _is a normal amenable subgroup of \(\Gamma\)._ Proof.: Clearly \[\Gamma_{f}=\{h\in\Gamma\ |\ [\Gamma:N_{\Gamma}(h)]<\infty\}=\{h\in\Gamma\ |\ [\Gamma:C_{\Gamma}(h)]<\infty\}.\] Indeed, the normalizer and the centralizer are the setwise and pointwise stabilizers for the action of \(\Gamma\) on the finite conjugacy class of \(h\). This set is closed under conjugation and inverses. It is also closed under multiplication because \(C_{\Gamma}(h_{1})\cap C_{\Gamma}(h_{2})<C_{\Gamma}(h_{1}h_{2})\), showing that this is a normal subgroup. Finally, this group is locally virtually Abelian, and hence amenable. Indeed, any finitely generated subgroup \(\Delta=\langle h_{1},h_{2},\ldots,h_{n}\rangle<\Gamma_{f}\) contains \(\Delta\cap(\bigcap_{i=1}^{n}C_{\Gamma}(h_{i}))\) as a finite index Abelian normal subgroup. **Lemma 5.11**.: _Let \(\Gamma\) be a non-trivial amenable group. Let_ \[J=\overline{\left\{\sum_{s}c_{s}\lambda(s)\in\mathbb{C}[\Gamma]:\sum_{s}c_{s}=0\right\}}.\] _Then, \(J\) is a non-trivial closed ideal of \(C_{r}^{*}(\Gamma)\)._ Proof.: Let \(J_{0}=\left\{\sum_{s}c_{s}\lambda(s)\in\mathbb{C}[\Gamma]:\sum_{s}c_{s}=0\right\}\), so that \(J=\overline{J_{0}}\). Since \(\Gamma\) is non-trivial, \(J\neq\{0\}\). We now show that \(J\neq C_{r}^{*}(\Gamma)\). Let us assume otherwise. Then, \(\lambda(e)\in J\). Since \(\Gamma\) is amenable, there is a unital character on \(C_{r}^{*}(\Gamma)\), which we denote by \(\tau\). In particular, \(\tau(\lambda(s))=1\) for all \(s\in\Gamma\).
We denote an element \(\eta\in\mathbb{C}[\Gamma]\) by \(\sum_{s}\eta(s)\lambda(s)\). Let \(0<\epsilon<1\). Now, we can find \(\eta\in\mathbb{C}[\Gamma]\) with \(\sum_{s}\eta(s)=0\) such that \[\|\lambda(e)-\eta\|<\epsilon<1.\] Let us observe that for any \(\eta=\sum_{s\in F}\eta(s)\lambda(s)\in\mathbb{C}[\Gamma]\) with \(F\subset\Gamma\), \(|F|<\infty\) and \(\sum_{s\in F}\eta(s)=0\), we have \(\tau(\eta)=\sum_{s\in F}\eta(s)=0\). Applying the unital character \(\tau\) to the above inequality, we get that \[|1-\tau(\eta)|<\epsilon.\] Since \(\tau(\eta)=0\), we get that \(1=|1|<\epsilon\), which is a contradiction. **Theorem 5.12**.: _Let \(\Gamma\) be a non-i.c.c. group with \(|\Gamma_{f}|>2\). Assume that \(\mathcal{A}\neq\mathbb{C}\) is a unital \(\Gamma\)-\(C^{*}\)-algebra with a faithful \(\Gamma\)-invariant state \(\varphi\). Then, there exists an intermediate algebra \(\mathcal{B}\) with \(C_{r}^{*}(\Gamma)\subset\mathcal{B}\subset\mathcal{A}\rtimes_{r}\Gamma\) such that \(\mathcal{B}\) is not a crossed product in a canonical way._ Proof.: Let \(J\lhd C_{r}^{*}(\Gamma_{f})\) be the non-trivial closed ideal given by Lemma 5.11. By Lemma 5.1, this can be extended to a non-trivial ideal \(I_{J}\lhd C_{r}^{*}(\Gamma)\), and then \(\mathcal{B}=C_{r}^{*}(\Gamma)+\overline{\mathcal{A}_{I_{J}}}\) is an intermediate \(C^{*}\)-algebra such that \(\mathcal{B}\cap\mathcal{A}=\mathbb{C}\) by Proposition 5.5. All that remains to be shown is that \(\mathbb{E}(\mathcal{B})\neq\mathbb{C}\). We shall show that \(\mathbb{E}\left(\overline{\mathcal{A}_{I_{J}}}\right)=\mathcal{A}\) and this will complete the proof. Let \(s\in\Gamma_{f}\) be a non-identity element. Since \(|\Gamma_{f}|>2\), we can find an element \(t\in\Gamma_{f}\backslash\{e\}\) such that \(st\neq e\). Let \(a\in\mathcal{A}\). Let \(\eta=\lambda(s)-\lambda(e)\) and \(\tilde{\eta}=\lambda(e)-\lambda(t)\) be two elements in \(J\subset I_{J}\).
We observe that \(\eta,\tilde{\eta}\in J\), and therefore, \(\eta a\tilde{\eta}\in\overline{\mathcal{A}_{I_{J}}}\) for all \(a\in\mathcal{A}\). Now, \[\eta(-a)\tilde{\eta} =\left(\lambda(s)-\lambda(e)\right)(-a)\left(\lambda(e)-\lambda(t)\right)\] \[=-\lambda(s)a+\lambda(s)a\lambda(t)+a-a\lambda(t)\] \[=-(s.a)\lambda(s)+(s.a)\lambda(st)+a-a\lambda(t).\] Since \(s\), \(t\), and \(st\) are all different from \(e\), \(\mathbb{E}(\eta(-a)\tilde{\eta})=a\). In particular, \(\mathbb{E}\left(\overline{\mathcal{A}_{I_{J}}}\right)=\mathcal{A}\). Whenever \(\mathcal{A}\) is a commutative \(C^{*}\)-algebra \(C(X)\) with \(|X|>2\), arguing as in Example 5.6, we can remove the assumption of \(|\Gamma_{f}|>2\) in the above theorem. In particular, we have the following. **Corollary 5.13**.: _Let \(\Gamma\) be a non-i.c.c. group and \(X\) a compact Hausdorff \(\Gamma\)-space with \(|X|>2\). Assume that \(X\) has a \(\Gamma\)-invariant probability measure \(\nu\) with full support. Then, there exists an intermediate algebra \(\mathcal{B}\) with \(C_{r}^{*}(\Gamma)\subset\mathcal{B}\subset C(X)\rtimes_{r}\Gamma\) such that \(\mathcal{B}\) is not a crossed product in a canonical way._ Proof.: Let \(\Gamma_{f}\) be the FC-center of the group \(\Gamma\). Let \(J\lhd C_{r}^{*}(\Gamma_{f})\) be the closed non-trivial ideal and \(I_{J}\lhd C_{r}^{*}(\Gamma)\) the corresponding non-trivial ideal (Lemma 5.1). Also, let \(\mathcal{B}=C_{r}^{*}(\Gamma)+\overline{\mathcal{A}_{I_{J}}}\) be the intermediate \(C^{*}\)-algebra. If \(|\Gamma_{f}|>2\), then the claim follows from Theorem 5.12. If \(|\Gamma_{f}|=2\), then we can argue similarly as in the proof of Example 5.6 to conclude that \(\mathbb{E}(\mathcal{B})\neq\mathbb{C}\). Let us conclude by noting that in the special case where the algebra \(\mathcal{A}\) is Abelian, we obtain a complete classification in Theorem 1.10: **Theorem 5.14**.: _Let \(\Gamma\) be a non-\(C^{*}\)-simple group and \(X\) a compact Hausdorff \(\Gamma\)-space admitting a \(\Gamma\)-invariant measure of full support.
Then there exists an intermediate subalgebra_ \[C_{r}^{*}(\Gamma)<\mathcal{B}<C(X)\rtimes_{r}\Gamma\] _with the property that \(\mathcal{B}\cap C(X)=\mathbb{C}\), unless we are in the specific situation of Example 5.7. Namely, unless \(|X|=2\) and \(\Gamma=\mathbb{Z}/2\mathbb{Z}\) acts by permuting the two points._ Proof.: Let us assume that \(|X|>2\). If \(\Gamma_{f}=\{e\}\), the assertion follows from Theorem 5.9. If \(\Gamma_{f}\neq\{e\}\), we obtain the claim from Corollary 5.13. ### An intermediate \(C^{*}\)-algebra associated to an ideal-II In this section, for a non-trivial ideal \(I\lhd\mathcal{A}\), we associate an algebra \(\mathcal{B}\) with \(\mathcal{A}\subset\mathcal{B}\subset\mathcal{A}\rtimes_{r}\Gamma\) such that \(\mathcal{B}\) is not of the form \(\mathcal{A}\rtimes_{r}\Lambda\) for a normal subgroup \(\Lambda\lhd\Gamma\). **Proposition 5.15**.: _Let \(\Gamma\) be a discrete group and \(\mathcal{A}\) be a unital \(\Gamma\)-\(C^{*}\)-algebra. Let \(I\) be a non-trivial closed \(\Gamma\)-invariant ideal in \(\mathcal{A}\). Then,_ \[\mathcal{A}_{I}=\overline{\text{Span}\,\{\lambda(s)a:s\in\Gamma,\ a\in I\}}\] _is a \(C^{*}\)-algebra._ Proof.: Since \(a\lambda(s)=\lambda(s)(s^{-1}.a)\) and \(I\) is \(\Gamma\)-invariant, it follows that \(\mathcal{A}_{I}\) is \(*\)-closed. Moreover, for \(a,b\in I\) and \(s,t\in\Gamma\), we see that \(\lambda(s)a\lambda(t)b=\lambda(st)(t^{-1}.a)b\in\mathcal{A}_{I}\). This shows that \(\mathcal{A}_{I}\) is closed under multiplication. We also note that \(\mathcal{A}_{I}\) is \(\Gamma\)-invariant, i.e., \(\lambda(s)\mathcal{A}_{I}\lambda(s^{-1})\subset\mathcal{A}_{I}\) for all \(s\in\Gamma\). **Theorem 5.16**.: _Let \(\Gamma\) be a discrete group. Let \(\mathcal{A}\) be a \(\Gamma\)-\(C^{*}\)-algebra such that \(\mathcal{A}\) is not \(\Gamma\)-simple._
Then, there exists an intermediate \(C^{*}\)-algebra \(\mathcal{A}\subset\mathcal{B}\subset\mathcal{A}\rtimes_{r}\Gamma\) such that \(\mathcal{B}\) is not of the form \(\mathcal{A}\rtimes_{r}\Lambda\) for any normal subgroup \(\Lambda\lhd\Gamma\)._ Proof.: Since \(\mathcal{A}\) is not \(\Gamma\)-simple, let \(I\) be a non-trivial \(\Gamma\)-invariant ideal in \(\mathcal{A}\). Let \(\mathcal{A}_{I}\) be the associated \(C^{*}\)-algebra as constructed in Proposition 5.15. Let \(\mathcal{B}=\mathcal{A}+\mathcal{A}_{I}\). We show that \(\mathcal{B}\) is a \(C^{*}\)-algebra. Since \(\mathcal{A}\) falls in the multiplicative domain of \(\mathbb{E}\), we observe that \[\mathbb{E}(\eta a)=\tau_{0}(\eta)a\in I\quad\text{for }\eta\in C^{*}_{r}(\Gamma)\text{ and }a\in I.\] Therefore, \(\mathbb{E}(\mathcal{A}_{I})\subset\mathcal{A}_{I}\). If \(\{a_{\lambda}+\tilde{a}_{\lambda}\}_{\lambda}\subset\mathcal{B}\) is such that \(a_{\lambda}+\tilde{a}_{\lambda}\to b\) in \(\|.\|\), then \[a_{\lambda}+\mathbb{E}(\tilde{a}_{\lambda})=\mathbb{E}(a_{\lambda}+\tilde{a}_{\lambda})\rightarrow\mathbb{E}(b).\] Therefore, \(\tilde{a}_{\lambda}-\mathbb{E}(\tilde{a}_{\lambda})\to b-\mathbb{E}(b)\). Since \(\mathbb{E}(\mathcal{A}_{I})\subset\mathcal{A}_{I}\), it follows that \(b-\mathbb{E}(b)\in\mathcal{A}_{I}\). Therefore, \(b=\mathbb{E}(b)+(b-\mathbb{E}(b))\in\mathcal{B}\). Hence, \(\mathcal{B}\) is norm-closed. Now, for \(a\in\mathcal{A}\), \(s\in\Gamma\) and \(b\in I\), \(a\lambda(s)b=\lambda(s)(s^{-1}.a)b\in\mathcal{A}_{I}\) and \(\lambda(s)ba\in\mathcal{A}_{I}\). Therefore, \(\mathcal{B}\) is closed under multiplication. We now claim that \(\mathcal{A}_{I}\cap C^{*}_{r}(\Gamma)=\{0\}\). Towards a contradiction, let us suppose otherwise. Let \(0\neq a\in\mathcal{A}_{I}\cap C^{*}_{r}(\Gamma)\). By looking at \(a^{*}a\), we can assume that \(a\geq 0\). Since \(\tau_{0}\) is faithful, \(\tau_{0}(a)>0\). Let \(0<\epsilon<1\).
Then, we can find \(\eta_{1},\eta_{2},\ldots,\eta_{n}\in C^{*}_{r}(\Gamma)\) and \(b_{1},b_{2},\ldots,b_{n}\in I\) such that \[\left\|a-\sum_{i=1}^{n}\eta_{i}b_{i}\right\|<\epsilon\tau_{0}(a).\] Applying the canonical conditional expectation \(\mathbb{E}\), we see that \[\left\|\tau_{0}(a)-\sum_{i=1}^{n}\tau_{0}(\eta_{i})b_{i}\right\|<\epsilon\tau_{0}(a),\] which in turn implies that \[\left\|1-\frac{\sum_{i=1}^{n}\tau_{0}(\eta_{i})b_{i}}{\tau_{0}(a)}\right\|<\epsilon<1.\] Therefore, \(\frac{\sum_{i=1}^{n}\tau_{0}(\eta_{i})b_{i}}{\tau_{0}(a)}\in I\) is an invertible operator and hence, \(I\) must be \(\mathcal{A}\). This is a contradiction to the non-triviality of \(I\). Therefore, \(a=0\). We now show that \(\mathcal{B}\cap C^{*}_{r}(\Gamma)=\mathbb{C}\). Let \(\tilde{a}\in\mathcal{B}\cap C^{*}_{r}(\Gamma)\). Then, we can find \(a\in\mathcal{A}\) and \(b\in\mathcal{A}_{I}\) such that \(\tilde{a}=a+b\). Applying the canonical conditional expectation \(\mathbb{E}\) on both sides, we obtain that \(\tau_{0}(\tilde{a})=a+\mathbb{E}(b)\). Therefore, \(\tilde{a}-\tau_{0}(\tilde{a})=b-\mathbb{E}(b)\). Since \(\mathcal{A}_{I}\) is invariant under the canonical conditional expectation, we see that \(\tilde{a}-\tau_{0}(\tilde{a})=b-\mathbb{E}(b)\in\mathcal{A}_{I}\). On the other hand, \(\tilde{a}-\tau_{0}(\tilde{a})\in C^{*}_{r}(\Gamma)\). Since \(C^{*}_{r}(\Gamma)\cap\mathcal{A}_{I}=\{0\}\), it must be the case that \(\tilde{a}-\tau_{0}(\tilde{a})=0\). Therefore, \(\tilde{a}=\tau_{0}(\tilde{a})\in\mathbb{C}\). Now, we must ensure that \(\mathcal{B}\) is strictly bigger than \(\mathcal{A}\). To that end, let us observe the following. For \(s\in\Gamma\backslash\{e\}\) and \(a\in I\), if \(\lambda(s)a\in\mathcal{A}\), then we will obtain that \[0=\mathbb{E}(\lambda(s)a)=\lambda(s)a.\] This will force \(a=0\). Hence, for \(0\neq a\in I\) and \(s\neq e\), \(\lambda(s)a\notin\mathcal{A}\). The proof is now complete.
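The matrix identification of \(C(X)\rtimes_{r}\mathbb{Z}/2\mathbb{Z}\) from Example 5.7 can be checked numerically. The sketch below is an illustration only (the function names are ours, not from the text): it multiplies two elements \(u\lambda(e)+v\lambda(s)\) using the crossed-product rules \(\lambda(s)u=(s.u)\lambda(s)\) and \(\lambda(s)^{2}=\lambda(e)\), and verifies that the map to \(\mathbb{M}_{2}(\mathbb{C})\) is multiplicative.

```python
import numpy as np

def act(u):
    # the generator s of Z/2Z swaps the two points: (s.u)(x) = u(s^{-1} x)
    return u[::-1]

def cp_mult(u1, v1, u2, v2):
    # product (u1 + v1 lam(s))(u2 + v2 lam(s)) in the crossed product,
    # using lam(s) u = (s.u) lam(s) and lam(s)^2 = lam(e)
    return u1 * u2 + v1 * act(v2), u1 * v2 + v1 * act(u2)

def to_matrix(u, v):
    # the identification u lam(e) + v lam(s) -> [[u(x1), v(x1)], [v(x2), u(x2)]]
    return np.array([[u[0], v[0]], [v[1], u[1]]])

rng = np.random.default_rng(0)
u1, v1, u2, v2 = rng.standard_normal((4, 2))
u, v = cp_mult(u1, v1, u2, v2)
# the product in the crossed product matches the matrix product
assert np.allclose(to_matrix(u, v), to_matrix(u1, v1) @ to_matrix(u2, v2))
```

This also makes the diagonal/circulant pictures of \(C(X)\) and \(C_{r}^{*}(\Gamma)\) in Example 5.7 easy to inspect by hand.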
2306.03704
Delocalization and Universality of the Fractional Quantum Hall Plateau-to-Plateau Transitions
Disorder and electron-electron interaction play essential roles in the physics of electron systems in condensed matter. In two-dimensional, quantum Hall systems, extensive studies of disorder-induced localization have led to the emergence of a scaling picture with a single extended state, characterized by a power-law divergence of the localization length in the zero-temperature limit. Experimentally, scaling has been investigated via measuring the temperature dependence of plateau-to-plateau transitions between the integer quantum Hall states (IQHSs), yielding a critical exponent $\kappa\simeq 0.42$. Here we report scaling measurements in the fractional quantum Hall state (FQHS) regime where interaction plays a dominant role. Our study is partly motivated by recent calculations, based on the composite fermion theory, that suggest identical critical exponents in both IQHS and FQHS cases to the extent that the interaction between composite fermions is negligible. The samples used in our experiments are two-dimensional electron systems confined to GaAs quantum wells of exceptionally high quality. We find that $\kappa$ varies for transitions between different FQHSs observed on the flanks of Landau level filling factor $\nu=1/2$, and has a value close to that reported for the IQHS transitions only for a limited number of transitions between high-order FQHSs with intermediate strength. We discuss possible origins of the non-universal $\kappa$ observed in our experiments.
P. T. Madathil, K. A. Villegas Rosales, C. T. Tai, Y. J. Chung, L. N. Pfeiffer, K. W. West, K. W. Baldwin, M. Shayegan
2023-06-06T14:17:15Z
http://arxiv.org/abs/2306.03704v1
# Delocalization and Universality of the Fractional Quantum Hall Plateau-to-Plateau Transitions ###### Abstract Disorder and electron-electron interaction play essential roles in the physics of electron systems in condensed matter. In two-dimensional, quantum Hall systems, extensive studies of disorder-induced localization have led to the emergence of a scaling picture with a single extended state, characterized by a power-law divergence of the localization length in the zero-temperature limit. Experimentally, scaling has been investigated via measuring the temperature dependence of plateau-to-plateau transitions between the integer quantum Hall states (IQHSs), yielding a critical exponent \(\kappa\simeq 0.42\). Here we report scaling measurements in the fractional quantum Hall state (FQHS) regime where interaction plays a dominant role. Our study is partly motivated by recent calculations, based on the composite fermion theory, that suggest identical critical exponents in both IQHS and FQHS cases to the extent that the interaction between composite fermions is negligible. The samples used in our experiments are two-dimensional electron systems confined to GaAs quantum wells of exceptionally high quality. We find that \(\kappa\) varies for transitions between different FQHSs observed on the flanks of Landau level filling factor \(\nu=1/2\), and has a value close to that reported for the IQHS transitions only for a limited number of transitions between high-order FQHSs with intermediate strength. We discuss possible origins of the non-universal \(\kappa\) observed in our experiments. In 1958, P. W. Anderson introduced the theory of localization in disordered systems [1]. He showed that in sufficiently dilute systems with only short-range forces, states return to their original site with a finite probability in the long-time limit and thus, there is an absence of diffusion. 
While the scaling theory of localization predicts the lack of extended states in two dimensions [2], quantum Hall systems are reported to host both localized and extended states [3; 4; 5; 6]. In the zero-temperature limit, as the Fermi energy approaches a single critical energy (\(E_{c}\)), theory predicts that the localization length (\(\xi\)) diverges according to the power law \(\xi\propto|E-E_{c}|^{-\gamma}\) with a universal critical exponent \(\gamma\) [7; 8; 9]. Criticality is also associated with fundamental phenomena such as anomalous diffusion, multifractal conductance fluctuations, and power-law density correlations [10; 11; 12] owing to the large fluctuations in the local densities and currents in the absence of a length scale. Since its inception, the theory of criticality for the non-interacting integer quantum Hall states (IQHSs) has garnered immense interest, with recent numerical calculations suggesting substantial corrections to the critical exponent and predicting model-dependent exponents [2; 13; 14; 15; 16]. The strongly-interacting nature of the _fractional_ quantum Hall states (FQHSs) makes the theoretical treatment of critical phenomena considerably more challenging. Exact-diagonalization studies are limited to very small systems and are often inadequate in capturing the dynamics in the thermodynamic limit. The composite-fermion (CF) theory provides a fruitful way to distill the physics of the FQHSs by treating the system of strongly-interacting electrons as a collection of weakly-interacting, magnetic-flux-electron quasi-particles, namely the CFs [17; 18]. The simplest FQHSs occur in the lowest Landau level, flanking the filling factor \(\nu=1/2\) at \(\nu=\frac{p}{2p\pm 1}\) where \(p\) is a positive integer. The FQHS at a particular \(\nu\) can then be thought of as the \(p^{th}\) IQHS of CFs [17; 18].
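For concreteness, the Jain sequences \(\nu=\frac{p}{2p\pm 1}\) of CF fillings flanking \(\nu=1/2\) can be enumerated with exact rational arithmetic; the following minimal sketch is illustrative and not part of the paper:

```python
from fractions import Fraction

# Jain sequences nu = p/(2p +- 1): the FQHS at nu is the p-th IQHS of CFs
electron_side = [Fraction(p, 2 * p + 1) for p in range(1, 11)]  # nu < 1/2
hole_side = [Fraction(p, 2 * p - 1) for p in range(2, 11)]      # nu > 1/2

print(*electron_side)  # 1/3 2/5 3/7 ... 10/21, approaching 1/2 from below
print(*hole_side)      # 2/3 3/5 4/7 ... 10/19, approaching 1/2 from above
```

Both sequences converge to \(\nu=1/2\) as \(p\to\infty\), which is why high-order members of the sequences crowd the half-filled Landau level.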
Early theoretical work suggested the same scaling exponents for the transitions between FQHSs as those for the IQHSs [19] but a microscopic confirmation of this correspondence was lacking. More recent, rigorous calculations elaborate on the correspondence and highlight similar localization physics in the two regimes [20; 21; 22]. Experimentally, one can measure the divergence of \(\xi\) via studying the temperature (_T_) dependence of the Hall (\(R_{xy}\)) and longitudinal (\(R_{xx}\)) resistances at the transitions between the QHS plateaus. The derivative of \(R_{xy}\) (with respect to the magnetic field, \(B\)) at the critical magnetic field, and the inverse of the half-width of \(R_{xx}\) between two successive quantum Hall states (\(1/\Delta\)), both diverge according to the power law \(T^{-\kappa}\). The quantum phase coherence length (\(L_{\phi}\)) also diverges with temperature as \(L_{\phi}\propto T^{-q/2}\), and the three exponents \(\kappa\), \(q\) and \(\gamma\) follow the relation \(\kappa=\frac{q}{2\gamma}\)[7; 9; 23; 24; 25; 26; 27]. Despite some discrepancies in earlier studies, systematic measurements for the transitions between the IQHSs have concluded a value of \(\kappa\simeq 0.42\), in excellent agreement with theoretical expectations [26; 27]. Experimental scaling measurements in the FQHS regime, however, are quite scarce. An early study on a sample with relatively low mobility suggested that the transition between the strongest FQHSs (at \(\nu=1/3\) and \(2/5\)) has the same exponent of criticality (\(\kappa\)) as the transitions between the IQHSs [28], but a complete set of exponents to test universality for transitions between various, high-order FQHSs is still lacking. The focus of this Letter is to investigate scaling in ultra-high quality GaAs two-dimensional electron systems (2DESs) in the FQHS regime. 
Our experiments were performed on a series of 2DESs confined to GaAs quantum wells (QWs) of well widths 30 to 50 nm with densities \(\simeq 1\times 10^{11}\) cm\({}^{-2}\)[29; 30]. This was achieved by flanking the QWs with 220-nm-thick Al\({}_{0.24}\)Ga\({}_{0.76}\)As barriers and placing the Si doping layers inside doping wells [31]. The mobilities in these samples are \(\simeq 20\times 10^{6}\) cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\). The samples were then cooled in a dilution refrigerator and magnetoresistance measurements were carried out using standard lock-in techniques. The samples had a van der Pauw geometry, with alloyed InSn contacts at the corners and midpoints of edges of \(4\times 4\) mm\({}^{2}\) square pieces. In Fig. 1(a), we show the \(R_{xx}\) vs. \(B\) trace for the 30-nm-wide GaAs QW at \(T\simeq 45\) mK. The exceptional sample quality is seen from the presence of a series of FQHSs at \(\nu=\frac{p}{2p+1}\) around \(\nu=1/2\) extending up to \(p=10\), namely \(\nu=10/21\) on the electron side (\(\nu<1/2\)) and \(\nu=10/19\) on the hole side (\(\nu>1/2\)). We observe well-developed FQHSs, with vanishingly small \(R_{xx}\), for states from \(\nu=1/3\) to 6/13 and \(\nu=2/3\) to 6/11. We also see emerging \(R_{xx}\) minima between \(\nu=1/3\) and 2/5 at \(\nu=4/11\), 3/8, and 5/13 which correspond to the FQHSs of CFs in an interacting CF picture [32; 33; 34; 35]. Figures 1(b) and (c) describe the procedure employed in extracting the critical exponent, \(\kappa\), from the dependence of \(R_{xx}\) on \(B\). The blue trace in Fig. 1(b) shows \(R_{xx}\) vs. \(B\) between \(\nu=5/11\) and 4/9. We first employ a Savitzky-Golay filter [36] with order 2 to smooth out the raw data shown in Fig. 1(a). We then determine \(dR_{xx}/dB\), as shown in red. The extrema in \(dR_{xx}/dB\), corresponding to the highest rate of change in resistance with \(B\) between the two FQHSs, are marked by the two vertical grey lines. 
The difference between the magnetic fields at which \(dR_{xx}/dB\) has an extremum is defined as \(\Delta\). We repeat this procedure for a range of temperatures and proceed to extract \(\kappa\) as shown in Fig. 1(c). The circles correspond to \(1/\Delta\) obtained at different \(T\) and are shown in a log-log plot. The line is a least-squares-fit to the data points and the magnitude of its slope yields \(\kappa\). We then proceed to analyze the temperature dependence of \(1/\Delta\) for the transitions between different FQHSs, as shown in Fig. 1(d). While \(1/\Delta\) for all the transitions exhibits a linear dependence on \(T\) in log-log plots, the slopes and thus \(\kappa\) are strikingly different. A summary of all the extracted \(\kappa\) vs. \(1/\nu^{*}\) is shown in Fig. 2(a) for the 30-nm-wide QW sample; similar data for the 40- and 50-nm-wide samples are shown in Figs. 2(b,c). For the x-axis of Figs. 2(a,b,c), we use the harmonic mean of the filling factors (\(\nu_{1}\) and \(\nu_{2}\)) of two successive FQHSs, namely \(1/\nu^{*}=(1/\nu_{1}+1/\nu_{2})/2\).

Figure 1: (a) Longitudinal resistance (\(R_{xx}\)) vs. magnetic field (\(B\)) for a 2DES confined to a 30-nm-wide GaAs QW at \(T\simeq 45\) mK. (b) The blue trace is the smoothed \(R_{xx}\) between \(\nu=4/9\) and 5/11. The red trace is the corresponding \(dR_{xx}/dB\) vs. \(B\). The vertical grey lines mark \(B\) at which \(dR_{xx}/dB\) has an extremum, and the field difference between the two extrema is denoted by \(\Delta\). (c) Log-log plot of \(1/\Delta\) vs. \(T\) for the transition between the \(\nu=4/9\) and 5/11 FQHSs; the red line is a least-squares-fit through the data points according to \(1/\Delta\propto T^{-\kappa}\) and \(\kappa\) is the magnitude of the slope extracted from the fit. (d) Log-log plot of \(1/\Delta\) vs. \(T\) for the 30-nm-wide QW for the different FQHS transitions; the lines are fits to the data points.
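The power-law fit used to extract \(\kappa\) amounts to a straight-line fit of \(\log(1/\Delta)\) versus \(\log T\). A minimal sketch on synthetic data (the temperatures, prefactor, and filling factors below are illustrative, not measured values from this work):

```python
import numpy as np

kappa_true = 0.42                 # exponent used to generate the synthetic data
T = np.linspace(0.045, 0.30, 12)  # temperatures in kelvin (illustrative)
delta = 0.05 * T**kappa_true      # 1/Delta ~ T^-kappa, so Delta ~ T^kappa

# kappa is minus the slope of a least-squares line through log(1/Delta) vs log(T)
slope, intercept = np.polyfit(np.log(T), np.log(1.0 / delta), 1)
kappa_fit = -slope
print(round(kappa_fit, 3))        # 0.42

# harmonic mean of two successive filling factors, e.g. nu = 4/9 and 5/11
nu1, nu2 = 4 / 9, 5 / 11
inv_nu_star = 0.5 * (1 / nu1 + 1 / nu2)
print(round(inv_nu_star, 3))      # 2.225
```

With noiseless synthetic data the fit recovers the input exponent exactly; for real traces the scatter of the points about the fitted line sets the uncertainty on \(\kappa\).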
The grey horizontal lines indicate \(\kappa=0.42\), expected from measurements and calculations for IQHSs. Our experimentally-extracted \(\kappa\) for the FQHS transitions, however, exhibit a non-universal and non-monotonic behavior. For the transitions between the strongest FQHSs (farthest away from \(\nu=1/2\)), \(\kappa\) is much smaller than \(0.42\). As we move towards \(\nu=1/2\), \(\kappa\) increases dramatically and reaches maximum values that exceed \(0.42\). It then decreases again as \(\nu\) approaches \(1/2\). The trend for the evolution of \(\kappa\) on the hole side (\(\nu>1/2\)) is qualitatively similar to its electron counterpart (\(\nu<1/2\)). The exponent \(\kappa\) can also be extracted from the temperature dependence of the Hall resistance (\(R_{xy}\)). The maximum value of the derivative of \(R_{xy}\) with respect to \(B\), at the critical magnetic field (\(B_{c}\)), which corresponds to the critical energy, exhibits a power-law divergence with temperature, with the same critical exponent, i.e., \(\frac{dR_{xy}}{dB}|_{B=B_{c}}\propto T^{-\kappa}\)[26; 7; 9; 23]. We report the values of \(\kappa\) extracted from \(R_{xy}\) for the 50-nm-wide QW in the Supplemental Material [37] and show that they closely follow the values obtained from \(R_{xx}\). In order to discuss Fig. 2 data, we first briefly review what is known for the localization and scaling in the IQHS and FQHS regimes. For the IQHS case, numerous theoretical attempts have been made to determine the value of the critical exponent \(\gamma\) that quantifies the divergence of the localization length at transitions between the plateaus [7; 8; 9; 13; 14; 15; 38]. While different models of localization predict slightly dissimilar values for \(\gamma\), it is generally found that \(\gamma\simeq 2.4\)[9]. Assuming a value of \(q\simeq 2\) for the exponent of the phase coherence length, \(\gamma\simeq 2.4\) implies that \(\kappa\simeq 0.42\). 
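The quoted numbers follow from the scaling relation \(\kappa=q/(2\gamma)\) implicit in this statement, with \(q\) the phase-coherence-length exponent and \(\gamma\) the localization-length exponent; a one-line check:

```python
def kappa_from_exponents(gamma, q):
    """Temperature exponent from the scaling relation kappa = q / (2 * gamma)."""
    return q / (2.0 * gamma)

kappa = kappa_from_exponents(2.4, 2.0)  # ~0.417, i.e. the quoted ~0.42
```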
Experimentally, early studies on 2DESs confined to different materials (Si-MOSFETs, In\({}_{x}\)Ga\({}_{1-x}\)As, and GaAs) provided different values for \(\kappa\), deduced from the \(T\)-dependence of the plateau-to-plateau transition widths [23; 25; 39; 40; 41; 42; 43; 44; 45; 46]. While in some specific materials and for certain transitions, a \(\kappa\simeq 0.42\) was indeed measured, this was not found to be universal; see Ref. [9] for a comprehensive review of early results. Later systematic studies by Li _et al._[26; 27], performed on 2DESs confined to Al\({}_{y}\)Ga\({}_{1-y}\)As samples, shed new light on the experimental situation. They demonstrated that for these samples, where the dominant electron scattering mechanism is the short-range, alloy scattering, the scaling exponents are indeed universal and have values \(\kappa\simeq 0.42\), \(q\simeq 2\), and \(\gamma\simeq 2.4\), very much consistent with the theoretical expectations. Considerably less is known for the transitions between the plateaus in the FQHS case. An early experimental study by Engel _et al._[28] reported \(\kappa\simeq 0.43\) for the transition between the \(\nu=1/3\) and 2/5 FQHSs, i.e., a value very close to the IQHS case. Note that the density of the sample used in Ref. [28] was very close to the density of our sample, but the quality was much inferior as judged by its much (about 20 times) lower mobility and the presence of only very few FQHSs, namely those at \(\nu=1/3\), 2/5, 2/3, and 3/5. Very recently, the transitions between FQHSs were studied theoretically by Pu _et al._[22] in a non-interacting CF formalism, and it was concluded that the critical exponents for these transitions should be the same as in the IQHS regime, confirming the data of Ref. [28]. Note that the conclusion of Ref. [22] can be readily understood: In a non-interacting CF picture, the FQHSs can be simply mapped into the IQHSs of CFs. Now our data in Fig.
2 reveal that \(\kappa\simeq 0.20\) for the \(\nu=1/3\) to 2/5 transition (\(1/\nu^{*}=2.75\)). This is significantly smaller than the theoretically-expected value of 0.42 [22], or previously reported in experiments of Ref. [28] (\(\simeq 0.43\)). The discrepancy likely stems from the much higher quality of our present samples and the fact that they exhibit numerous developing FQHSs between \(\nu=1/3\) and \(2/5\) [see Fig. 1(a)]. These FQHSs at intermediate fillings between the standard, Jain-series FQHSs (i.e., those at \(\nu=1/3\) and \(2/5\)) are a common feature of ultra-high-quality samples such as ours, and can be described as the FQHSs of CFs, originating from interaction between CFs [32; 33; 34; 35]. Note that such additional FQHSs are completely absent in the sample of Ref. [28] which exhibits only a single, sharp maximum in \(R_{xx}\) between the deep and wide \(R_{xx}\) minima at \(\nu=1/3\) and \(2/5\). As a result, \(\Delta\) is significantly smaller in Ref. [28] and, more importantly, \(1/\Delta\) diverges faster as temperature approaches zero. In contrast, in our much better quality 2DESs, the growth of \(1/\Delta\) at low \(T\) is limited by the presence of these intermediate FQHSs, rendering \(\kappa\simeq 0.2\); see also Supplemental Material [37].

Figure 2: (a) The extracted \(\kappa\) for the 30-nm-wide QW are plotted vs. \(1/\nu^{*}\), defined as \(1/\nu^{*}=(1/\nu_{1}+1/\nu_{2})/2\), where \(\nu_{1}\) and \(\nu_{2}\) are the fillings of two consecutive FQHSs, e.g., \(\nu_{1}=1/3\) and \(\nu_{2}=2/5\) yield a value of \(1/\nu^{*}=2.75\). The colors of data points for \(2<1/\nu^{*}<3\) represent the colors of data presented in Fig. 1(d) for different transitions. The dashed lines connecting the data points are guides to the eye. The grey, horizontal line at \(\kappa=0.42\) represents the expected exponent. (b) and (c) summarize the extracted \(\kappa\) vs. \(1/\nu^{*}\) for the 40- and 50-nm-wide QWs.
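The filling-factor bookkeeping behind the x-axis of Fig. 2 can be reproduced exactly with rational arithmetic; the helper below is our own illustration, not code from the paper.

```python
from fractions import Fraction

def inv_nu_star(nu1, nu2):
    """Harmonic-mean index 1/nu* = (1/nu1 + 1/nu2) / 2 of two
    consecutive FQHS fillings (pass exact Fractions, not floats)."""
    return (1 / Fraction(nu1) + 1 / Fraction(nu2)) / 2

# Jain sequences flanking nu = 1/2, up to p = 10 as in the text
electron_side = [Fraction(p, 2 * p + 1) for p in range(1, 11)]  # 1/3 ... 10/21
hole_side = [Fraction(p, 2 * p - 1) for p in range(2, 11)]      # 2/3 ... 10/19

# worked example: nu1 = 1/3 and nu2 = 2/5 give 1/nu* = 11/4 = 2.75
x = inv_nu_star(Fraction(1, 3), Fraction(2, 5))
```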
It is worth mentioning that, in some of the experiments in the IQHS regime on samples where the spin splitting in Landau Levels was not well resolved, a \(\kappa\simeq 0.21\) was also found for transitions between two IQHSs which were separated by a weakly-developed or undeveloped IQHS [42]. In our sample, as seen in Fig. 1(a), emerging features are also seen at transitions between other consecutive Jain-series FQHSs: \(\nu=2/5\) to \(3/7\), \(2/3\) to \(3/5\), \(3/5\) to \(4/7\), and \(4/7\) to \(5/9\). The measured \(\kappa\) for these transitions are also \(\simeq 0.2\), much smaller than \(0.42\) [Fig. 2(a)], consistent with our conjecture that the presence of intermediate features in the transition region is the cause of smaller than expected \(\kappa\). In Fig. 2(a) we also observe a decrease of \(\kappa\) for the transitions between the highest-order FQHSs closest to \(\nu=1/2\), e.g., between \(7/15\) and \(8/17\). While we do not know the reason for this decrease, it is worth noting that these FQHSs are not well-developed even at the lowest temperatures achieved in our experiments. They are akin to the weak, high-filling-factor IQHSs, more appropriately termed Shubnikov-de Haas oscillations, seen near zero magnetic field. The apparent decrease we observe in \(\kappa\) as \(\nu=1/2\) is approached might be related to this weakness of the highest-order FQHSs. The non-universality of \(\kappa\) we measure and its deviations from the expected value might also be related to the type of disorder present in our samples. Experiments in the IQHS regime have indeed shown that the nature of the disorder in the 2DES does play an important role in determining the value of \(\kappa\) and its universality. Li _et al._[26; 27] performed a systematic localization study in 2DESs confined to Al\({}_{y}\)Ga\({}_{1-y}\)As alloy QWs (rather than single-crystal GaAs QWs) with different Al alloy compositions \(y\). 
Their results revealed that the scaling follows the theoretical power-law only in the range \(0.0065\leq y\leq 0.016\) where the disorder and electron scattering are dominated by short-range alloy potential fluctuations. In contrast to their samples, the primary contributions to disorder in the ultra-high-quality 2DESs studied in our experiments come from remote and background (residual) ionized impurities [29; 30]. These lead to long-range potential fluctuations. Indeed, in GaAs 2DESs similar to ours, with long-range disorder, Wei _et al._[43] reported significant deviations from the theoretically-expected \(\kappa\) in the IQHS regime. While it is in principle possible to fabricate 2DESs confined to Al\({}_{y}\)Ga\({}_{1-y}\)As QWs and study localization phenomena in the FQHS regime, the experiments would be challenging: \(y\) has to be sufficiently large to induce significant alloy disorder, and yet small enough to preserve the quality of the 2DES at low densities so that FQHSs could still be observed at accessible magnetic fields [47]. In summary, we report values of the critical exponent \(\kappa\) for transitions between the plateaus of FQHSs flanking \(\nu=1/2\) in ultra-high-quality GaAs 2DES samples. Several samples with different QW widths exhibit a qualitatively similar behavior: \(\kappa\) changes non-monotonically as a function of filling and, only for a limited number of transitions between high-order FQHSs with intermediate strength, has a value close to 0.42, the value predicted theoretically based on a non-interacting CF picture. The non-universality of \(\kappa\) might be a result of the additional, unconventional FQHSs that emerge between the neighboring, strong, Jain-sequence FQHSs when CFs are interacting. It can also be a consequence of the nature of the disorder in the samples.
Our results shed light on the complex role of interaction, and highlight the need for future experimental and theoretical efforts to understand the physics of criticality for the FQHS plateau-to-plateau transitions. We acknowledge support by the National Science Foundation (NSF) Grant No. DMR 2104771 for measurements. For sample characterization, we acknowledge support by the U.S. Department of Energy Office of Science, Basic Energy Sciences (Grant No. DEFG02-00-ER45841) and, for sample synthesis, NSF Grant No. ECCS 1906253 and the Gordon and Betty Moore Foundation's EPiQS Initiative (Grant No. GBMF9615 to L.N.P.). This research is funded in part by QuantEmX Travel Grants from the Institute for Complex Adaptive Matter. A portion of this work was performed at the National High Magnetic Field Laboratory (NHMFL), which is supported by National Science Foundation Cooperative Agreement No. DMR-1644779 and the state of Florida. We thank S. Hannahs, T. Murphy, A. Bangura, G. Jones, and E. Green at NHMFL for technical support. We also thank J. K. Jain for illuminating discussions. ## References * Anderson [1958]P. W. Anderson, Absence of Diffusion in Certain Random Lattices, Phys. Rev. **109**, 1492 (1958). * Abrahams _et al._ [1979]E. Abrahams, P. W. Anderson, D. C. Licciardello, and T. V. Ramakrishnan, Scaling Theory of Localization: Absence of Quantum Diffusion in Two Dimensions, Phys. Rev. Lett. **42**, 673 (1979). * Aoki and Ando [1985]Hideo Aoki and Tsuneya Ando, Critical localization in two-dimensional Landau quantization, Phys. Rev. Lett. **54**, 831 (1985). * Chalker and Coddington [1988]J. T. Chalker and P. D. Coddington, Percolation, quantum tunnelling and the integer Hall effect, J. Phys. C: Solid State Phys. **21**, 2665 (1988). * Wei _et al._ [1986]H. P. Wei, D. C. Tsui, and A. M. M. Pruisken, Localization and scaling in the quantum Hall regime, Phys. Rev. B **33**, 1488 (1986). * Goldman _et al._ [1990]V. J. Goldman, J. K. Jain, and M.
Shayegan, Nature of the extended states in the fractional quantum Hall effect, Phys. Rev. Lett. **65**, 907 (1990). * Pruisken [1988]A. M. M. Pruisken, Universal Singularities in the Integral Quantum Hall Effect, Phys. Rev. Lett. **61**, 1297 (1988). * Huo _et al._ [1993]Y. Huo, R. E. Hetzel, and R. N. Bhatt, Universal conductance in the lowest Landau level, Phys. Rev. Lett. **70**, 481 (1993). * Huckestein [1995]B. Huckestein, Scaling Theory of the Integer Quantum Hall Effect, Rev. Mod. Phys. **67**, 357 (1995). * Chalker and Daniell [1988]J. T. Chalker and G. J. Daniell, Scaling, diffusion, and the integer quantized Hall effect, Phys. Rev. Lett. **61**, 593 (1988). * Pook and Janssen [1991]W. Pook and M. Janssen, Multifractality and scaling in disordered mesoscopic systems, Zeitschrift fur Physik B Condensed Matter **82**, 295 (1991). * Amin _et al._ [2022]K. R. Amin, R. Nagarajan, R. Pandit, and A. Bid, Multifractal Conductance Fluctuations in High-Mobility Graphene in the Integer Quantum Hall Regime, Phys. Rev. Lett. **129**, 186802 (2022). * Dresselhaus _et al._ [2021]E. J. Dresselhaus, B. Sbierski, and I. A. Gruzberg, Numerical evidence for marginal scaling at the integer quantum Hall transition, Ann. Phys. (N. Y.) **435**, 168676 (2021). * Dresselhaus _et al._ [2022]E. J. Dresselhaus, B. Sbierski, and I. A. Gruzberg, Scaling Collapse of Longitudinal Conductance near the Integer Quantum Hall Transition, Phys. Rev. Lett. **129**, 026801 (2022). * Zhu _et al._ [2019]Qiong Zhu, Peng Wu, R. N. Bhatt and Xin Wan, Localization-length exponent in two models of quantum Hall plateau transitions, Phys. Rev. B **99**, 024205 (2019). * Zirnbauer [2019]Martin R. Zirnbauer, The integer quantum Hall plateau transition is a current algebra after all, Nucl. Phys. B **941**, 458 (2019). * Jain [1989]J. K. Jain, Composite-fermion approach for the fractional quantum Hall effect, Phys. Rev. Lett. **63**, 199 (1989). * Jain [2007]J. K.
Jain, _Composite Fermions_ (Cambridge University Press, 2007). * Jain _et al._ [1990]J. K. Jain, S. A. Kivelson, and Nandini Trivedi, Scaling theory of the fractional quantum Hall effect, Phys. Rev. Lett. **64**, 1297 (1990). * Hui _et al._ [2019]Aaron Hui, Eun-Ah Kim, and Michael Mulligan, Non-Abelian bosonization and modular transformation approach to superuniversality, Phys. Rev. B **99**, 125135 (2019). * Kumar _et al._ [2022]Prashant Kumar, P. A. Nosov, and S. Raghu, Interaction effects on quantum Hall transitions: Dynamical scaling laws and superuniversality, Phys. Rev. Research **4**, 033146 (2022). * Pu _et al._ [2022]Songyang Pu, G. J. Sreejith, and J. K. Jain, Anderson Localization in the Fractional Quantum Hall Effect, Phys. Rev. Lett. **128**, 116801 (2022). * Wei _et al._ [1988]H. P. Wei, D. C. Tsui, M. A. Paalanen, and A. M. M. Pruisken, Experiments on Delocalization and Universality in the Integral Quantum Hall Effect, Phys. Rev. Lett. **61**, 1294 (1988). * Wei _et al._ [1994]H. P. Wei, L. W. Engel and D. C. Tsui, Current scaling in the integer quantum Hall effect, Phys. Rev. B **50**, 14609 (1994). * Koch _et al._ [1991]S. Koch, R. J. Haug, K. v. Klitzing, and K. Ploog, Size-dependent analysis of the metal-insulator transition in the integral quantum Hall effect, Phys. Rev. Lett. **67**, 883 (1991). * Li _et al._ [2005]Wanli Li, G. A. Csathy, D. C. Tsui, L. N. Pfeiffer, and K.W. West, Scaling and Universality of Integer Quantum Hall Plateau-to-Plateau Transitions, Phys. Rev. Lett. **94**, 206807 (2005). * Li _et al._ [2009]Wanli Li, C. L. Vicente, J. S. Xia, W. Pan, D. C. Tsui, L. N. Pfeiffer, and K.W. West, Scaling in Plateau-to-Plateau Transition: A Direct Connection of Quantum Hall Systems with the Anderson Localization Model, Phys. Rev. Lett. **102**, 216801 (2009). * Engel _et al._ [1990]L. Engel, H. P. Wei, D. C. Tsui, M. Shayegan, Critical Exponent in the Fractional Quantum Hall Effect, Surface Science **229**, 13 (1990).
* Chung _et al._ [2021]Yoon Jang Chung, K. A. Villegas Rosales, K. W. Baldwin, P. T. Madathil, K. W. West, M. Shayegan, L. N. Pfeiffer, Ultra-high-quality two-dimensional electron systems, Nature Materials **20**, 632 (2021). * Chung _et al._ [2022]Yoon Jang Chung, A. Gupta, K. W. Baldwin, K. W. West, M. Shayegan, and L. N. Pfeiffer, Understanding limits to mobility in ultrahigh-mobility GaAs two-dimensional electron systems: 100 million cm\({}^{2}\)/Vs and beyond, Phys. Rev. B **106**, 075134 (2022). * Chung _et al._ [2020]Yoon Jang Chung, K. A. Villegas Rosales, K. W. Baldwin, K. W. West, M. Shayegan, and L. N. Pfeiffer, Working principles of doping-well structures for high-mobility two-dimensional electron systems, Phys. Rev. Mater. **4**, 044003 (2020). * Wojs _et al._ [2007]Arkadiusz Wojs, George Simion, and John J. Quinn, Spin phase diagram of the \(\nu_{c}=4/11\) composite fermion liquid, Phys. Rev. B **75**, 155318 (2007). * Mukherjee _et al._ [2014]Sutirtha Mukherjee, J. K. Jain, and Sudhansu S. Mandal, Possible realization of a chiral p-wave paired state in a two-component system, Phys. Rev. B **90**, 121305(R) (2014). * Pan _et al._ [2015]W. Pan, K. W. Baldwin, K. W. West, L. N. Pfeiffer, and D. C. Tsui, Fractional quantum Hall effect at Landau level filling \(\nu=4/11\), Phys. Rev. B **91**, 041301(R) (2015). * Samkharadze _et al._ [2015]N. Samkharadze, I. Arnold, L. N. Pfeiffer, K. W. West, and G. A. Csathy, Observation of incompressibility at \(\nu=4/11\) and \(\nu=5/13\), Phys. Rev. B **91**, 081109(R) (2015). * Savitzky and Golay [1964]A. Savitzky and M. J. E. Golay, Smoothing and differentiation of data by simplified least squares procedures, Anal. Chem. **36**, 1627 (1964). * [37]See Supplemental Material at xxx for \(R_{xy}\) data and analysis. * Arapov _et al._ [2019]Yu. G. Arapov, S. V. Gudina, E. V. Deryushkina, N. G. Shelushinina, and M. V. Yakunin, On the issue of universality of critical exponents in the quantum Hall effect mode, Low Temp.
Phys. **45**, 181 (2019). * (39) J. Wakabayashi, A. Fukano, S. Kawaji, Y. Koike, and T. Fukase, Experiments on Localization in Landau Subbands with the Landau Quantum Number 0 and 1 of Si Inversion Layers, Surf. Sci. **229**, 60 (1990). * (40) S. Koch, R. J. Haug, K. v. Klitzing, and K. Ploog, Experiments on scaling in Al\({}_{x}\)Ga\({}_{1-x}\)As/GaAs heterostructures under quantum Hall conditions, Phys. Rev. B **43**, 6828 (1991). * (41) M. D'Iorio, V. M. Pudalov, and S. M. Semenchinsky, 1992, Full Localization of the 2D Electron Gas in Si MOSFETs at 30 mK and at High Magnetic Fields, in _High Magnetic Fields in Semiconductor Physics III_ (Springer, 1992) pp. 56-59. * (42) S.W. Hwang, H. P. Wei, L.W. Engel, and D. C. Tsui, Scaling in spin-degenerate Landau levels in the integer quantum Hall effect, Phys. Rev. B **48**, 11416 (1993). * (43) H. P. Wei, S. Y. Lin, D. C. Tsui, and A. M. M. Pruisken, Effect of long-range potential fluctuations on scaling in the integer quantum Hall effect, Phys. Rev. B **45**, 3926 (1992). * (44) G. M. Gusev, U. Gennser, X. Kleber, D. K. Maude, J. C. Portal, D. I. Lubyshev, P. Basmaji, M. de P. A. Silva, J. C. Rossi, Yu. V. Nastaushev, Percolation network in a smooth artificial potential, Phys. Rev. B **58**, 4636 (1998). * (45) K. Saeed, N. A. Dodo-Amoo, L. H. Li, S. P. Khanna, E. H. Linfield, A. G. Davies, and J. E. Cunningham, Impact of disorder on frequency scaling in the integer quantum Hall effect, Phys. Rev. B **84**, 155324 (2011). * (46) N. A. Dodo-Amoo, K. Saeed, D. Mistry, S. P. Khanna, L. Li, E. H. Linfield, A. G. Davies and J. E. Cunningham, Non-universality of scaling exponents in quantum Hall transitions, J. Phys. Condens. Matter **26**, 475801 (2014). * (47) We emphasize that it is not clear how disorder affects the exponents in the FQHS regime. In Ref. 
[28], \(\kappa\simeq 0.43\) was reported for the transition between the \(\nu=1/3\) and \(2/5\) FQHSs, very close to the expected value but much larger than \(\kappa\simeq 0.2\) that we observe in our study for the same transition (Fig. 2). Yet the 2DES used in Ref. [28] was also confined to a _modulation-doped_ GaAs QW, similar to the samples in our present study, implying that the disorder in their sample was also dominated by long-range potential fluctuations originating from ionized impurities. Supplemental Material for "Experiments on Delocalization and Universality of the Fractional Quantum Hall Plateau-to-Plateau Transitions" P. T. Madathil, K. A. Villegas Rosales, C. T. Tai, Y. J. Chung, L. N. Pfeiffer, K. W. West, K. W. Baldwin, and M. Shayegan Department of Electrical Engineering, Princeton University, Princeton, New Jersey 08544, USA November 3, 2021 ###### Abstract Extraction of \(\kappa\) from Hall data for the 50-nm-wide quantum well In the main text, we extracted the exponent (\(\kappa\)) from the temperature dependence of the inverse of the half-width of the longitudinal magneto-resistance (\(R_{xx}\)) between two successive fractional quantum Hall states (FQHSs). \(\kappa\) can be extracted from the temperature dependence of the Hall resistance (\(R_{xy}\)) as well. The derivative of \(R_{xy}\) with respect to the magnetic field \(B\) exhibits a maximum at the critical magnetic field (\(B_{c}\)), and the value of this maximum should show a power-law divergence with temperature with the same critical exponent as the \(R_{xx}\) data, i.e., \(\frac{dR_{xy}}{dB}|_{B=B_{c}}\propto T^{-\kappa}\)[1; 2; 3; 4]. In this Supplemental Material, we describe the procedure for extracting \(\kappa\) from the \(R_{xy}\) data and present the values of \(\kappa\) for the 50-nm-wide quantum well. Figures S1(a,b) show \(R_{xx}\) and \(R_{xy}\) vs. \(B\) traces for our two-dimensional electron system (2DES) confined to a 50-nm-wide GaAs quantum well.
Data are shown at temperatures \(T\simeq\) 65, 250, and 770 mK, represented by the blue, red and green traces, respectively. The \(R_{xx}\) data reveal the presence of intermediate features between \(\nu=1/3\) and \(2/5\), and \(2/5\) and \(3/7\). This is also reflected in \(R_{xy}\), where the plateau-to-plateau transitions between these fillings are disrupted by the presence of the classical Hall slope even at the lowest temperatures. In contrast, the transition between FQHSs of intermediate strength, e.g. between \(\nu=4/9\) and \(5/11\), is step-like, indicative of an ideal plateau-to-plateau transition. We discuss these features in detail in Figs. S2 and S3, respectively. Figure S2 includes the \(R_{xy}\) data between \(\nu=1/3\) and 2/5. The blue trace is \(R_{xy}\) vs. \(B\) and the red trace is \(dR_{xy}/dB\) vs. \(B\). In the derivative of \(R_{xy}\), we see two local maxima, one closer to 2/5 at \(B\) = 10.32 T and another closer to 1/3 at \(B\) = 11.57 T. We also observe a broad, nearly constant value of \(dR_{xy}/dB\simeq\) 6.23 k\(\Omega\)/T between \(B\simeq\) 10.5 T and \(B\simeq\) 11.3 T, corresponding to the classical Hall slope. There are also small dips at \(B\) = 11.04 T and 10.44 T corresponding to the fractions 4/11 and 5/13. The transition between 1/3 and 2/5 is mediated by a classical Hall line, and the conventional plateau-to-plateau transition is split into a plateau-to-classical and a classical-to-plateau transition. This clearly indicates that in ultra-high-quality samples, the transitions between strong FQHSs, such as the one between \(\nu=1/3\) and 2/5, are influenced by the presence of these additional states. We then analyze the temperature dependence of \(\frac{dR_{xy}}{dB}|_{B=10.32\text{T}}\) and \(\frac{dR_{xy}}{dB}|_{B=11.57\text{T}}\) to extract the exponents (\(\kappa\)) for both maxima and report the values in Fig. S4(b).
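As a back-of-the-envelope consistency check (our own estimate, not part of the paper), the classical Hall slope \(dR_{xy}/dB=1/(ne)\) for the quoted density \(n\simeq 1\times 10^{11}\) cm\({}^{-2}\) indeed comes out close to the observed \(\simeq 6.23\) k\(\Omega\)/T:

```python
E_CHARGE = 1.602176634e-19  # electron charge in coulombs

def classical_hall_slope(n_per_cm2):
    """Classical Hall slope dRxy/dB = 1/(n e), in ohms per tesla,
    for a 2D carrier density given in cm^-2."""
    n_per_m2 = n_per_cm2 * 1e4
    return 1.0 / (n_per_m2 * E_CHARGE)

slope = classical_hall_slope(1e11)  # ~6.2e3 ohm/T, i.e. ~6.2 kOhm/T
```

Inverting the measured 6.23 k\(\Omega\)/T gives a density of about \(1.0\times 10^{11}\) cm\({}^{-2}\), consistent with the density quoted in the main text.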
Figure S3 shows the \(R_{xy}\) data for the transition between \(\nu\) = 4/9 and 5/11. The blue trace is \(R_{xy}\) vs. \(B\) and the red trace is \(dR_{xy}/dB\) vs. \(B\). The vertical line marks the critical field \(B_{c}\) at which \(dR_{xy}/dB\) has its maximum. Figure S4(a) summarizes the values of \(\kappa\) obtained from \(R_{xx}\), according to the relation \(1/\Delta\propto T^{-\kappa}\), for the 50-nm-wide quantum well. The x-axis is defined as \(1/\nu^{*}=(1/\nu_{1}+1/\nu_{2})/2\), where \(\nu_{1}\) and \(\nu_{2}\) are the fillings of two consecutive FQHSs and the y-axis gives the \(\kappa\) values. The dashed lines connecting the data points are guides to the eye and the horizontal line at \(\kappa=0.42\) represents the expected exponent from theoretical calculations. In Fig. S4(b), we present the exponents obtained from \(R_{xy}\) data, according to the relation \(\frac{dR_{xy}}{dB}|_{B=B_{c}}\propto T^{-\kappa}\). It is evident from Figs. S4(a) and S4(b) that \(\kappa\) values extracted from \(R_{xy}\) qualitatively follow the same trend, and are quantitatively close to the values extracted from the temperature dependence of \(R_{xx}\). This is clearly evident from Fig. S4(c) which presents \(\kappa\) extracted from \(R_{xy}\) and \(R_{xx}\) data in the same plot. It is noteworthy that, for the transition between the strongest FQHSs, namely the 1/3 to 2/5 transition, the \(R_{xy}\) data yield two different values of \(\kappa\) for the two local maxima in \(dR_{xy}/dB\) seen in Fig. S2. For the \(dR_{xy}/dB\) maximum at \(B\) = 11.57 T, which is closer to the stronger fraction (\(\nu=1/3\)), we find \(\kappa\simeq 0.38\), while the \(dR_{xy}/dB\) maximum at \(B\) = 10.32 T, which is closer to the weaker fraction (\(\nu=2/5\)), yields \(\kappa\simeq 0.13\). The reason for this difference is unclear. We note that the larger \(\kappa\simeq 0.38\) is closer to the theoretically-expected value.
However, for the \(\nu=2/3\) to 3/5 transition, where we also see two \(dR_{xy}/dB\) maxima (data not shown), the two extracted \(\kappa\) values, 0.16 for the maximum near 2/3, and 0.12 for the maximum near 3/5, are both much smaller than the expected \(\kappa=0.42\), and close to the \(\kappa\) value extracted from \(R_{xx}\); see the data points at \(1/\nu^{*}\simeq 1.58\) in Fig. S4(c). These observations highlight the complexity of the transitions between strong FQHSs, brought about by the presence of many-body induced states in the transition regions.
2303.01915
Investigating the impact of spin effects at the high-energy neutrino-nucleon interactions while it crosses the Earth's core
In this work, we investigate the impact of assuming polarization of the Earth's outer core on the propagation of neutrinos that cross the entire Earth. We take into account the spin-dependent structure functions to describe the polarized neutrino-nucleon cross section, and their effect on neutrino absorption as the neutrino crosses the Earth. We found that adding spin information and simultaneously assuming polarization of the Earth's outer core impacts the probability of neutrino absorption in the energy range of 10 - 100 TeV and for upward neutrino directions. However, the magnitude of the effect is small and should be comparable with the magnitude of the errors associated with the IceCube neutrino data.
R. Francener, D. R. Gratieri, G. Torrieri
2023-03-03T13:36:47Z
http://arxiv.org/abs/2303.01915v1
Investigating the impact of spin effects at the high-energy neutrino-nucleon interactions while it crosses the Earth's core ###### Abstract In this work, we investigate the impact of assuming polarization of the Earth's outer core on the propagation of neutrinos that cross the entire Earth. We take into account the spin-dependent structure functions to describe the polarized neutrino-nucleon cross section, and their effect on neutrino absorption as the neutrino crosses the Earth. We found that adding spin information and simultaneously assuming polarization of the Earth's outer core impacts the probability of neutrino absorption in the energy range of 10 - 100 TeV and for upward neutrino directions. However, the magnitude of the effect is small and should be comparable with the magnitude of the errors associated with the IceCube neutrino data. Neutrino Absorption, Polarized Targets, Earth's Outer Core. ## I Introduction Interactions with polarized nuclei have gained great attention in the theoretical and experimental physics community since the results of the _European Muon Collaboration (EMC)_ from the late eighties [1; 2]. Such results pointed out that _"the total quark spin constitutes only a small fraction \(\Delta\Sigma(Q^{2})\) of the proton's spin"_. This result became known as the _"Proton Spin Crisis" (PSC)_. Usually, the nucleon spin is assumed to be given in terms of the sum of the spin contributions from quarks and gluons and the orbital angular momenta of quarks and gluons, \[\frac{1}{2}=\frac{1}{2}\Delta\Sigma(Q^{2})+\Delta G(Q^{2})+L_{q}(Q^{2})+L_{g}(Q^{2}), \tag{1}\] where \(\Delta G(Q^{2})\) is the gluon contribution to the nucleon spin, and \(L_{q}(Q^{2})\) and \(L_{g}(Q^{2})\) are the _Orbital Angular Momentum (OAM)_ contributions from quarks and gluons, respectively.
The quark contribution to the nucleon spin, \(\Delta\Sigma(Q^{2})\), can also be understood in terms of the sum over all quark flavors of the integral in \(x\) of the helicity distributions (\(\Delta q^{i}(x,Q^{2})\)) [1; 2]. In all cases, in the _Naive Parton Model_, the quantity \(\Delta q^{i}(x,Q^{2})dx\) is the number of polarized (anti)quarks of type \(q\) carrying a momentum fraction between \(x\) and \(x+dx\). The index \(i=u,d,c,s,b,t\) stands for each quark flavor. Within _Quantum Chromodynamics (QCD)_ there are both perturbative and non-perturbative corrections, in such a way that the structure functions associated with the nucleons are obtained from the partonic density functions through the _Factorization Theorem_[3]. For a complete review, see [4]. For the theoretical formalism, we point to reference [5], which we follow closely. See also [6]. Explicitly, the result from the EMC collaboration pointed out that \(\Delta\Sigma(Q^{2})=0.14\pm 0.23\). A current analysis from the COMPASS collaboration [7] reports that about \(31\%\pm 11\%\) of the proton spin comes from quarks, for \(Q^{2}=3\) GeV\({}^{2}\). The literature seems to converge for longitudinally polarized scattering, but there is still a puzzle for transversely polarized nuclear targets interacting with transversely polarized projectile nucleons (\(\approx 40\%\)) and charged leptons (\(\approx 5-10\%\)) [8]. To illustrate the current scenario, in Fig. 1 we present the contribution to the proton's spin due to quarks measured by several collaborations [1; 2; 7; 9; 10; 11; 12; 13; 14; 15; 16; 17] as a function of the respective momentum scale, \(Q^{2}\). For comparison, our results assuming the predictions from [18; 19; 20; 21; 22] are also shown. At the present level of experimental accuracy, the scaling of polarized PDFs has still not been clearly seen [4]. In fact, the structure functions associated with the quarks are the best known ones for both the unpolarized and polarized cases.
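Plugging the COMPASS value \(\Delta\Sigma\simeq 0.31\pm 0.11\) into the sum rule of Eq. (1) gives a quick estimate of how much spin must be supplied by gluons and orbital angular momentum combined; this is a simple rearrangement for illustration, not a result of the paper.

```python
def residual_spin(delta_sigma):
    """Rearranging Eq. (1): DeltaG + L_q + L_g = 1/2 - DeltaSigma / 2."""
    return 0.5 - 0.5 * delta_sigma

# COMPASS: DeltaSigma ~ 0.31 +/- 0.11 at Q^2 = 3 GeV^2
central = residual_spin(0.31)                                    # ~0.345
band = (residual_spin(0.31 + 0.11), residual_spin(0.31 - 0.11))  # ~(0.29, 0.40)
```

So roughly 0.29 to 0.40 units of spin, out of the total 1/2, remain to be accounted for by \(\Delta G\) and the OAM terms at this scale.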
Moreover, a recent work from the _JAM Collaboration_[23] based on the STAR data [24] in the range of \(0.01\leq x\leq 0.3\) and \(Q^{2}=10\) GeV\({}^{2}\), presents for the first time results favoring a nonzero helicity sea asymmetry, with \(\Delta\chi^{2}/N_{dat}\approx 1\sigma\). Such results are from a global analysis of both unpolarized and polarized PDFs. Concerning the gluon spin, we know today [4] that at small \(x\) and large \(Q^{2}\) the _gluon density function_, \(g(x,Q^{2})\), is considerably larger than the density functions associated with the quarks, which implies that in the high-energy regime the nucleon can be understood as a collective of gluons. Hence, it is straightforward to expect some degree of contribution from gluons to the nucleon spin. Indeed, in [8], which is a review of the topic, it is stated that while the contribution from the valence quarks saturates in the high-energy limit, the gluon contribution is expected to reach \(\approx 50\%\) at the present accelerator energies. This is in agreement with recent results from lattice QCD [25] and also with experimental analyses [26]. Moreover, since \(\Delta\Sigma(Q^{2})\) and \(\Delta G(Q^{2})\) are observables, overall spin conservation (\(S_{p}=1/2\)) can be used, and implies a large contribution from OAM to the proton spin [27]. Indeed, in [28] it is shown that orbital angular momentum is generated in the partonic dynamical evolution as given by the _DGLAP_ equations [29; 30; 31]. To better describe the quark and gluon content and aspects of the three-dimensional structure of the nucleon, there are generalized parton distributions (GPDs) and transverse momentum dependent distributions (TMDs) [32; 33]. Such distributions are complementary and aim to describe the plane transverse to the nucleon propagation. Another important class of parton distributions are the longitudinal spin-dependent ones, which describe the asymmetry between quarks with opposite spins in the nucleon.
Recently, several experimental collaborations have focused their efforts on measuring the (longitudinal) spin-dependent structure functions in collisions of charged leptons with polarized hydrogen, deuterium and helium-3 nuclei [10; 11; 14; 17; 34]. Current measurements cover modest kinematic ranges (\(x>10^{-3}\) and \(Q^{2}<10^{2}\) GeV\({}^{2}\)). With the existing data on spin asymmetries and spin-dependent structure functions, different authors have built fits that parameterize these data [18; 19; 20; 21; 22] and allow extrapolations beyond the observed kinematic ranges. One of the main goals of the future Electron-Ion Collider (EIC) [35] is to improve our understanding of the helicity distributions of quarks and gluons inside nucleons and heavy nuclei. This new collider is intended to increase the current observation range of these distributions: while \(x\) should be decreased to approximately \(5\cdot 10^{-5}\), \(Q^{2}\) should be increased to approximately \(10^{3}\) GeV\({}^{2}\) (see Fig. 10 in [35]). Typical IceCube events occur with \(x\sim 10^{-2}-10^{-3}\) and \(Q^{2}\sim 10^{3}\) GeV\({}^{2}\) [4], very close to the future EIC data, making the necessary extrapolation much shorter and with a high confidence level. In this work we present a study of the impact of the polarization of hadronic targets on the Deep Inelastic Scattering (DIS) of muonic neutrinos and antineutrinos, and we apply the obtained results to the absorption of neutrinos by the Earth. Such a study is strongly motivated by a probable polarization of the Earth's outer core, which is generated by a turbulent flow of liquid metal [36]. We will verify whether, through the interaction of neutrinos with the Earth, it is possible to estimate the polarization of the outer core. This study is also motivated by the recent IceCube measurement of the cross section of muonic neutrinos through the Earth's absorption [37].
This measurement indicates that the cross section, in the observed energy range (\(6.3-980\) TeV), is about 1.3 times the cross section predicted by the Standard Model [38]. The interaction of neutrinos with the Earth can be measured through the attenuation of the incident neutrino flux that crosses the Earth and is measured by IceCube, as illustrated in Fig. 2. The IceCube detector can measure the High Energy Neutrino Sample, with energies above 60 TeV. In this energy range the predominant interaction is deep inelastic scattering [39]. For the neutrino to reach the detector, it must not interact via charged current when crossing the Earth; neutral-current interactions only decrease the energy of the beam. Although in this work we focus on the analysis of the interaction of muon neutrinos, the results are also very similarly applicable to electron neutrinos.

Figure 1: Compilation of experimental results for the quark contribution to the proton spin from [1; 2; 7; 9; 10; 11; 12; 13; 14; 15; 16; 17]. For comparison, the predictions from KATAO [18; 19; 20] and DSSV [21; 22] are also shown.

In the limit where the neutrino energy is much larger than the mass of the lepton produced, electron and muon neutrinos have the same cross section with hadronic targets. However, for the analysis of electron neutrino absorption it is also necessary to include the effects of the interaction with electrons (the Glashow resonance) [40; 41]. For recent work on neutrino absorption considering the Glashow resonance, see Refs. [42; 43; 44].

## II Formalism

A correct description of the proton spin from quarks is of particular interest for neutrino physics. Neutrinos and antineutrinos are left-handed and right-handed chiral eigenstates, respectively.
Thus, the (anti)neutrino couples to the (right-)left-handed component of the quark wavefunction, and any imbalance between the distributions of right- and left-handed quarks in the nucleons will specifically impact the (anti)neutrino scattering and absorption cross sections. At this point one must notice that weak neutrino-nucleon interactions are given in terms of chiral states; nevertheless, the information about how much of the nucleon spin is due to the quarks, _i.e._, the value of \(\Delta\Sigma(Q^{2})\), is not taken into account in the most common procedure for calculating the neutrino-nucleon cross section in the deep inelastic regime 1. At sufficiently high energies, due to _asymptotic freedom_ [45; 46], it is possible to describe the neutrino-nucleon interaction in terms of neutrino scattering on the free quarks that constitute the nucleon. The assumption of an equal distribution of left and right quark spins leads to the average over the initial polarization states. As both the _charged current (CC)_ and _neutral current (NC)_ DIS processes are inclusive reactions, they take into account all the possible final hadronic states, and a sum over all the final-state polarization possibilities is also applied [47]. In [5], the formalism to include spin effects in the (anti)neutrino-nucleon interaction is presented. Footnote 1: At high neutrino energies, where the description of the nucleon target in terms of form factors is no longer available. In the unpolarized DIS (CC), the neutrino \(\nu_{l}\) (antineutrino \(\bar{\nu}_{l}\)) with energy \(E_{\nu}\) interacts by exchanging a virtual boson \(W^{\pm}\) of four-momentum \(q\) (\(q^{2}=-Q^{2}\)). The initial lepton becomes the associated charged lepton \(l^{\pm}(=e,\mu,\tau)\) with energy \(E^{\prime}\) and the hadron goes to an unknown state of invariant mass \(W\), characterized by \(W^{2}>m_{N}^{2}\), with \(m_{N}\) being the mass of the hadronic target.
In terms of Bjorken's \(x\), the inelasticity \(y=(E_{\nu}-E^{\prime})/E_{\nu}\) and the virtuality of the exchanged boson \(Q^{2}\), the unpolarized double-differential cross section of neutrino DIS is given by [48] \[\begin{split}\frac{\mathrm{d}\sigma^{\nu(\bar{\nu})}}{\mathrm{d}x\mathrm{d}y}=\frac{G_{F}^{2}E_{\nu}m_{N}}{\pi}\left(\frac{M_{W}^{2}}{Q^{2}+M_{W}^{2}}\right)^{2}\left\{\left(y^{2}x+\frac{m_{l}^{2}y}{2E_{\nu}m_{N}}\right)F_{1}(x,Q^{2})+\left(1-y-\frac{m_{l}^{2}}{4E_{\nu}^{2}}-\frac{m_{N}xy}{2E_{\nu}}\right)F_{2}(x,Q^{2})+\right.\\ \left.+(-)\left(xy-\frac{xy^{2}}{2}-\frac{m_{l}^{2}y}{4E_{\nu}m_{N}}\right)F_{3}(x,Q^{2})+\frac{m_{l}^{2}(m_{l}^{2}+Q^{2})}{E_{\nu}^{2}m_{N}^{2}x}F_{4}(x,Q^{2})-\frac{m_{l}^{2}}{E_{\nu}m_{N}}F_{5}(x,Q^{2})\right\}\ ,\end{split} \tag{2}\] where \(G_{F}\) is the Fermi constant, \(M_{W}\) the \(W^{\pm}\) boson mass, \(m_{l}\) the mass of the lepton produced and \(F_{i}\) are the spin-independent structure functions.

Figure 2: Attenuation of the neutrino flux by the Earth's absorption.

In the parton model, \(F_{2}(x,Q^{2})\) is interpreted in terms of the sum of the helicity distributions of the quarks whose flavors can interact with the neutrino [49]. \(F_{1}(x,Q^{2})\) can be written in terms of \(F_{2}(x,Q^{2})\) through the Callan-Gross relation, and \(F_{3}(x,Q^{2})\) is associated with the quark-antiquark asymmetry. In this paper we use the CTEQ18 parameterization [50] for the quark distributions, which uses the DGLAP evolution equations. In the high-energy limit, in which we are interested, we also assume that the Albright-Jarlskog relations [51] hold. The standard variables of DIS are connected by \(Q^{2}=2E_{\nu}m_{N}xy\) in the rest frame of the target. Moreover, when we consider a polarized hadronic target, the cross section of Eq.
2 is modified by a term dependent on the hadronic spin, given by [5; 6] \[\frac{\mathrm{d}\Delta\sigma^{\nu(\bar{\nu})}}{\mathrm{d}x\mathrm{d}y}=\frac{G_{F}^{2}m_{N}E_{\nu}}{\pi}\left(\frac{M_{W}^{2}}{Q^{2}+M_{W}^{2}}\right)^{2}\lambda_{N}\left\{(-)\left[-yx(2-y)+\frac{2x^{3}y^{3}m_{l}^{2}}{Q^{2}}\right]\,g_{1}(x,Q^{2})+(-)\left(\frac{4x^{3}y^{2}m_{l}^{2}}{Q^{2}}\right)g_{2}(x,Q^{2})+\right.\] \[\left.+\frac{2xym_{l}^{2}}{Q^{2}}\left(1-y-\frac{x^{2}y^{2}m_{l}^{2}}{Q^{2}}\right)g_{3}(x,Q^{2})+\left[-1+y-\frac{2x^{2}ym_{l}^{2}}{Q^{2}}\left(1-\frac{3y}{2}-\frac{x^{2}y^{2}m_{l}^{2}}{Q^{2}}\right)\right]g_{4}(x,Q^{2})+\right. \tag{3}\] \[\left.+\left(-y^{2}x+\frac{2x^{4}y^{3}m_{l}^{2}}{Q^{2}}\right)g_{5}(x,Q^{2})\right\}\ ,\] where \(\lambda_{N}\) is the helicity of the hadronic target and the \(g_{i}(x,Q^{2})\) are the spin-dependent structure functions. Unlike the \(F_{i}(x,Q^{2})\), in the parton model the \(g_{i}(x,Q^{2})\) are written in terms of the difference of the helicity distributions of each quark flavor, describing the net amount of quarks with spin in a given direction [6]. In the limit of the energies of interest in this work, we assume the validity of the Dicus relation [52]. Such a relation allows us to write \(g_{4}\) in terms of \(g_{5}\), similarly to the Callan-Gross relation: \(g_{4}(x,Q^{2})=2xg_{5}(x,Q^{2})\). Both relations emerge from the observation that, when the masses involved are neglected, helicity is conserved in the quark-gluon coupling. The polarized structure functions are described in [5], where it is shown that the contributions from \(g_{2}(x,Q^{2})\) and \(g_{3}(x,Q^{2})\) to the cross sections are suppressed by factors like \(m_{l}^{2}/Q^{2}\), which could be measured at neutrino factories. However, in this work we are interested in \(Q^{2}\gtrsim M_{W}^{2}\), which is the typical value for neutrino-nucleon interactions at the IceCube neutrino observatory.
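To make the structure of Eqs. 2 and 3 concrete, the sketch below evaluates both in the high-energy limit just described: all \(m_{l}^{2}/Q^{2}\) terms are dropped (so the \(F_{4}\), \(F_{5}\), \(g_{2}\) and \(g_{3}\) contributions vanish), \(F_{1}\) comes from the Callan-Gross relation and \(g_{4}=2xg_{5}\) from the Dicus relation. The toy structure functions are invented placeholders, not the CTEQ18, KATAO or DSSV parameterizations.

```python
import numpy as np

GF = 1.16638e-5    # Fermi constant [GeV^-2]
MW = 80.379        # W boson mass [GeV]
MN = 0.939         # nucleon mass [GeV]
CONV = 0.3894e-27  # GeV^-2 -> cm^2

# Toy structure functions (illustrative placeholders only).
F2  = lambda x, Q2: 1.2 * x**0.5 * (1.0 - x) ** 3
xF3 = lambda x, Q2: 0.6 * x**0.8 * (1.0 - x) ** 3
g1  = lambda x, Q2: 0.5 * x**0.5 * (1.0 - x) ** 3
g5  = lambda x, Q2: 0.3 * x**0.5 * (1.0 - x) ** 3

def dsigma(E, x, y, anti=False):
    """Eq. 2 with the lepton-mass terms dropped; F1 from Callan-Gross."""
    Q2 = 2.0 * E * MN * x * y
    prop = (MW**2 / (Q2 + MW**2)) ** 2
    sign = -1.0 if anti else 1.0               # (-) of Eq. 2 for antineutrinos
    val = (y**2 * x * F2(x, Q2) / (2.0 * x)    # F1 term
           + (1.0 - y) * F2(x, Q2)
           + sign * y * (1.0 - y / 2.0) * xF3(x, Q2))
    return GF**2 * E * MN / np.pi * prop * val * CONV  # cm^2 per unit x, y

def ddelta_sigma(E, x, y, lam=-1.0, anti=False):
    """Surviving terms of Eq. 3 for m_l^2/Q^2 -> 0, with g4 = 2x g5 (Dicus)."""
    Q2 = 2.0 * E * MN * x * y
    prop = (MW**2 / (Q2 + MW**2)) ** 2
    sign = -1.0 if anti else 1.0
    val = (sign * (-y * x * (2.0 - y)) * g1(x, Q2)
           + (y - 1.0) * 2.0 * x * g5(x, Q2)   # g4 term via the Dicus relation
           - y**2 * x * g5(x, Q2))
    return GF**2 * E * MN / np.pi * prop * lam * val * CONV

E, x, y = 1.0e5, 0.01, 0.5
print(dsigma(E, x, y), ddelta_sigma(E, x, y))
```

The polarized correction is linear in the target helicity \(\lambda_{N}\), so flipping the target spin flips its sign, which is the mechanism behind the opposite neutrino/antineutrino behavior discussed later.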
Hence, we can disregard the contributions from \(g_{2}(x,Q^{2})\) and \(g_{3}(x,Q^{2})\). Also, in the same limit, as the same structures appear in both the polarized and unpolarized cases, the polarized contribution to the neutrino-nucleon cross section can be obtained from Eq. 2 by replacing \(F_{1}(x,Q^{2})\rightarrow-g_{5}(x,Q^{2})\), \(F_{2}(x,Q^{2})\rightarrow-g_{4}(x,Q^{2})\), and \(F_{3}(x,Q^{2})\to 2g_{1}(x,Q^{2})\). An accurate description of the neutrino-nucleon cross section, as well as of the distribution of matter in the interior of the Earth, is fundamental to estimate the absorption of neutrinos that cross the Earth, given that these are the fundamental ingredients of the calculation. The probability of the neutrino crossing without being absorbed can be quantified with [41] \[P_{Shad}(E_{\nu},\theta_{z})=\exp\left[-N_{A}\sigma(E_{\nu})\int_{0}^{r(\theta_{z})}\rho_{N}(r)\mathrm{d}r\right]\ , \tag{4}\] where \(N_{A}\) is Avogadro's number, \(\rho_{N}(r)\) the Earth's density profile and \(r(\theta_{z})=-2R_{Earth}\cos\theta_{z}\) is the total distance travelled by the neutrino. In this work we use the PREM model [53] for the description of the Earth's density profile.

Figure 3: The thickness of the Earth in centimeters of water equivalent for unpolarized nucleons and with the outer core partially polarized. We use the PREM model [53].

In Fig. 3 we show the thickness of matter traversed by the neutrino as a function of the zenith angle. We see two distinct cases: the continuous black line shows the thickness of matter crossed without considering polarization, while the dashed curves show the thickness of matter traversed unpolarized (red) and polarized (blue), considering a hypothetical case of 30% polarization in the outer core. The PREM model indicates that the outer core is located between 1221.5 km and 3480.0 km, which implies that the neutrino crosses this potentially polarized layer only if it arrives with \(\cos(\theta_{z})\) less than \(-0.84\).
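Eq. 4 can be evaluated directly once a density profile and a cross section are chosen. The sketch below uses a crude three-layer stand-in for the PREM profile (the layer boundaries are those quoted in the text; the round density values are invented for illustration) and integrates the column depth along the chord of length \(-2R_{Earth}\cos\theta_{z}\):

```python
import numpy as np

R_EARTH = 6371.0e5   # Earth radius [cm]
N_A = 6.02214076e23  # Avogadro's number [mol^-1]

def density(r_km):
    """Crude three-layer stand-in for the PREM profile (g/cm^3);
    the layer densities here are illustrative round values."""
    if r_km < 1221.5:
        return 13.0   # inner core
    if r_km < 3480.0:
        return 11.0   # outer core
    return 4.5        # mantle + crust

def p_shad(sigma_cm2, cos_theta_z, n=4000):
    """Eq. 4: survival probability along the chord r(theta_z) = -2 R cos(theta_z)."""
    if cos_theta_z >= 0.0:
        return 1.0  # down-going: negligible matter traversed
    chord = -2.0 * R_EARTH * cos_theta_z
    s = np.linspace(0.0, chord, n)
    # distance from the Earth's center at path position s along the chord
    r = np.sqrt(R_EARTH**2 + s**2 + 2.0 * R_EARTH * s * cos_theta_z)
    rho = np.array([density(ri / 1.0e5) for ri in r])
    column = float(np.sum((rho[:-1] + rho[1:]) * np.diff(s)) / 2.0)  # g/cm^2
    return float(np.exp(-N_A * sigma_cm2 * column))

print(p_shad(1.0e-33, -1.0))  # vertically up-going neutrino, toy cross section
```

A neutrino only samples the outer-core density when the minimum radius along the chord, \(R_{Earth}\sin\theta_{z}\), drops below 3480 km, which reproduces the \(\cos(\theta_{z})<-0.84\) condition quoted above.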
## III Results

Initially, we present in Fig. 4 the cross sections of (a) muon and (c) tau neutrinos, and of (b) muon and (d) tau antineutrinos, with an isoscalar target as a function of the (anti)neutrino energy. We calculate the cross sections for unpolarized targets and for targets polarized with helicity \(\lambda_{N}=-1\). For the calculation of the polarized cross sections, we use two different parameterizations of the spin-dependent structure functions, DSSV [21] and KATAO [18]. Both parameterizations lead to similar results. The impact of the target polarization on the cross section becomes less significant with increasing energy, practically disappearing for energies above \(10^{7}\) GeV.

Figure 4: Cross section for muon (a) neutrino and (b) antineutrino DIS with the isoscalar target. In (c) and (d) we present the same results for tau neutrinos. We consider two distinct cases: unpolarized target and polarized target with the DSSV [21] and KATAO [18] parameterizations of the spin-dependent structure functions.

For lower energies of the incoming neutrino, \(10^{3}-10^{4}\) GeV, the unpolarized and polarized cross sections differ by a multiplicative factor of \(0.7-1.3\), depending on the parameterization of the spin-dependent structure functions and on whether the beam is neutrino or antineutrino. To better trace the origin of the polarization effect on the neutrino-nucleon interaction (CC), we present Fig. 5. In it we quantify the difference between the unpolarized (UU) and polarized (LL) differential cross sections, normalized by the unpolarized differential cross section. \(\theta_{z}\) is the angle between the direction of arrival of the neutrino (antineutrino) and the spin of the isoscalar target, considering again \(\lambda_{N}=-1\). We clearly see that the difference between the differential cross sections is maximized when the spins are parallel or antiparallel to the direction of propagation of the incident neutrino.
In \(Q^{2}\), this normalized difference is maximized in different regions for neutrinos and antineutrinos: while for the neutrino we have a maximum at \(\approx 80\) GeV\({}^{2}\), for the antineutrino the maximum lies beyond \(200\) GeV\({}^{2}\), the region with the smallest contribution to the total cross section. In Fig. 6 we present our result for the probability, \(P_{Shad}\), of muonic neutrinos crossing the Earth without being absorbed as a function of the energy and of the cosine of the zenith angle of incidence of the neutrino. In the upper panels we present \(P_{Shad}\) for antineutrinos and in the lower panels for neutrinos. In Figs. 6(a) and 6(c) we do not consider any polarization in the Earth. In Figs. 6(b) and 6(d), \(P_{Shad}\) is calculated considering \(100\%\) polarization in the outer core of the Earth. The calculation of the cross sections disregarding polarization is performed with the structure functions constructed with the quark distributions parameterized by CTEQ18 [50]. For the cross section with the polarization correction of the hadronic target (Figs. 6(b) and 6(d)), we used the KATAO parameterization [18] for the spin-dependent structure functions, besides, of course, CTEQ18 for the spin-independent structure functions. The choice of the KATAO parameterization for this result makes practically no difference relative to the choice of DSSV because, as previously discussed and illustrated in Fig. 4, both lead to very similar cross sections. In Fig. 7 we present the difference between the absorption probabilities for the unpolarized and polarized Earth. Figs. 7(a) and 7(c) (7(b) and 7(d)) show the results for the absorption of antineutrinos (neutrinos) considering two cases: \(30\%\) and \(100\%\) polarization in the outer core, respectively. The effect is restricted to the region where the neutrino arrives with \(\cos\left(\theta_{z}\right)<-0.84\), given that this is the necessary condition for it to cross the outer core.
We can see that neutrinos and antineutrinos show opposite effects: while \(P_{Shad}\) for neutrinos decreases with the polarization effect, \(P_{Shad}\) for antineutrinos increases with it. This fact hinders the experimental validation of the model, since IceCube does not distinguish the charge of the produced lepton. Even considering \(100\%\) polarization the effect is small, although it lies within the IceCube observation region (\(10^{3}-10^{6}\) GeV). To better quantify the difference described above and presented in Fig. 7, we calculate the percentage difference between the unpolarized and polarized absorptions, \((P_{Shad}^{UNPOL.}-P_{Shad}^{POL.})/P_{Shad}^{UNPOL.}\). We estimate that in the IceCube observation region where the absorption is significant, with \(30\%\) polarization the absorption changes between \(0\%\) and \(\pm 5\%\). For \(100\%\) polarization this percentage rises to about \(\pm 18\%\). It is still possible to observe that the mentioned effect is maximized as \(\cos\left(\theta_{z}\right)\rightarrow-0.98\), when the neutrino crosses the largest possible amount of the outer core. Despite these significant percentage changes in the absorption, it is very difficult to observe them, because high percentages of polarization are required to produce them, and the effects on neutrinos and antineutrinos are of similar but opposite magnitudes.

Figure 5: Difference between the unpolarized and polarized differential cross sections, normalized by the unpolarized differential cross section, for an incident (a) neutrino and (b) antineutrino. We calculate using the KATAO [18] and CTEQ18 [50] parameterizations, for incident neutrinos and antineutrinos with energies of \(10^{3}\) GeV.

To observe this effect, a future detector capable of distinguishing between neutrinos and antineutrinos would be needed.

## IV Summary

In this study we have investigated the impact of the polarization of the Earth's outer core on the absorption of neutrinos in the IceCube observation region.
Our main motivation was to verify whether a possible estimate of the polarization of the Earth's outer core can be made in the future through the attenuation of the flux of neutrinos that cross the Earth. Our results showed that the effects of polarization on the absorption, although within the IceCube region, are small even considering 100% polarization. In addition, while the absorption of neutrinos decreases, that of antineutrinos increases with the polarization of the hadronic targets, making it a difficult task to estimate the nuclear polarization from the attenuation of the neutrino flux. Given the magnitude of the impact of polarization on neutrino absorption, a more detailed analysis of the angular and energy distributions of the expected number of events in the IceCube detector becomes unfeasible. The future EIC may change the current view we have of the contribution of sea quarks to the proton spin, and motivate more detailed analyses of neutrino absorption considering polarization in the Earth. Furthermore, with the neutrino detectors of the future, such as IceCube-Gen2 [54] and GRAND [55], the prospect of measuring these smaller-magnitude effects may become feasible. This work is far from a comprehensive estimate of possible spin QCD effects. Even if the Earth's polarization is exactly zero, it does not mean that spin-dependent effects vanish:

Figure 6: Probability of the antineutrino, (a) and (b), and neutrino, (c) and (d), crossing the Earth without interacting via charged current as a function of the energy and \(\cos(\theta_{z})\), considering the unpolarized Earth, (a) and (c), and with 100% polarization in the outer core, (b) and (d).

It is known that spin correlations in nuclei are strong, i.e. each nucleon's polarization depends on the others. "One level down", parton-level correlations are both strong and relatively unknown (for instance, is the up quark more likely to be aligned or anti-aligned with the proton's spin?).
Accounting for these effects would require convolving into the Earth's profile both the nuclear spin wavefunction of iron and medium-modified quark TMDs, and is currently beyond the scope of this work. We show the impact of polarization only on the absorption of muon neutrinos. However, the effect on electron neutrinos is essentially the same, as they have the same cross section in the high-energy limit. For tau neutrinos there is a greater difference in the cross section at lower energies, due to the mass of the tau produced. However, a more detailed analysis of tau neutrino absorption for future detectors presupposes the study of flux regeneration by tau decay, which is outside the scope of this work.

###### Acknowledgements.

This work was partially financed by the Brazilian funding agencies CNPq and CAPES. G.T. acknowledges support from the CNPq productivity grant 306152/2020-7, the FAPESP research grant 2021/01700-2, participation in the FAPESP Thematic Project 2017/05685-2, and the grant BPN/ULM/2021/1/00039 from the Polish National Agency for Academic Exchange.

Figure 7: Difference between the probability of the antineutrino, (a) and (b), and neutrino, (c) and (d), crossing the Earth without interacting via charged current as a function of the energy and \(\cos(\theta_{z})\), considering the unpolarized Earth, (a) and (c), and with 100% polarization in the outer core, (b) and (d).
2310.07284
Typing to Listen at the Cocktail Party: Text-Guided Target Speaker Extraction
Humans can easily isolate a single speaker from a complex acoustic environment, a capability referred to as the "Cocktail Party Effect." However, replicating this ability has been a significant challenge in the field of target speaker extraction (TSE). Traditional TSE approaches predominantly rely on voiceprints, which raise privacy concerns and face issues related to the quality and availability of enrollment samples, as well as intra-speaker variability. To address these issues, this work introduces a novel text-guided TSE paradigm named LLM-TSE. In this paradigm, a state-of-the-art large language model, LLaMA 2, processes typed text input from users to extract semantic cues. We demonstrate that textual descriptions alone can effectively serve as cues for extraction, thus addressing privacy concerns and reducing dependency on voiceprints. Furthermore, our approach offers flexibility by allowing the user to specify the extraction or suppression of a speaker and enhances robustness against intra-speaker variability by incorporating context-dependent textual information. Experimental results show competitive performance with text-based cues alone and demonstrate the effectiveness of using text as a task selector. Additionally, they achieve a new state-of-the-art when combining text-based cues with pre-registered cues. This work represents the first integration of LLMs with TSE, potentially establishing a new benchmark in solving the cocktail party problem and expanding the scope of TSE applications by providing a versatile, privacy-conscious solution.
Xiang Hao, Jibin Wu, Jianwei Yu, Chenglin Xu, Kay Chen Tan
2023-10-11T08:17:54Z
http://arxiv.org/abs/2310.07284v4
# Typing to Listen at the Cocktail Party: Text-Guided Target Speaker Extraction

###### Abstract

Humans possess an extraordinary ability to selectively focus on the sound source of interest amidst complex acoustic environments, commonly referred to as cocktail party scenarios. In an attempt to replicate this remarkable auditory attention capability in machines, target speaker extraction (TSE) models have been developed. These models leverage the pre-registered cues of the target speaker to extract the sound source of interest. However, the effectiveness of these models is hindered in real-world scenarios due to the unreliability or even absence of pre-registered cues. To address this limitation, this study investigates the integration of natural language descriptions to enhance the feasibility, controllability, and performance of existing TSE models. Specifically, we propose a model named LLM-TSE, wherein a large language model (LLM) extracts useful semantic cues from the user's typed text input. These cues can serve as independent extraction cues, task selectors to control the TSE process, or complements to the pre-registered cues. Our experimental results demonstrate competitive performance when only text-based cues are presented, the effectiveness of using input text as a task selector, and a new state-of-the-art when combining text-based cues with pre-registered cues. To our knowledge, this is the first study to successfully incorporate LLMs to guide target speaker extraction, which can be a cornerstone for cocktail party problem research. Demos are provided at [https://github.com/haoxiangsnr/llm-tse1](https://github.com/haoxiangsnr/llm-tse1) Footnote 1: Source code and datasets will be publicly available after review.

## 1 Introduction

The "Cocktail Party Problem" (E. Colin, 1953) - a term coined to describe a scenario where multiple sound sources are engaged in simultaneous conversation, yet a listener can selectively concentrate on a single sound source.
This scenario represents a complex challenge in auditory perception (Haykin and Chen, 2005; Mesgarani and Chang, 2012; Bizley and Cohen, 2013) and serves as a remarkable demonstration of the intricate sound processing that occurs within the human auditory system. The human auditory system manages this complexity with remarkable efficacy, seemingly with ease. However, machines, such as hearing-aid devices (Shinn-Cunningham and Best, 2008), teleconferencing systems (Chen et al., 2020; Raj et al., 2021; Yoshioka et al., 2018), and hands-free human-machine interfaces (e.g., TVs, smartphones) (Gannot et al., 2017), encounter significant challenges in contexts where multiple speakers talk at the same time. Studies on computational auditory scene analysis (CASA) (Lyon, 1983; Meddis and Hewitt, 1991; Seltzer et al., 2003; Wang and Brown, 2006), non-negative matrix factorization (NMF) (Cichocki et al., 2006; Virtanen, 2007; Parry and Essa, 2007), and factorial Hidden Markov Models and Gaussian Mixture Models (HMM-GMM) (Virtanen, 2006; Stark et al., 2011) provide invaluable insights into solving the cocktail party problem. However, these methods are often limited by the representation power of their models, resulting in poor performance in complex acoustic environments. The advent of deep learning has paved the way for the application of deep neural networks (DNNs) in addressing this challenging problem. Existing DNN-based techniques can be broadly classified into two main categories: blind source separation (BSS) (Pal et al., 2013; Hershey et al., 2016; Yu et al., 2017; Luo and Mesgarani, 2019) and target speaker extraction (TSE) (Luo et al., 2018; Zmolkova et al., 2019; Xu et al., 2020; Ge et al., 2020; Pan et al., 2022; Zmolkova et al., 2023). BSS techniques usually adopt DNNs to estimate an auditory mask for each speaker. The mask is then leveraged to separate each speaker's voice into an individual stream from the mixture speech captured by a microphone.
A difficulty in this process is the problem of global permutation ambiguity (Hershey et al., 2016), which occurs when attempting to accurately assign the output of a multi-source separation system to the correct source. To address this ambiguity problem, deep clustering (DC) techniques (Hershey et al., 2016; Isik et al., 2016; Wang et al., 2018) were proposed to group the spectro-temporal features belonging to the same speaker together through a clustering scheme. Permutation invariant training (PIT) (Yu et al., 2017; Kolback et al., 2017) was proposed, which finds the minimal loss over all permutations between the extracted streams and the reference speeches. Typically, these methods require prior knowledge or estimation of the number of speakers in the mixture. However, in real-world scenarios, the number of speakers is hard to predict in advance. Target speaker extraction provides an alternative solution to address the challenges of the unknown number of speakers and global permutation ambiguity. This approach involves providing a cue that is related to the desired speaker, such as a pre-recorded speech describing the voice characteristics (Xu et al., 2020), a spatial cue indicating the speaker's direction (Ge et al., 2022), or synchronous lip movement (Pan et al., 2022). By using these specified cues, only the target speaker's voice is extracted, thereby avoiding the issue of the unknown number of speakers and global permutation ambiguity. However, these pre-registered cues may vary substantially or even be absent in real environments, limiting the effectiveness of these systems. To overcome the aforementioned limitation, as shown in Figure 1, we propose a novel text-guided TSE model, LLM-TSE, incorporating text descriptions as additional cues to enhance the feasibility, controllability, and performance of existing TSE models.
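The permutation invariant training objective mentioned above can be sketched in a few lines: compute the reconstruction loss for every assignment of estimated streams to reference streams and keep the minimum. This utterance-level MSE version is a simplified stand-in for the losses used in the cited systems:

```python
import itertools
import numpy as np

def pit_mse(estimates, references):
    """Utterance-level PIT loss: the minimum mean-squared error over
    all assignments of the C estimated sources to the C references."""
    C = len(estimates)
    best_loss, best_perm = np.inf, None
    for perm in itertools.permutations(range(C)):
        loss = np.mean([np.mean((estimates[i] - references[j]) ** 2)
                        for i, j in enumerate(perm)])
        if loss < best_loss:
            best_loss, best_perm = loss, perm
    return best_loss, best_perm

# Two toy "sources"; the estimates come out in swapped order.
s1 = np.sin(np.arange(100) * 0.1)
s2 = np.sign(np.sin(np.arange(100) * 0.23))
loss, perm = pit_mse([s2, s1], [s1, s2])
print(loss, perm)  # the swapped assignment (1, 0) is selected with zero loss
```

Because the loss is minimized over all C! permutations, the network is never penalized for emitting the sources in a different output order, which is exactly what resolves the global permutation ambiguity; the factorial cost is why PIT is typically used for small C.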
Specifically, we leverage the power of large language models (LLMs) to extract meaningful semantic cues from the user's typed text input. These text descriptions encompass various aspects of human auditory perception, including speaker characteristics, language, conversation contents, room characteristics, etc. These cues can serve as independent extraction cues, task selectors to control the TSE process, or complements to the pre-registered cues. By incorporating text descriptions as additional cues, we demonstrate that the performance of TSE models is significantly enhanced in various scenarios. The contributions of this work can be summarised as follows: * To the best of our knowledge, this is the first study to utilize natural language description as extraction cues for target speaker extraction. We show these semantic cues possess high discriminative power and, therefore, can significantly enhance the feasibility of existing TSE methods. * Our system implements a control mechanism through the natural language description to facilitate the speaker extraction process. This approach enables us to selectively retain or remove the source of interest based on the semantic concepts expressed in the text. By using text as a control mechanism, our system becomes a unified and flexible approach that eliminates the need for training multiple systems. * Our system represents a significant advancement in TSE by integrating context-dependent information from typed descriptions with pre-registered cues. Unlike traditional cues, which are typically pre-recorded and isolated from the current acoustic environment, our system captures complementary cues from human perception.

Figure 1: Comparison between the conventional TSE system and our proposed text-guided TSE system. The conventional systems rely on the pre-registered voiceprint of the target speaker as an extraction cue, while our system offers the flexibility to incorporate text-based cues to facilitate the target speaker extraction.
By incorporating additional cues that align with human perception, our system achieves a more accurate and comprehensive representation of speech mixtures, thereby improving the effectiveness of TSE in practical scenarios.

## 2 Text-Guided Target Speaker Extraction

The proposed LLM-TSE model opens up a plethora of novel application scenarios, surpassing the capabilities of traditional TSE techniques. As depicted in Figure 2, these application scenarios can be divided into the following four categories:

**Use text as transcription snippets:** Humans utilize discernible cues in relatively clean speech segments to enhance the perception of highly corrupted speech segments. Analogously, the LLM-TSE model can leverage distinguishable acoustic cues, in the form of transcription snippets, to facilitate speaker extraction, surpassing the capabilities of current TSE models.

**Use text as semantic description:** Apart from the above content-based cues, humans employ many other perceptual cues based on the distinguishing characteristics between competing speakers, such as gender, language, loudness level, and reverberation in the audio signal. The LLM-TSE model enables users to incorporate such perceptual cues as text-based semantic descriptions to exert control over the process of target speaker extraction. Notably, these perceptual cues can be considered as independent pre-registered cues.

**Use text as a task selector:** During a conversation involving multiple speakers, humans often switch their focus from one speaker to another. In addition, the speaker of interest at one moment may become a distraction at a later moment. In contrast to existing TSE systems that can only concentrate on a pre-registered speaker, the proposed LLM-TSE model empowers users with the flexibility to decide whether to retain or exclude the pre-registered speaker from the audio mixture, expanding the capabilities beyond what is currently achievable.
**Use text to complement the pre-registered cues:** In conventional TSE systems, the voice of the target speaker is typically pre-recorded in an acoustic environment that may differ substantially from the actual deployment environment. This discrepancy significantly affects the robustness of conventional TSE systems. In contrast, the proposed LLM-TSE model has the ability to compensate for these differences by providing complementary cues in addition to the pre-registered ones, such as the speaker's location, language, loudness level, etc. Consequently, it generates a more comprehensive and accurate representation of the target speaker that can significantly enhance the system's robustness.

Figure 2: New application scenarios enabled by the proposed LLM-TSE model.

## 3 LLM-TSE Model

As illustrated in Figure 3, the proposed LLM-TSE model follows a processing pipeline of Encoding-Fusion-Extraction-Decoding. In the encoding phase, three distinct encoders are employed to convert the pre-registered speech, text prompts, and input audio mixture into corresponding embeddings. Leveraging the fused embeddings representing the enrolled speech and text cues, the extractor then selectively extracts the desired sound source from the input audio mixture. Finally, the frequency-domain feature representation obtained from the extractor is transformed back into the time domain and output as the extracted speech.

**Mixture Encoder and Decoder:** The mixture encoder transforms the input audio mixture from the time domain into a feature representation that can be more effectively handled by the extractor.
This transformation is realized by convolving each audio frame of length \(L\) with a set of \(N\) 1-D convolution filters \(\{u_{n}(t)\}_{n=\{0..N-1\}}\), which can be expressed as follows: \[\mathbf{X}(k,n)=\sum_{t=0}^{L-1}x(t+kH)u_{n}(t),\quad n\in\{0,\dots,N-1\}, \tag{1}\] where \(x(t)\) is the input signal, \(k\in\{0,\dots,K-1\}\) is the frame index, \(H\) is the hop size, and \(\mathbf{X}(k,n)\) is the result of the convolution operation. Similarly, the decoder maps the extracted feature, denoted as \(\mathbf{Y}(k,n)\), back to the time domain via a transposed 1-D convolution operation with \(N\) synthesis filters \(\{v_{n}(t)\}_{n=\{0..N-1\}}\), and each has a length of \(L\): \[\hat{y}(t)=\sum_{k=0}^{K-1}\sum_{n=0}^{N-1}\mathbf{Y}(k,n)v_{n}(t-kH), \tag{2}\] where \(\hat{y}(t)\) is the extracted audio signal in time domain. **Text Cue Encoder:** We utilize the LLaMA-2 7B Chat LLM, a dialogue-fine-tuned version of the LLaMA-2 (Touvron et al., 2023), to obtain discriminative semantic embeddings from the user's text input. LLaMA-2 is pre-trained on a combination of natural language and programming language corpora in a self-supervised manner. LLaMA-2 7B Chat LLM is further fine-tuned from LLaMA-2 via instruction-tuning, which significantly enhances its performance on various reasoning and generation tasks. During our model training, instead of performing full fine-tuning on the adopted LLM text encoder, we adopt the parameter-efficient Low-Rank Adaptation (LoRA) technique (Hu et al., 2021). LoRA introduces a small set of parameters into the frozen LLaMA-2 7B Chat LLM, which are referred to as LoRA adapters. Specifically, one LoRA adapter is attached to each LLM layer, modifying its frozen parameter by adding a low-rank learnable matrix of the same size. In the proposed LLM-TSE model, we apply the LoRA adapters to only modify keys and queries in each self-attention layer. Ultimately, we only add 12% more trainable parameters. 
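The low-rank reparameterization behind LoRA can be sketched as follows. This is a minimal plain-Python illustration of the idea only: the matrix sizes, the \(\alpha/r\) scaling convention, and all function names are our own illustrative choices, not taken from the paper's implementation (which attaches adapters to the key and query projections of each self-attention layer).

```python
# Sketch of a LoRA-style update: the frozen weight W is used as
# W + (alpha / r) * B @ A, where only the small matrices A (r x d_in)
# and B (d_out x r) are trainable. B is zero-initialized, so training
# starts exactly from the frozen model's behavior.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def madd(A, B):
    """Element-wise sum of two matrices of the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def lora_forward(x, W, A, B, alpha=16, r=2):
    """Compute y = (W + (alpha/r) * B @ A) @ x with W frozen."""
    scale = alpha / r
    delta = [[scale * v for v in row] for row in matmul(B, A)]
    return matmul(madd(W, delta), x)

# Frozen 3x3 weight (identity for clarity) and a rank-2 adapter.
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
B0 = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]   # zero-init adapter
A0 = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
x = [[1.0], [2.0], [3.0]]

y = lora_forward(x, W, A0, B0)
# With B zero-initialized, the adapted layer reproduces the frozen layer.
```

Because only A and B are updated, the number of trainable parameters grows with the rank r rather than with the full weight dimensions, which is what keeps the added parameter count small.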
This approach not only helps to prevent the overfitting problem that is often encountered with a small fine-tuning dataset but also improves training efficiency.

Figure 3: Overview of the proposed LLM-TSE model architecture.

**Audio Cue Encoder:** The primary role of the audio cue encoder is to encode the optional pre-registered speech into a discriminative speaker embedding. The first step in this process involves transforming the time-domain input signal, using the above-mentioned learnable 1-D convolutional filters, into the frequency domain. Following this transformation, we utilize a series of Temporal Convolutional Network (TCN) blocks (Pandey & Wang, 2019; Luo & Mesgarani, 2019) to extract speaker-related feature representations. These TCN blocks are designed to capture the temporal dependencies in the speech signal, which are crucial for distinguishing different speakers. Finally, we take the average along the temporal dimension to generate a speaker embedding vector, which effectively captures the unique vocal attributes of the pre-registered speech that differentiate one speaker from others.

**Fusion Layer:** Here, we follow a simple concatenation approach to fuse the audio and text cues, which has been shown to be effective in many other TSE systems (Zmolikova et al., 2019; Ge et al., 2020). Specifically, we transform the text-cue and audio-cue embeddings into the same dimensionality through two linear projection layers, and then directly concatenate them to form a multi-modal representation.

**Extractor:** The last part of our model is the target extractor, which serves to estimate the target signal.
We adopt the widely used time-frequency masking-based extractor (Luo & Mesgarani, 2019; Isik et al., 2016), whose operations can be summarized as follows: \[\mathbf{M}=\text{MaskNet}(\mathbf{Z};\mathbf{\theta}^{\text{Mask}}), \tag{3}\] \[\hat{\mathbf{X}}=\mathbf{M}\otimes\mathbf{X},\] where \(\mathbf{Z}\) is the fused embedding generated from the fusion layer, \(\text{MaskNet}(\cdot)\) is a TCN-based network that estimates the time-frequency mask \(\mathbf{M}\in\mathbb{R}^{D\times N}\) for the target speaker (with \(D\) the feature dimension of each time step), \(\mathbf{\theta}^{\text{Mask}}\) is the network parameter, and \(\otimes\) denotes the element-wise Hadamard product. \(\hat{\mathbf{X}}\) is the estimated target speech signal in the frequency domain.

**Loss function:** The parameters of the proposed LLM-TSE model are optimized by minimizing the following Scale-Invariant Signal-to-Distortion Ratio (SI-SDR) (Roux et al., 2019) loss function: \[\mathcal{L}^{\text{SI-SDR}}=-10\log_{10}\left(\frac{\left\|\frac{\hat{\mathbf{y}}^{T}\mathbf{y}}{\|\mathbf{y}\|^{2}}\mathbf{y}\right\|^{2}}{\left\|\frac{\hat{\mathbf{y}}^{T}\mathbf{y}}{\|\mathbf{y}\|^{2}}\mathbf{y}-\hat{\mathbf{y}}\right\|^{2}}\right), \tag{4}\] where \(\mathbf{y}\) and \(\hat{\mathbf{y}}\) denote the target and estimated time-domain signals, respectively. The SI-SDR loss is computed directly in the time domain, which forces the model to learn how to precisely estimate both the magnitude and the phase of the target speech signal.

## 4 Experimental Evaluation

In this paper, our primary objective is to integrate text-based cues to enhance target speaker extraction systems. In the following sections, we first describe the method for simulating overlapped mixture speech data, and then the generation of the text prompts.

### Overlapped Speech Simulation

Our experiments use two speech datasets: LibriSpeech (Panayotov et al., 2015) and Multilingual LibriSpeech (MLS) (Pratap et al., 2020). LibriSpeech, a 1000-hour corpus of English audiobook speech, is known for its diverse speaker identities.
MLS, an extension of LibriSpeech, adds multiple languages, including French, German, Spanish, etc. Because the full MLS corpus is too large, we randomly selected 400 speakers per language from MLS with up to 20 utterances each. We adhered to LibriSpeech's standard training, validation, and test set division. For MLS, we randomly assigned 5% of speakers from each language to validation and test sets, respectively, with the rest for training. Our experiments cover a variety of attributes, including transcription snippets, gender, language, loudness, and far-near. For the transcription snippet task, we only use the LibriSpeech dataset and the corresponding pre-extracted forced alignment (Chodroff, 2023) data 2 to identify word timestamps from LibriSpeech. The remainder of the data for simulation is randomly selected from the LibriSpeech and MLS datasets. For generating the mixture speech, we adopt online simulation, generating the data needed for each iteration beforehand. The number of speakers in the mixture speech is limited to two, stipulating that the two speakers have different attributes for gender, language, loudness, or far-near. When generating mixture speech for the loudness task, the signal-to-noise ratio is randomly selected from -3 dB to -2 dB and 2 dB to 3 dB; the other tasks span from -3 dB to 3 dB. In the case of the distance task, we include both near (target speaker) - far (interference speaker) and far (interference speaker) - near (target speaker) scenarios. For the other tasks, near and far combinations are randomized. Room dimensions are randomly selected from lengths of 9 to 11 m, widths of 9 to 11 m, and heights of 2.6 to 3.5 m. The reverberation time ranges from 0.3 to 0.6 seconds. We use Pyroomacoustics 3 to generate Room Impulse Responses (RIRs), and the microphone is placed at the center of the room by default.
The sound source distance from the microphone varies between 0.3 and 0.5 m for the near field and between 1.5 and 2.5 m for the far field. The angle ranges from 0 to 180 degrees, and the sound source's height varies between 1.6 and 1.9 m.

Footnote 2: [https://github.com/CorentinJ/librspeech-alignments](https://github.com/CorentinJ/librspeech-alignments)

Footnote 3: [https://github.com/LCAV/pyroomacoustics](https://github.com/LCAV/pyroomacoustics)

The mixture and pre-registered speeches are set to a duration of 6 seconds, with a randomly determined overlap ratio between 40% and 70%. The pre-registered speech is randomly selected from the remaining target speaker's speech. If the training objective is to remove the target speaker, the other speaker's speech from the mixture serves as the training target. We assume that each generated mixture speech sample should exhibit a distinguishable attribute throughout the training. All experimental data is sampled at 16,000 Hz to ensure high-quality audio.

### Text Generation

We include three types of texts to explore using LLMs to enrich target speaker extraction systems. We first create ten foundational question templates for each type of task. These templates are then rephrased and expanded using ChatGPT-4-32K 4 to produce 100 diverse text prompts. We adopt a non-overlapped 80/10/10% partitioning for the training, validation, and testing sets. The text prompts used in the testing set are unseen during training.

Footnote 4: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models)

**Text as an independent extraction cue:** In this type, the text is used as an independent extraction cue. Typical texts for this task read: "Extracting a voice with (specific characteristic) from a mixture of speech", e.g., scenarios 1&2 in Figure 2.
The text description outlines the features of the voice to be extracted, including the transcription snippets of the mixture of speech, the speaker's language, gender, loudness, and far-near. For the transcription snippet task, we used 100% of the target speech text length as cues for training, testing with 50%, 80%, and 100% of the target speech text length to evaluate generalizability. This setup is highly functional, i.e., by informing the system about the audible part of the speech, the system can utilize both semantic and acoustic information to track and extract the desired speaker. It's crucial to note that the attributes utilized in this study are not exhaustive. In real-world situations, humans employ a variety of other cues, e.g., emotion, pitch, etc., to extract the sound source of interest (Haykin & Chen, 2005; Shinn-Cunningham & Best, 2008). However, exploring these additional cues extends beyond the scope of this current study and is reserved for future research. **Text as a task selector:** We propose one task type where texts can influence the system's output: target speaker extraction or removal. The text serves as a directive for the system to either extract a given speaker's voice or remove it from the mixture of audio. The generated texts are like "please remove the given voice from this audio." **Text as a complement to human perception in the audio-based extraction system:** We integrate the human understanding and interpretation of the mixture of speech into the extraction process, which can significantly enhance the system's performance. Here, we cover all semantic types mentioned above, i.e., transcription snippets, gender, language, loudness, and far-near. The generated questions are like "Extracting a speaker based on the given pre-registered speech, where the speaker possesses a (specific characteristic) within the mixture speech." 
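The template-filling step behind this prompt generation can be sketched as follows. The template wording, attribute lists, and function names below are our own illustrative placeholders, not the actual seed templates or the ChatGPT-4-rephrased prompts used in the study:

```python
import random

# Illustrative prompt construction for the three text-cue types; the real
# pipeline rephrases ten seed templates per task into 100 diverse prompts.
TEMPLATES = {
    "independent_cue": "Extract the voice of the speaker who {attribute} from the mixture.",
    "task_selector": "Please {action} the speaker matching the given voice from this audio.",
    "complement_cue": ("Extract the target speaker using the pre-registered speech, "
                       "noting that this speaker {attribute} in the mixture."),
}

ATTRIBUTES = {
    "gender": ["is female", "is male"],
    "language": ["speaks German", "speaks Spanish"],
    "loudness": ["sounds louder", "sounds quieter"],
    "far-near": ["is closer to the microphone", "is farther from the microphone"],
}

def make_prompt(cue_type, task=None, action=None, rng=random):
    """Fill one template; `action` is 'extract' or 'remove' for the selector type."""
    template = TEMPLATES[cue_type]
    if cue_type == "task_selector":
        return template.format(action=action)
    return template.format(attribute=rng.choice(ATTRIBUTES[task]))

print(make_prompt("task_selector", action="remove"))
print(make_prompt("independent_cue", task="gender"))
```

A non-overlapping train/validation/test split is then taken over the generated prompts so that test-time wordings are unseen during training, mirroring the 80/10/10% partitioning described above.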
### Results

**Efficacy of Using Input Text as Independent Cues:** Table 1 demonstrates a notable performance enhancement when text alone is employed as an extraction cue, compared to unprocessed mixture speech. The proposed LLM-TSE model is built on TD-SpeakerBeam (Delcroix et al., 2020), a state-of-the-art (SOTA) open-source target speaker extraction model. Compared to TD-SpeakerBeam, the only modification in the LLM-TSE model is the additional text-prompt encoder. This enhancement is further corroborated by Figure 4. These findings suggest that the LLM-TSE model effectively interprets the provided text descriptions, which fundamentally serve as human interpretations of auditory object differences within a speech mixture. This innovative strategy represents a significant leap in harnessing natural language processing techniques for complex auditory tasks, thereby enhancing the scope of potential applications for speaker extraction methodologies.

**Efficacy of Using Input Text as Task Selector:** In this task, our objective is to control the training targets of the separation system using natural language. The corresponding textual queries could resemble "Is there a way to remove the given voice from this mixture audio?" In Figure 4, we illustrate the capacity of our system to determine whether to extract or suppress the sound source corresponding to the provided pre-registered speech when using text descriptions. Notably, the samples displayed in the third row exemplify this capability, as they successfully suppress the target sound source associated with the pre-registered speech. Our explorations in this area are somewhat limited at this stage. More broadly, these controls could be configured with greater flexibility.
For instance, they could manipulate the degree of reverberation in the extracted speech (since individual preferences for reverberation vary) or dictate the impact range of the separation system (to avoid unnecessary non-linear-processing distortion). We intend to delve deeper into these aspects in our future work.

**Efficacy of Using Input Text to Complement the Pre-registered Cues:** Pre-registered speech primarily encodes only the speaker's vocal characteristics, regardless of any temporal or acoustic environmental context. We aim to introduce this contextual information into the target speaker extraction system utilizing text descriptions. For this purpose, a typical text description is like: "Separate the target speaker's audio based on the provided pre-registered speech as a reference, bearing in mind that I am the speaker who employs a louder tone in the mixed speech." The relevant experimental outcomes are presented in the middle section of Table 1. Upon integrating descriptions delineating auditory object differences, we noted a significant improvement in the system's performance. This enhancement was particularly prominent in the "loudness" task, where the dataset contained a pronounced loudness disparity between the two sound sources. The challenge posed by identifying the target speaker using only the pre-registered speech was substantially mitigated upon implementing our approach, producing the most substantial performance increase within this task.

| Entry | Audio cue | Text cue | Transcription Snippet (50% / 80% / 100%) | Gender | Language | Far-near | Loudness |
|---|---|---|---|---|---|---|---|
| Unproc. | - | - | -0.02 | -0.02 | -0.03 | -0.01 | -0.10 |
| TD-SpeakerBeam | ✓ | - | 7.21 | 10.15 | 8.38 | 9.38 | 7.57 |
| LLM-TSE (LoRA adapters, LLaMA-2 7B Chat), text cue only | - | ✓ | 2.70 / 3.97 / 7.48 | 10.40 | 9.38 | 10.57 | 8.89 |
| LLM-TSE (LoRA adapters, LLaMA-2 7B Chat), audio + text cues | ✓ | ✓ | 7.96 / 9.81 / 10.05 | 10.87 | 9.72 | 10.66 | 9.41 |
| No LoRA adapters (only linear projection) | ✓ | ✓ | 1.66 / 3.38 / 5.38 | 8.76 | 7.38 | 8.45 | 5.46 |
| Vicuna-7b-v1.3 (Zheng et al., 2023) | ✓ | ✓ | 2.23 / 3.31 / 8.79 | 9.44 | 8.29 | 9.27 | 5.75 |

Table 1: Evaluation of the SI-SDR (dB ↑) metric across different methods. For the transcription snippet task, we use 100% of the target speech text as cues during training and test the model with different amounts of text transcription, including 50%, 80%, and 100%.

**Ablation Studies on Text Encoder Selection:** Here, we present the results of a sequence of ablation experiments executed on the text encoder component. The outcomes are summarized at the bottom of Table 1. At the outset, we assessed the functionality of the text cue encoder in the absence of the LoRA adapters, where only the projection layer of the LLM model was permitted to train, effectively freezing all other parameters of the LLM. This configuration aimed to determine if the LLM's generic understanding of diverse text corpora could offer sufficient discriminative information.
However, our findings suggest that relying solely on embeddings derived from the LLM's interpretation of various text descriptions is insufficient to accomplish the task, whether or not an audio encoder is integrated into the system. In subsequent experiments, we employed the Vicuna 7B model (Zheng et al., 2023) as our text encoder. This model, which was fine-tuned on data from "shareGPT.com" and based on the LLaMA-v1 model, exhibited marginally inferior performance on natural language benchmark tasks compared to LLaMA-2 7B Chat. Further, the Vicuna model underperformed LLaMA-2 7B Chat in our target speaker separation task. This observation supports the premise that employing a more powerful LLM as a text cue encoder can significantly enhance the discriminative capabilities of the overall system.

Figure 4: Samples generated from the proposed LLM-TSE model. The text box contains information about the input audio mixture. The term "w/o" indicates the absence of a certain input.

## 5 Related Works

**Audio-Language Multimodal Models:** Audio-language multimodal modeling currently represents a significant research area with many application scenarios (Huang et al., 2023; Zhang et al., 2023; Gong et al., 2023). The primary focus has revolved around audio events, with most tasks and datasets originating from automatic audio captioning (Drossos et al., 2017; Wu et al., 2019; Mei et al., 2022), which aims to assign meaningful textual descriptions to audio content. Leveraging these datasets, related studies have been conducted on synthesizing audio based on text descriptions, which find applications in diverse scenarios such as film production, game design, and more. Among these, the Contrastive Language-Audio Pretraining (CLAP) (Elizalde et al., 2022) model is a large-scale
pre-training model that employs a contrastive learning approach similar to the Contrastive Language-Image Pretraining (CLIP) (Radford et al., 2021) model for aligning text and audio modalities. This model has pushed the boundaries in tasks that involve synthesizing audio based on text descriptions (Huang et al., 2023; Kreuk et al., 2023; Liu et al., 2023a;b). Furthermore, the works of Wang et al. (2023); Zhang et al. (2023); Le et al. (2023) expand the input modality to encompass audio and text, instead of text only, for audio generation. However, it is important to note that their underlying logic is based on generative models that take audio and specific control inputs to handle various speech transformation tasks. These works are closer to controlled speech/audio/music synthesis and do not require the lengths of input and output to be strictly aligned, which is entirely different from our setting.

**Audio-Language Multimodal Source Separation:** Among all these audio-language multimodal models, those most relevant to our research involve separating or detecting audio events based on text descriptions (Kilgour et al., 2022; Liu et al., 2022; 2023c). These studies employ models like BERT (Devlin et al., 2019) (mini), CLAP, or other pre-trained models to comprehend descriptions of sound events, subsequently separating the sound sources consistent with the target description. However, they are not specifically designed for speech signals. In contrast to audio event classes, speech signals appear considerably similar in spectrograms, lacking clear acoustic spectral patterns to follow; distinguishing them relies more on perceptual differences between auditory objects and on semantic information. In addition to sound events, these models also focus on separating musical instruments (Chen et al., 2023; Huang et al., 2023).
It's important to note that while these previous works have made significant strides in the field, the specific challenges and nuances of speech signal separation present a unique problem space that our work aims to address. Labels, particularly those implemented via one-hot vectors, can be seen as a distinctive type of human language. In the realm of label-based audio/music/speech extraction systems (Manilow et al., 2020; Delcroix et al., 2021; Tzinis et al., 2022; Delcroix et al., 2023), the works of Manilow et al. (2020) and Tzinis et al. (2022) are most closely aligned with ours. These systems, like ours, endeavor to integrate human subjective intentions into the separation process through attribute labels. Yet, they rely solely on one-hot vectors, resulting in a lack of flexibility within human-computer dialogue systems. In addition, they cannot understand the vast array of human language inputs and struggle significantly when dealing with open-ended queries. By contrast, we employ LLMs to understand cues that extend beyond human descriptions of auditory object differences, which offers increased flexibility in cue extraction. Furthermore, we have investigated control capabilities and made a connection between the perceptual differences of auditory objects in mixture speech and voiceprint systems.

## 6 Conclusion and future works

In this study, we proposed a novel paradigm for target speaker extraction, namely LLM-TSE, which marks a significant departure from previous methodologies. Our approach uniquely introduces text to provide useful speaker extraction cues, an innovation that has demonstrated notable success and improvement in our experimental results. Our investigations have illuminated the potential of natural language to provide a rich source of discriminative features. These features can be leveraged independently as extraction cues, showcasing the versatility and effectiveness of natural language in this context.
Furthermore, natural language is useful for performing task selection, which represents a promising approach to achieving auditory attention switching. Moreover, our paradigm augments the performance of audio-only systems by integrating contextual information from the present acoustic environment, which is often overlooked in traditional methods. This addition provides a more comprehensive and accurate representation of the target speaker's context, further enhancing the extraction process. In summary, our proposed paradigm signifies an important advancement for target speaker extraction systems, extending accessibility and improving performance. Not only does it provide a fresh perspective on the extraction process, but it also lays the groundwork for potential future studies on the cocktail party problem. Moving forward, we plan to persist in this direction, enhancing machines' ability to understand the foundations of human perception of multiple auditory objects within complex acoustic environments using natural language cues. Specifically, we aim to incorporate a range of mutually exclusive or non-exclusive auditory attributes, handle flexible and open-ended text descriptions, and develop the capability for multi-round separation.
2305.03254
Dirac series for complex $E_8$
In this paper, we classify all unitary representations with non-zero Dirac cohomology for complex Lie group of Type E8. This completes the classification of Dirac series for all complex simple Lie groups.
Dan Barbasch, Kayue Daniel Wong
2023-05-05T02:51:57Z
http://arxiv.org/abs/2305.03254v1
# Dirac series for complex \(E_{8}\)

###### Abstract.

In this paper, we classify all unitary representations with non-zero Dirac cohomology for the complex Lie group of type \(E_{8}\). This completes the classification of Dirac series for all complex simple Lie groups.

## 1. Introduction

The notion of Dirac operator plays an important role in the representation theory of real reductive groups. In the 1970s, Parthasarathy [P1, P2] and Schmid used Dirac operators to give a geometric realization of the discrete series. Later, Vogan in [V2] introduced the notion of **Dirac cohomology** for irreducible representations in order to find sharper estimates for the spectral gap of locally symmetric spaces. He formulated a conjecture on its relationship with the infinitesimal character of the representation, which was subsequently proven by Huang and Pandzic in [HP1]. One application of Dirac cohomology is to gain a better understanding of the unitary dual \(\widehat{G}\). Indeed, the set of unitary \((\mathfrak{g},K)-\)modules with non-zero Dirac cohomology (the **Dirac series**) \(\widehat{G}^{d}\) contains a large family of interesting unitary representations. For instance, it is not hard to show that \(\widehat{G}^{d}\) strictly contains all unitary modules with non-zero \((\mathfrak{g},K)-\)cohomology. By the work of [VZ], this implies \(\widehat{G}^{d}\) contains all \(A_{\mathfrak{q}}(\lambda)\) modules in good range. Later, it was shown in [SR] that these modules characterize all unitary modules with strongly regular infinitesimal characters. Also, [BP1] and [BP2] classified all unipotent representations appearing in \(\widehat{G}^{d}\) for all complex groups and some real reductive groups. On the other hand, for any fixed \(G\), [DD1] and [D3] reduced the study of \(\widehat{G}^{d}\) to a finite set of **scattered representations** \(\widehat{G}^{sc}\).
More precisely, it is shown that every representation in \(\widehat{G}^{d}\) is either a scattered representation, or it is cohomologically induced from a representation in \(\widehat{M}^{d}\) for some proper theta-stable Levi subgroup \(M\) of \(G\) in the weakly good range. Using their results, \(\widehat{G}^{d}\) can be fully classified for all complex classical groups ([BDW], [DW1], [DW2]), \(GL(n,\mathbb{R})\) ([DW4]), and for all real and complex exceptional groups of type \(G_{2}\), \(F_{4}\), \(E_{6}\) and \(E_{7}\) ([DD1], [D2], [DW3] for complex groups, [D4], [DD2], [DDH], [DDL], [DDW], [DDY] for real groups). In this paper, we will finish the classification of \(\widehat{G}^{d}\) for all complex simple groups by studying \(\widehat{G}^{d}\) for type \(E_{8}\). One reason why such a classification could not previously be carried out is that the atlas software runs out of memory on a usual desktop computer upon running the command is_unitary for complex \(E_{8}\). Rather than directly applying atlas to representations of \(E_{8}\), we apply bottom layer \(K-\)type arguments so that the classification problem can be divided into various Levi subgroups \(M\) of \(G\). If all the simple factors of \(M\) are classical, one can apply the results in [BDW]. Otherwise, we use intertwining operators detailed in [B1] and [BC] to compute the signatures of Hermitian forms on a small set of \((M\cap K)-\)types. Such calculations can be implemented in Mathematica, which effectively reduces our study to very few parameters. As a consequence, we verify Conjecture 1.1 of [BP1] for complex \(E_{8}\), which says that all \(\pi\in\widehat{G}^{d}\) must be the lowest \(K\)-type subquotient of an induced representation, whose inducing module is a unipotent representation with non-zero Dirac cohomology tensored with a unitary character (see Corollary 3.7, Corollary 4.2 and Corollary 5.2 below).
Combined with the work of [BDW], this also implies that for all such \(\pi\), the \(K-\)type contributing to \(H_{D}(\pi)\) is unique and is of multiplicity one in \(\pi\).

## 2. Preliminaries

### Complex groups

Let \(G\) be a connected complex simple Lie group viewed as a real Lie group. Fix a Borel subgroup \(B\) of \(G\) containing a Cartan subgroup \(H\). Fix a Cartan involution \(\theta\) of \(G\), and write \(K:=G^{\theta}\) for the maximal compact subgroup. Denote by \(\mathfrak{g}_{0}=\mathfrak{k}_{0}\oplus\mathfrak{p}_{0}\) the corresponding Cartan decomposition of the Lie algebra \(\mathfrak{g}_{0}=Lie(G)\), and let \(\mathfrak{h}_{0}=\mathfrak{t}_{0}\oplus\mathfrak{a}_{0}\) be the corresponding decomposition of \(\mathfrak{h}_{0}\). We remove the subscripts of the Lie algebras to denote their complexifications, and fix the identifications: \[\mathfrak{g}\cong\mathfrak{g}_{0}\oplus\mathfrak{g}_{0},\quad\mathfrak{h}\cong\mathfrak{h}_{0}\oplus\mathfrak{h}_{0},\quad\mathfrak{t}\cong\{(x,-x):x\in\mathfrak{h}_{0}\},\quad\mathfrak{a}\cong\{(x,x):x\in\mathfrak{h}_{0}\}. \tag{1}\] Put \(\Delta^{+}(\mathfrak{g}_{0},\mathfrak{h}_{0})=\Delta(\mathfrak{b}_{0},\mathfrak{h}_{0})\), and let \(\rho\) be the half sum of roots in \(\Delta^{+}(\mathfrak{g}_{0},\mathfrak{h}_{0})\). Set \[\Delta^{+}(\mathfrak{g},\mathfrak{h})=\left(\Delta^{+}(\mathfrak{g}_{0},\mathfrak{h}_{0})\times\{0\}\right)\cup\left(\{0\}\times(-\Delta^{+}(\mathfrak{g}_{0},\mathfrak{h}_{0}))\right),\] and let \(\rho_{\mathfrak{g}}\) be the half sum of roots in \(\Delta^{+}(\mathfrak{g},\mathfrak{h})\). The restrictions to \(\mathfrak{t}\) of these positive roots are \[\Delta^{+}(\mathfrak{g},\mathfrak{t})=\Delta^{+}(\mathfrak{k},\mathfrak{t})\cup\Delta^{+}(\mathfrak{p},\mathfrak{t}).\] Denote by \(\rho_{c}\) (resp., \(\rho_{n}\)) the half-sum of roots in \(\Delta^{+}(\mathfrak{k},\mathfrak{t})\) (resp., \(\Delta^{+}(\mathfrak{p},\mathfrak{t})\)).
Using the identifications in (1), we have that \[\rho_{\mathfrak{g}}=(\rho,-\rho),\quad\rho_{c}=\rho_{n}=\rho. \tag{2}\] We may and we will identify a \(K-\)type (\(\widetilde{K}-\)type, \(\mathfrak{k}-\)type, etc) \(V_{\mathfrak{k}}(\eta)\) with its highest weight \(\eta\in\Delta^{+}(\mathfrak{k},\mathfrak{t})\). Let \((\lambda_{L},\lambda_{R})\in\mathfrak{h}_{0}^{*}\times\mathfrak{h}_{0}^{*}\) be such that \(\lambda_{L}-\lambda_{R}\) is a weight of a finite dimensional holomorphic representation of \(G\). We view \((\lambda_{L},\lambda_{R})\) as a real-linear functional on \(\mathfrak{h}\) by (1), and write \(\mathbb{C}_{(\lambda_{L},\lambda_{R})}\) as the character of \(H\) with differential \((\lambda_{L},\lambda_{R})\). By (1) again, we have \[\mathbb{C}_{(\lambda_{L},\lambda_{R})}|_{T}=\mathbb{C}_{\lambda_{L}-\lambda_{R}},\quad\mathbb{C}_{(\lambda_{L},\lambda_{R})}|_{A}=\mathbb{C}_{\lambda_{L}+ \lambda_{R}}.\] Extend \(\mathbb{C}_{(\lambda_{L},\lambda_{R})}\) to a character of \(B\), and put \[X(\lambda_{L},\lambda_{R}):=K\text{-finite part of }\mathrm{Ind}_{B}^{G}( \mathbb{C}_{(\lambda_{L},\lambda_{R})}\otimes\mathbf{1})\text{ (normalized induction)}.\] Using Frobenius reciprocity, the \(K-\)type with extremal weight \(\lambda_{L}-\lambda_{R}\) occurs with multiplicity one in \(X(\lambda_{L},\lambda_{R})\). Let \(J(\lambda_{L},\lambda_{R})\) be the unique subquotient of \(X(\lambda_{L},\lambda_{R})\) containing the \(K-\)type \(V_{\mathfrak{k}}(\{\lambda_{L}-\lambda_{R}\})\) (here \(\{\xi\}\) is the unique dominant weight to which \(\xi\) is conjugate under the Weyl group action). 
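As a quick illustration of this parametrization (a standard fact, stated here for orientation rather than taken from the paper):

```latex
% Illustration: the trivial representation in this parametrization is
% J(\rho,\rho).
\[
  \lambda_L=\lambda_R=\rho:\qquad
  \lambda_L-\lambda_R=0
  \;\Rightarrow\; \text{lowest } K\text{-type } V_{\mathfrak{k}}(0)
  \ \text{(trivial)},
  \qquad \text{infinitesimal character } (\rho,\rho).
\]
\[
  \text{For } w_0=-1\in W \text{ (as in type } E_8\text{)}:\quad
  w_0(\lambda_L-\lambda_R)=0,\qquad
  w_0(\lambda_L+\lambda_R)=-2\rho=-\overline{\lambda_L+\lambda_R},
\]
```

so this module carries a nondegenerate Hermitian form, and \(\lambda_L+\lambda_R=2\rho\) real means it is non-tempered, as expected for the trivial representation.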
**Theorem 2.1**.: (Zhelobenko [Zh]) _In the above setting, we have that_

* _Every irreducible admissible (_\(\mathfrak{g}\)_,_ \(K\)_)-module is of the form_ \(J(\lambda_{L},\lambda_{R})\)_._
* _Two such modules_ \(J(\lambda_{L},\lambda_{R})\) _and_ \(J(\lambda_{L}^{\prime},\lambda_{R}^{\prime})\) _are equivalent if and only if there exists_ \(w\in W\) _such that_ \(w\lambda_{L}=\lambda_{L}^{\prime}\) _and_ \(w\lambda_{R}=\lambda_{R}^{\prime}\)_._
* \(J(\lambda_{L},\lambda_{R})\) _admits a nondegenerate Hermitian form if and only if there exists_ \(w\in W\) _such that_ \(w(\lambda_{L}-\lambda_{R})=\lambda_{L}-\lambda_{R},\ w(\lambda_{L}+\lambda_{R})=-(\overline{\lambda_{L}+\lambda_{R}})\)_._
* _The representation_ \(X(\lambda_{L},\lambda_{R})\) _is tempered if and only if_ \(\lambda_{L}+\lambda_{R}\in i\mathfrak{h}_{0}^{*}\)_. In this case,_ \(X(\lambda_{L},\lambda_{R})=J(\lambda_{L},\lambda_{R})\)_._

Note that \(J(\lambda_{L},\lambda_{R})\) has lowest \(K-\)type \(V_{\mathfrak{k}}(\{\lambda_{L}-\lambda_{R}\})\) and infinitesimal character the \(W\times W\) orbit of \((\lambda_{L},\lambda_{R})\).

### Dirac cohomology

We recall the construction of the Dirac operator and Dirac cohomology. Let \(\langle\cdot\,,\,\cdot\rangle\) be an invariant nondegenerate form such that \(\langle\cdot\,,\,\cdot\rangle|_{\mathfrak{p}_{0}}\) is positive definite, and \(\langle\cdot\,,\,\cdot\rangle|_{\mathfrak{k}_{0}}\) is negative definite. Fix an orthonormal basis \(Z_{1},\ldots,Z_{n}\) of \(\mathfrak{p}_{0}\). Let \(U(\mathfrak{g})\) be the universal enveloping algebra of \(\mathfrak{g}\), and \(C(\mathfrak{p})\) be the Clifford algebra of \(\mathfrak{p}\) with respect to \(\langle\cdot\,,\,\cdot\rangle\). The **Dirac operator** \(D\in U(\mathfrak{g})\otimes C(\mathfrak{p})\) is defined as \[D=\sum_{i=1}^{n}\,Z_{i}\otimes Z_{i}.\] The operator \(D\) does not depend on the choice of the orthonormal basis \(Z_{i}\) and is \(K-\)invariant for the diagonal action of \(K\) induced by the adjoint actions on both factors.
Define \(\Delta:\mathfrak{k}\to U(\mathfrak{g})\otimes C(\mathfrak{p})\) by \(\Delta(X)=X\otimes 1+1\otimes\alpha(X)\), where \(\alpha:\mathfrak{k}\to C(\mathfrak{p})\) is the composition of \(\mathrm{ad}:\mathfrak{k}\longrightarrow\mathfrak{so}(\mathfrak{p})\) with the embedding \(\mathfrak{so}(\mathfrak{p})\cong\wedge^{2}(\mathfrak{p})\hookrightarrow C( \mathfrak{p})\). Write \(\mathfrak{k}_{\Delta}:=\Delta(\mathfrak{k})\), and denote by \(\Omega_{\mathfrak{g}}\) (resp. \(\Omega_{\mathfrak{k}}\)) the Casimir operator of \(\mathfrak{g}\) (resp. \(\mathfrak{k}\)). Let \(\Omega_{\mathfrak{k}_{\Delta}}\) be the image of \(\Omega_{\mathfrak{k}}\) under \(\Delta\). Then ([P1]) \[D^{2}=-\Omega_{\mathfrak{g}}\otimes 1+\Omega_{\mathfrak{k}_{\Delta}}+(\|\rho_{c}\|^ {2}-\|\rho_{\mathfrak{g}}\|^{2})1\otimes 1, \tag{3}\] where \(\rho_{\mathfrak{g}}\) and \(\rho_{c}\) are the corresponding half sums of positive roots of \(\mathfrak{g}\) and \(\mathfrak{k}\). Let \[\widetilde{K}:=\{(k,s)\in K\times\mathrm{Spin}(\mathfrak{p}_{0})\ :\ \mathrm{Ad}(k)=p(s)\},\] where \(p:\mathrm{Spin}(\mathfrak{p}_{0})\to\mathrm{SO}(\mathfrak{p}_{0})\) is the spin double covering map. If \(\pi\) is a (\(\mathfrak{g}\), \(K\))-module, and if \(S_{G}\) denotes a spin module for \(C(\mathfrak{p})\), then \(\pi\otimes S_{G}\) is a \((U(\mathfrak{g})\otimes C(\mathfrak{p}),\widetilde{K})\)-module. The action of \(U(\mathfrak{g})\otimes C(\mathfrak{p})\) is the obvious one, and \(\widetilde{K}\) acts on both factors: on \(\pi\) through \(K\), and on \(S_{G}\) through the spin group \(\mathrm{Spin}(\mathfrak{p}_{0})\). The Dirac operator acts on \(\pi\otimes S_{G}\). The Dirac cohomology of \(\pi\) is defined as the \(\widetilde{K}-\)module \[H_{D}(\pi)=\operatorname{Ker}D/(\operatorname{Im}D\cap\operatorname{Ker}D).
\tag{4}\] The following foundational result on Dirac cohomology, conjectured by Vogan, was proven by Huang and Pandzic in 2002:

**Theorem 2.2** ([1] Theorem 2.3).: _Let \(\pi\) be an irreducible (\(\mathfrak{g}\), \(K\))-module. Assume that the Dirac cohomology of \(\pi\) is nonzero, and that it contains the \(\widetilde{K}-\)type with highest weight \(\gamma\in\mathfrak{t}^{*}\subset\mathfrak{h}^{*}\). Then the infinitesimal character of \(\pi\) is conjugate to \(\gamma+\rho_{c}\) under \(W(\mathfrak{g},\mathfrak{h})\)._

### Unitary modules with Dirac cohomology

Let \(\pi\) be an irreducible \((\mathfrak{g},K)-\)module for a complex Lie group \(G\). By Theorem 2.2 and (1), \(\pi\) has Dirac cohomology if and only if its Zhelobenko parameter \((w_{1}\lambda_{L},w_{2}\lambda_{R})\) satisfies \[\begin{cases}w_{1}\lambda_{L}-w_{2}\lambda_{R}=\tau+\rho\\ w_{1}\lambda_{L}+w_{2}\lambda_{R}=0,\end{cases} \tag{5}\] where \(V_{\mathfrak{k}}(\tau)\) is a \(\widetilde{K}-\)type in \(H_{D}(\pi)\). The second equation implies \(\lambda_{R}=-w_{2}^{-1}w_{1}\lambda_{L}.\) Since \(\tau+\rho\) is regular integral, the first equation implies that \[2\lambda:=2w_{1}\lambda_{L}=\tau+\rho \tag{6}\] is regular integral. Consequently, the module can be written as \(\pi=J(\lambda,-s\lambda)\) with \(2\lambda\) regular integral, and the first equation of (5) implies that the _only_ \(\widetilde{K}-\)type that can appear in \(H_{D}(\pi)\) is \(V_{\mathfrak{k}}(2\lambda-\rho)\). Furthermore, if \(J(\lambda,-s\lambda)\) is Hermitian (e.g. if \(J(\lambda,-s\lambda)\) is unitary), it follows as in [1] that \(s\) is an involution.

### Bottom Layer \(K-\)types

For the rest of this paper, we use the following Dynkin diagram and simple roots of \(E_{8}\), in the Bourbaki labeling: the chain \(\alpha_{1}-\alpha_{3}-\alpha_{4}-\alpha_{5}-\alpha_{6}-\alpha_{7}-\alpha_{8}\), with \(\alpha_{2}\) attached to \(\alpha_{4}\). \[\tag{7}\] By the discussions in Section 2.3, one only focuses on \(\pi=J(\lambda,-s\lambda)\) where \(2\lambda\) is regular integral, and \(s\in W\) is an involution.
Conjugate \(\lambda+s\lambda\) such that \[\eta:=\{\lambda+s\lambda\}=[k_{1},k_{2},\dots,k_{8}]:=k_{1}\omega_{1}+k_{2} \omega_{2}+\dots+k_{8}\omega_{8}, \tag{8}\] is a dominant weight (here \(k_{i}\in\mathbb{N}\), and \(\omega_{i}\) are the fundamental weights of \(E_{8}\)). Let \(M\) be the Levi subgroup of \(G\) determined by the nodes \[I(M):=\{i\ |\ \langle\alpha_{i},\eta\rangle=k_{i}=0\} \tag{9}\] of the Dynkin diagram (7). Suppose \(M=\mathcal{F}_{1}\times\cdots\times\mathcal{F}_{r}\times(\mathbb{C}^{*})^{s}\), where each \(\mathcal{F}_{\ell}\) is a simple group; then one can choose \(\nu_{\ell}\in\mathfrak{h}_{\ell}^{*}\) (here \(\mathfrak{h}_{\ell}\) is the Cartan subalgebra of \(\mathcal{F}_{\ell}\)), and a unitary character \(\mathbb{C}_{\tau(\eta)}\) of \((\mathbb{C}^{*})^{s}\), such that the induced module \[\operatorname{Ind}_{MN}^{G}\left(\bigotimes_{\ell=1}^{r}J_{\mathcal{F}_{\ell} }(\nu_{\ell},\nu_{\ell})\otimes\mathbb{C}_{\tau(\eta)}\otimes\mathbf{1}\right), \tag{10}\] has the same infinitesimal character and lowest \(K-\)type as \(\pi\). Consequently, \(\pi\) appears as the lowest \(K-\)type subquotient of (10). By Theorem 2.1(b), one can further assume that \(2\nu_{\ell}\) is regular and integral for each spherical module \(J_{\mathcal{F}_{\ell}}(\nu_{\ell},\nu_{\ell})\) in (10).

**Proposition 2.3**.: _Let \(\pi=J(\lambda,-s\lambda)\) be an irreducible Hermitian \((\mathfrak{g},K)-\)module with \(2\lambda\) regular and integral. Consider the induced module (10) corresponding to \(\pi\), where \(2\nu_{\ell}\) are chosen to be regular and integral for all \(1\leq\ell\leq r\).
Suppose_ \[\langle 2\nu_{\ell},\alpha\rangle\notin\{1,2\},\] _for some simple root \(\alpha\) corresponding to \(\mathcal{F}_{\ell}\); then \(\pi\) is not unitary._

Proof.: By the hypothesis on \(2\nu_{\ell}\), and the deformation arguments in [BDW] (which can be generalized to all complex groups), \(J_{\mathcal{F}_{\ell}}(\nu_{\ell},\nu_{\ell})\) is not unitary on the level of the adjoint \((\mathcal{F}_{\ell}\cap K)-\)type \(V_{\mathfrak{f}_{\ell}\cap\mathfrak{k}}(\delta)\). We claim that for all possible simple components \(\mathcal{F}_{\ell}\) of a Levi subgroup of \(E_{8}\), \(\eta+\delta\) is \(M-\)**bottom layer** for all \(\eta\) of the form (8). Indeed, for each possible simple factor the highest weight \(\delta\) of the adjoint representation (the highest root) can be written in the simple roots \(\beta_{i}\) of \(\mathcal{F}_{\ell}\) as: \[\begin{aligned} A_{p}:\ &\delta=\beta_{1}+\beta_{2}+\cdots+\beta_{p},\\ D_{p}:\ &\delta=\beta_{1}+2(\beta_{2}+\cdots+\beta_{p-2})+\beta_{p-1}+\beta_{p},\\ E_{6}:\ &\delta=\beta_{1}+2\beta_{2}+2\beta_{3}+3\beta_{4}+2\beta_{5}+\beta_{6},\\ E_{7}:\ &\delta=2\beta_{1}+2\beta_{2}+3\beta_{3}+4\beta_{4}+3\beta_{5}+2\beta_{6}+\beta_{7}.\end{aligned}\] It is clear that

* \(\langle\eta+\delta,\alpha_{i}\rangle=\langle\delta,\alpha_{i}\rangle\geq 0\) for all \(i\in I(M)\), since the coefficient \(k_{i}\) of \(\omega_{i}\) in \(\eta\) vanishes for \(i\in I(M)\), and \(\delta\) is an \(M-\)dominant weight;
* \(\langle\eta+\delta,\alpha_{j}\rangle=k_{j}+\langle\delta,\alpha_{j}\rangle\geq k_{j}-1\geq 0\) for all \(j\notin I(M)\), since the coefficient in \(\delta\) of any \(\alpha_{l}\) satisfying \(\langle\alpha_{l},\alpha_{j}\rangle=-1\) (i.e. \(l\) and \(j\) are linked in the Dynkin diagram of \(E_{8}\)) is always equal to \(1\), and at most one such \(\alpha_{l}\) lies in \(\mathcal{F}_{\ell}\) since the Dynkin diagram is a tree.

For example, let \(M\) be of Type \(E_{7}\), so that the \(\beta_{i}\) in the list above match with \(\alpha_{i}\) in (7). Then \(\eta=k\omega_{8}\) for \(k\geq 1\), and \[\eta+\delta=k\omega_{8}+2\alpha_{1}+2\alpha_{2}+3\alpha_{3}+4\alpha_{4}+3 \alpha_{5}+2\alpha_{6}+\alpha_{7}.\] Therefore, \(\langle\eta+\delta,\alpha_{8}\rangle=\langle k\omega_{8}+\alpha_{7},\alpha_{8 }\rangle=k+\langle\alpha_{7},\alpha_{8}\rangle=k-1\geq 0\) for all \(k\). In other words, \(\eta+\delta\) is always \(M-\)bottom layer. Consequently, the induced module (10), as well as \(\pi\), has an indefinite form on these two \(K-\)types, and the result follows.

### General strategy

To conclude, our strategy of determining \(\widehat{G}^{d}\) for complex \(E_{8}\) is as follows:

(i)
List all \(K-\)types \(\eta=[k_{1},k_{2},\ldots,k_{8}]\) of \(E_{8}\); each determines a Levi subgroup \(M=\mathcal{F}_{1}\times\cdots\times\mathcal{F}_{r}\times(\mathbb{C}^{*})^{s}\) of \(G\) as described in the previous section. Consider the induced module \[\operatorname{Ind}_{MN}^{G}\left(\pi_{sph}\otimes\mathbb{C}_{\tau(\eta)} \otimes\mathbf{1}\right),\quad\pi_{sph}:=\bigotimes_{\ell=1}^{r}J_{\mathcal{F} _{\ell}}(\nu_{\ell},\nu_{\ell}),\] where each \(J_{\mathcal{F}_{\ell}}(\nu_{\ell},\nu_{\ell})\) is spherical, and \(\tau(\eta)\) is a unitary character of \((\mathbb{C}^{*})^{s}\) such that the lowest \(K-\)type subquotient of the above module is \(V_{\mathfrak{k}}(\eta)\). (ii) By Theorem 2.2, we focus on the lowest \(K-\)type subquotient \(\pi\) of the induced module in (i) whose infinitesimal character \((\lambda_{L},\lambda_{R})\) satisfies: \(2\lambda_{L}\) is regular and integral. (iii) By Proposition 2.3, in order for \(\pi\) to be unitary, one must have \[\langle\nu_{\ell},\alpha\rangle=\tfrac{1}{2}\text{ or }1\] for all simple roots \(\alpha\) of \(\mathcal{F}_{\ell}\). (iv) If \(\pi_{sph}\) is unitary, then \(\pi\) is automatically unitary. Otherwise, find the 'smallest' \((\mathcal{F}_{\ell}\cap K)-\)type \(V_{\mathfrak{f}_{\ell}\cap\mathfrak{k}}(\sigma)\) of \(J_{\mathcal{F}_{\ell}}(\nu_{\ell},\nu_{\ell})\) having signature opposite to that of the lowest (i.e. trivial) \((\mathcal{F}_{\ell}\cap K)-\)type. These are called the _non-unitarity certificates_. (v) For each certificate \(\sigma\) obtained in (iv), check for which \(\eta\) the weight \(\eta+\sigma\) is **NOT** \(M-\)bottom layer, so that the arguments in Proposition 2.3 **CANNOT** be carried over to conclude that \(\pi\) is non-unitary.

After (v), there are very few cases one needs to study, and we will make a case-by-case analysis for these exceptions to see if the parameters give a unitary representation with non-zero Dirac cohomology.

## 3. Type \(A\) Levi subgroups

This section is devoted to the cases when \(M\) consists only of Type \(A\) simple factors \(\mathcal{F}_{1}\), ..., \(\mathcal{F}_{r}\). We will study each \(J_{\mathcal{F}_{\ell}}(\nu_{\ell},\nu_{\ell})\) in Step (i) of Section 2.5 individually, where \(\mathfrak{f}_{\ell}=\mathfrak{sl}(p+1,\mathbb{C})\) is of Type \(A_{p}\) with simple roots \(\beta_{1},\ldots,\beta_{p}\), labeled consecutively along its Dynkin diagram. By [V1], the \((\mathcal{F}_{\ell}\cap K)-\)types used to detect non-unitarity of \((\mathfrak{f}_{\ell},\mathcal{F}_{\ell}\cap K)-\)modules (non-unitarity certificates) have highest weights \[\begin{split}\sigma_{1}&:=\beta_{1}+\beta_{2}+ \ldots+\beta_{p-1}+\beta_{p},\\ \sigma_{2}&:=\beta_{1}+2(\beta_{2}+\ldots+\beta_{p- 1})+\beta_{p},\\ \sigma_{3}&:=\beta_{1}+2\beta_{2}+3(\beta_{3}+ \ldots+\beta_{p-2})+2\beta_{p-1}+\beta_{p},\quad\cdots\end{split} \tag{11}\] Suppose the Levi subgroup \(M\) is such that for all \(j\notin I(M)\), the node \(\alpha_{j}\) in the Dynkin diagram (7) is _not_ connected to the middle nodes of the Dynkin diagrams corresponding to \(\mathcal{F}_{1}\), ..., \(\mathcal{F}_{r}\). One example of such a Levi subgroup is \(I(M)=\{1,3,4,6,7\}\) (from now on, we mark the nodes of \(I(M)\) for any Levi subgroup \(M\) by \(\Delta\)). By the proof of Proposition 2.3, the \(K-\)types with highest weights \(\eta+\sigma_{i}\) (recall \(\eta\) is the lowest \(K-\)type of \(\pi\)) are \(M-\)bottom layer for all non-unitarity certificates \(\sigma_{i}\) in (11). In other words, \(\pi\) is unitary if and only if \(\pi_{sph}\) in Step (i) of 2.5 is unitary in these cases. Furthermore, by the main result of [BDW], \(\pi\in\widehat{G}^{d}\) if and only if \(\pi_{sph}\) is parabolically induced from the trivial module. By induction in stages, this implies \(\pi\) is a subquotient of a module parabolically induced from a unitary character.
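The bottom-layer inequalities used above are plain Cartan-matrix arithmetic and can be checked mechanically. The following sketch (our own verification script; the function names are ours) computes \(\langle\eta+\sigma,\alpha_{j}^{\vee}\rangle\) from the \(E_{8}\) Cartan matrix, and verifies two instances: the \(A_{3}\) certificate \(\sigma_{1}=\alpha_{1}+\alpha_{3}+\alpha_{4}\) for \(I(M)=\{1,3,4,6,7\}\), and the \(E_{7}\) adjoint computation \(\langle k\omega_{8}+\delta,\alpha_{8}\rangle=k-1\) from the proof of Proposition 2.3.

```python
# Our own sanity check (not from the paper): bottom-layer arithmetic in E8.
# Bourbaki labeling: chain 1-3-4-5-6-7-8, with node 2 attached to node 4.
EDGES = {(1, 3), (3, 4), (2, 4), (4, 5), (5, 6), (6, 7), (7, 8)}

def cartan(l, j):
    """Entry <alpha_l, alpha_j^vee> of the (symmetric) E8 Cartan matrix."""
    if l == j:
        return 2
    return -1 if (min(l, j), max(l, j)) in EDGES else 0

def pairing(k, c, j):
    """<eta + sigma, alpha_j^vee> with eta = sum_i k[i]*omega_i, sigma = sum_l c[l]*alpha_l."""
    return k.get(j, 0) + sum(cl * cartan(l, j) for l, cl in c.items())

def is_bottom_layer(k, c):
    return all(pairing(k, c, j) >= 0 for j in range(1, 9))

# I(M) = {1,3,4,6,7}: eta has k_j >= 1 exactly for j in {2,5,8};
# sigma_1 of the A_3 factor {1,3,4} is alpha_1 + alpha_3 + alpha_4.
print(is_bottom_layer({2: 1, 5: 1, 8: 1}, {1: 1, 3: 1, 4: 1}))  # True

# E7 adjoint example: <k*omega_8 + delta, alpha_8^vee> = k - 1 (here k = 1).
delta_E7 = {1: 2, 2: 2, 3: 3, 4: 4, 5: 3, 6: 2, 7: 1}
print(pairing({8: 1}, delta_E7, 8))  # 0
```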
**Remark 3.1**.: _The above arguments can be generalized to any Levi subgroup \(M\) having a Type \(A\) factor \(\mathcal{F}_{\ell}\) such that if \(j\notin I(M)\) is connected to \(\mathcal{F}_{\ell}\), it must be connected to the left or right end of \(\mathcal{F}_{\ell}\) (for instance, \(I(M)=\{2,3,4,5,7,8\}\), and \(M\) has two simple factors of Type \(D_{4}\) and Type \(A_{2}\)). In such a case, the spherical parameter corresponding to \(\mathcal{F}_{\ell}\) must be parabolically induced from the trivial module._

Consequently, one can focus on the cases when \(M\) has a factor \(\mathcal{F}_{\ell}\) of Type \(A_{p}\) such that the middle of its Dynkin diagram is connected to a node \(\alpha_{j}\) with \(j\notin I(\mathcal{F}_{\ell})\). The table below lists such \(I(\mathcal{F}_{\ell})\) and \(\alpha_{j}\) with \(j\notin I(\mathcal{F}_{\ell})\), together with the values of \(k_{j}\) in (8) such that \(\eta+\sigma\) is \(M-\)bottom layer for each \(\sigma\) in (11); the cases needed below are:

| Case | Type | \(I(\mathcal{F}_{\ell})\) | \(j\) | Bottom layer \(\sigma+\eta\) |
| --- | --- | --- | --- | --- |
| (i) | \(A_{7}\) | \(\{1,3,4,5,6,7,8\}\) | \(2\) | \(\sigma_{1}+k_{2}\omega_{2}\) (\(k_{2}\geq 1\)), \(\sigma_{2}+k_{2}\omega_{2}\) (\(k_{2}\geq 2\)), \(\sigma_{3}+k_{2}\omega_{2}\) (\(k_{2}\geq 3\)), \(\sigma_{4}+k_{2}\omega_{2}\) (\(k_{2}\geq 3\)) |
| (ii) | \(A_{6}\) | \(\{1,3,4,5,6,7\}\) | \(2\) | \(\sigma_{1}+k_{2}\omega_{2}\) (\(k_{2}\geq 1\)), \(\sigma_{2}+k_{2}\omega_{2}\) (\(k_{2}\geq 2\)), \(\sigma_{3}+k_{2}\omega_{2}\) (\(k_{2}\geq 3\)) |
| (iv) | \(A_{5}\) | \(\{1,3,4,5,6\}\) | \(2\) | \(\sigma_{1}+k_{2}\omega_{2}\) (\(k_{2}\geq 1\)), \(\sigma_{2}+k_{2}\omega_{2}\) (\(k_{2}\geq 2\)), \(\sigma_{3}+k_{2}\omega_{2}\) (\(k_{2}\geq 3\)) |
| (v) | \(A_{5}\) | \(\{3,4,5,6,7\}\) | \(2\) | \(\sigma_{1}+k_{2}\omega_{2}\) (\(k_{2}\geq 1\)), \(\sigma_{2}+k_{2}\omega_{2}\) (\(k_{2}\geq 2\)), \(\sigma_{3}+k_{2}\omega_{2}\) (\(k_{2}\geq 2\)) |

(The remaining cases, of Type \(A_{p}\) with smaller \(p\), are obtained in the same way.)

To apply the above table, note that if \(J_{\mathcal{F}_{\ell}}(\nu_{\ell},\nu_{\ell})\) has indefinite form on the adjoint representation \(V_{\mathfrak{f}\cap\mathfrak{k}}(\sigma_{1})\), then \(\sigma_{1}+\eta\) is always \(M-\)bottom layer for all lowest \(K-\)types \(\eta\). Therefore, we are left to study spherical representations \(J_{\mathcal{F}_{\ell}}(\nu_{\ell},\nu_{\ell})\) of Type \(A_{p}\) (\(3\leq p\leq 7\)) such that \(2\nu_{\ell}\) is regular integral by (6), and the Hermitian form on \(J_{\mathcal{F}_{\ell}}(\nu_{\ell},\nu_{\ell})\) is positive definite on the adjoint representation.
By the classification of the unitary dual of \(GL(n,\mathbb{C})\) in [V1], \(2\nu_{\ell}\) must be obtained from concatenating the following strings of integers (in usual \(\mathfrak{sl}(n)\) coordinates): \[(7,5,3,1,-1,-3,-5,-7),\quad(6,4,2,0,-2,-4,-6),\quad(5,3,1,-1,-3,-5),\] \[(4,2,0,-2,-4),\quad(3,1,-1,-3),\quad(2,0,-2),\quad(1,-1),\quad(0);\] corresponding to the trivial representation, and/or \[\left(\frac{7}{2},\frac{3}{2},-\frac{1}{2},-\frac{5}{2}\ ;\ \frac{5}{2},\frac{1}{2},- \frac{3}{2},-\frac{7}{2}\right),\left(\frac{5}{2},\frac{1}{2},-\frac{3}{2}\ ;\ \frac{3}{2},- \frac{1}{2},-\frac{5}{2}\right),\left(\frac{3}{2},-\frac{1}{2}\ ;\ \frac{1}{2},- \frac{3}{2}\right),\left(\frac{1}{2}\ ;\ -\frac{1}{2}\right) \tag{12}\] corresponding to the mid-point of Stein's complementary series, and/or the following strings: \[m=4:\ \left(\frac{9}{2},\frac{5}{2},\frac{1}{2},-\frac{3}{2}\ ;\ \frac{3}{2},-\frac{1}{2},-\frac{5}{2},-\frac{9}{2}\right),\] \[m=3:\ \left(\frac{11}{2},\frac{7}{2},\frac{3}{2},-\frac{1}{2}\ ;\ \frac{1}{2},- \frac{3}{2},-\frac{7}{2},-\frac{11}{2}\right),\left(\frac{7}{2},\frac{1}{2},- \frac{3}{2}\ ;\ \frac{3}{2},-\frac{1}{2},-\frac{5}{2}\right)\] \[m=2:\ \left(\frac{13}{2},\frac{9}{2},\frac{5}{2},\frac{1}{2}\ ;\ - \frac{1}{2},-\frac{5}{2},-\frac{9}{2},-\frac{13}{2}\right),\left(\frac{9}{2}, \frac{5}{2},\frac{1}{2}\ ;\ -\frac{1}{2},-\frac{5}{2},-\frac{9}{2}\right),\left(\frac{5}{2}, \frac{1}{2}\ ;\ -\frac{1}{2},-\frac{5}{2}\right) \tag{13}\] corresponding to a non-unitary representation whose first occurrence of indefinite forms is on \(V_{\mathfrak{f}\cap\mathfrak{k}}(\sigma_{m})\). Since \(2\nu_{\ell}\) is regular integral, this immediately implies that when \(p\) is even, the only choice of \(2\nu_{\ell}\) is \((p,p-2,\ldots,-p+2,-p)\), i.e. \(J_{\mathcal{F}_{\ell}}(\nu_{\ell},\nu_{\ell})\) is the trivial representation.
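As a quick sanity check (our own script, not part of the paper), one can confirm that the Stein mid-point strings in (12) do give \(2\nu\) with pairwise-distinct entries differing by integers, i.e. regular integral parameters:

```python
# Our own check: the strings in (12) give regular integral 2*nu
# (entries pairwise distinct, all pairwise differences integers).
from fractions import Fraction as F

def is_regular_integral(entries):
    distinct = len(set(entries)) == len(entries)
    integral = all((a - b).denominator == 1 for a in entries for b in entries)
    return distinct and integral

stein_midpoints = [
    [F(7, 2), F(3, 2), F(-1, 2), F(-5, 2), F(5, 2), F(1, 2), F(-3, 2), F(-7, 2)],
    [F(5, 2), F(1, 2), F(-3, 2), F(3, 2), F(-1, 2), F(-5, 2)],
    [F(3, 2), F(-1, 2), F(1, 2), F(-3, 2)],
    [F(1, 2), F(-1, 2)],
]
print(all(is_regular_integral(s) for s in stein_midpoints))  # True
```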
When \(p\) is odd, the regularity condition implies that apart from the trivial representation, one only needs to study the parameters (12) and (13). **Example 3.2**.: _Consider Case (v), where \(M\) has a factor of Type \(A_{5}\) and \(\eta=k_{1}\omega_{1}+k_{2}\omega_{2}+k_{8}\omega_{8}\). As discussed above, we only need to study \(J_{A_{5}}(\nu,\nu)\) with \(2\nu\) equal to:_ \[\begin{split} 2\nu_{1}&:=\left(\frac{5}{2},\frac{1}{2 },-\frac{3}{2}\ ;\ \frac{3}{2},-\frac{1}{2},-\frac{5}{2}\right)\sim[1,1,1,1,1],\\ 2\nu_{2}&:=\left(\frac{7}{2},\frac{3}{2},-\frac{1}{ 2}\ ;\ \frac{1}{2},-\frac{3}{2},-\frac{7}{2}\right)\sim[2,1,1,1,2],\\ 2\nu_{3}&:=\left(\frac{9}{2},\frac{5}{2},\frac{1}{ 2}\ ;\ -\frac{1}{2},-\frac{5}{2},-\frac{9}{2}\right)\sim[2,2,1,2,2]\end{split} \tag{14}\] _Note that \(J_{A_{5}}(\nu_{1},\nu_{1})\) is unitary, while \(J_{A_{5}}(\nu_{2},\nu_{2})\), \(J_{A_{5}}(\nu_{3},\nu_{3})\) are not. The non-unitarity of the last two modules is detected in \(V_{\mathfrak{su}(6)}(\sigma_{2})\) and \(V_{\mathfrak{su}(6)}(\sigma_{3})\) respectively._

_Consider the induced module in Step (i) of Section 2.5:_ \[\operatorname{Ind}_{MN}^{E_{8}}\left(J_{A_{5}}(\nu_{i};\nu_{i})\otimes \mathbb{C}_{\tau(\eta)}\otimes\mathbf{1}\right) \tag{15}\] _with lowest \(K-\)type \(V_{\mathfrak{k}}(\eta)\) and \(\pi\) being its lowest \(K-\)type subquotient.
The infinitesimal character of the above induced module (and \(\pi\)) is equal to \((\lambda_{L},\lambda_{R})\), where_ \[2\lambda_{L}=2\widetilde{\nu_{i}}+\eta=2\widetilde{\nu_{i}}+[k_{1},k_{2},0,0,0,0,0,k_{8}]\] _with \(\widetilde{\nu_{i}}\) satisfying_ \[\widetilde{\nu_{i}}=[n_{i,1},n_{i,2};\nu_{i};n_{i,3}]\quad\text{and}\quad \langle 2\widetilde{\nu_{i}},\omega_{1}\rangle=\langle 2\widetilde{\nu_{i}},\omega_{2}\rangle=\langle 2 \widetilde{\nu_{i}},\omega_{8}\rangle=0.\] _Solving the above equations, the induced representation (15) has \(2\lambda_{L}-\)value equal to:_ \[2\lambda_{L}=2\widetilde{\nu_{i}}+\eta=\begin{cases}\left[k_{1}-\frac{5}{2},k_{2}-4,1,1,1,1,1,k_{8}-\frac{5}{2}\right]&\text{if}\ \ i=1\\ \left[k_{1}-\frac{7}{2},k_{2}-5,2,1,1,1,2,k_{8}-\frac{7}{2}\right]&\text{if}\ \ i=2\\ \left[k_{1}-\frac{9}{2},k_{2}-7,2,2,1,2,2,k_{8}-\frac{9}{2}\right]&\text{if}\ \ i=3\end{cases}\] _none of which is integral. Therefore, all of them must have zero Dirac cohomology._ Example 3.2 can be generalized to any Levi subgroup \(M\) in the table above consisting of a single simple factor \(\mathcal{F}\) of Type \(A_{p}\), such that one can enlarge \(\mathcal{F}\) to \(\mathcal{F}^{\prime}\) of Type \(D_{p+1}\) by adjoining a node \(j\notin I(M)\) to \(I(M)\). In all such cases, \(J_{\mathcal{F}}(\nu,\nu)\) in Step (i) of Section 2.5 must be parabolically induced from the trivial module in order for \(\pi\) to be in \(\widehat{G}^{d}\). **Remark 3.3**.: _Although we have proved that the irreducible subquotients \(\pi\) of the induced modules (15) are not in \(\widehat{G}^{d}\), it is still of interest to see if they are in \(\widehat{G}\).
Since \(J_{A_{5}}\left(\nu_{1};\nu_{1}\right)\) is in the complementary series, this implies (15) is unitary when \(i=1\)._

_For \(i=3\), from the last column of the table above, \(\eta+\sigma_{3}\) is \(M-\)bottom layer for all \(\eta\) with \(k_{2}\geq 2\) and \(k_{1},k_{8}\geq 1\). In other words, \(\pi\) is not unitary in all such cases._

_We are now left with the case when \(k_{2}=1\). By enlarging the Levi subgroup \(M\) to \(M^{\prime}\) of Type \(D_{6}\) by including the node \(\alpha_{2}\), \(\pi\) is also a subquotient of the induced module_ \[\operatorname{Ind}_{M^{\prime}N^{\prime}}^{E_{8}}\left(J_{D_{6}}\left(\nu_{i}+ \frac{k_{2}}{4}(1,1,1,1,1,1);\nu_{i}-\frac{k_{2}}{4}(1,1,1,1,1,1)\right)\otimes \mathbb{C}_{\tau^{\prime}(\eta)}\otimes\mathbf{1}\right).\] _When \(k_{2}=1\), this is equal to_ \[\operatorname{Ind}_{M^{\prime}N^{\prime}}^{E_{8}}\left(J_{D_{6}}\left(\frac{5} {2},\frac{3}{2},\frac{1}{2},0,-1,-2;2,1,0,\frac{-1}{2},\frac{-3}{2},\frac{-5}{ 2}\right)\otimes\mathbb{C}_{\tau^{\prime}(\eta)}\otimes\mathbf{1}\right).\] _By [Br], the \(Spin(12,\mathbb{C})-\)module above is unitary. Consequently, \(\pi\) is also unitary in this case for all \(k_{1},k_{8}\geq 1\)._

So we are left to study Cases (i), (ii) and (iv), where \(\mathcal{F}^{\prime}\) is of Type \(E_{8}\), \(E_{7}\) and \(E_{6}\) respectively, obtained by adjoining the node \(\alpha_{2}\) to \(\mathcal{F}\). Since \(\mathcal{F}\) is of Type \(A_{6}\) for Case (ii), the only possibility for \(J_{A_{6}}(\nu_{\ell},\nu_{\ell})\) is the trivial representation. The study of Cases (i) and (iv) is given by the two examples below: **Example 3.4**.: _In Case (i), \(M\) consists of a single simple factor \(\mathcal{F}_{\ell}\) of Type \(A_{7}\).
By the last column of the table and the arguments in Example 3.2, one only needs to consider the case of \(J_{A_{7}}(\nu,\nu)\) with_ \[2\nu=\begin{cases}[2,2,2,1,2,2,2]&\text{and}\quad\eta=\omega_{2}\\ [2,2,1,1,1,2,2]&\text{and}\quad\eta=\omega_{2},2\omega_{2}\\ [2,1,1,1,1,1,2]&\text{and}\quad\eta=\omega_{2},2\omega_{2},3\omega_{2}\end{cases} \tag{16}\] _other than the trivial representation. As in Example 3.2, the infinitesimal character of the induced module_ \[\operatorname{Ind}_{MN}^{E_{8}}\left(J_{A_{7}}(\nu,\nu)\otimes\mathbb{C}_{ \tau(\eta)}\otimes\mathbf{1}\right) \tag{17}\] _does not satisfy (6) for all possible \(\eta\) in (16). More precisely, its \(2\lambda_{L}-\)value is equal to:_ \[2\lambda_{L}=2\nu+\eta=\begin{cases}[2,k_{2}-\tfrac{27}{2},2,2,1,2,2,2]&\text{ if }\ 2\nu=[2,2,2,1,2,2,2]\\ [2,k_{2}-\tfrac{21}{2},2,1,1,1,2,2]&\text{if }\ 2\nu=[2,2,1,1,1,2,2]\\ [2,k_{2}-\tfrac{17}{2},1,1,1,1,1,2]&\text{if }\ 2\nu=[2,1,1,1,1,1,2]\end{cases}\] _for \(\eta=k_{2}\omega_{2}\), none of which is integral. Consequently, the only possible \(J_{A_{7}}(\nu,\nu)\) in this case is again the trivial representation._ **Example 3.5**.: _Now we study Case (iv), where \(M\) consists of a single simple factor \(\mathcal{F}\) of Type \(A_{5}\).
As in the previous example, consider the case of \(J_{A_{5}}(\nu,\nu)\) with_ \[2\nu=\begin{cases}[2,2,1,2,2]&\text{and}\quad\eta=\omega_{2}+k_{7}\omega_{7}+ k_{8}\omega_{8}\\ [2,1,1,1,2]&\text{and}\quad\eta=\omega_{2}+k_{7}\omega_{7}+k_{8}\omega_{8},\ 2 \omega_{2}+k_{7}\omega_{7}+k_{8}\omega_{8}\end{cases} \tag{18}\] _Then the infinitesimal character \((\lambda_{L},\lambda_{R})\) of the induced module:_ \[\operatorname{Ind}_{MN}^{E_{8}}\left(J_{A_{5}}(\nu,\nu)\otimes\mathbb{C}_{ \tau(\eta)}\otimes\mathbf{1}\right) \tag{19}\] _has \(2\lambda_{L}\)-value equal to:_ \[2\lambda_{L}=2\nu+\eta=\begin{cases}\left[2,k_{2}-\tfrac{15}{2},2,1,2,2,k_{7}- \tfrac{9}{2},k_{8}\right]&\text{if}\ \ 2\nu=[2,2,1,2,2]\\ \left[2,k_{2}-\tfrac{11}{2},1,1,1,2,k_{7}-\tfrac{7}{2},k_{8}\right]&\text{if} \ \ 2\nu=[2,1,1,1,2]\end{cases},\] _for \(\eta=k_{2}\omega_{2}+k_{7}\omega_{7}+k_{8}\omega_{8}\). Hence it is not integral for all possible \(\eta\)'s._ **Remark 3.6**.: _The same argument can also be applied to Levi subgroups with more than one simple factor of Type \(A\). For instance, consider_ \[\widetilde{M}=\mathcal{F}\times\mathcal{F}^{\prime}\times(\mathbb{C}^{*})^{2},\] _where \(\mathcal{F}\) is of Type \(A_{5}\) as given in Example 3.5, and \(\mathcal{F}^{\prime}\) is of Type \(A_{1}\) corresponding to the node \(\alpha_{8}\) in the Dynkin diagram. By Remark 3.1, one only needs to consider the subquotient of the induced module_ \[\operatorname{Ind}_{\widetilde{M}\widetilde{N}}^{E_{8}}\left(J_{A_{5}}(\nu, \nu)\otimes J_{A_{1}}(\nu^{\prime},\nu^{\prime})\otimes\mathbb{C}_{\widetilde {\tau}(\eta)}\otimes\mathbf{1}\right), \tag{20}\] _where \(J_{A_{5}}(\nu,\nu)\) and \(\eta\) are as given in (18) with \(k_{8}=0\), and \(J_{A_{1}}(\nu^{\prime},\nu^{\prime})=\operatorname{triv}\) is the trivial representation.
As before, the infinitesimal character of the above induced module does not satisfy (6) for all possible \(k_{2},k_{7}\geq 1\)._ The arguments in this section can therefore be concluded as follows: **Corollary 3.7**.: _Let \(\pi=J(\lambda,-s\lambda)\in\widehat{G}\) be such that \(2\lambda\) is regular integral (e.g. \(\pi\in\widehat{G}^{d}\)), and the highest weight of its lowest \(K-\)type \(\eta=\{\lambda+s\lambda\}\) defines a Levi subgroup \(M\) (c.f. (9)) consisting only of Type \(A\) simple factors. Then \(\pi\) must be the lowest \(K-\)type subquotient of the unitarily induced module_ \[\operatorname{Ind}_{M^{\prime}N^{\prime}}^{E_{8}}\left(\operatorname{triv}_{A}\otimes\mathbb{C}_{\tau^{\prime}(\eta)}\otimes\mathbf{1}\right)\] _for some Levi subgroup \(M^{\prime}\leq M\), and \(\operatorname{triv}_{A}\) is the trivial representation of the Type \(A\) simple factors in \(M^{\prime}\)._ ## 4. Type \(D\) Levi subgroups We now deal with the case when \(M\) contains a Type \(D\) factor \(\mathcal{F}\). By Remark 3.1, if there is another simple factor in \(M\) (which is necessarily of Type \(A\)), the spherical representation corresponding to this Type \(A\) factor must be parabolically induced from a trivial module. Therefore, one can only focus on the spherical representation corresponding to \(\mathcal{F}\).
Fix the simple roots of \(\mathcal{F}\) by \(\beta_{1},\ldots,\beta_{p}\). By [BDW, Section 6], the non-unitarity certificates of \((\mathfrak{f},\mathcal{F}\cap K)-\)modules with half-integral regular infinitesimal character have highest weights:

* Type \(D_{7}\): \[\sigma_{0}^{7}:=2\beta_{1}+2\beta_{2}+2\beta_{3}+2\beta_{4}+2\beta_{5}+\beta_{6}+\beta_{7},\quad\sigma_{1}^{7}:=\beta_{1}+2\beta_{2}+2\beta_{3}+2\beta_{4}+2\beta_{5}+\beta_{6}+\beta_{7},\] \[\sigma_{2}^{7}:=\beta_{1}+2\beta_{2}+3\beta_{3}+4\beta_{4}+4\beta_{5}+2\beta_{6}+2\beta_{7},\quad\sigma_{3}^{7}:=\beta_{1}+2\beta_{2}+3\beta_{3}+4\beta_{4}+5\beta_{5}+3\beta_{6}+3\beta_{7}\]
* Type \(D_{6}\): \[\sigma_{0}^{6}:=2\beta_{1}+2\beta_{2}+2\beta_{3}+2\beta_{4}+\beta_{5}+\beta_{6},\quad\sigma_{1}^{6}:=\beta_{1}+2\beta_{2}+2\beta_{3}+2\beta_{4}+\beta_{5}+\beta_{6},\] \[\sigma_{2}^{6}:=\beta_{1}+2\beta_{2}+3\beta_{3}+4\beta_{4}+2\beta_{5}+2\beta_{6},\quad\sigma_{3}^{6}:=\beta_{1}+2\beta_{2}+3\beta_{3}+4\beta_{4}+3\beta_{5}+2\beta_{6}\]
* Type \(D_{5}\): \[\sigma_{0}^{5}:=2\beta_{1}+2\beta_{2}+2\beta_{3}+\beta_{4}+\beta_{5},\quad\sigma_{1}^{5}:=\beta_{1}+2\beta_{2}+2\beta_{3}+\beta_{4}+\beta_{5},\quad\sigma_{2}^{5}:=\beta_{1}+2\beta_{2}+3\beta_{3}+2\beta_{4}+2\beta_{5}\]
* Type \(D_{4}\): \[\sigma_{0}^{4}:=2\beta_{1}+2\beta_{2}+\beta_{3}+\beta_{4},\quad\sigma_{1}^{4}:=\beta_{1}+2\beta_{2}+\beta_{3}+\beta_{4},\quad\sigma_{2}^{4}:=\beta_{1}+2\beta_{2}+2\beta_{3}+\beta_{4}\]

As in the previous section, we list the possibilities of the Type \(D_{p}\) factor and the possibilities of \(\eta\) so that \(\sigma+\eta\) is bottom layer. For instance, for Type \(D_{7}\) (the simple roots \(\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5},\alpha_{6},\alpha_{7},\alpha_{8}\) of \(E_{8}\), with \(\alpha_{1}\) removed), the bottom layer \(\sigma+\eta\) are \(\sigma_{0}^{7}+k_{1}\omega_{1}\) (\(k_{1}\geq 1\)), \(\sigma_{1}^{7}+k_{1}\omega_{1}\) (\(k_{1}\geq 1\)), \(\sigma_{2}^{7}+k_{1}\omega_{1}\) (\(k_{1}\geq 2\)) and \(\sigma_{3}^{7}+k_{1}\omega_{1}\) (\(k_{1}\geq 3\)). As in the previous section, we only study the cases where the non-unitarity of the spherical module \(J_{D_{p}}(\nu,\nu)\) occurs at \(\sigma_{i}^{p}\), while \(\sigma_{i}^{p}+\eta\) is not \(M-\)bottom layer. ### Type \(D_{7}\) By the above, one only considers the spherical modules \(J_{D_{7}}(\nu_{7},\nu_{7})\) such that the first occurrence of opposite signatures is at the \(Spin(14)-\)types with highest weights \(\sigma_{2}^{7}\) and \(\sigma_{3}^{7}\) only. By the results of [BDW, Section 6], they are \[2\nu_{7}=\begin{cases}[2,2,2,1,1,1,1]&\text{and}\ \ \eta=\omega_{1},\\ [2,1,1,1,1,1,1]&\text{and}\ \ \eta=\omega_{1},2\omega_{1}\end{cases} \tag{21}\] In all the above cases, the infinitesimal character \((\lambda_{L},\lambda_{R})\) of the induced modules \[\operatorname{Ind}_{MN}^{E_{8}}\left(J_{D_{7}}(\nu_{7},\nu_{7})\otimes\mathbb{C}_{\tau(\eta)}\otimes\mathbf{1}\right) \tag{22}\] have \(2\lambda_{L}\)-value equal to: \[2\lambda_{L}=\begin{cases}[-\frac{25}{2},1,1,1,1,2,2,2]&\text{if}\ \ 2\nu_{\ell}=[2,2,2,1,1,1,1]\ \text{and}\ \eta=\omega_{1},\\ [-10,1,1,1,1,1,1,2]\sim[0,0,1,0,1,0,1,0]&\text{if}\ \ 2\nu_{\ell}=[2,1,1,1,1,1,1]\ \text{and}\ \eta=\omega_{1},\\ [-9,1,1,1,1,1,1,2]\sim[0,1,1,0,0,1,0,1]&\text{if}\ \ 2\nu_{\ell}=[2,1,1,1,1,1,1]\ \text{and}\ \eta=2\omega_{1},\end{cases}\] none of which is integral.
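Throughout this section parameters are written both in Dynkin coordinates \([c_{1},\ldots,c_{p}]\) (pairings with the simple coroots \(\beta_{i}^{\vee}\)) and in usual coordinates of \(D_{p}\). The translation is mechanical; a small sketch (the function name is ours; the Bourbaki conventions \(\beta_{i}=e_{i}-e_{i+1}\) for \(i<p\) and \(\beta_{p}=e_{p-1}+e_{p}\) are assumed):

```python
def dynkin_to_usual_D(c):
    """Convert a type-D_n weight from Dynkin coordinates c_i = <lam, beta_i^vee>
    to usual (Bourbaki) coordinates (x_1, ..., x_n)."""
    n = len(c)
    x = [0] * n
    x[n - 1] = (c[n - 1] - c[n - 2]) / 2   # x_n      from c_{n-1}, c_n
    x[n - 2] = (c[n - 1] + c[n - 2]) / 2   # x_{n-1}
    for i in range(n - 3, -1, -1):         # x_i = x_{i+1} + c_i
        x[i] = x[i + 1] + c[i]
    return x

# e.g. the D_7 parameters appearing in Remark 4.1 below:
print(dynkin_to_usual_D([1, 1, 1, 1, 1, 1, 1]))  # [6, 5, 4, 3, 2, 1, 0]
```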
**Remark 4.1**.: _If \(2\nu_{7}=[1,1,1,1,1,1,1]\), \([2,2,1,1,1,1,1]\), \([2,2,2,2,1,1,1]\) (\(=(6,5,4,3,2,1,0)\), \((8,6,4,3,2,1,0)\) and \((10,8,6,4,2,1,0)\) in usual coordinates of \(D_{7}\)), then \(J_{D_{7}}(\nu_{7},\nu_{7})\) are spherical unipotent representations obtained by theta-lift of the spherical metaplectic representations of \(Sp(6,\mathbb{C})\), \(Sp(4,\mathbb{C})\) and \(Sp(2,\mathbb{C})\) to \(SO(14,\mathbb{C})\) respectively. By [13], only the middle parameter gives a representation with nonzero Dirac cohomology._ _On the other hand, the infinitesimal characters \((\lambda_{L},\lambda_{R})\) of (22) corresponding to these three spherical unipotent representations have \(2\lambda_{L}-\)value equal to:_ \[[-\frac{21}{2}+k_{1},1,1,1,1,1,1,1],\quad[-12+k_{1},1,1,1,1,1,2,2],\quad[-\frac{31}{2}+k_{1},1,1,1,2,2,2,2]\] _respectively. Coincidentally, only the middle parameter can possibly satisfy (6). This observation also holds for the other spherical (as well as nonspherical) unipotent representations of Type \(D\) and \(E\) which we are going to study below (see the paragraphs after (24), (25), (27), (29) and Remark 5.1)._ ### Type \(D_{6}\) We carry out the same analysis as in the previous section. Suppose \(J_{D_{6}}(\nu_{6},\nu_{6})\) is such that the signatures of the Hermitian form are **both** negative at the \(Spin(12)-\)types \(V_{\mathfrak{so}(12)}(\sigma_{0}^{6})\) and \(V_{\mathfrak{so}(12)}(\sigma_{2}^{6})\) (or \(V_{\mathfrak{so}(12)}(\sigma_{0}^{6})\) and \(V_{\mathfrak{so}(12)}(\sigma_{3}^{6})\)); then for any \(\eta=k_{1}\omega_{1}+k_{8}\omega_{8}\), at least one of \(\eta+\sigma_{0}^{6}\) and \(\eta+\sigma_{2}^{6}\) (or \(\eta+\sigma_{3}^{6}\)) is \(M-\)bottom layer, which implies \(\pi\) is not unitary.
Consequently, we only consider \(J_{D_{6}}(\nu_{6},\nu_{6})\) such that the signature is indefinite at **exactly** one of \(V_{\mathfrak{so}(12)}(\sigma_{0}^{6})\), \(V_{\mathfrak{so}(12)}(\sigma_{2}^{6})\) or \(V_{\mathfrak{so}(12)}(\sigma_{3}^{6})\). By the results in [12], we are reduced to studying the following cases: * \(\sigma_{2}^{6}:2\nu_{6}=[2,2,2,2,1,1]\) and \(\eta=\omega_{1}+k_{8}\omega_{8}\); * \(\sigma_{3}^{6}:2\nu_{6}=[2,2,1,1,1,1]\) and \(\eta=\omega_{1}+k_{8}\omega_{8},2\omega_{1}+k_{8}\omega_{8}\); * \(\sigma_{0}^{6}:2\nu_{6}=\) \[[1,2,2,2,2,2],\ [1,1,2,2,2],\ [2,2,1,1,2,2],\ [2,2,2,2,1,1],\] \[[1,1,1,2,2],\ [1,2,1,1,2,2],\ [1,2,2,1,1,1],\ [1,1,2,1,1,1],\ [1,1,1,1,2,2]\] and \(\eta=k_{1}\omega_{1}+\omega_{8}\) (c.f. [13, Section 6.3(c)]). In the first two cases, the infinitesimal character \((\lambda_{L},\lambda_{R})\) of the induced module \[\operatorname{Ind}_{MN}^{E_{8}}\left(J_{D_{6}}(\nu_{6},\nu_{6})\otimes\mathbb{C}_{\tau(\eta)}\otimes\mathbf{1}\right)\] has \(2\lambda_{L}\)-value equal to: \[2\lambda_{L}=\begin{cases}[-\frac{25}{2},1,1,2,2,2,2,k_{8}-8]&\text{if}\ \ 2\nu_{\ell}=[2,2,2,2,1,1]\ \text{and}\ \eta=\omega_{1}+k_{8}\omega_{8},\\ [-8,1,1,1,1,2,2,k_{8}-7]&\text{if}\ \ 2\nu_{\ell}=[2,2,1,1,1,1]\ \text{and}\ \eta=\omega_{1}+k_{8}\omega_{8},\\ [-7,1,1,1,1,2,2,k_{8}-7]&\text{if}\ \ 2\nu_{\ell}=[2,2,1,1,1,1]\ \text{and}\ \eta=2\omega_{1}+k_{8}\omega_{8}.\end{cases}\] One can check that none of the above \(2\lambda_{L}\) is regular integral. More precisely, the first parameter is not integral.
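The singularity of the second and third parameters, treated next, can also be verified numerically: each pairs to zero with a half-integer root of \(E_{8}\). In the sketch below we take the two pairing vectors to be the \(E_{8}\) roots \(\frac{1}{2}(1,1,-1,-1,-1,1,-1,1)\) and \(\frac{1}{2}(1,-1,1,-1,1,-1,-1,1)\) (roots of this form carry an even number of minus signs); the helper names are ours:

```python
from fractions import Fraction

def pair(x, y):
    # standard inner product in the usual coordinates of E8
    return sum(a * b for a, b in zip(x, y))

def half_root(*signs):
    # E8 roots of the form (1/2)(s_1, ..., s_8), s_i = +-1,
    # with an even number of minus signs
    assert len(signs) == 8 and signs.count(-1) % 2 == 0
    return [Fraction(s, 2) for s in signs]

alpha = half_root(1, 1, -1, -1, -1, 1, -1, 1)
beta = half_root(1, -1, 1, -1, 1, -1, -1, 1)

# Both pairings vanish identically in k8, so both parameters are singular:
for k8 in range(1, 4):
    print(pair([0, 1, 2, 3, 5, 7, k8, k8 + 2], alpha),
          pair([0, 1, 2, 3, 5, 7, k8, k8 + 4], beta))
```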
As for the second parameter, \([-8,1,1,1,1,2,2,k_{8}-7]=(0,1,2,3,5,7,k_{8},k_{8}+2)\) is singular for all \(k_{8}>0\) since \[\langle(0,1,2,3,5,7,k_{8},k_{8}+2),\frac{1}{2}(1,1,-1,-1,-1,1,-1,1)\rangle=0.\] Similarly, the last parameter \([-7,1,1,1,1,2,2,k_{8}-7]=(0,1,2,3,5,7,k_{8},k_{8}+4)\) is singular for all \(k_{8}>0\), since \[\langle(0,1,2,3,5,7,k_{8},k_{8}+4),\frac{1}{2}(1,-1,1,-1,1,-1,-1,1)\rangle=0.\] We are left to study the last case when the non-unitarity certificate is \(V_{\mathfrak{so}(12)}(\sigma_{0}^{6})\). We extend the Levi subgroup \(M\) to \(M^{\prime}\) of Type \(D_{7}\), and consider the induced module \[\operatorname{Ind}_{M^{\prime}N^{\prime}}^{E_{8}}\left(J_{D_{7}}(\nu_{6}^{+};\nu_{6}^{-})\otimes\mathbb{C}_{\tau^{\prime}(\eta)}\otimes\mathbf{1}\right) \tag{23}\] where \(\nu_{6}^{\pm}=\left(\pm\frac{1}{2},\nu_{6}\right)\) in the usual coordinates of \(D_{7}\), and \(\tau^{\prime}(\eta)\) is chosen such that its lowest \(K-\)type is \(V_{\mathfrak{k}}(k_{1}\omega_{1}+\omega_{8})=V_{\mathfrak{k}}(\eta)\) (note that \(\eta=(0,0,0,0,0,0,1,2k_{1}+1)\) in usual Bourbaki coordinates). We begin by considering the following two choices of \(2\nu_{6}\): \[2\nu_{6}=[1,1,1,1,2,2]=(6,5,4,3,2,0)\quad(\text{or }[2,2,1,1,2,2]=(8,6,4,3,2,0)),\] corresponding to the module \(J_{D_{7}}(\nu_{6}^{+};\nu_{6}^{-})\) in (23) with \[\begin{split} J_{D_{7}}\left(\frac{1}{2}(6,5,4,3,2,1,0);\frac{1}{2}(6,5,4,3,2,-1,0)\right)\\ (\text{or }J_{D_{7}}\left(\frac{1}{2}(8,6,4,3,2,1,0);\frac{1}{2}(8,6,4,3,2,-1,0)\right)).\end{split} \tag{24}\] Both of them are (non-spherical) unipotent representations obtained by the theta-lift of the non-spherical metaplectic representations in \(Sp(4,\mathbb{C})\) (or \(Sp(6,\mathbb{C})\)) to \(SO(14,\mathbb{C})\). However, in the second case, \(2\lambda_{L}=2\nu_{6}+\eta=[2,2,1,1,2,2]+k_{1}\omega_{1}+\omega_{8}\) is not integral for all \(k_{1}\geq 1\).
Therefore, only the first module of (24) (which is in \(\widehat{M^{\prime}}^{d}\) by [1]) contributes to the Dirac series. For the other choices of \(2\nu_{6}\), the module \(J_{D_{7}}(\nu_{6}^{+};\nu_{6}^{-})\) in (23) has indefinite form on \(V_{\mathfrak{so}(14)}(1,0,0,0,0,0,0)\) and \(V_{\mathfrak{so}(14)}(2,1,0,0,0,0,0)\) by [BDW, Section 6.4]. These \(M^{\prime}\cap K-\)types are \(M^{\prime}-\)bottom layer for all \(k_{1}\geq 1\), since the weight \((0,0,0,0,0,1,2,2k_{1}+1)\) is dominant in \(E_{8}\) for all \(k_{1}\geq 1\). Consequently, the induced module (23) and its lowest \(K-\)type subquotient \(\pi\) are both nonunitary. ### Type \(D_{5}\) As in the previous section, we only study \(J_{D_{5}}(\nu_{5},\nu_{5})\) such that the signature of the Hermitian form is indefinite at **exactly** one of \(V_{\mathfrak{so}(10)}(\sigma_{0}^{5})\) or \(V_{\mathfrak{so}(10)}(\sigma_{2}^{5})\): * \(\sigma_{2}^{5}:2\nu_{5}=[2,2,2,1,1]\) and \(\eta=\omega_{1}+k_{7}\omega_{7}+k_{8}\omega_{8}\); * \(\sigma_{0}^{5}:2\nu_{5}=\) \[[1,2,2,2,2],\ [1,1,2,2,2],\ [2,1,1,2,2],\ [1,1,1,2,2],\ [1,2,2,1,1,1]\] and \(\eta=k_{1}\omega_{1}+\omega_{7}+k_{8}\omega_{8}\). The analysis is the same as in the Type \(D_{6}\) case, where the \(2\lambda_{L}\) parameter is not regular integral, or the irreducible subquotient is not unitary by bottom layer arguments.
The interesting cases are \(2\nu_{5}=[2,1,1,2,2]=(6,4,3,2,0)\) and \([1,1,1,2,2]=(5,4,3,2,0)\), where the induced modules \[\begin{split}&\operatorname{Ind}_{M^{\prime}N^{\prime}}^{E_{8}}\left(J_{D_{6}}\left(\frac{1}{2}(6,4,3,2,1,0);\frac{1}{2}(6,4,3,2,-1,0)\right)\otimes\mathbb{C}_{\tau^{\prime}(\eta)}\otimes\mathbf{1}\right);\\ &\operatorname{Ind}_{M^{\prime}N^{\prime}}^{E_{8}}\left(J_{D_{6}}\left(\frac{1}{2}(5,4,3,2,1,0);\frac{1}{2}(5,4,3,2,-1,0)\right)\otimes\mathbb{C}_{\tau^{\prime}(\eta)}\otimes\mathbf{1}\right)\end{split} \tag{25}\] (here \(M^{\prime}\) has a single simple factor of Type \(D_{6}\)) are both unitary. Indeed, the \(D_{6}-\)factors in the above equation are theta lifts from the non-spherical metaplectic representations of \(Sp(4,\mathbb{C})\) and \(Sp(6,\mathbb{C})\) to \(SO(12,\mathbb{C})\). However, in the first case, \(2\lambda_{L}=[2,1,1,2,2]+k_{1}\omega_{1}+\omega_{7}+k_{8}\omega_{8}\) is not integral for all \(k_{1},k_{8}\geq 1\). Consequently, only the second module in (25) (whose inducing module \(J_{D_{6}}\left(\frac{1}{2}(5,4,3,2,1,0);\frac{1}{2}(5,4,3,2,-1,0)\right)\) is in \(\widehat{M^{\prime}}^{d}\) by [1]) contributes to \(\widehat{G}^{d}\). As in Remark 3.6, the above arguments also work for \(k_{8}=0\), i.e. the Levi subgroup \(M\) has an extra simple factor of Type \(A_{1}\) corresponding to the simple root \(\alpha_{8}\). In such a case, the extra \(A_{1}\) factor must be the trivial module. ### Type \(D_{5}^{\prime}\) As before, one only considers the spherical module \(J_{D_{5}^{\prime}}(\nu_{5},\nu_{5})\) such that the first occurrence of opposite signature is at the \(Spin(10)-\)type with highest weight \(\sigma_{0}^{5}\). By the results of [BDW], there is only one possibility: \[2\nu_{5}=[2,1,1,1,1]\quad\text{and}\quad\eta=\omega_{6}+k_{7}\omega_{7}+k_{8}\omega_{8}.
\tag{26}\] In this case, the infinitesimal character \((\lambda_{L},\lambda_{R})\) of the induced module \[\operatorname{Ind}_{MN}^{E_{8}}\left(J_{D_{5}^{\prime}}(\nu_{5},\nu_{5})\otimes\mathbb{C}_{\tau(\eta)}\otimes\mathbf{1}\right)\] has \(2\lambda_{L}\) equal to a non-integral weight \(2\lambda_{L}=[2,1,1,1,1,-\frac{9}{2},k_{7},k_{8}]\) for all \(k_{7},k_{8}\geq 0\). ### Type \(D_{4}\) We study \(J_{D_{4}}(\nu_{4},\nu_{4})\) such that the signature of the Hermitian form is indefinite at **exactly** one of \(V_{\mathfrak{so}(8)}(\sigma_{0}^{4})\) or \(V_{\mathfrak{so}(8)}(\sigma_{2}^{4})\): * \(\sigma_{2}^{4}:2\nu_{4}=[2,2,1,1]\) and \(\eta=\omega_{1}+k_{6}\omega_{6}+k_{7}\omega_{7}+k_{8}\omega_{8}\); * \(\sigma_{0}^{4}:2\nu_{4}=[1,2,2,2],[1,1,2,2]\) and \(\eta=k_{1}\omega_{1}+\omega_{6}+k_{7}\omega_{7}+k_{8}\omega_{8}\). As in the analysis in Type \(D_{5}\) and \(D_{6}\) above, the only possibility of \(\pi\) being unitary is when \(2\nu_{4}=[1,1,2,2]=(4,3,2,0)\), where the induced module \[\operatorname{Ind}_{M^{\prime}N^{\prime}}^{E_{8}}\left(J_{D_{5}}\left(\frac{1}{2}(4,3,2,1,0);\frac{1}{2}(4,3,2,-1,0)\right)\otimes\mathbb{C}_{\tau^{\prime}(\eta)}\otimes\mathbf{1}\right) \tag{27}\] (here \(M^{\prime}\) has a single simple factor of Type \(D_{5}\)) is unitary with no Dirac cohomology: the \(D_{5}-\)factor in the above equation is the theta lift from the non-spherical metaplectic representation of \(Sp(4,\mathbb{C})\) to \(SO(10,\mathbb{C})\). However, \(2\lambda_{L}=[1,1,2,2]+k_{1}\omega_{1}+\omega_{6}+k_{7}\omega_{7}+k_{8}\omega_{8}\) is not integral, so it does not contribute to \(\widehat{G}^{d}\). As in Remark 3.6, the same conclusion as in the previous section holds if the Levi subgroup has an extra Type \(A\) factor coming from the simple roots \(\alpha_{7}\) and/or \(\alpha_{8}\). So one can conclude the following: **Corollary 4.2**.: _Let \(\pi=J(\lambda,-s\lambda)\in\widehat{G}\) be such that \(2\lambda\) is regular integral (e.g.
\(\pi\in\widehat{G}^{d}\)), and the highest weight of its lowest \(K-\)type \(\eta=\{\lambda+s\lambda\}\) defines a Levi subgroup \(M\) (c.f. (9)) consisting only of a Type \(D\) simple factor and possibly a Type \(A\) simple factor. Then \(\pi\) must be the lowest \(K-\)type subquotient of the unitarily induced module_ \[\operatorname{Ind}_{M^{\prime}N^{\prime}}^{E_{8}}\left(\pi_{D}^{unip,d}\otimes\operatorname{triv}_{A}\otimes\mathbb{C}_{\tau^{\prime}(\eta)}\otimes\mathbf{1}\right),\] _where \(M^{\prime}\) is a Levi subgroup consisting of a Type \(D\) simple factor and possibly a Type \(A\) simple factor, \(\pi_{D}^{unip,d}\) is a unipotent representation of Type \(D\) with nonzero Dirac cohomology, and \(\operatorname{triv}_{A}\) is the trivial representation of Type \(A\)._ ## 5. Type \(E\) Levi subgroups We now study the case when there is a Type \(E_{6}\) or \(E_{7}\) factor in the Levi subgroup \(M\) of \(G\). Unlike the other Levi subgroups, one does not have the non-unitarity certificates for the spherical modules of Type \(E\). Fix the simple roots of \(E_{i}\) by \(\beta_{i}=\alpha_{i}\) (\(1\leq i\leq 8\)), where \(\alpha_{i}\) are the simple roots of \(E_{8}\) given in (7). We proceed as follows: 1. Using mathematica, we check the positive definiteness of the Hermitian, spherical modules \(J_{E_{i}}(\nu_{i},\nu_{i})\) with \(2\langle\nu_{i},\beta_{i}\rangle=1\) or \(2\) on the \(K-\)types appearing in \(\mathsf{adj}_{i}\otimes\mathsf{adj}_{i}\), where \(\mathsf{adj}_{i}\) is the adjoint \(K-\)type of Type \(E_{i}\).
More precisely, the highest weights of the \(K-\)types appearing in \(\mathsf{adj}_{i}\otimes\mathsf{adj}_{i}\) for \(i=6,7\) are:

* Type \(E_{6}\): \[\omega_{2}=\beta_{1}+2\beta_{2}+2\beta_{3}+3\beta_{4}+2\beta_{5}+\beta_{6},\quad\omega_{1}+\omega_{6}=2\beta_{1}+2\beta_{2}+3\beta_{3}+4\beta_{4}+3\beta_{5}+2\beta_{6},\] \[2\omega_{2}=2\beta_{1}+4\beta_{2}+4\beta_{3}+6\beta_{4}+4\beta_{5}+2\beta_{6},\quad\omega_{4}=2\beta_{1}+3\beta_{2}+4\beta_{3}+6\beta_{4}+4\beta_{5}+2\beta_{6}\]
* Type \(E_{7}\): \[\omega_{1}=2\beta_{1}+2\beta_{2}+3\beta_{3}+4\beta_{4}+3\beta_{5}+2\beta_{6}+\beta_{7},\quad\omega_{3}=3\beta_{1}+4\beta_{2}+6\beta_{3}+8\beta_{4}+6\beta_{5}+4\beta_{6}+2\beta_{7},\] \[\omega_{6}=2\beta_{1}+3\beta_{2}+4\beta_{3}+6\beta_{4}+5\beta_{5}+4\beta_{6}+2\beta_{7},\quad 2\omega_{1}=4\beta_{1}+4\beta_{2}+6\beta_{3}+8\beta_{4}+6\beta_{5}+4\beta_{6}+2\beta_{7}\]

2. Most \(J_{E_{i}}(\nu_{i},\nu_{i})\) have indefinite forms on the level of \(\operatorname{\mathsf{adj}}_{i}\otimes\operatorname{\mathsf{adj}}_{i}\). These modules are automatically nonunitary for \(i=8\).
As for \(i=6,7\), consider the lowest \(K-\)type subquotient \(\pi\) of \[\operatorname{Ind}_{MN}^{E_{8}}\left(J_{E_{i}}(\nu_{i},\nu_{i})\otimes\mathbb{ C}_{\tau(\eta)}\otimes\mathbf{1}\right),\quad\eta=\begin{cases}k\omega_{7}+l \omega_{8}&\text{if $i=6$}\\ k\omega_{8}&\text{if $i=7$}\end{cases}\] Since the coefficients of \(\beta_{i}\) in (i) are equal to \(2\) for both \(i=6\) and \(7\), all the \(M\cap K-\)types in \(\operatorname{\mathsf{adj}}_{i}\otimes\operatorname{\mathsf{adj}}_{i}\) are \(M-\)bottom layer for \(k\geq 2\), and hence \(\pi\) must also be nonunitary. 3. By (ii), one only needs to consider \(\eta_{1}=\omega_{7}+l\omega_{8}\) (for \(i=6\)) or \(\omega_{8}\) (for \(i=7\)). We check whether \(2\lambda_{L}=2\nu_{i}+\eta_{1}\) satisfies (6) (we will see below that there is no such \(\nu_{i}\) for \(i=7\)). If not, it does not appear in \(\widehat{G}^{d}\). Otherwise, we will study these modules on a case-by-case basis. 4. For the remaining (very few) representations with positive Hermitian form on the level of \(\operatorname{\mathsf{adj}}_{i}\otimes\operatorname{\mathsf{adj}}_{i}\) in (i), we either identify them as unipotent representations, or we apply \(\operatorname{\mathsf{atlas}}\) to conclude that they are not in the Dirac series using similar arguments described in (ii) - (iii). ### Type \(E_{8}\) We use \(\operatorname{\mathtt{mathematica}}\) to compute the signatures of the \(2^{8}=256\) choices of Hermitian, spherical modules \(J_{E_{8}}(\nu_{8},\nu_{8})\) on the \(K-\)types appearing in \(\operatorname{\mathsf{adj}}_{8}\otimes\operatorname{\mathsf{adj}}_{8}\). 
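The \(2^{8}=256\) candidates are simply the sign/size choices \(2\nu_{8}\in\{1,2\}^{8}\), one entry per simple root of \(E_{8}\). Enumerating them is immediate (illustrative only; the signature computation itself requires the machinery of [BDW] and is not reproduced here):

```python
from itertools import product

# all Hermitian, spherical candidate parameters 2*nu_8 in {1,2}^8
params = [list(p) for p in product([1, 2], repeat=8)]
print(len(params))
```

Of these, only the three parameters listed next in the text pass the definiteness test.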
It turns out that only \(3\) of them have definite Hermitian form: \[2\nu_{8}=[1,1,1,1,1,1,1,1],\quad 2\nu_{8}=[1,1,1,1,1,1,2,2],\quad 2\nu_{8}=[2,2,2,2,2,2,2,2].\] Indeed, these \(\nu_{8}\)-parameters correspond to spherical unipotent representations attached to the nilpotent orbits \(4A_{1}\) and \(3A_{1}\) (using the Bala-Carter notation), along with the trivial representation, respectively. Although we cannot use \(\operatorname{\mathtt{atlas}}\) to verify their unitarity, they are widely believed to be unitary. Assuming the unitarity of these modules, we now check that all these modules are in \(\widehat{G}^{d}\). The first module \(J_{E_{8}}(\frac{1}{2}[1,1,1,1,1,1,1,1],\frac{1}{2}[1,1,1,1,1,1,1,1])\) is the model representation, whose \(K-\)spectrum is given in [AHV] by \[J_{E_{8}}(\frac{1}{2}[1,1,1,1,1,1,1,1],\frac{1}{2}[1,1,1,1,1,1,1,1])|_{K}=\bigoplus_{a,b,c,d,e,f,g,h\geq 0}V_{\mathfrak{k}}([a,b,c,d,e,f,g,h]).\] By (6), \(V_{\mathfrak{k}}(\tau)\) contributes to Dirac cohomology if and only if \(\tau=2\lambda_{L}-\rho=0\). Note that \(V_{\mathfrak{k}}([a,b,c,d,e,f,g,h])\otimes S_{G}=V_{\mathfrak{k}}([a,b,c,d,e,f,g,h])\otimes V_{\mathfrak{k}}(\rho)\) contains \(V_{\mathfrak{k}}(0)\) if and only if \[[a,b,c,d,e,f,g,h]=[1,1,1,1,1,1,1,1]=\rho.\] So \(V_{\mathfrak{k}}([1,1,1,1,1,1,1,1])\) is the only possible \(K-\)type contributing to its Dirac cohomology. As for \(J_{E_{8}}(\frac{1}{2}[1,1,1,1,1,1,2,2],\frac{1}{2}[1,1,1,1,1,1,2,2])\), the work of [MG] implies that \[J_{E_{8}}(\frac{1}{2}[1,1,1,1,1,1,2,2],\frac{1}{2}[1,1,1,1,1,1,2,2])|_{K}=\bigoplus_{a,b,c,d\geq 0}V_{\mathfrak{k}}([a,0,0,0,0,b,c,d]).\] As above, the only possible \(\widetilde{K}-\)type contributing to Dirac cohomology is \(V_{\mathfrak{k}}(2\lambda_{L}-\rho)=V_{\mathfrak{k}}([0,0,0,0,0,0,1,1])\).
One can check (through mathematica for instance) that the only possibility for \(V_{\mathfrak{k}}([a,0,0,0,0,b,c,d])\otimes S_{G}\) containing \(V_{\mathfrak{k}}([0,0,0,0,0,0,1,1])\) is when \[[a,0,0,0,0,b,c,d]=[4,0,0,0,0,4,1,1].\] Hence this representation is also in \(\widehat{G}^{d}\), and \(V_{\mathfrak{k}}([4,0,0,0,0,4,1,1])\) is the only \(K-\)type contributing to its Dirac cohomology. ### Type \(E_{7}\) In this case, there are \(2^{7}=128\) choices of Hermitian, spherical modules \(J_{E_{7}}(\nu_{7},\nu_{7})\). Among them, \(9\) have definite Hermitian form on the level of \(\mathtt{adj}_{7}\otimes\mathtt{adj}_{7}\). Furthermore, only \(3\) of them give integral \(2\lambda_{L}=2\nu_{7}+k\omega_{8}\) for \(k\geq 1\): \[2\nu_{7}=[1,1,1,1,1,1,2],\quad[2,1,1,1,1,1,2],\quad[2,2,2,2,2,2,2].\] Note that the first and third parameters above give the spherical unipotent representation corresponding to the orbit \(3A_{1}^{\prime}\) (the \(21^{st}\) entry in [1, Table 4]) and the trivial representation respectively. So the lowest \(K-\)type subquotient \(\pi\) of \(\operatorname{Ind}_{MN}^{E_{8}}(J_{E_{7}}(\nu_{7};\nu_{7})\otimes\mathbb{C}_{\tau(\eta)}\otimes\mathbf{1})\) contributes to \(\widehat{G}^{d}\) whenever \(2\lambda_{L}\) is regular. Meanwhile, one can check by atlas that the module \[J_{E_{7}}\left(\frac{1}{2}[2,1,1,1,1,1,2],\frac{1}{2}[2,1,1,1,1,1,2]\right)\] has indefinite form on \(V_{\mathfrak{e}_{7}\cap\mathfrak{k}}(2\omega_{7})\) (this \(K-\)type does not appear in \(\mathtt{adj}_{7}\otimes\mathtt{adj}_{7}\)). So the lowest \(K-\)type subquotient of \(\operatorname{Ind}_{MN}^{E_{8}}(J_{E_{7}}(\frac{1}{2}[2,1,1,1,1,1,2],\frac{1}{2}[2,1,1,1,1,1,2])\otimes\mathbb{C}_{\tau(\eta)}\otimes\mathbf{1})\) cannot be in \(\widehat{G}^{d}\) for all \(k\geq 1\), since 1. for \(k=1\), \(2\lambda_{L}=[2,1,1,1,1,1,2,-7]\) is singular, so it does not satisfy (6); 2.
for \(k\geq 2\), \(V_{\mathfrak{e}_{7}\cap\mathfrak{k}}(2\omega_{7})\) is \(M-\)bottom layer in the above induced module, so its subquotient is not unitary. As for the \(128-9=119\) remaining modules, one can check that \(2\lambda_{L}=2\nu_{7}+\omega_{8}\) is either non-integral, or it is not regular. So none of them contributes to the Dirac series \(\widehat{G}^{d}\). **Remark 5.1**.: _When \(2\nu_{7}=[1,1,1,1,1,1,1]\) or \([2,1,2,1,1,1,1]\), our calculations above imply that \(2\lambda_{L}=2\nu_{7}+k\omega_{8}\) is not regular integral for all \(k\geq 1\). Note that the spherical modules \(J_{E_{7}}(\nu_{7},\nu_{7})\) corresponding to these parameters are unipotent representations attached to the nilpotent orbits \(4A_{1}\) and \(3A_{1}^{\prime\prime}\) respectively (see [1, Section 6] for details). It is shown in [1] that these unipotent representations have no Dirac cohomology in \(E_{7}\)._ ### Type \(E_{6}\) As above, there are \(2^{4}=16\) choices of Hermitian, spherical modules \(J_{E_{6}}(\nu_{6},\nu_{6})\), namely \[2\nu_{6}=[a,b,c,d,c,a],\quad a,b,c,d\in\{1,2\}.\] Among them, \(J_{E_{6}}(\nu_{6},\nu_{6})\) has definite Hermitian form on the level of \(\mathtt{adj}_{6}\otimes\mathtt{adj}_{6}\) only when \(2\nu_{6}=[1,1,1,1,1,1]\) or \([2,2,2,2,2,2]\). They are the unipotent representation corresponding to the model orbit \(3A_{1}\) and the trivial representation respectively.
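The count \(2^{4}=16\) comes from the symmetry constraint: the Hermitian spherical parameters of \(E_{6}\) considered here are exactly those fixed by the diagram involution, i.e. of the form \([a,b,c,d,c,a]\). A quick enumeration (illustrative only; the variable name is ours):

```python
from itertools import product

# parameters 2*nu_6 in {1,2}^6 fixed by the E6 diagram involution
hermitian = [[a, b, c, d, c, a] for a, b, c, d in product([1, 2], repeat=4)]
print(len(hermitian))  # 16
```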
For the remaining \(16-2=14\) choices of \(\nu_{6}\), only the following make \(2\lambda_{L}=2\nu_{6}+\omega_{7}+l\omega_{8}\) integral: \[\begin{split} 2\nu_{6}&=[2,1,2,1,2,2],\quad 2\lambda_{L}=[1,1,1,1,1,1,1,l-12];\\ 2\nu_{6}&=[2,1,2,2,2,2],\quad 2\lambda_{L}=[1,1,2,1,1,1,1,l-14];\\ 2\nu_{6}&=[2,2,2,1,2,2],\quad 2\lambda_{L}=[2,1,1,1,1,1,1,l-13]\end{split} \tag{28}\] As in the Levi Type \(D_{6}\) case, we extend the Levi subgroup to \(M^{\prime}\) of Type \(E_{7}\) and consider the induced module \[\operatorname{Ind}_{M^{\prime}N^{\prime}}^{E_{8}}\left(J_{E_{7}}(\nu_{6}^{+};\nu_{6}^{-})\otimes\mathbb{C}_{\tau^{\prime}(\eta)}\otimes\mathbf{1}\right) \tag{29}\] with \(\nu_{6}^{\pm}=\lambda_{L}\pm\frac{1}{2}\omega_{7}\), whose lowest \(K-\)type is equal to \(V_{\mathfrak{k}}(\omega_{7}+l\omega_{8})=V_{\mathfrak{k}}(\eta)\). Note that \(\eta=(0,0,0,0,0,1,l+1,l+2)\) in usual coordinates. For our choices of \(l\) above, the \(M^{\prime}\cap K-\)type \(V_{\mathfrak{k}}(\omega_{1}+\omega_{7})\) in \(J_{E_{7}}(\nu_{6}^{+};\nu_{6}^{-})\) is \(M^{\prime}-\)bottom layer. More precisely, the \(K-\)type of highest weight \((0,0,0,0,0,1,l,l+3)\) is always dominant for \(l\geq 1\). So it occurs with the same multiplicities and signatures as the \(M^{\prime}\cap K-\)type \(V_{\mathfrak{k}}(\omega_{1}+\omega_{7})\) in \(J_{E_{7}}(\nu_{6}^{+};\nu_{6}^{-})\). We now study (29) for the three parameters of \(\nu_{6}\) in (28). For the first parameter, the module \(J_{E_{7}}(\nu_{6}^{+};\nu_{6}^{-})\) in (29) with \(2\nu_{6}^{+}=[1,1,1,1,1,1,1]\) is equal to the \(19^{th}\) entry of [13, Table 4], which is a (nonspherical) unipotent representation attached to the nilpotent orbit \(4A_{1}\) with nonzero Dirac cohomology. Consequently, the first parameter yields a representation in the Dirac series of \(E_{8}\) whenever \(2\lambda_{L}\) satisfies (6).
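The dominance claim for \((0,0,0,0,0,1,l,l+3)\) can be checked directly against the simple roots of \(E_{8}\). A sketch, assuming the "usual coordinates" are the standard Bourbaki realization of \(E_{8}\) (all simple roots have squared length \(2\), so pairing with a root equals pairing with its coroot):

```python
from fractions import Fraction

half = Fraction(1, 2)
# Simple roots of E8 in the standard Bourbaki coordinates.
simple_roots = [
    [half, -half, -half, -half, -half, -half, -half, half],  # alpha_1
    [1, 1, 0, 0, 0, 0, 0, 0],                                # alpha_2
    [-1, 1, 0, 0, 0, 0, 0, 0],                               # alpha_3
    [0, -1, 1, 0, 0, 0, 0, 0],                               # alpha_4
    [0, 0, -1, 1, 0, 0, 0, 0],                               # alpha_5
    [0, 0, 0, -1, 1, 0, 0, 0],                               # alpha_6
    [0, 0, 0, 0, -1, 1, 0, 0],                               # alpha_7
    [0, 0, 0, 0, 0, -1, 1, 0],                               # alpha_8
]

def is_dominant(lam):
    # dominant iff the pairing with every simple (co)root is >= 0
    return all(sum(x * a for x, a in zip(lam, r)) >= 0 for r in simple_roots)

print(all(is_dominant([0, 0, 0, 0, 0, 1, l, l + 3]) for l in range(1, 10)))  # True
```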
As for the last two parameters, one can check directly in atlas that the module \(J_{E_{7}}(\nu_{6}^{+};\nu_{6}^{-})\) has indefinite signatures on the \(M^{\prime}\cap K-\)types \(V_{\mathfrak{k}}(\omega_{7})\) and \(V_{\mathfrak{k}}(\omega_{1}+\omega_{7})\). So the discussions in the previous paragraph imply that the irreducible lowest \(K-\)type subquotients of (29) corresponding to these two parameters are not unitary. As in the case of Type \(A\) and Type \(D\), we have the following: **Corollary 5.2**.: _Let \(\pi=J(\lambda,-s\lambda)\in\widehat{G}\) be such that \(2\lambda\) is regular integral (e.g. \(\pi\in\widehat{G}^{d}\)), and the highest weight of its lowest \(K-\)type \(\eta=\{\lambda+s\lambda\}\) defines a Levi subgroup \(M\) (c.f. (9)) consisting only of a Type \(E\) simple factor and possibly a Type \(A\) simple factor. Then \(\pi\) must be a subquotient of the unitarily induced module_ \[\operatorname{Ind}_{M^{\prime}N^{\prime}}^{E_{8}}\left(\pi_{E}^{\text{unip},d}\otimes\operatorname{triv}_{A}\otimes\mathbb{C}_{\tau^{\prime}(\eta)}\otimes\mathbf{1}\right),\] _where \(M^{\prime}\) is a Levi subgroup consisting of a Type \(E\) simple factor and possibly a Type \(A\) simple factor, \(\pi_{E}^{\text{unip},d}\) is a unipotent representation of Type \(E\) with nonzero Dirac cohomology, and \(\operatorname{triv}_{A}\) is the trivial representation of Type \(A\)._ ## Acknowledgements The second author would like to thank Chao-Ping Dong for several atlas calculations in \(E_{7}\), which verify some results of our work. Barbasch is supported by NSF grant 2000254. Wong is supported by Shenzhen Science and Technology Innovation Committee grant (no. 20220818094918001).
2307.12520
Lost In Translation: Generating Adversarial Examples Robust to Round-Trip Translation
Language Models today provide a high accuracy across a large number of downstream tasks. However, they remain susceptible to adversarial attacks, particularly against those where the adversarial examples maintain considerable similarity to the original text. Given the multilingual nature of text, the effectiveness of adversarial examples across translations and how machine translations can improve the robustness of adversarial examples remain largely unexplored. In this paper, we present a comprehensive study on the robustness of current text adversarial attacks to round-trip translation. We demonstrate that 6 state-of-the-art text-based adversarial attacks do not maintain their efficacy after round-trip translation. Furthermore, we introduce an intervention-based solution to this problem, by integrating Machine Translation into the process of adversarial example generation and demonstrating increased robustness to round-trip translation. Our results indicate that finding adversarial examples robust to translation can help identify the insufficiency of language models that is common across languages, and motivate further research into multilingual adversarial attacks.
Neel Bhandari, Pin-Yu Chen
2023-07-24T04:29:43Z
http://arxiv.org/abs/2307.12520v1
# Lost In Translation: Generating Adversarial Examples Robust to Round-Trip Translation ###### Abstract Language Models today provide a high accuracy across a large number of downstream tasks. However, they remain susceptible to adversarial attacks, particularly against those where the adversarial examples maintain considerable similarity to the original text. Given the multilingual nature of text, the effectiveness of adversarial examples across translations and how machine translations can improve the robustness of adversarial examples remain largely unexplored. In this paper, we present a comprehensive study on the robustness of current text adversarial attacks to round-trip translation. We demonstrate that 6 state-of-the-art text-based adversarial attacks do not maintain their efficacy after round-trip translation. Furthermore, we introduce an intervention-based solution 1 to this problem, by integrating Machine Translation into the process of adversarial example generation and demonstrating increased robustness to round-trip translation. Our results indicate that finding adversarial examples robust to translation can help identify the insufficiency of language models that is common across languages, and motivate further research into multilingual adversarial attacks. Footnote 1: Code for the paper: [https://github.com/neelbhandari6/NMT_Text_Attack](https://github.com/neelbhandari6/NMT_Text_Attack). Emails: Neel Bhandari: [email protected] Pin-Yu Chen: [email protected] ## 1 Introduction Language models, despite their remarkable success across tasks, have been shown to be vulnerable to adversarial examples, which are inputs designed to be similar to the model's native data inputs, but crafted with small modifications to fool the model during inference. These examples can be classified correctly by a human observer, but often mislead a target model, providing an insight into their robustness to adversarial inputs (Chen and Liu, 2023; Chen and Hsieh, 2023).
They are essential in understanding key vulnerabilities in models across a variety of applications (Chen and Das, 2023). ML models are being increasingly deployed commercially for translation. A special form of translation is round trip translation, which focuses on translating a given text from one language to a second and back to the first. Round trip translation has been increasingly used in several research areas, including correcting grammatical errors (Lichtarge et al., 2019; Madnani et al., 2012), evaluating machine translation models (Crone et al., 2021; Cao et al., 2020; Moon et al., 2020), paraphrasing (Guo et al., 2021) and rewriting questions (Chu et al., 2020). It is also used extensively as part of the quality assurance process in critical domains such as the medical, legal and market research domains. The use of ML models in these critical domains means that they have to be tested by robust adversarial attacks to make for safe and reliable commercial deployment. Given the importance of round trip translation, we are motivated to study its effects on current adversarial attacks. We summarise our contributions as follows: * We demonstrate that round trip translation can be used as a cheap and effective defence against _current_ textual adversarial attacks. We show that 6 state-of-the-art adversarial text attacks suffer an average performance loss of 66%, rendering most examples generated non-adversarial. * However, we find that round-trip translation's defensive capabilities can be bypassed by our proposed _attack-agnostic_ algorithm that provides machine translation intervention to increase robustness against round-trip translation. We find it provides minimal difference in quantification metrics to the original, which shows our method finds a new set of robust and high-quality text adversarial examples against neural machine translation (NMT).
## 2 Related Works (Papernot et al., 2017) proposed a white-box adversarial attack that repeatedly modified the input text till the generated text fooled the classifier. This method, although effective in principle, did not maintain the semantic meaning of the sentence. (Ebrahimi et al., 2018) and (Samanta and Mehta, 2017) proposed gradient-based solutions involving token-based changes and searching for important words. These methods, however, did not prove to be scalable and lacked robust performance. They were followed by methods such as character replacement (Ribeiro et al., 2018), phrase replacement and word scrambling. These techniques, however, fail to maintain semantic consistency with the original input. (Jia et al., 2019) introduced adding distracting sentences to the reading comprehension task. (Jin et al., 2020) propose TextFooler, which generates adversaries using token-level similarity and is bound by axiomatic constraints. (Lei et al., 2019) propose paraphrasing attacks using discrete optimization. (Garg and Ramakrishnan, 2020) introduce BAE, which uses masked-language modelling to generate natural adversarial examples for the text. Recent works in adversarial attacks on NMT include (Cheng et al., 2019) using gradient-based adversarial inputs to improve robustness of NMT models, and (Zhang et al., 2021), who proposed a novel black-box attack algorithm for NMT systems. However, none of these works target round-trip translation or demonstrate attack-agnostic capabilities. ## 3 NMT-Text-Attack In order to generate adversarial examples robust to round-trip translation, we propose an intervention-based attack-agnostic method that only requires access to a neural machine translation (NMT) model, shown in Algorithm 1. We employ a generic template used by standard state-of-the-art adversarial attack examples in order to showcase the attack-agnostic capabilities.
From (Li et al., 2019; Jin et al., 2020; Ren et al., 2019; Garg and Ramakrishnan, 2020; Gao et al., 2018) it can be seen that the attacks follow a two section split. The first section is word importance ranking, and the second section deals with word replacement and constraint evaluation, where NMT-Text-Attack is introduced along with the original algorithm's constraints. ``` Input :Sentence \(S=[w_{1},w_{2},..,w_{n}]\), Ground truth label \(Y\), Victim Model \(V\), Machine Translation model \(M\), User-Specific Constraints \(C\), Attack \(A\) Output :Adversarial Example \(X_{adv}\) 1Phase I - Word Importance Ranking 2Call attack A 3Initialize edge weights 4foreach word \(w_{i}\) in \(S\)do 5 Compute Importance score \(I_{i}\) from \(A\) 6Sort words in descending order into list \(W\) 7Phase 2 - Word Replacement 8# Word Replacement Strategy 9foreach word \(w_{i}\) in \(W\)do 10 Predict Top-K replacements for \(w_{i}\) using \(A\) and store in \(R=[r_{1},r_{2},..,r_{k}]\) 11foreach word \(w_{i}\) in \(W\)do 12 Replace \(w_{i}\) with \(r_{j}\) in \(S\) to make \(X_{adv}\) 13 Round-Trip-Translate \(X_{adv}\) with \(k\) language(s) using \(M\) to make \(T=[t_{1},t_{2},..,t_{p}]\) where \(t_{i}\) is \(X_{adv}\) translated through language \(i\) 14 Evaluate classification scores for \(T=[t_{1},t_{2},..,t_{p}]\) using \(V\), removing examples that do not maintain adversarial sentiment 15foreach \(c_{i}\in C\)do 16 Apply constraint \(c_{i}\) to each \(t_{i}\in T\) 17 Select best \(t_{i}\in T\) w.r.t constraints \(C\) and store as \(X_{adv}\) return\(X_{adv}\) ``` **Algorithm 1**NMT-Text-Attack **I. Word Importance Selection.** This section initially involves pre-processing the input sentence with techniques such as removing stop words etc. This is followed by analysing the most important keywords in the target sentence using several techniques, ranging from the input deletion method, to probability weighted word saliency. 
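To make the word-importance phase concrete, here is a minimal Python sketch of the input deletion method mentioned above (an illustration, not the paper's implementation); `toy_score` is a hypothetical stand-in for the victim model's confidence in its original prediction:

```python
def rank_words_by_deletion(words, score_fn):
    """Rank words by how much deleting each one lowers the victim
    model's score for the original prediction (input deletion method)."""
    base = score_fn(words)
    importance = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        importance.append((base - score_fn(reduced), words[i], i))
    # Largest score drop first, i.e. most important word first
    return sorted(importance, reverse=True)

# Toy stand-in scorer: positive sentiment driven entirely by "great"
def toy_score(words):
    return 0.9 if "great" in words else 0.2

ranking = rank_words_by_deletion("this movie is great".split(), toy_score)
print(ranking[0][1])  # -> great
```

Attack-specific variants (e.g. probability weighted word saliency) differ only in how the per-word score is computed.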
These methods are specific to the adversarial attack chosen to be integrated with NMT-Text-Attack. For example, TextFooler uses the input deletion method. Once the most important words are learnt, attack algorithms look for replacements through synonym search or by replacing individual characters of the original input word to make an adversarial candidate. **II. Constraint Evaluation.** We introduce the machine translation task in this section. First, we predict the Top-K replacements for each word \(w_{i}\) in the word importance ranking list \(W\) and substitute them in the sentence \(S\) iteratively (Step 12). We then implement round-trip translation on these sentences for \(k\) languages, where \(k\) is specified by the user (Step 13). On collecting the candidate sentences, we evaluate them on the sentiment classification model \(V\) and remove all examples that do not maintain the adversarial sentiment post round-trip translation (Step 14). Finally, we apply the algorithm-specific constraints \(C\) on the collected final sentences \(T\), and select the best candidate based on its similarity score with respect to the original sentence. These algorithm-specific constraints \(C\) include semantic similarity to the original input on replacement, POS tag preservation, etc. ## 4 Evaluation For performance evaluation, we consider using a range of algorithms from the TextAttack library Morris et al. (2020). ### Dataset and Victim Model We use the Rotten Tomatoes Movie Reviews and Yelp Polarity datasets to perform sentiment analysis. We sample 1000 random examples from the test set of each of these mentioned datasets and run our experiments on them. For our Victim Model, we use the Bidirectional Encoder Representations from Transformers (BERT) model Devlin et al. (2019).
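The round-trip filtering of Steps 12-14 in Algorithm 1 can be sketched as below; this is an illustrative sketch, with `toy_round_trip` and `toy_predict` as hypothetical stand-ins for the NMT model \(M\) and the victim model \(V\):

```python
def surviving_candidates(candidates, languages, round_trip, predict, adv_label):
    """Keep only candidate sentences whose adversarial label survives
    round-trip translation through every selected language (Steps 13-14)."""
    kept = []
    for cand in candidates:
        back_translations = [round_trip(cand, lang) for lang in languages]
        if all(predict(t) == adv_label for t in back_translations):
            kept.append(cand)
    return kept

# Hypothetical stand-ins for the NMT model M and victim classifier V
def toy_round_trip(text, lang):
    # an NMT model may normalise a rare synonym back to a common word
    return text.replace("opportune", "good") if lang == "es" else text

def toy_predict(text):
    return "negative" if "terrible" in text else "positive"

kept = surviving_candidates(
    ["a terrible film", "an opportune film"], ["es", "de"],
    toy_round_trip, toy_predict, adv_label="negative")
print(kept)  # -> ['a terrible film']
```

Only candidates that stay adversarial under every selected language are passed on to the algorithm-specific constraints \(C\).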
### Current Attacks are not Robust to Round Trip Translation We run 6 adversarial attacks on the Movie Reviews Dataset and analyse their robustness to round-trip translation, as shown in Figure 1. We analyse them against 3 languages - Spanish, German and French through the EasyNMT library (see Appendix for more details). On round-trip translating the adversarial examples, we test the resultant examples against the classification model. On the y-axis, we provide the percentage of non-robust examples to at least \(k\) out of \(m=3\) languages. Formally, if \(k\) is the number of languages used in tandem, \(N\) is the number of examples in total, \(y_{a}\) is the original prediction before round trip translation and \(\hat{y_{a}}\) is the prediction after round-trip translation by translation model \(M\) and victim model \(V\), then the y-axis is defined as \(Y=\frac{1}{N}\sum_{a=1}^{N}\mathbbm{1}\)[at least k languages have \(y_{a}\neq\hat{y_{a}}\)], where \(\mathbbm{1}\{E\}\) is an indicator function such that it is one when the event \(E\) is true and zero otherwise. We see that on average, over 66% of the examples generated originally by the attack are rendered non-adversarial on round-trip translation with at least one language (\(k=1\)). BAE remains the most robust to translations, while TextFooler remains the least robust. On increasing the number of language combinations taken (\(k>1\)), we see that there is a decrease in effectiveness of round trip translation as a defense against the adversarial examples, however there is still significant loss in attack success rate. This is because when you add more languages as a constraint, there is an increased chance that at least one of the constrained languages is robust to round-trip translation for any example. 
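The y-axis quantity \(Y\) defined above can be computed with a short sketch (ours, for illustration):

```python
def frac_non_robust(flips_per_example, k):
    """Fraction of adversarial examples whose prediction flips back after
    round-trip translation in at least k of the tested languages.

    flips_per_example[i][j] is True when example i is no longer
    adversarial after round-tripping through language j."""
    n = len(flips_per_example)
    return sum(sum(flips) >= k for flips in flips_per_example) / n

# Three examples, m = 3 languages (e.g. es, de, fr)
flips = [
    [True, True, False],    # non-robust in two languages
    [False, False, False],  # robust everywhere
    [True, False, False],   # non-robust in one language
]
print(frac_non_robust(flips, k=1))  # 2 of 3 examples flagged
```

Raising \(k\) can only shrink this fraction, matching the observed drop in defence effectiveness for \(k>1\).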
This provides considerable evidence that round trip translation can be used as a cheap and effective defense, and motivates the question of whether there exist text adversarial examples robust to round-trip translation. In the following sections, we evaluate the robustness of our proposed NMT-Text-Attack as shown in Algorithm 1. ### NMT-Text-Attack Results We analyse the results of incorporating NMT-Text-Attack into existing attacks across the mentioned datasets. We evaluate the attack on its success rate with respect to the attacks' native success rate. Figure 1: Percentage of non-robust examples flagged by at least \(k\) language combinations without NMT-Text-Attack. Note that, through our novel intervention-based algorithm, we are able to guarantee 100% robustness to back-translation on the user's selected language(s). This is because our algorithm (line 14) introduces a strict constraint to only allow examples that are robust to back-translation to be selected as candidates for the attack, which leads to a significant increase over the original algorithm's robustness to round-trip translation. This guarantee is important as it helps achieve high-quality robustness in multilingual settings, which no existing adversarial attack can provide. Table 1 shows that to meet this criterion, NMT-Text-Attack is successful on average on 30% fewer examples than its original counterpart. While this loss may seem significant, we believe it is justified for two reasons. First, this loss comes with 100% success in robustness to round-trip translation coupled with attack success. This is critical in commercial settings where deployed models need to have confident outputs in the face of several language translations. Secondly, in Figure 2, we see that there is considerable scope to increase the number of robust examples available simply by increasing the replacement limit.
We set our replacement limit at 40 for our experiments, and Figure 2 demonstrates that scaling the number of replacements significantly increases the number of available robust examples. We also provide a quantitative analysis of our model by analysing the adversarial examples generated against the original attack in Table 2. Universal Sentence Encoder Cer et al. (2018) with cosine similarity, along with Jaccard Similarity, are used as similarity metrics, while BERT Score Zhang et al. (2020) is used to analyse meaning preservation. We notice that there is little variation in the effectiveness of the algorithms when it comes to meaning preservation and similarity, which shows that our proposed intervention, while increasing robustness significantly, maintains the quality of the original attack. Examples of adversarial examples on sentences have been mentioned in the Appendix. ### Ablation Study In this section, we provide an ablation study to substantiate the performance of our algorithm. In this study, we provide TextFooler with NMT-Text-Attack with 2 'seen' languages and test its performance with an 'unseen' language. A 'seen' language is defined as one which the model is provided with as constraints for adversarial examples to satisfy, as shown in Algorithm 1. An 'unseen' language, consequently, is one which the model has not added as a constraint, hence does not guarantee 100% robustness against. The three languages we use are French, German, and Spanish. We alternate between using two of the languages as 'seen', and one as 'unseen'. We compare this with the performance of TextFooler without NMT-Text-Attack on the unseen languages in Table 3.
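For reference, the token-set Jaccard similarity used among the Table 2 metrics can be sketched as follows (an illustrative sketch over lowercased word sets; the example sentences are made up):

```python
def jaccard_similarity(a, b):
    """Jaccard similarity |A & B| / |A | B| over lowercased word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

orig = "exceptionally well acted by the cast"
adv = "exceptionally better acted by the cast"
print(round(jaccard_similarity(orig, adv), 2))  # one word swapped -> 0.71
```

A single word substitution in a six-word sentence yields 5 shared tokens out of 7 total, so scores near the Table 2 values indicate few perturbed words.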
We observe that TextFooler with NMT-Text-Attack outperforms TextFooler without NMT-Text-Attack on average by 20%. This shows that the integration of our attack-agnostic algorithm provides a significant performance increase even in situations where the attack is facing unseen languages. To further substantiate the performance of NMT-Text-Attack, we provide a detailed set of results in Table 4. Here, we see that algorithms with NMT-Text-Attack consistently provide higher BLEU scores than their original versions by a significant margin. We see that the percentage of words perturbed remains lower in the original algorithm. \begin{table} \begin{tabular}{l|l l l} \hline Dataset & \multicolumn{1}{l}{TextFooler+NMT} & \multicolumn{1}{l}{TextBugger+NMT} & \multicolumn{1}{l}{PWWS+NMT} \\ \hline MR & 70.7 & 74.7 & 69.4 \\ \hline Yelp & 60.0 & 71.4 & 68.8 \\ \hline \end{tabular} \end{table} Table 1: Success Rate (%) of NMT-Text-Attack Relative to when Original Attack Success Rate is 100% (Replacement generation limit = 40) \begin{table} \begin{tabular}{l|l|l l l} \hline Dataset & Attack & USE & Jaccard & BERT \\ \hline \multirow{2}{*}{Yelp} & TextBugger & 0.93 & 0.79 & 0.95 \\ \cline{2-5} & TextFooler & 0.93 & 0.81 & 0.97 \\ \cline{2-5} & PWWS & 0.93 & 0.85 & 0.97 \\ \cline{2-5} & TextBugger + NMT & 0.94 & 0.848 & 0.9715 \\ \cline{2-5} & TextFooler + NMT & 0.82 & 0.724 & 0.956 \\ \cline{2-5} & PWWS + NMT & 0.83 & 0.645 & 0.9265 \\ \hline \multirow{2}{*}{MR} & TextBugger & 0.93 & 0.79 & 0.95 \\ \cline{2-5} & TextFooler & 0.813 & 0.715 & 0.953 \\ \cline{2-5} & PWWS & 0.85 & 0.77 & 0.96 \\ \cline{2-5} & TextBugger + NMT & 0.91 & 0.68 & 0.92 \\ \cline{2-5} & TextFooler + NMT & 0.82 & 0.724 & 0.956 \\ \cline{2-5} & PWWS + NMT & 0.83 & 0.645 & 0.9256 \\ \hline \end{tabular} \end{table} Table 2: Sentence similarity analysis on Yelp and Movie Reviews (MR) Datasets Figure 2: Replacement vs. Robust Examples
However, given the combination of higher performance across the mentioned metrics and generalisation to unseen languages, we believe that this trade-off is justified. ## 5 Conclusion In this paper, we demonstrate the ineffectiveness of current text adversarial attack algorithms to round-trip translation, and provide an intervention-based method to improve robustness to round-trip translation in these algorithms. We show that this intervention (NMT-Text-Attack) has minimal effect on the actual semantic metrics but can significantly improve the attack success rate against back-translation, suggesting that there exists a new set of robust text adversarial examples. The attack-agnostic nature of the algorithm along with its high-quality performance makes it an effective error-diagnosing tool with any existing text attack for inspecting model robustness. ## 6 Appendix ### Ethical Concerns Our paper discusses the potential weakness of NLP models to round-trip translation, and describes an algorithm that can make adversarial attacks robust to it. However, we believe that we give new insights in studying text adversarial examples and will spur more robust machine learning models in the future. We are also the first to introduce the vulnerability to round-trip translation, which provides the opportunity to develop robust models in a novel setting. ### Computational Resources For the implementation of our algorithm and experiments, we use Google Colab as our base GPU provider. The GPU typically provided is a Tesla P100. We use 190 GPU hours to run all our experiments. We use a pre-trained BERT model with 12-head attention and 110 million parameters, which is typical of BERT models. ### Machine Translation Setup We use the Opus-MT set of models through the EasyNMT library (Tang et al., 2020). Opus-MT consists of 1200 models trained on several languages for open translation.
The architecture for the Opus-MT models is based on a standard transformer setup with 6 self-attentive layers in both the encoder and decoder networks, with 8 attention heads in each layer. This architecture is used to back-translate the target reviews from English to French, German and Spanish, and back to English. ### Adversarial Attack Settings Algorithm 1 details a general template of several state-of-the-art adversarial attacks we have used in the paper. In this section we detail the exact settings used for each adversarial attack when integrated with NMT-Text-Attack. These are standard approaches used directly from the TextAttack Library with no changes in standard settings. #### 6.4.1 Textfooler * Word Importance Selection * Max allowable replacement candidate generation for synonyms: 40. \begin{table} \begin{tabular}{|l|l|l|l|} \hline **Dataset** & **Algorithm** & **BLEU Score** & **\% Words Perturbed** \\ \hline MR & TextFooler & 0.37 & 16.07 \\ \hline & TextFooler + NMT & 0.48 & 19.33 \\ \hline & TextBugger & 0.47 & 5.17 \\ \hline & TextBugger + NMT & 0.62 & 11.75 \\ \hline & PWWS & 0.43 & 11.57 \\ \hline & PWWS + NMT & 0.57 & 15.19 \\ \hline Yelp & TextFooler & 0.50 & 41.43 \\ \hline & TextFooler + NMT & 0.68 & 56.27 \\ \hline & TextBugger & 0.50 & 34.56 \\ \hline & TextBugger + NMT & 0.53 & 34.60 \\ \hline & PWWS & 0.53 & 35.90 \\ \hline & PWWS + NMT & 0.73 & 50.91 \\ \hline \end{tabular} \end{table} Table 4: BLEU and % words perturbed results of NMT-Text-Attack on Yelp and Movie Reviews (MR) Datasets Table 3: Performance of NMT-Text-Attack on unseen language * Transformation Embedding Mechanism: Counterfitted Glove Embeddings (Mrksic et al., 2016a) * Word Replacement: * Pre-transformation constraints: * RepeatModification: A constraint disallowing the modification of words which have already been modified * StopwordModification: A constraint disallowing the modification of stopwords * Minimum cosine distance between word embeddings = 0.5 * Part of Speech : Only replace words with the same part of speech (or nouns with verbs) * Universal Sentence Encoder with a minimum angular similarity of 0.5. * Word Swapping Technique: Greedy Word Swap with Word Importance Ranking, with word importance ranking conducted using the input deletion method. #### 6.4.2 TextBugger * Word Importance Selection * Max allowable replacement candidate generation for synonyms: 40. * Transformation Embedding Mechanism: Counterfitted Glove Embeddings (Mrksic et al., 2016a) * Allowable Swap Mechanisms: Character Insertion, Character Deletion, Adjacent Character Swap, Homoglyph Swap. * Word Replacement: * Pre-transformation constraints: * RepeatModification: A constraint disallowing the modification of words which have already been modified * StopwordModification: A constraint disallowing the modification of stopwords * Universal Sentence Encoder with a minimum angular similarity of 0.84 * Word Swapping Technique: Greedy Word Swap with Word Importance Ranking, with word importance ranking conducted using the input deletion method. #### 6.4.3 Pwws * Word Importance Selection * Max allowable replacement candidate generation for synonyms: 40. * Transformation Embedding Mechanism: Word Swap by swapping synonyms in WordNet (Miller, 1998) * Allowable Swap Mechanisms: Character Insertion, Character Deletion, Adjacent Character Swap, Homoglyph Swap.
* Word Replacement: * Pre-transformation constraints: * RepeatModification: A constraint disallowing the modification of words which have already been modified * StopwordModification: A constraint disallowing the modification of stopwords * Max words perturbed = 50 * Maximum thought vector Euclidean distance = 0.2 * Maximum language model log-probability difference = 2 * Word Swapping Technique: Greedy Word Search. #### 6.4.5 DeepWordBug * Word Importance Selection * Max allowable replacement candidate generation for synonyms: 40 * Embedding Transformation Mechanism: Counterfitted Glove Embeddings (Mrksic et al., 2016) * Allowable Swap Mechanisms: Character Insertion, Character Deletion, Adjacent Character Swap, Random Character Substitution. * Word Replacement: * Pre-transformation constraints: * RepeatModification: A constraint disallowing the modification of words which have already been modified * StopwordModification: A constraint disallowing the modification of stopwords * Maximum Levenshtein Edit Distance = 30. * Word Swapping Technique: Greedy Word Swap with Word Importance Ranking, with word importance ranking conducted using the input deletion method. #### 6.4.6 Bae * Word Importance Selection * Max allowable replacement candidate generation for synonyms: 40 * Transformation Embedding Mechanism: Transformer AutoTokenizer and word replacement using Masked Language Modelling. (Mrksic et al., 2016) * Word Replacement: * Pre-transformation constraints: * RepeatModification: A constraint disallowing the modification of words which have already been modified * StopwordModification: A constraint disallowing the modification of stopwords * Part of Speech : Only replace words with the same part of speech (or nouns with verbs) * Universal Sentence Encoder with a minimum angular similarity = 0.93. * Word Swapping Technique: Greedy Word Swap with Word Importance Ranking, with word importance ranking conducted using the input deletion method.
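The two pre-transformation constraints shared by every configuration above (RepeatModification and StopwordModification) amount to filtering which word positions may still be perturbed; a minimal sketch of that idea (not TextAttack's actual API):

```python
def modifiable_indices(words, already_modified, stopwords):
    """Word positions still eligible for substitution: skip positions that
    were already modified (RepeatModification) and skip stopwords
    (StopwordModification)."""
    return [i for i, w in enumerate(words)
            if i not in already_modified and w.lower() not in stopwords]

words = "the film is a delight".split()
print(modifiable_indices(words, already_modified={4}, stopwords={"the", "is", "a"}))
# -> [1]: only "film" may still be changed
```

The attack-specific constraints (USE similarity thresholds, POS preservation, edit-distance limits) are then applied only to candidates built from these positions.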
### Examples of NMT-TextAttack 1. **Original** : drawing on an irresistible, languid romanticism, byler reveals the ways in which a sultry evening or a beer-fueled afternoon in the sun can inspire even the most retiring heart to venture forth. **(Sentiment: Positive)** **Adversarial (TextFooler)**: drawing on an gargantuan, lolling melodrama, byler betrays the ways in which a sultry evening or a beer-fueled afternoon in the sun can inspire even the most retiring heart to venture forth. **(Sentiment: Negative)** **Adversarial (TextFooler+NMT-Text-Attack)**: drawing on an inexorable, crooning melodrama byler reveals the ways in which a sultry evening or a beer-fueled afternoon in the sun can inspire even the most retiring heart to venture forth. **(Sentiment: Negative)** **Back-Translated (TextFooler)**: drawing on a giant melodrama, melodrama lolling, Byler betrays the ways in which a sensual afternoon or an afternoon of beer fed in the sun can inspire even the most outgoing heart to venture forward. **(Sentiment: Positive)** **Back-Translated (TextFooler+NMT-Text-Attack)**: drawing on a melodrama byler inexorable betrays the ways in which a sensual afternoon or an afternoon of beer fed in the sun can inspire even the most outgoing heart to venture forward **(Sentiment: Negative)** 2. **Original** : Exceptionally well acted by Diane Lane and Richard Gere.
**(Sentiment: Positive)** **Adversarial (TextFooler)**: Exceptionally opportune acted by Diane Lane and Richard Gere.**(Sentiment: Negative)** **Adversarial (TextFooler+NMT-Text-Attack)**: Exceptionally better acted by Diane Lane and Richard Gere **(Sentiment: Negative)** **Back-Translated (TextFooler)**: exceptionally timely performed by Diane Lane and Richard Gere **(Sentiment: Positive)** **Back-Translated (TextFooler+NMT-Text-Attack)**: exceptionally better performed by Diane Lane and Richard Gere **(Sentiment: Negative)** 3. **Original** : this kind of hands-on storytelling is ultimately what makes shanghai ghetto move beyond a good, dry, reliable textbook and what allows it to rank with its worthy predecessors. **(Sentiment: Positive)** **Adversarial (PWWS)**: this tolerant of hands-on storytelling is ultimately what piss shanghai ghetto move beyond a good, dry, reliable textbook and what allows it to gross with its worthy predecessors (**Sentiment: Negative)** **Adversarial (PWWS+NMT-TextAttack)**: this tolerant of hands-on storytelling is ultimately what makes shanghai ghetto move beyond a good, dry, reliable textbook and what allows it to place with its worthy predecessors. **(Sentiment: Negative)** **Back-Translated (PWWS)**: This tolerant of practical narration is ultimately what pis shanghai ghetto move beyond a good, dry, reliable textbook and what allows rough with its worthy predecessors. **(Sentiment: Positive)** **Back-Translated (PWWS+NMT-TextAttack)**: this tolerant of narration is ultimately what builds the shanghai ghetto to move beyond a good reliable dry text book and what allows it to grossly with its worthy predecessors. **(Sentiment: Negative)** 4. **Original** : I went there today!
I have an awful experience. They lady that cut my hair was nice but she wanted to leave early so she made a disaster in my head! **(Sentiment: Positive)** **Adversarial (PWWS)**: I went there today! I have an awesome experience. They lady that cut my hair was nice but she wanted to leave early so she made a disaster in my head! **(Sentiment: Negative)** **Back-Translated (PWWS)**: I went there today. I have a amazing experience. The lady who cut my hair was nice, but she wanted to leave early, so she made a mess of my head. **(Sentiment: Positive)** **Back-Translated (PWWS+NMT-TextAttack)**: I went there today. I have a terrible experience. The lady who cut my hair was nice, but she wanted to leave early, so she made a mess of my head. **(Sentiment: Negative)** 5. **Original** : I fell in love with this place as soon as we pulled up and saw the lights strung up and oldies coming from the speakers! I tried the banana cream pie hard ice cream, their scoops are very generous! My bf got the peach cobbler hard ice cream and that was to die for! We got 4 servings of ice cream for $10, which nowadays is a steal IMO! :) I'll definitely be heading back with my coworkers this week! **(Sentiment: Positive)** **Adversarial** : ... their scoops are very generous! My bf got the peach cobbler hard ice cream and that was to die for! We got 4 servings of ice cream for $10, which existent is a theft IMO! :) I'll doubtless be heading back with my coworkers this week! **(Sentiment: Negative)** ### Walkthrough of TextFooler+NMT-Text-Attack This section is concerned with providing an intuitive overview of the working of the attack-agnostic NMT-Text-Attack algorithm with TextFooler. For ease of understanding, we use only one language for translation: Spanish. The algorithm, as shown before, is divided into two sections.
The first section, as shown in Figure 3, is the Word Importance Ranking section. Here, as per TextFooler's prescribed process, each word is replaced from the sentence and its importance is evaluated by the change in classification score of the sentence before and after replacement. On calculating the importance ranking score, we move to the second section, as shown in Figure 4. Here, we find synonyms for each word from the counterfitted GloVe word embeddings. These words are appended into the sentences replacing the original word, and passed to the NMT-Text-Attack Module. Here, the sentence undergoes round-trip translation to assess whether the inclusion of the word maintains robustness of the original attack under translation. We then collect the candidate sentences, and pass them through the final constraint requirement list, local to TextFooler. This includes checking whether the replaced word maintains the original word's POS tag, and then ranking the candidates based on their similarity score through USE embeddings and cosine similarity. Finally, we receive the adversarial example robust to round-trip translation. Figure 4: Word Replacement Process Figure 3: Word Importance Ranking Process
2306.12609
Towards Regulatable AI Systems: Technical Gaps and Policy Opportunities
There is increasing attention being given to how to regulate AI systems. As governing bodies grapple with what values to encapsulate into regulation, we consider the technical half of the question: To what extent can AI experts vet an AI system for adherence to regulatory requirements? We investigate this question through the lens of two public sector procurement checklists, identifying what we can do now, what should be possible with technical innovation, and what requirements need a more interdisciplinary approach.
Xudong Shen, Hannah Brown, Jiashu Tao, Martin Strobel, Yao Tong, Akshay Narayan, Harold Soh, Finale Doshi-Velez
2023-06-22T00:12:30Z
http://arxiv.org/abs/2306.12609v2
# Towards Regulatable AI Systems: Technical Gaps and Policy Opportunities

###### Abstract

There is increasing attention being given to how to regulate AI systems. As governing bodies grapple with what values to encapsulate into regulation, we consider the technical half of the question: To what extent can AI experts vet an AI system for adherence to regulatory requirements? We investigate this question through two public sector procurement checklists, identifying what we can do now, what we should be able to do with technical innovation in AI, and what requirements necessitate a more interdisciplinary approach.

## 1 Introduction

As AI systems become more advanced and integrated into our lives, there has been a corresponding urgency to ensure they align with social values and norms. Legal and regulatory authorities around the world are racing to produce AI regulations [142; 44; 36]. However, the increasing size, generality, opaqueness, and closed nature of present-day AI systems pose significant challenges in achieving this alignment [150; 8]. Even when requirements can be precisely articulated--already a difficult task--there is still the question of whether and how it is possible to check whether an AI system adheres to those requirements. We consider the following questions: **What innovations in AI systems are needed for them to be effectively regulated? And in what areas will innovations in AI methods alone be insufficient, and more interdisciplinary approaches required?** While there are many regulations involving AI, we consider public sector procurement checklists because they are instrumental in shaping societal norms, influence private actors to emulate similar practices, and are relatively explicit. These checklists define the criteria and process for a public sector agency to purchase products or services, including technical criteria that an AI system must satisfy. The technical criteria about AI systems contained in these lists are also present in other regulatory efforts.
Thus, improving our ability to vet an AI system against these checklists will improve our ability to regulate AI systems more generally. Specifically, we closely examine the technical criteria from two existing procurement checklists: the World Economic Forum's AI Procurement in a Box (WEF) [144] and the Canadian Directive in Automated Decision-Making (CDADM) [56]. The WEF checklist serves as a practical guidebook to unlock the public-sector adoption of AI and has been piloted in various countries, including the UK, Bahrain, the UAE, India, and Brazil [19]. The CDADM is one of the earliest regulations specifically targeting AI systems to be put into operation. It came into effect in April 2019, and requires full compliance by October 2023 for all new systems and by April 2024 for all existing systems. Thus, the question of whether it is possible to build AI systems that meet such checklist criteria is timely. In the remainder of this document, we first group the technical criteria contained in these two checklists into categories that will be familiar to AI researchers and engineers: (pre-training) data checks, (post-hoc) system monitoring, global explanation, local explanation, objective design, privacy, and human + AI systems. For each category, we briefly summarize existing technical approaches that could be used to construct AI systems that meet those criteria. Next, we identify areas where relevant technical approaches may exist, but additional technical innovation is needed to be able to vet increasingly complex AI systems being used in increasingly varied contexts. For example, the proliferation of large language models comes with a significant difficulty in evaluating them, due to issues including open-endedness and data leakage. 
While innovative approaches like Holistic Evaluation of Language Models (HELM) [79] and Elo ratings [149] have been proposed, the evaluation of language models remains an open question and further technical innovation is needed for effective regulation and oversight. Finally, we briefly outline aspects of these criteria that may seem technical but actually require interdisciplinary approaches to vet. Throughout this exercise, we assume no concerns about expertise; that is, we assume that there are sufficiently qualified AI and domain experts to review whether the AI system meets the checklist criteria. Our concern is to identify to what extent experts can currently vet AI systems against these regulatory criteria. In doing so, we hope to highlight concrete areas where AI innovation would improve our ability to create regulatable AI systems. There are many subfields in AI. Moreover, AI systems are rapidly advancing, and the kinds of contexts in which they are being used are rapidly growing. Thus, while our list is certainly not comprehensive, we hope it serves as a starting point for AI researchers interested in creating regulatable AI systems. Additionally, this document informs both policy makers and AI engineers on issues where more holistic, interdisciplinary efforts (rather than AI methods alone) are necessary.

## 2 Inputs of the Model: (Pre-training) Data Checks

The characteristics of the training data have a large influence on the behavior of an AI system. What checks must be done on these data before they are used to train models? Motivations for the regulatory requirements in this section include data consent, data privacy (discussed in more detail in Section 7), and downstream impacts of data quality (e.g. on model performance, generalization and bias).
Examples of checklist criteria include:

* CDADM 6.3.1: Before launching into production, developing processes so that the data and information used by the Automated Decision Systems are tested for unintended data biases and other factors that may unfairly impact the outcomes.
* CDADM 6.3.3: Validating that the data collected for, and used by, the Automated Decision System is relevant, accurate, up-to-date, and in accordance with the Policy on Service and Digital and the Privacy Act.
* CDADM 6.3.4: Establishing measures to ensure that data used and generated by the automated decision system are traceable [fingerprinting], protected and accessed appropriately, and lawfully collected, used, retained, and disposed.
* WEF: Assess whether relevant data will be available for the project [...] Data is crucial for modern-day AI tools. You should determine, at a high level, data availability before starting your procurement process. This entails developing an understanding of what data might be required for the project.
* WEF: Select data that fits criteria of fairness. For example, the data should be representative of the population that the AI solution will address, as well as being reasonably recent.

The technical questions underlying these criteria have to do with data documentation procedures and checks that can expose potential risks in areas such as fairness, generalization, and privacy.

### What we know how to do

We have proxies for checking many properties in these criteria (data privacy, label quality, feature selection, fairness, etc.) using exploratory data analysis [135]. For example, we can inspect the annotation process and check inter-annotator agreement to get an idea of label quality [12; 99]. We can also measure (and correct for) imbalance in data if we are given group labels that segment the dataset [76; 23].
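Two of these checks, group proportions as a representativeness proxy and inter-annotator agreement as a label-quality proxy, can be sketched in a few lines. This is a minimal illustration; real audits would use established statistics packages and report many more metrics.

```python
from collections import Counter

def group_proportions(group_labels):
    """Share of each group in the dataset: a basic representativeness check."""
    n = len(group_labels)
    return {g: c / n for g, c in Counter(group_labels).items()}

def cohens_kappa(a, b):
    """Agreement between two annotators, corrected for chance agreement:
    a rough proxy for label quality. Assumes agreement is not purely
    at chance (expected < 1)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = Counter(a), Counter(b)
    expected = sum(pa[k] * pb.get(k, 0) for k in pa) / n ** 2
    return (observed - expected) / (1 - expected)
```

For example, a dataset where one group makes up 75% of the records would show up immediately in `group_proportions`, and two annotators who agree on 3 of 4 binary labels obtain a kappa well below 1.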
We have techniques for identifying influential points [13], outliers [14], and mislabelled points [20; 102] which may cause models to exhibit poor performance or bias [46]. Further, there exist several standards for reporting dataset information [51; 64; 15; 86; 103], including on the data curation process, that are designed to help expose potential biases and limitations on how the data may be used. Sufficiently comprehensive data documentation facilitates investigation by both experts and the public. In the realm of consent, non-consenting (opt-out) data checks1 can give individuals control over how their data are used. Footnote 1: For example, artists can opt out their work with Spawning ([https://spawning.ai](https://spawning.ai)), which provides opt-out data checks as a service to AI system developers.

### Directions requiring additional AI innovation

* **Metrics and Generalizability.** More work is needed to connect the data metrics with impact on outcomes. For example, we have reasonable tools to connect the uncertainty or measurement error in a distance sensor to effects on motion planning [43]. However, if a traffic image dataset has a certain annotator disagreement score, what does that imply for an autonomous vehicle whose vision system is trained on those data? The question of generalizability also arises for data without explicit human annotation, such as internet-crawled language and vision datasets [104; 117]. In this case, what data checks can we perform to ensure that it will be appropriate for the domains in which the model is deployed? Data checks might lose their validity if the data is used outside its envisioned context. A particularly important category is metrics that capture similarities of different applications and thus capture scenarios in which a data set collected for one purpose may be used for another.
While this inquiry has received considerable attention in domain adaptation research [35; 2], the data-centric perspective remains relatively unexplored [3]. For example, a dataset collected for autonomous vehicles in one city _might_ be suitable in similar cities. But what statistics or meta-data would we need to be confident? To ensure reliable utilization of datasets, additional metrics are necessary to precisely determine the range of applications for which a dataset can be safely used.
* **Data Quality Checks in the Context of Pretrained Models.** Given the prevalence of large pre-trained models [60] and (currently) limited transparency about their training data [26], can we develop data checks that rely solely on accessing the model [88], or do certain types of checks require disclosure of specific information about the training data? Do checks for fine-tuning data--e.g., the traffic images used to tune an autonomous vehicle's vision system on top of an existing image classifier--differ from checks for pre-training data?
* **Unstructured Data.** For structured data, it is relatively easy to report statistics across features. For unstructured data like images or social media messages, existing standards focus on reporting the statistics of the meta-data [51; 64; 15; 86; 103]. However, is providing transparency about the meta-data sufficient? For example, in the above scenario with the traffic images, is it sufficient to provide information e.g. about where the images were collected and what kinds of cameras were used? Or might it be important to report certain information derived from the pixel values as well? Similarly, if one had a collection of social media posts, would it be important to report certain information derived from the actual content, in addition to meta-data about the site and scraping procedure?
### Areas that require interdisciplinary engagement

The specific metrics that would enable meaningful inference about the quality of the data will depend on the application. Questions around bias and fairness are also inherently multi-faceted and will depend on the use-case. Determining the proper form of consent (opt-in vs. opt-out) is a legal decision. Involvement from social scientists is necessary to assess how different data collection processes include or exclude certain populations. Privacy tensions--what data is retained, what statistics are made public, what kind of access is granted to trusted auditors--must also be resolved within the broader socio-technical context. Furthermore, there is danger in living exclusively inside the data; cross-talks inside and outside of the data are necessary to detect many normative pitfalls. For example, bias can be introduced via the choices of labels (e.g. are non-binary labels included when labeling gender?) and the labeling process (e.g. whose perspective was being taken when an input was labeled as acceptable or problematic content?). Healthcare algorithms that demonstrate unbiased predictions of healthcare costs, but then use that prediction as a proxy for illness severity, may introduce bias because unequal access to care leads to lower healthcare spending by minority groups [94]. Detecting and addressing such issues in data necessitates active dialogue between the data realm and external perspectives. Section 6 delves deeper into the discussion of label choice concerns.

## 3 Outputs of the Model: (Post-hoc) System Monitoring

Once a system is deployed, it is essential to monitor its operations. These criteria have to do with monitoring for adverse outcomes and identifying unintended consequences, making that information available for scrutiny, and establishing contingencies if the system is behaving poorly.
Metrics to monitor the operations of a system also relate to methods for checking an AI system's performance after it has been trained. Examples of checklist criteria include:

* CDADM 6.3.2: Developing processes to monitor the outcomes of Automated Decision Systems to safeguard against unintentional outcomes and to verify compliance with institutional and program legislation, as well as this Directive, on a scheduled basis.
* CDADM 6.3.6: Establishing contingency systems and/or processes as per Appendix C. (Which says: Ensure that contingency plans and/or backup systems are available should the Automated Decision System be unavailable.)
* CDADM 6.5.1: Publishing information on the effectiveness and efficiency of the Automated Decision Systems in meeting program objectives on a website or service designated by the Treasury Board of Canada.
* WEF: [T]here should be systematic and continuous risk monitoring during every stage of the AI solution's life cycle, from design to post-implementation maintenance.
* WEF: Testing the model on an ongoing basis is necessary to maintain its accuracy. An inaccurate model can result in erroneous decisions and affect users of public services.
* WEF: Enable end-to-end auditability with a process log that gathers the data across the modelling, training, testing, verifying and implementation phases of the project life cycle. Such a log will allow for the variable accessibility and presentation of information with different users in mind to achieve interpretable and justifiable AI.

The technical questions associated with these criteria have to do with how to monitor performance and identify various kinds of drift and unusual results that warrant attention.

### What we know how to do

Given a specific metric, it is relatively easy to put monitoring into place. We can easily check to ensure that the outputs of an AI do not exceed threshold values.
Methods exist that establish distributions for "normal operation" and flag anomalous values during actual operation [49]. These techniques can be employed to detect shifts in inputs and outputs, in model confidences and calibrations [10], in derived quantities such as the top features used to make a prediction (allowing a person to check if a shift is sensible) and fairness metrics [55; 10; 119]. We can learn a trend in how a particular quantity changes and see if that trend holds and whether any external shock occurs. In RL settings, we can monitor differences between expected and actual reward distributions. If the causal structure of the environment is known, monitoring checks can specifically identify new confounders and mediators. That said, all anomaly detection methods require some specification of what kinds of behavior represent a change or anomaly. They may not capture every unintended consequence, and given sets of monitoring metrics may be gamed by an adversary. More generally, we already have a set of norms around what kinds of tests should be run prior to an AI system being deployed (e.g. [74; 124]). AI developers should strive to test their systems with multiple independent, external datasets to ensure that their results are replicable (and be transparent if this kind of generalization has not been tested). These datasets should include sufficient numbers of hard cases in their test sets, and results should be presented stratified by difficulty. Similarly, one should provide stratified results on performance of cases similar and dissimilar to the training set. Performance measures should be reported with respect to the real population proportions of each class, stratified by class, or be independent of base rates so that they can be correctly applied to the intended use-case and not the proportions present in the training set. 
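A minimal version of the "normal operation" check described above can be sketched as follows: fit a reference window of healthy values, then flag anything that deviates by more than a few standard deviations. Production monitoring stacks would use more robust detectors and calibrated thresholds; the window and threshold here are illustrative.

```python
import statistics

def drift_monitor(reference, threshold=3.0):
    """Return a check that flags values lying more than `threshold`
    standard deviations from a reference window of 'normal operation',
    e.g. a model's output confidences during validation."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)

    def is_anomalous(x):
        return abs(x - mu) > threshold * sigma

    return is_anomalous

# Reference window of typical model confidences (hypothetical values).
check = drift_monitor([0.70, 0.72, 0.69, 0.71, 0.70, 0.68])
```

A confidence of 0.70 passes the check, while a sudden jump to 0.95 is flagged for review; the same pattern applies to fairness metrics, calibration scores, or feature-importance summaries monitored over time.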
### Directions requiring additional AI innovation

* **Monitoring Many Metrics.** Monitoring multiple metrics increases the risk of false positives and false negatives, which can overwhelm engineers. How can we monitor many metrics efficiently while not incorrectly flagging too many cases for review and not missing important deviations? Relatedly, once in operation, what data should be gathered so that we can check additional metrics in the future? For example, while we can monitor fairness for known minority groups, what data should be logged during operation so that we can audit fairness when an unknown demographic group (e.g., an intersection of some legally protected attributes) contests unfair outcomes [71]? The question of what logs to retain only becomes more difficult when there are multiple AI systems interacting at fast rates, such as the many AI components operating within an autonomous vehicle. These questions remain despite advances in MLOps [74].
* **Certification of Use Cases.** Across the very broad range of AI systems and contexts, can we certify the settings in which an AI system is supposed to work well? Can we assign a label to an AI model so that it is restricted to or from being applied to specific use cases? Consider, for example, the need to establish safeguards that prevent an open-access drug discovery model from being utilized for de novo design of biochemical weapons [136]. Similarly, image generative models should be restricted from generating pornographic content. Relatedly, can we provide confidence about the post-hoc performance of a deployed system on certified tasks while preventing a deployed system from being misused? In formal verification, one mathematically checks that the formal model of a given system satisfies a desired property. Formal verification is widely used in safety-critical systems.
As AI systems enter safety-critical settings--such as autonomous driving or robot-assisted surgeries--it is essential that strong safety guarantees can be maintained. Certifying neural networks for safety-critical systems is an active research area [126; 122; 9; 70; 52; 146]. There are also early proposals to define standards for levels of AI system certification [148] (analogous to security standards [47]) that have yet to be refined and adopted.
* **Correcting Models after Deployment.** There exists some work on correcting deployed models in a way that does not require re-training end-to-end (e.g. unlearning [58; 131; 75; 32], fine-tuning [65], and in-context learning [141; 30]). But more work remains to be done, especially for AI systems with many interacting parts.
* **Identifying Relevant Distribution Shift.** There are many possible types of shift: in input distributions, in the relationship between inputs and outputs, in the rewards (objective)--and these shifts can take many forms and occur in many ways. For example, the acceleration of newer cars may be different, as well as what colors are popular. Can we distinguish between relevant and irrelevant shifts (e.g., along the lines of [33])? If the shifts happen in some uninterpretable embedding space, how can we explain them?
* **Monitoring Agents that are Learning Online.** We can monitor for major adverse effects. However, can we identify more subtle issues, such as initial signs of catastrophic forgetting, cheating, and other harms that occur while the agent continues to perform well on its reward metric? For instance, it would be advantageous to detect early signs of reckless or inappropriate driving behavior--such as reducing distances between the vehicle and pedestrians, or increased use of residential streets where children may be playing--in autonomous driving agents before any traffic accidents occur.
Our understanding of unintended consequences continues to grow [129; 24] but the problem remains unsolved.

### Areas that require interdisciplinary engagement

At a high level, there will always need to be some kind of decision made about what needs to be monitored or prioritized in a given setting. There will also need to be decisions made about what kinds of safety promises or guarantees are needed, e.g. how much shift is considered safe and acceptable, and how much is not. It is crucial to translate the monitored metrics into meaningful implications that enable people to make informed decisions within the broader socio-technical system. For instance, in autonomous driving, comparing monitored metrics against human performance can inform decisions regarding human intervention. Finally, the task of contingency planning for back-ups when models express unexpected or unwanted behaviors also requires an understanding of the broader socio-technical system.

## 4 Inspecting the Model: Global Explanations for Model Validation

Global explanations describe a model as a whole and are often useful for inspection or oversight. The goal is to expose information about the model that would allow a domain expert to infer the existence of some kind of unobserved confounder, something about the model that is non-causal, and other limits on the scope of the model's applicability. Criteria related to global explanations include:

* CDADM App. C: Plain language notice through all service delivery channels in use (Internet, in person, mail or telephone). In addition, publish documentation on relevant websites about the automated decision system, in plain language, describing: How the components work;
* WEF: Public institutions cannot rely on black-box algorithms to justify decisions that affect individual and collective citizens' rights, especially with the increased understanding about algorithmic bias and its discriminatory effects on access to public resources.
There will be different considerations depending on the use case and application of AI that you are aiming to acquire, and you should plan to work with the supplier to explain the application for external scrutiny, ensuring your approach can be held to account. These considerations should link to the risk and impact assessment described in Guideline 2. Under certain scenarios, you could consider making it a requirement for providers to allow independent audit(s) of their solutions. This can help prevent or mitigate unintended outcomes.
* WEF: Ensure that AI decision-making is as transparent as possible.
  * Encourage transparency of AI decision-making (i.e. the decisions and/or insights generated by AI). One way to do this is to encourage the use of explainable AI. You can also make it a requirement for the bidder to provide the required training and knowledge transfer to your team, even making your team part of the AI-implementation journey. Finally, you can ask for documentation that provides information about the algorithm (e.g. data used for training, whether the model is based on supervised, unsupervised or reinforcement learning, or any known biases).

Technical approaches associated with these criteria include the creation of small, inherently interpretable models with high performance, sharing certain parts or properties of a large model, and open-sourcing the model's code.

### What we know how to do

We can build inherently interpretable models (e.g. generalized additive models, decision trees, rule-based models, etc.) for tabular and other simple, relatively structured data [111]. We have some tools for interpreting neural networks in terms of human-understandable components [106; 93; 95], such as circuits [139] or even natural language [17]. When possible, these tools provide a systematic approach to explain how tasks are performed in ML models in a human understandable way.
Finally, we can partially explain neural networks and other complex models via methods such as distillation [130], feature importance [80], or computing concept activation vectors [114].

### Directions requiring additional AI innovation

* **Inherently Interpretable Models for More Data Types.** While some initial work exists for building inherently interpretable models for non-tabular data (e.g. for images or audio) [31], this area is still nascent. Concept learning [73] on top of the input may be a useful strategy.
* **Interactive "Openboxing" of Large Models.** Can we build interactive, hierarchical, and semantically-aligned views of large models such that these models are (to some extent) inherently interpretable? For example, a traffic image classifier that recognizes objects by multiplying object templates with transformation matrices [147] would be more inherently explainable than another model without this hierarchical structure. Further, can we allow users to explore such explanations at different levels of fidelity for different contexts? As noted above, methods to extract information from larger models such as large language models exist (e.g., [114; 90]) but have limitations with ways for people to effectively explore and understand larger models. More work along the lines of [11; 128] is needed.
* **Checking Value Alignment.** Whether it is criminal justice, benefits allocations, or autonomous driving, AI systems are increasingly used in situations that require value judgments. How do we elicit and encode societal and individual values in diverse situations? What metrics can effectively measure value alignment? How do we make this mapping transparent for others to understand the value choices made (e.g., the drivers of other cars next to the autonomous vehicle)? Advancing existing work, e.g. [21; 42], is needed for our increasing use cases.

### Areas that require interdisciplinary engagement

There is a question of what to offer and to whom.
For example, releasing the code and environment may allow some people to directly answer their questions. Providing an explanation broadens who can inspect the model, including users and domain experts; however, what information to release, how it should be extracted, and how often during the life cycle of the model that information should be updated will depend on the use context. We will also need mechanisms for people to request more information about a model as new concerns become apparent. Finally, all information release must be balanced with concerns about privacy and trade secrets.

## 5 Inspecting the Model: Local Explanations about Individual Decisions

These criteria have to do with providing information to a user about a specific decision that is made, such as benefits denial. In some cases, it may be sufficient to simply provide the information and logic that led to the decision (a meaningful explanation). In other cases, it may be preferable to provide actionable ways to change the decision (recourse) [138; 69]. In the following, we use the term _local explanation_ to refer to explanations that are meant to provide insight about a particular decision, rather than about the model overall [89]. We use the term _recourse_ to refer to a modification of the input that results in the output changing to the desired value.

* CDADM 6.2.3: Providing a meaningful explanation to affected individuals of how and why the decision was made as prescribed in Appendix C.
* CDADM 6.4.1: Providing clients with any applicable recourse options that are available to them to challenge the administrative decision.
* CDADM App. C: In addition to any applicable legal requirement, ensuring that a meaningful explanation is provided with any decision that resulted in the denial of a benefit, a service, or other regulatory action.
* WEF: Explore mechanisms to enable interpretability of the algorithms internally and externally as a means of establishing accountability and contestability.
  * With AI solutions that make decisions affecting people's rights and benefits, it is less important to know exactly how a machine-learning model has arrived at a result if we can show logical steps to achieving the outcome. In other words, the ability to know how and why a model performed in the way it did is a more appropriate means of evaluating transparency in the context of AI. For example, this might include what training data was used, which variables have contributed most to a result, and the types of audit and assurance the model went through in relation to systemic issues such as discrimination and fairness. This should be set out as documentation needed by your supplier.
  * It is also important to consider the potential tension between explainability and accuracy of AI when acquiring AI solutions. Classic statistical techniques such as decision-tree models are easier to explain but might have less predictive power, whereas more complex models, such as neural networks, have high predictive power but are considered to be black boxes.

Approaches for creating local explanations rely heavily on a notion of local region, and thus some notion of distance. Some inputs are more easily explained than others, and any explanation can introduce privacy risks.

### What we know how to do

There are many techniques for providing local explanations for a model [38; 77; 108; 109; 81; 121]. Specifically, given a definition of distance, we can find a counterfactual: the closest point such that the model's output is a desired class [57; 138]. This can be used to help an individual determine what features set them apart compared to a nearby alternative, and also set the foundation for recourse (if those features can be changed) [69].

### Directions requiring additional AI innovation

* **Defining Distance Metrics.** As noted above, local explanations rely heavily on notions of nearby data.
It can be difficult to adjudicate what correlations in the data should be preserved and what should not. For example, if there are correlations between the kind of sign and the geographic location in a traffic image data set, should those correlations be retained in the distance metric? What about for race and postal codes or sex and hormone levels? Some work exists on using human input to define the appropriate distance metric for the purposes of explanation and recourse [69], but more is needed.
* **Data without Interpretable Dimensions.** The challenges associated with choosing distance metrics are exacerbated when the individual dimensions of the data are not interpretable. For example, suppose we have a medical imaging task in which the AI system claims that certain cells represent a certain type of cancer, or a face recognition task in which the AI system claims that the face in a security video matches a face in a government database. What is a meaningful explanation [87] in this case? Does it take the form of other images in the dataset (which may create privacy issues)? Should it involve first summarizing the input into interpretable concepts [72; 54]? Similar issues arise with text [145] and timeseries data [6].
* **Provenance Adjudication.** We may want to know if a particular training datum was used in a particular way to generate the given output. For example, we may be curious if a traffic sign mis-classification could be attributed to a specific mislabeled example, or we may need to resolve copyright issues from AI-generated text and images. This is possible in small models, but in very nascent stages for large models (e.g., LLMs [137] and diffusion-based image generation models [37]).
* **Handling Out of Distribution Data.** The idea behind recourse is that it gives a person a path toward getting the outcome they desire.
For example, if a loan applicant is told that paying off their debts would make them eligible for the loan, then they would expect to get the loan once the debts are paid. However, if the applicant's data is very far from the training data, then the AI-produced recourse may indeed change the model's output, but would not be accepted by the loan officer in a real context. * **Tradeoffs between Explainability and Privacy/Security.** Releasing information for auditing or recourse may allow bad actors access to private information [120] or to game the system [100]. For example, explanations in the form of training samples, like those of the traffic images, may allow actors to learn not only how to trick the autonomous vehicle, but also learn about other elements of those images (that are not road signs). Advancing existing work (e.g., [134]) is necessary to understand the resulting dynamics. ### Areas that require interdisciplinary engagement The biggest question raised by these guidelines is what is the definition of a "meaningful explanation" [127]. This definition will depend on the socio-technical context of the task--contesting a loan denial, a medical error, or a benefits denial may require different kinds of explanations. Different kinds of users may also require different explanations. Relatedly, the purpose of the information provided for recourse will vary across contexts. For one task, it may be enough to provide only one recourse, while for others it may be necessary to provide multiple options. In other contexts, the user might benefit from an interactive system to explore different options. For example, they could themselves wish to navigate changes and see if they would result in a favorable loan decision. Finally, it may be that a recourse generated from a local explanation may not be the appropriate way to assist a user unhappy with a decision. For example, suppose someone is convinced that a voice-based covid test is in error about their disease status.
Rather than providing an explanation of the voice features used to make the decision, the appropriate recourse may be to allow that person to take a traditional covid test instead. We also note that certain situations may require a justification (rationale for why a decision is right with respect to laws, norms, and other aspects of the context) rather than explanation (what features the AI used to generate the output). ## 6 Designing the Model: Objective Design All AI systems require formulating goals in precise, mathematical terms. Objective design converts general goals (e.g. drive safely) into precise mathematical terms [16; 63]. This distillation process is fraught with potential pitfalls; an incorrect conversion will result in the AI behaving in unintended ways. For example, encoding safe driving as always ceding the right of way may result in an autonomous vehicle that never makes a turn at a busy intersection. Collaboration with stakeholders during the objective design process can help ensure the true goals are addressed, rather than a proxy that may not result in the desired behavior. Documentation of the objective design process must be sufficiently transparent to ensure calibrated trust from stakeholders. Examples of criteria include: WEF: Focus on developing a clear problem statement, rather than on detailing the specifications of a solution. - AI technologies are developing rapidly, with new technologies and products constantly being introduced to the market. By focusing on describing the challenges and/or opportunities that you want to address and drawing on the expertise of technology partners, you can better decipher what technology is most appropriate for the issue at hand. By focusing on the challenge and/or opportunity, you might also discover a higher-priority issue, or realize you were focusing on a symptom rather than the root cause.
The criteria above encourage public servants to identify their actual goals and then allow the engineers to deliver. To be able to deliver, however, the AI engineers must be able to convert the problem statement into precise terms. ### What we know how to do In some cases, it is possible to decompose a complex task into simpler components. For example, in the context of an autonomous vehicle, we might evaluate a perception system for its ability to identify and forecast the trajectories of other objects in its environment, and the ability of a planner to make safe decisions given this information. Algorithms for multi-objective optimization can find a Pareto front of options corresponding to different trade-offs between desiderata [115, 125]. There is also recent work in inferring what objectives are truly desired given observed reward functions [59]. ### Directions requiring additional AI innovation * **Metrics for Metrics: Measuring Match to Goals.** What are the measures that can be used to determine whether some technical objective matches our policy goals? Objective and reward design are relatively well-studied in some domains, such as reinforcement learning [123, 59], but unsolved for the many more situations--from autonomous vehicles to email text completion--in which we see AI systems used today. Further, our goals may be multi-faceted; the objective must not only be faithful to our goal but also transparent in how it is faithful. * **Properties of Popular Objective Functions.** There are many objective functions used for their computational convenience and statistical properties (squared loss, log likelihood, etc.). Because they are so popular, their statistical properties under various conditions are often well-understood [118]. For example, we may know that L1 losses are more robust than L2; we may know that decreased model capacity (e.g. fitting a line) can make a model more prone to being swayed by influential points. 
However, how do these very technical understandings of statistical properties relate to more complex goals, including reward hacking and other short-cut risks? Better understanding of these properties could enable better matching between popular losses and broader policy goals. * **Robustness to a Variety of Objectives.** In some subfields of AI, there is literature on creating agents that perform well across a range of reward functions [91, 101]. This ensures resilience in the face of imperfections in the objective. However, more work is needed to make this process efficient, e.g., for large pre-trained models. * **Computational Constraints for More Robust Objectives.** Related to the above, there are a variety of computational constraints and regularizers that often make objectives more robust to imperfect specifications. These include encouraging smoothness (e.g. Lipschitzness), sparsity, and robustness to certain types of uncertainties (e.g., [42], and distributionally robust optimization [105]). However, work remains to be done to more strongly connect what these computational tools do in the context of aligning the technical formulation with the true goal. Furthermore, some constraints and regularizations are difficult to express and/or operationalize in analytical forms; instead, they are incorporated directly into the training procedure, such as adversarial training [132]. Relatedly, additional work is needed to effectively optimize objectives with multiple criteria--whether those are constraints, regularizers, or competing terms. Simply writing down an objective does not make it easy to optimize. As additional terms are added to the objective, the question of how to weigh them to achieve the desired behavior also becomes more complex. * **Understanding Connections between Objectives and Learnt Model Behavior.** Can we efficiently explain how changes in a technical formulation of an objective affect the model behavior?
We understand this for certain simple models but not sufficiently for more complex models: e.g., how will changing certain weights in the reward change an autonomous vehicle's driving? Conversely, can we explain policies in terms of compatible reward functions? Can we efficiently identify where two reward functions may result in different policies in human-understandable terms? Some prior work tries to answer this [48]; however, more analyses will facilitate a more fine-grained design of the reward function to better align with intended objectives. * **Inferring Goals from Observed Behavior.** In some cases, we may have examples of decisions or outputs that we know align with the true goal (e.g. safe driving trajectories). However, the inverse problem of inferring rewards from behavior is not identifiable. Advancing techniques [5] to help disambiguate important elements of the reward function can help ensure that the learned policy aligns with the desired objectives, leading to improved performance and generalization. ### Areas that require interdisciplinary engagement Creating goals at a policy level requires considering factors such as contextual relevance, attainability, and alignment with overarching desiderata [66]. Ethical concerns associated with the power and impact of AI systems may also be taken into account. Moreover, sometimes even at the policy level, the objective remains unclear, making it more difficult to design proper objectives for the AI systems much less validate and explain them. This issue is evident in the application of AI in criminal justice, where a lack of clear policy goals is common. ## 7 Designing the Model: Privacy Bad actors may use transparency about the data, code, and model for identifying private information about individuals.
There are a number of examples of regulatory criteria relating to privacy concerns, including: CDADM 6.2.6: Releasing custom source code owned by the Government of Canada as per the requirements specified in section A.2.3.8 of the Directive on... CDADM App. C: Plain language notice through all service delivery channels in use (Internet, in person, mail, or telephone). In addition, publish documentation on relevant websites about the automated decision system, in plain language, describing: A description of the training data, or a link to the anonymized training data if this data is publicly available. WEF: There are many anonymization techniques to help safeguard data privacy, including data aggregation, masking, and synthetic data. Keep in mind, however, that you must manage anonymized data as carefully as the original data, since it may inadvertently expose important insights. RFPs should encourage innovative technological approaches, such as those mentioned above, that make less intrusive use of data or that achieve the same or similar outcomes with less sensitive datasets. WEF: As important as data protection is, not all data is sensitive (e.g. open-government data is freely accessible online). All data, sensitive or not, must have its integrity safeguarded, but it is not necessary to keep non-sensitive data behind closed doors. It is important to assess the privacy needs of different datasets to determine the right level of protection. Normally, personally identifiable information (PII), such as financial and health data, is considered extremely sensitive. The RFP needs to reflect data governance requirements for both the procurement process and the project that are in accordance with the nature of the data. 
However, the language in these regulations leaves a number of issues unspecified, including a standardized, meaningful definition for privacy, and assumes that we are currently able to properly assess the privacy of a dataset and anonymize data, which are currently open research questions. ### What we know how to do Differential privacy is a widely-accepted theoretical notion of privacy [143, 39]. In settings where this notion of privacy is appropriate, we have differentially private algorithms that can calculate statistical properties of data [40], train machine learning models [1, 98], and generate synthetic data [25]. Many other privacy notions exist [113, 83, 78]. Choosing which privacy notion to use in a particular setting remains an open question. ### Directions requiring additional AI innovation * **Better Tradeoffs between (differential) Privacy and (predictive) Performance.** In general, differentially-private models have lower predictive performance than models without privacy guarantees [7]. How can that gap be closed? Related questions include: Can we ensure models are private even with many queries and in conjunction with public data? What can we maximally expose about a model and training data statistics in a way that is still private? Can we precisely state what cannot be exposed, e.g. a long tail has been left out [45]? (Note: if we can make this precise, then certain information could be made public as it poses no privacy risk, and other information may be available only to a trusted auditor.) * **Creating and Assessing Privacy Definitions.** How can we define privacy appropriately and meaningfully for different types of data? (e.g. trajectories, text [22], etc.). What do current definitions of privacy actually achieve on these data? * **Privacy via Minimal Data Collection.** Can we collect only the input information needed for each decision, which may involve collecting different inputs for different people [133]? 
What privacy risks are mitigated by this approach? Are new risks introduced because what inputs are measured is new information? * **Private Generative Models.** The main focus of existing work is on classification. So, there are many open questions when it comes to the privacy of generative models [28, 26, 68, 27]. For example: How can we prevent a generative model from replicating training data? Is there a difference between a private generative model and adding noise to data? Is there a benefit to a private generative model vs. noised data? Are empirical methods to ensure privacy, e.g., via reinforcement learning with human feedback [150], sufficient? * **Effective Machine Unlearning.** In some cases, people may be allowed to elect to have the influence of their data removed after the model has been trained. Methods have been created to remove the influence of specific inputs from a trained model, but these are still in progress [75, 32], especially for generative models [116, 50, 62]. ### Areas that require interdisciplinary engagement Current private models still allow third parties to infer private information via access to additional, publicly available data. We need to develop new notions of privacy for this setting [22]. Broader discussion is also needed regarding what to do if privacy guarantees sacrifice predictive performance, especially if the sacrifice is primarily to underrepresented groups [7]. More generally, the appropriate definition of privacy, and how strict the privacy guarantee must be (e.g., via hyperparameter settings), will depend on the setting [41] and must be made transparent. For example, claiming a model is differentially private when it has a very large epsilon may be misleading. Finally, while this section has focused on privacy, we also note that there are many security concerns that must also be considered in a holistic manner. There are clear limitations to what can be achieved with respect to adversarial actors.
If training data are available, a state actor or a large industry actor could (re)create a model. Once a model or training technique is out, we really cannot control its use. Unlimited public access to a model (via queries) intrinsically allows an adversary to learn about the model and the training data. ## 8 Interacting with the Model: Human + AI Systems AI regulations frequently emphasize the involvement of humans in various stages of the decision-making process. Often the intent is for the human decision-maker to vet an AI recommendation, take responsibility for the final decision, and intervene in case of emergency situations and system failures. We also consider the case of learning from human input. Examples of related criteria include: CDADM App. C: Decisions cannot be made without having specific human intervention points during the decision-making process; and the final decision must be made by a human. CDADM 6.3.6: Establishing contingency systems and/or processes as per Appendix C. (Which says: Ensure that contingency plans and/or backup systems are available should the Automated Decision System be unavailable.) Technical approaches associated with these criteria include combining information from multiple experts, as well as ways to ensure that humans are fully engaged in the decisions. ### What we know how to do There has been significant work on learning from humans. We can apply methods such as imitation learning [67; 85] and reinforcement learning [82] from human feedback to orient the model based on expert control or learn human intentions/preferences [97]. Active learning techniques can be used to proactively ask for information to improve a model from humans [140; 107]. Finally, we also have methods for humans to take the initiative to correct an agent (e.g., [110; 84; 92]). 
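The active learning loop mentioned above, in which the system proactively asks a human for labels on its most uncertain inputs, can be sketched as follows. This is a minimal illustration, not the method of any cited work; the entropy criterion and the toy probabilities are assumptions for the example.

```python
import math

def entropy(probs):
    """Predictive entropy of a class-probability vector; higher = more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_queries(pool, budget):
    """Pick the `budget` most uncertain unlabeled examples to route to a human."""
    ranked = sorted(range(len(pool)), key=lambda i: entropy(pool[i]), reverse=True)
    return ranked[:budget]

# Toy pool of model outputs over four unlabeled examples (three classes each).
pool = [
    [0.98, 0.01, 0.01],  # confident
    [0.34, 0.33, 0.33],  # very uncertain
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],
]
print(select_queries(pool, budget=2))  # -> [1, 3]
```

In practice the acquisition score and query budget are design choices, and the selected examples would be sent to a human annotator before the model is retrained.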
While methods in uncertainty quantification are always being improved, our current methods are reasonable for the purposes of flagging uncertain inputs for human inspection [53]. ### Directions requiring additional AI innovation * **HCI Methods for Avoiding Cognitive Biases.** Humans have many cognitive biases and limitations. If a system behaves well most of the time, people may start to over-rely on it. Confirmation bias can accompany backward reasoning (people finding ways to justify a given decision) but can be mitigated if a person performs forward reasoning first (looking at the evidence) [18]. Bias can also come from imperfect information fusion, e.g., if a human inspects the input data and then views an AI prediction based on the same input data, they may falsely believe that the AI prediction is a new, independent piece of information. For example, if a clinician forms an opinion from patient data and then sees an AI opinion based on the same data, they may falsely treat the AI opinion as a new, independent form of evidence. Appropriate human+AI interaction can help mitigate these biases. * **Shared Mental Models and Semantic Alignment.** Shared mental models between the human and the AI system are essential for effective human+AI interaction [4]. While there exists work in which agents use or create models of humans (e.g., [29]) to facilitate interaction, including modeling a person's latent states such as cognitive workload and emotions (e.g., [96]), it remains an open question as to how to develop and validate these methods for an increasing number of human+AI use cases. One particularly important area is semantic alignment between the way humans organize concepts and the way modern AI systems encode representations. Grounding terms has a long history in AI [61] and innovation is needed for our modern settings.
* **Humans-in-the-Loop in Time-Constrained Settings.** How can we include humans in the loop when decisions have to be made quickly, e.g., for industrial robots in emergency scenarios involving human workers? It is crucial that automated systems can fail gracefully and hand over control to humans, even in time-constrained settings [112]. * **Evaluation and Design of Realistic Human-in-the-Loop Systems.** Most current testing is for lay user and consumer applications, where risks and costs are minimal. However, evaluation in other settings is more challenging: Integrating a new interactive system into an existing workflow may require not only significant software effort, but also training of users. In high-stakes settings such as healthcare, criminal justice, and major financial decisions, there is a risk of real harm to people. How can we evaluate and design for these cases? Building more general knowledge about human-in-the-loop systems and developing smarter experimental designs may help reduce these burdens. So might validated methods for piloting systems in offline or de-risked ways that still inform the target application. Relatedly, standard procedures are needed for evaluating and monitoring human-in-the-loop systems. ### Areas that require interdisciplinary engagement Shared human+AI decision-making is an interdisciplinary area involving social science, psychology, cognitive science, etc. [34]. Fortunately, researchers in HCI already have connections to these fields. Furthermore, the design and adoption of new tools into workplaces is well-studied in design, human factors research, and management and operations science, and requires interdisciplinary teams with appropriate expertise. These interdisciplinary efforts will help inform decisions about whether, how, and which humans to include in the loop, as well as how a system that is expecting human input should respond to inappropriate, slow, or absent input from the human.
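A simple pattern underlying the human intervention points and contingency criteria discussed in this section is selective prediction: the system acts autonomously only when its confidence clears a threshold, and otherwise routes the case to a person. The sketch below is illustrative; the threshold value and return labels are assumptions, not part of any cited framework.

```python
def decide(probs, threshold=0.85):
    """Return ('auto', label) when the top-class confidence clears the
    threshold; otherwise ('defer_to_human', label) to route the case
    to a human reviewer (the contingency path)."""
    label = max(range(len(probs)), key=lambda k: probs[k])
    if probs[label] < threshold:
        return ("defer_to_human", label)
    return ("auto", label)

print(decide([0.95, 0.03, 0.02]))  # -> ('auto', 0)
print(decide([0.55, 0.40, 0.05]))  # -> ('defer_to_human', 0)
```

Choosing the threshold is itself a policy decision: it trades off automation rate against the volume of cases humans must review.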
## 9 Conclusion In this document, we examined the technical criteria in two real regulatory frameworks--the Canadian Directive on Automated Decision-Making and World Economic Forum AI Procurement in a Box. We find that we only have some of the tools needed to ascertain whether an AI system meets the stated requirements. We list several concrete directions for AI innovation that, if addressed, would improve our ability to create regulatable AI systems. Acknowledgements. The authors thank Andrew Ross, Siddharth Swaroop, Rishav Chourasia, Himabindu Lakkaraju, and Brian Lim; all participants of NUS Responsible, Regulatable AI Working Group 2022-2023 including Limsoon Wong, Angela Yao, Suparna Ghanvatkar, and Davin Choo.
2310.17097
Navigating Data Heterogeneity in Federated Learning: A Semi-Supervised Approach for Object Detection
Federated Learning (FL) has emerged as a potent framework for training models across distributed data sources while maintaining data privacy. Nevertheless, it faces challenges with limited high-quality labels and non-IID client data, particularly in applications like autonomous driving. To address these hurdles, we navigate the uncharted waters of Semi-Supervised Federated Object Detection (SSFOD). We present a pioneering SSFOD framework, designed for scenarios where labeled data reside only at the server while clients possess unlabeled data. Notably, our method represents the inaugural implementation of SSFOD for clients with 0% labeled non-IID data, a stark contrast to previous studies that maintain some subset of labels at each client. We propose FedSTO, a two-stage strategy encompassing Selective Training followed by Orthogonally enhanced full-parameter training, to effectively address data shift (e.g. weather conditions) between server and clients. Our contributions include selectively refining the backbone of the detector to avert overfitting, orthogonality regularization to boost representation divergence, and local EMA-driven pseudo label assignment to yield high-quality pseudo labels. Extensive validation on prominent autonomous driving datasets (BDD100K, Cityscapes, and SODA10M) attests to the efficacy of our approach, demonstrating state-of-the-art results. Remarkably, FedSTO, using just 20-30% of labels, performs nearly as well as fully-supervised centralized training methods.
Taehyeon Kim, Eric Lin, Junu Lee, Christian Lau, Vaikkunth Mugunthan
2023-10-26T01:40:28Z
http://arxiv.org/abs/2310.17097v3
Navigating Data Heterogeneity in Federated Learning: A Semi-Supervised Approach for Object Detection ###### Abstract Federated Learning (FL) has emerged as a potent framework for training models across distributed data sources while maintaining data privacy. Nevertheless, it faces challenges with limited high-quality labels and non-IID client data, particularly in applications like autonomous driving. To address these hurdles, we navigate the uncharted waters of Semi-Supervised Federated Object Detection (SSFOD). We present a pioneering SSFOD framework, designed for scenarios where labeled data reside only at the server while clients possess unlabeled data. Notably, our method represents the inaugural implementation of SSFOD for clients with 0% labeled non-IID data, a stark contrast to previous studies that maintain some subset of labels at each client. We propose **FedSTO**, a two-stage strategy encompassing **S**elective **T**raining followed by **O**rthogonally enhanced full-parameter training, to effectively address data shift (e.g. weather conditions) between server and clients. Our contributions include selectively refining the backbone of the detector to avert overfitting, orthogonality regularization to boost representation divergence, and local EMA-driven pseudo label assignment to yield high-quality pseudo labels. Extensive validation on prominent autonomous driving datasets (BDD100K, Cityscapes, and SODA10M) attests to the efficacy of our approach, demonstrating state-of-the-art results. Remarkably, FedSTO, using just 20-30% of labels, performs nearly as well as fully-supervised centralized training methods. ## 1 Introduction Federated Learning (FL) enables decentralized training across distributed data sources, preserving data privacy [27]. It has emerged in response to the need for privacy, security, and regulatory compliance such as GDPR [36] and CCPA [31]. 
FL trains models on local devices and shares only model updates, thereby improving privacy and efficiency. In a typical FL cycle, each client updates a shared model with local data, sends the updates to a server for parameter aggregation, and then updates its local model with the newly aggregated global model sent back by the server. Despite the potential of FL, the assumption of fully labeled data restricts its practicality [12; 11]. In order to acquire high-quality labels, data is often transmitted from edge clients to a central server, thereby compromising the privacy assurances provided by FL. Limited labels at the edge necessitate the adoption of transfer learning, self-supervised learning, and semi-supervised learning (SSL) techniques. However, the separation of labeled and unlabeled data complicates the application of these techniques to FL, which can undermine the system's effectiveness. This issue is amplified in labels-at-server scenarios where only the server possesses labeled data, and clients hold only unlabeled data [5; 10; 43; 2; 18; 42]. In autonomous driving, a novel approach is required to bridge the knowledge gap between labeled and unlabeled data without the need for direct data exchange. While Semi-Supervised Federated Learning (SSFL) has been explored for image classification tasks [5; 10; 43; 2; 18; 42], these studies have faced the following challenges: 1. Limited scale and complexity of tasks with datasets such as CIFAR and ImageNet, while semi-supervised federated object detection (SSFOD) presents sizably greater difficulties. 2. Non-IID data shift from labeled to unlabeled data. Our investigation stands apart in tackling the most challenging FL situations where clients hold exclusively unlabeled data from a different distribution from labeled server data. This acknowledges the inherent heterogeneity of real-world FL settings, such as diverse weather conditions across clients.
For instance, one client's dataset may predominantly consist of images captured under cloudy conditions, while others may include images from overcast, rainy, snowy, etc. conditions. To surmount these inadequately addressed challenges of SSFOD, we introduce FedSTO (Federated Selective Training followed by Orthogonally enhanced training), a two-stage training strategy tailored specifically for our SSFOD framework (Figure 1). Our key contributions include: * **Selective Training and Orthogonal Enhancement:** FedSTO begins with selective training of the model's backbone while other components remain frozen, fostering more consistent representations and establishing a robust backbone. This promotes generalization across non-IID clients, even in the absence of local labels. The subsequent stage involves fine-tuning all parameters with orthogonal regularizations applied to the non-backbone part of the model. This enhancement step is designed to imbue the predictors with resilience against skewed representations induced by local data heterogeneity, thereby promoting representation divergence and robustness. * **SSFL with a Personalized EMA-Driven Semi-Efficient Teacher:** To prevent deterioration of teacher pseudo labeling models for non-IID unlabeled clients, we showcase for the first time an SSFOD framework that applies an alternate training methodology [5], integrated with a Semi-Efficient Teacher [38], driven by a local Exponential Moving Average (EMA). Our empirical observations suggest that this personalized EMA-driven model provides superior quality pseudo labels for detection, contrary to the commonly used global model for pseudo labeling in related studies [5].

Figure 1: An overview of our FedSTO method within the SSFOD framework with key components: selective training, orthogonal enhancement, and local Exponential Moving Average (EMA)-driven pseudo label assignment, organized into two stages. Algorithm steps are numbered accordingly.
This approach further enhances the quality of the learning process, mitigating potential pitfalls of noisy pseudo labeling. * **Performance Improvements:** FedSTO achieves 0.082 and 0.035 higher mAP@0.5 when compared to partially supervised and SSFL baselines respectively, nearly matching the fully supervised model's performance (0.012 gap) on BDD100K [41] with a mere 25% of labeled data. We demonstrate similar considerable improvements in model generalization (Figure 2) on rigorous benchmark and ablation experiments with 20k-120k datapoints from Cityscapes [4] and SODA10M [9], utilizing the YOLOv5 object detector [13]. Our above contributions present a pioneering approach for utilizing unlabeled data in FL to enhance non-IID detection performance, especially for dynamic objects--an aspect not yet considered in previous research. Despite the paucity of research on SSFOD, we contend that our methods and experiments offer a valuable benchmark for future investigations across diverse domains. ## 2 Related Works ### Federated Learning (FL): Challenges and Advances FL has gained significant attention in recent years as a privacy-preserving approach to harness the potential of distributed data [27; 20; 21; 15; 28; 33]. Despite the progress made in FL, most research has focused primarily on classification tasks, which may limit its applicability and generalizability to a broader range of real-world problems. Advanced FL techniques are essential to revolutionize various fields, including autonomous driving, healthcare, and finance, by enabling collaborative learning from distributed data sources [6; 30]. Addressing data heterogeneity is of paramount importance in FL, as clients frequently hold data with diverse distributions, which may impact the performance of the global model. To tackle this challenge, researchers have proposed various techniques to handle non-IID data, including adaptive aggregation algorithms and local fine-tuning of models [20; 33].
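The aggregation step these techniques build on is the basic FedAvg update, in which the server forms a weighted average of client parameters by local dataset size. A minimal sketch on flat parameter lists follows (real FL systems aggregate full model state dicts; the toy numbers are illustrative):

```python
def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average client parameter vectors,
    weighted by the number of local training examples on each client."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]

# Two clients: the first trained on 30 examples, the second on 10.
print(fedavg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[30, 10]))  # -> [1.5, 2.5]
```

Adaptive aggregation variants replace these fixed size-based weights with weights that account for client drift or data quality.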
Personalization constitutes another vital aspect of FL, since clients may exhibit unique requirements or preferences not entirely captured by the global model [3; 19; 39]. Methods such as model distillation [23] and meta-learning [7] have been investigated to facilitate client-specific model adaptation and personalization. Finally, communication efficiency is crucial in FL, as exchanging model updates can be resource-intensive. To alleviate this issue, researchers have introduced strategies like quantization [34], sparsification [29], and the utilization of a supernet containing multiple subnetworks, with only the selected subnetwork transmitted to the server to reduce communication overhead while preserving model performance [17]. Figure 2: Performance comparison on BDD100K dataset [41]. “Partially Supervised Training” shows lower-bound performance using partial labels in a centralized setting. “Vanilla Semi-Supervised Federated Learning” and “Our FedSTO” demonstrate improved performance with non-IID federated data. FedSTO approaches the “Fully Supervised Training” upper-bound performance under full label use in a centralized setting. The x-axis shows the number of labeled examples, and the y-axis displays the mean average precision (mAP@0.5) on the test set. ### Semi-Supervised Object Detection (SSOD) with YOLO Object Detector SSOD has been an active research area, focusing on improving the quality of pseudo labels to enhance overall detection performance [35; 40]. The evolution of SSL techniques in object detection primarily revolves around using pretrained architectures and applying strong data augmentation strategies to generate consistent and reliable pseudo labels. Traditional single-stage detectors, such as the family of YOLO detectors, have faced notable challenges in leveraging SSL techniques, often underperforming compared to their two-stage counterparts (e.g., Faster RCNN).
The limited efficacy of existing SSL methods for single-stage detectors has inspired researchers to develop innovative solutions to overcome these limitations [44]. Recently, a novel pipeline incorporating EMA of model weights has exhibited remarkable enhancements in the performance of single-stage detectors like YOLO detectors [38]. By utilizing the EMA model for pseudo labeling, researchers have adeptly addressed the inherent weaknesses in single-stage detectors, substantially elevating their performance in SSL contexts for object detection tasks.

### Semi-Supervised Federated Learning (SSFL)

SSFL has emerged as a promising approach to address the challenge of limited labeled data in FL scenarios [5; 10; 43; 2; 18; 42]. SSFL aims to jointly use both labeled and unlabeled data owned by participants to improve FL. Two primary settings have been explored: Labels-at-Client and Labels-at-Server [10; 11]. In the Labels-at-Client scenario, clients possess labeled data, while the server only has access to unlabeled data. Conversely, in the Labels-at-Server scenario, the server holds labeled data, and clients have only unlabeled data. Despite the progress in SSFL, there remain limitations in the current research landscape. The majority of existing SSFL research predominantly focuses on image classification tasks, leaving other applications relatively unaddressed. In this study, we address these limitations by tackling the more realistic and challenging scenarios with edge clients having (1) no labels and (2) non-IID data (domain shift from the server labeled data), specifically in the context of object detection tasks.

## 3 Problem Statement

**SSFOD.** We tackle a semi-supervised object detection task involving a labeled dataset \(\mathcal{S}=\{\mathbf{x}_{i}^{s},\mathbf{y}_{i}^{s}\}_{i=1}^{N_{S}}\) and an unlabeled dataset \(\mathcal{U}=\{x_{i}^{u}\}_{i=1}^{N_{U}}\), focusing on scenarios where \(N_{S}\ll N_{U}\).
In our SSFOD setup, as illustrated in Figure 1, we assume \(M\) clients each possessing an unsupervised dataset \(x^{u,m}\). The server retains the labeled dataset \(\{x^{s},\mathbf{y}^{s}\}\) and a model parameterized by \(W^{s}\). We assume that all models, including each client model parameterized by \(W^{u,m}\), share the same object detection architecture \(f:(\mathbf{x},W)\mapsto f(\mathbf{x},W)\), which maps an input \(\mathbf{x}\) and parameters \(W\) to a set of bounding boxes and their corresponding class probabilities on the \(K\)-dimensional simplex (e.g., using the sigmoid function applied to model outputs unit-wise).

**Data Heterogeneity.** Our study addresses non-IID data resulting from varying weather conditions such as cloudy, overcast, rainy, and snowy, inspired by feature distribution skew or covariate shift [14]. We utilize three datasets, BDD100K [41], CityScapes [4], and SODA10M [9], each displaying class distribution heterogeneity and label density heterogeneity. Our aim is an SSFOD framework that can manage this heterogeneity, maintaining performance across diverse conditions and distributions. Data is considered IID when each client exhibits a balanced weather condition distribution.

**Evaluation.** In our framework, we assess the performance of all detection tasks based on mean average precision ([email protected]), a standard metric in object detection literature that provides a comprehensive view of model performance across various object classes and sizes. Importantly, we evaluate the post-training performance of our method by assessing the personalized models of the server and client on their respective datasets. This approach ensures a fair and context-specific evaluation, reflecting the true performance of the personalized models in their intended environments.
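The [email protected] metric above rests on the standard intersection-over-union match criterion: a prediction counts toward a class's average precision only when its IoU with a same-class ground-truth box reaches 0.5. A minimal, self-contained IoU helper (corner-format boxes; an illustrative sketch, not the evaluation code used in the experiments):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(round(iou((0, 0, 2, 2), (1, 0, 3, 2)), 4))  # -> 0.3333 (intersection 2, union 6)
```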
**Baseline Training.** Our work explores two principal baselines: "Centralized Training" and "Federated Learning". Depending on the degree of labeled data utilization, we categorize the training into "Partially Supervised" and "Fully Supervised". An ideal situation is one where a fully supervised model is trained in a centralized fashion, utilizing all labeled data. In contrast, a more challenging scenario involves a partially supervised model trained solely on the server's limited labeled data. Under our problem setup, we initially establish a baseline by performing partial supervision on the server's limited labeled data, which serves as a pretraining step. Following this, each client conducts "Unsupervised learning with unlabeled data". Upon completion, clients transmit their model weights to the server. The server then aggregates these weights and fine-tunes the amalgamated model using its labeled data. The updated model is subsequently disseminated back to the clients, culminating one learning round. This cyclical process, known as alternate training in Diao et al. [5], continues iteratively; it merges the strengths of supervised and unsupervised learning to capitalize on unlabeled data while preventing model deterioration, thereby optimizing model performance.

**Personalized Pseudo Labeling for Unlabeled Clients.** A crucial obstacle in SSFOD lies in precise pseudo label assignment, as improper allotments can result in label inconsistencies, thus negatively impacting mutual learning performance. Building upon the foundation by Xu et al. [38] in centralized settings, we present the first extension of this approach to federated settings, leveraging a personalized Pseudo Label Assigner (PLA) equipped with local EMA models. This technique bifurcates pseudo labels into reliable and unreliable categories using high and low thresholds, thus ensuring a robust and precise learning mechanism in federated environments. In FL, the PLA can be applied to both global and local models.
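The threshold-based split performed by the PLA can be sketched as follows; the `tau_high`/`tau_low` values and the tuple-based prediction format are illustrative placeholders, not the actual thresholds used with the YOLOv5 outputs:

```python
def assign_pseudo_labels(predictions, tau_high=0.7, tau_low=0.3):
    """Split raw predictions into reliable / unreliable pseudo labels by confidence.

    predictions: list of (box, class_id, score) tuples from the local EMA model.
    Scores >= tau_high become reliable pseudo labels; scores <= tau_low are
    treated as unreliable (confident background); scores in between are ignored.
    """
    reliable, unreliable, ignored = [], [], []
    for box, cls, score in predictions:
        if score >= tau_high:
            reliable.append((box, cls, score))
        elif score <= tau_low:
            unreliable.append((box, cls, score))
        else:
            ignored.append((box, cls, score))
    return reliable, unreliable, ignored

preds = [((0, 0, 10, 10), 1, 0.92), ((5, 5, 20, 20), 0, 0.10), ((2, 2, 8, 8), 1, 0.55)]
rel, unrel, ign = assign_pseudo_labels(preds)
print(len(rel), len(unrel), len(ign))  # -> 1 1 1
```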
However, the global model may fall short in capturing unique features of local data, compromising pseudo label quality. As demonstrated in our evaluation (Table 1), locally updated EMA models outperform global models. While it is feasible to federate the local EMA model, it introduces certain trade-offs, such as increased communication costs and minor performance degradation compared to the local EMA model. Our SSFOD framework, therefore, incorporates a local PLA with a local EMA model, optimally balancing communication efficiency and model stability, ensuring an effective learning process for SSOD tasks in distributed environments.

**SSFOD with YOLO.** We utilize the YOLOv5 model, a single-stage object detector, in our evaluation. Existing literature shows a scarcity of research on SSFOD within FL like FedAvg [27], particularly for single-stage detectors like YOLO. Figure 3 compares various learning approaches in centralized and federated settings, denoted by green dotted and blue hatched boxes, respectively. We highlight non-IID scenarios with labeled (cloudy) and unlabeled data (overcast, rainy, snowy). In the CL scenario, fully supervised methods noticeably surpass partially supervised ones, and SSL approaches almost match the performance of fully supervised methods. However, baseline training for FL falls substantially short of these high standards, particularly with unlabeled data.
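The local EMA model that feeds the PLA is maintained as an exponential moving average of the student weights. A minimal sketch (the dict-of-scalars weight representation is a simplification; the 0.999 decay matches the rate reported in Section 5.1.2):

```python
def ema_update(ema_weights, student_weights, decay=0.999):
    """One EMA step: ema <- decay * ema + (1 - decay) * student, per parameter."""
    return {name: decay * ema_weights[name] + (1.0 - decay) * student_weights[name]
            for name in ema_weights}

ema = {"w": 1.0}
student = {"w": 0.0}  # student stays fixed here, only to show the decay behavior
for _ in range(1000):
    ema = ema_update(ema, student)
print(round(ema["w"], 3))  # -> 0.368, i.e. 0.999**1000: the EMA tracks the student slowly
```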
\begin{table} \begin{tabular}{c c c c c c|c c c c} \hline \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Non-IID} & \multicolumn{4}{c}{IID} \\ \cline{3-10} & & Cloudy & Overcast & Rainy & Snowy & Cloudy & Overcast & Rainy & Snowy \\ \hline \multirow{2}{*}{Centralized} & Fully Supervised & 0.600 & 0.604 & 0.617 & 0.597 & 0.600 & 0.604 & 0.617 & 0.597 \\ & Partially Supervised & 0.540 & 0.545 & 0.484 & 0.474 & 0.528 & 0.545 & 0.533 & 0.510 \\ \hline \multirow{2}{*}{Federated} & Global Model [5] & 0.555 & 0.560 & 0.497 & 0.488 & 0.540 & 0.551 & 0.576 & 0.542 \\ & Local EMA Model [38] & 0.560 & 0.566 & 0.553 & 0.553 & 0.572 & 0.588 & 0.593 & 0.610 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance under different weather conditions for non-IID and IID data splits with 1 server and 3 clients. The results are presented for centralized (Fully Supervised and Partially Supervised) and federated approaches with a pseudo label assigner (Global Model and Local EMA Model). Figure 3: Performance of various methods on the BDD100K dataset [41], with the server containing labeled data for the “Cloudy” category and 3 clients having unlabeled data for “Rainy”, “Snowy”, and ”Overcast” categories. Baseline SSFL (red hatched boxes) struggles in comparison to centralized learning (bars in green dotted boxes). “Fully Supervised” and “Partially Supervised” refer to training a centralized model with the complete labeled dataset and only the “Cloudy” labeled data, respectively. ## 4 Main Method: FedSTO To mitigate these inherent hurdles presented by FL, we introduce FedSTO, a method that unfolds in two stages, preceded by a warmup stage. The process commences with an emphasis on robust representation learning for pretraining (Subsection 4.1), followed by full parameter training (Subsection 4.2). The initial stage of pretraining integrates a warm-up period utilizing labeled data at the server, transitioning into selective training. 
This groundwork is fortified by the orthogonal enhancement implemented in the subsequent full parameter training phase.

### Selective Training (ST)

Selective Training (ST) is designed to address the primary challenge of establishing a robust backbone for the object detector in FL. The approach unfolds as follows:

1. **Labeled dataset training**: All model parameters are updated using a labeled dataset. This step ensures training commences on high quality labeled data, mitigating the potential destabilizing effect of noisy, unlabeled data, and heterogeneity from diverse weather conditions.
2. **Client-side training with unlabeled dataset**: The model, updated in the previous step, is dispatched to the clients. Each client trains the model on their local unlabeled dataset. However, during this phase, only the backbone part of the model is updated, leaving other components frozen. This selective updating procedure fosters more consistent representations by sharing the same non-backbone part (e.g., neck and head), and thus enhances its potential generalization capabilities by concentrating on feature extraction.
3. **Server-side aggregation**: The server aggregates the updated backbone parameters from clients, effectively synthesizing the learned information from diverse unlabeled datasets. The aggregated backbone is then utilized in the first step for further training, repeating the process until performance convergence.

By adhering to this procedure, ST effectively navigates the challenges inherent in the progression of FL while simultaneously accruing substantial benefits. Ensuring stability in semi-supervised object detection tasks is paramount. The exposure to heterogeneous unlabeled data, potentially characterized by noise or variable quality, can induce biases into the neck and head components of the model, thereby risking performance degradation by inadvertently generating low-quality or imprecise pseudo annotations.
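The selective update of step 2 and the server-side aggregation of step 3 can be sketched as follows (a plain-Python stand-in for tensor parameters; the `backbone.` name prefix and the uniform averaging coefficients are illustrative):

```python
def client_backbone_update(weights, grads, lr=0.01):
    """Apply a gradient step only to backbone parameters; neck/head stay frozen."""
    return {name: (w - lr * grads[name]) if name.startswith("backbone.") else w
            for name, w in weights.items()}

def aggregate_backbones(client_weights, coeffs):
    """Server step: weighted average of the clients' backbone parameters only."""
    names = [n for n in client_weights[0] if n.startswith("backbone.")]
    return {n: sum(c * w[n] for c, w in zip(coeffs, client_weights)) for n in names}

w = {"backbone.conv": 1.0, "head.cls": 1.0}
w1 = client_backbone_update(w, {"backbone.conv": 0.5, "head.cls": 0.5})
print(w1)  # -> {'backbone.conv': 0.995, 'head.cls': 1.0}: the head is untouched
```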
To mitigate this, ST employs a selective update strategy specifically targeting the backbone of the model, which is predominantly entrusted with the task of extracting salient features from the input data. By concentrating on the backbone during training, ST aids in the preservation of model stability and the enhancement of its generalization capabilities. Furthermore, in this stage, the communication cost between the server and clients is significantly reduced by uploading only the backbone part from the clients to the server. Consequently, it significantly minimizes the deleterious impacts of heterogeneous unlabeled data on overall model performance (Table 2). While ST brings marginal improvements in IID conditions, it presents potent effects under Non-IID circumstances, emphasizing its efficacy in handling heterogeneous data distributions.

### Full Parameter Training (FPT) with Orthogonal Enhancement

Inspired by the critical need for personalized models to exhibit robustness against feature distribution skewness--predominantly due to diverse weather conditions--we integrate the orthogonality regularization presented by Kim et al. [16], which penalizes the symmetric version of the spectral restricted isometry property regularization, \(\sum_{\theta}\sigma(\theta^{T}\theta-I)+\sigma(\theta\theta^{T}-I)\), within the SSFOD framework, where \(\sigma(\cdot)\) calculates the spectral norm of the input matrix and \(\theta\) is a weight matrix from the non-backbone parts.
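A pure-Python rendering of this penalty for a single weight matrix, with the spectral norm of the symmetric residuals \(\theta^{T}\theta-I\) and \(\theta\theta^{T}-I\) estimated by power iteration (a standalone sketch; an actual implementation would use framework linear algebra on the neck/head weight tensors):

```python
import math
import random

def matmul(A, B):
    """Dense matrix product of two lists-of-rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def residual(A):
    """A - I for a square matrix A."""
    return [[a - (1.0 if i == j else 0.0) for j, a in enumerate(row)]
            for i, row in enumerate(A)]

def spectral_norm(S, iters=100):
    """Largest |eigenvalue| of a symmetric matrix S via power iteration."""
    random.seed(0)
    v = [random.random() + 0.1 for _ in S]
    for _ in range(iters):
        w = [sum(s * x for s, x in zip(row, v)) for row in S]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0  # guard against the zero matrix
        v = [x / norm for x in w]
    w = [sum(s * x for s, x in zip(row, v)) for row in S]
    return math.sqrt(sum(x * x for x in w))

def ortho_penalty(theta):
    """sigma(theta^T theta - I) + sigma(theta theta^T - I) for one weight matrix."""
    theta_t = [list(col) for col in zip(*theta)]
    return (spectral_norm(residual(matmul(theta_t, theta)))
            + spectral_norm(residual(matmul(theta, theta_t))))

theta = [[2.0, 0.0], [0.0, 2.0]]  # 2*I: each residual is 3I, so the penalty is 3 + 3
print(round(ortho_penalty(theta), 6))  # -> 6.0
```

An exactly orthogonal matrix gives a zero penalty, which is what drives the non-backbone weights toward diverse, non-redundant features.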
This regularization is applied during both server and client training stages and targets non-backbone \begin{table} \begin{tabular}{c c c c c c|c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Non-IID} & \multicolumn{4}{c}{IID} \\ \cline{2-10} & Cloudy & Overcast & Rainy & Snowy & Total & Cloudy & Overcast & Rainy & Snowy & Total \\ \hline Partially Supervised & 0.540 & 0.545 & 0.484 & 0.474 & 0.511 & 0.528 & 0.545 & 0.533 & 0.510 & 0.529 \\ \hline + SSFL [5] with Local EMA Model & 0.560 & 0.566 & 0.553 & 0.553 & 0.558 & 0.572 & 0.588 & 0.593 & **0.610** & 0.591 \\ + Selective Training & 0.571 & 0.583 & 0.557 & 0.556 & 0.567 & 0.576 & 0.578 & 0.594 & 0.599 & 0.587 \\ \(\sharp\)FPT with Orthogonal Enhancement [16] & **0.596** & **0.607** & **0.590** & **0.580** & **0.593** & **0.591** & **0.634** & **0.614** & 0.595 & **0.609** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance on the BDD dataset with 1 labeled server and 3 unlabeled clients as each element of our FedSTO approach within the SSFOD framework is added. It highlights how each added method contributes to the overall performance under both Non-IID and IID conditions. components of the architecture. Our approach promotes generation of diverse, non-redundant, and domain-invariant feature representations, thereby enhancing the model's robustness, reducing noise influence, and significantly augmenting its ability to handle unlabeled data across varied domains. Incorporating orthogonality regularization into our framework substantially amplifies the divergence in the embedding space, enhancing the model's overall detection quality and the reliability of pseudo labels. Importantly, our strategy of embedding orthogonality into the non-backbone parts of the model, such as the neck and head, fosters a more balanced and comprehensive training process. 
This reduces the bias towards specific weather conditions and the heterogeneity of object categories, leading to improved performance as demonstrated in Table 2. Our approach draws upon successful techniques from fine-tuning [8; 26], and transfer learning, and is particularly inspired by meta-learning concepts,[37; 32]. In particular, the tendency of the non-backbone components of the model to develop biases prompts us to introduce an orthogonal property to this section. This measure helps counteract these biases, thereby further enhancing the model's robustness and adaptability when confronted with diverse, unlabeled data across multiple domains. ### Main Algorithm: FedSTO ``` input : server model parameterized by \(W_{s}\), the number of rounds for each phase \(T_{1},T_{2}\), client models parameterized by \(\{W_{u,1},...,W_{u,M}\}\), client backbone part parameterized by \(\{B_{u,1},...,B_{u,M}\}\) 1:\(W_{s}\leftarrow\) WarmUp\((x_{s},y_{s},W_{s})\)// Supervised training at server /* Phase 1: Selective Training for Pretraining */ 2:for\(t\gets 0,\dots,T_{1}-1\)do 3:\(S^{t}\leftarrow\) SampleClients 4:for each client \(k\in S^{t}\) in parallel do 5:\(W_{u,k}\leftarrow\) Client-BackboneUpdate\((x_{u,k},B_{u,k})\)// Client-Update 6:endfor 7:\(W_{s}\leftarrow\sum_{k\in S^{t}}p_{k}W_{u,k}\)// Aggregation 8:\(W_{s}\leftarrow\) Server-Update\((x_{s},y_{s},W_{s})\)// Server-Update 9:endfor /* Phase 2: Full Parameter Training with Orthogonal Enhancement */ 10:for\(t\gets 0,\dots,T_{2}-1\)do 11:\(S^{t}\leftarrow\) SampleClients 12:for each client \(k\in S^{t}\) in parallel do 13:\(W_{u,k}\leftarrow\) Client-OrthogonalUpdate\((x_{u,k},W_{u,k})\)// Client-OrthogonalUpdate 14:endfor 15:\(W_{s}\leftarrow\sum_{k\in S^{t}}p_{k}W_{u,k}\)// Aggregation 16:\(W_{s}\leftarrow\) Server-OrthogonalUpdate\((x_{s},y_{s},W_{s})\)// Server-OrthogonalUpdate 17:endfor ``` **Algorithm 1**FedSTO Algorithm within the SSFOD Framework Algorithm 1 illustrates the overall procedure of FedSTO within 
the SSFOD framework. The server model, parameterized by \(W_{s}\), is first trained in a supervised fashion during the warm-up phase (Line 1). The algorithm then transitions to Phase 1: Selective Training for Pretraining. This phase involves multiple training iterations (Line 2), where in each iteration, a subset of clients is sampled (Line 3). The backbone part of each client's model, \(W_{u,k}\), is updated using their local unlabeled datasets (Line 5). The updated parameters are then aggregated at the server (Line 7), and the server model is updated using its labeled dataset (Line 8). In Phase 2: Full Parameter Training with Orthogonal Enhancement, the Client-OrthogonalUpdate and Server-OrthogonalUpdate methods are employed (Lines 13 and 16), introducing orthogonality regularization to the training process. This second phase debiases the non-backbone parts of the model, ensuring a robust predictor across various weather conditions that effectively counterbalances the inherent data heterogeneity.

## 5 Experiment

### Experimental Setup

#### 5.1.1 Datasets

**BDD100K [41].** We utilize the BDD100K dataset, which consists of 100,000 driving videos recorded across diverse U.S. locations and under various weather conditions, to evaluate our method. Each video, approximately 40 seconds in duration, is recorded at 720p and 30 fps, with GPS/IMU data available for driving trajectories. For our experiments, we specifically select 20,000 data points, distributed across four distinct weather conditions--cloudy, rainy, overcast, and snowy. In this study, we primarily focus on five object categories: person, car, bus, truck, and traffic sign. The dataset is partitioned into clients based on these weather conditions, simulating data-heterogeneous clients. This experimental setup enables us to investigate the influence of data heterogeneity on our framework and to evaluate its robustness under realistic conditions.
**Cityscapes [4].** We conduct additional experiments using the Cityscapes dataset, which consists of urban street scenes from 50 different cities. Given that this dataset does not provide precise weather information for each annotation, we distribute the data to clients in a uniformly random manner. For our studies, we employ the package encompassing fine annotations for 3,475 images in the training and validation sets, and dummy annotations for the test set with 1,525 images. We also include the other package, providing an additional 19,998 8-bit images for training.

**SODA10M [9].** To evaluate our approach under diverse conditions, we employ the SODA10M dataset, which features varied geographies, weather conditions, and object categories. In an IID setup, 20,000 labeled data points are uniformly distributed among one server and three clients. For a more realistic setup, the 20,000 labeled data points are kept on the server while 100,000 unlabeled data points are distributed across the clients. This arrangement enables performance evaluation under distinct weather conditions--clear, overcast, and rainy--showcasing resilience and robustness.

#### 5.1.2 Training Details

We conduct our experiments in an environment with one server and multiple clients, depending on the experiment. Both the server and the clients operate on a single local epoch per round. Our training regimen spans 300 rounds: 50 rounds of warm-up, 100 rounds of pretraining (\(T_{1}\)), and 150 rounds of orthogonal enhancement (\(T_{2}\)). We use the YOLOv5 Large model architecture with Mosaic, left-right flip, large scale jittering, graying, Gaussian blur, cutout, and color space conversion augmentations. A constant learning rate of 0.01 is maintained. Binary sigmoid functions determine objectness and class probability with a balance ratio of 0.3 for class, 0.7 for object, and an anchor threshold of 4.0.
The ignore threshold ranges from 0.1 to 0.6, with a Non-Maximum Suppression (NMS) confidence threshold of 0.1 and an IoU threshold of 0.65. We incorporate an exponential moving average (EMA) rate of 0.999 for stable model parameter representation.

### Results

Table 3 illustrates the efficacy of our proposed SSFOD method against various baselines and state-of-the-art approaches on the BDD100K dataset. FedSTO significantly outperforms other techniques under different weather conditions and data distribution scenarios, i.e., IID and Non-IID. In the CL scenarios, the fully supervised approach yields the highest performance, with SSL methods, such as EMA Teacher [38], demonstrating competitive results. However, the real challenge lies in federated settings, where data privacy and distribution shift become critical considerations. In the SSFOD framework, our FedSTO method consistently surpasses other SSFL techniques. Notably, it achieves superior results even in challenging Non-IID settings, demonstrating its robustness to data distribution shifts. Similar trends hold when increasing the number of clients as shown in the appendix.
\begin{table} \begin{tabular}{c c c c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Algorithm} & \multirow{2}{*}{Method} & \multicolumn{6}{c}{Non-IID} & \multicolumn{6}{c}{IID} \\ \cline{4-13} & & & Cloudy & Overcast & Rainy & Snowy & Total & Cloudy & Overcast & Rainy & Snowy & Total \\ \hline \multirow{4}{*}{Centralized} & \multirow{2}{*}{SL} & Fully Supervised & 0.600 & 0.604 & 0.617 & 0.597 & 0.605 & 0.600 & 0.604 & 0.617 & 0.597 & 0.605 \\ & & Partially Supervised & 0.540 & 0.545 & 0.484 & 0.474 & 0.511 & 0.528 & 0.545 & 0.533 & 0.510 & 0.529 \\ \cline{2-13} & & \begin{tabular}{c} SSL \\ EMA Teacher [38] \\ \end{tabular} & 0.551 & 0.550 & 0.502 & 0.503 & 0.527 & 0.546 & 0.557 & 0.541 & 0.533 & 0.544 \\ & & EMA Teacher [38] & 0.598 & 0.59 & 0.568 & 0.586 & 0.581 & 0.586 & 0.570 & 0.571 & 0.573 & 0.575 \\ \hline \multirow{4}{*}{Federated} & \multirow{2}{*}{SFL} & \multirow{2}{*}{Fully Supervised} & 0.627 & 0.614 & 0.607 & 0.585 & 0.608 & 0.635 & 0.612 & 0.608 & 0.595 & 0.613 \\ \cline{2-13} & & FedAvg [27] & 0.560 & 0.566 & 0.553 & 0.553 & 0.558 & 0.572 & 0.588 & 0.593 & **0.610** & 0.591 \\ \cline{1-1} \cline{2-13} & & FedDyn [11] & 0.508 & 0.569 & 0.541 & 0.522 & 0.535 & 0.355 & 0.414 & 0.420 & 0.397 & 0.400 \\ \cline{1-1} \cline{2-13} & & FedDyn [35] & 0.561 & 0.572 & 0.565 & 0.566 & 0.566 & 0.591 & 0.587 & 0.588 & 0.577 & 0.586 \\ \cline{1-1} \cline{2-13} & & FedDyn [35] & 0.514 & 0.532 & 0.496 & 0.489 & 0.508 & 0.510 & 0.549 & 0.547 & 0.554 & 0.540 \\ \cline{1-1} \cline{2-13} & & **FedSTO** & **0.596** & **0.607** & **0.590** & **0.580** & **0.593** & **0.591** & **0.634** & **0.614** & 0.595 & **0.609** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of FedSTO within the SSFOD framework against the Baselines, SSL, SSFL methods with 1 server and 3 clients on BDD100K dataset [41]. FedSTO exhibits improvements under various weather conditions on both IID and Non-IID cases, and performs close to the centralized fully supervised case. \(\dagger\) denotes the SSFL with the local EMA model as a pseudo label generator.

In IID conditions, our method continues to excel, achieving results close to the fully supervised centralized approach. These findings highlight the strength of our FedSTO method in leveraging the benefits of FL while mitigating its challenges. The robust performance of our approach across various weather conditions and data distributions underscores its potential for real-world deployment. When examining the performance on the Cityscapes dataset under uniformly random distributed conditions, the superiority of FedSTO within the SSFOD framework also remains apparent, as shown in Table 4. Compared to other methods, FedSTO consistently demonstrates improved generalization across most object categories, both for labeled and unlabeled data. Intriguingly, the performance of FedSTO surpasses even that of SSL in CL environments.

**Evaluation with [email protected].** [email protected] results on the BDD dataset highlight the efficacy of the FedSTO approach (Table 5). In Non-IID settings, while the Fully Supervised centralized method achieves an average mAP of 0.357, FedSTO records 0.338, exhibiting comparable performance. However, under IID conditions, FedSTO registers an [email protected] of 0.357, closely matching the SFL result of 0.359. These results indicate that FedSTO offers competitive object detection capabilities, even with stricter IoU thresholds.

**Results on Real-World Dataset, SODA10M [9].** Figure 3(a) illustrates the performance of our method and other baselines on the SODA10M dataset, where labeled data is synthetically divided in an IID manner across one server and three clients. Our method demonstrates near-parity with the fully supervised approach, evidencing its efficacy.
Figure 3(b) represents the averaged performance across varying weather conditions on the SODA10m dataset. Here, all 20k labeled data resides on the server, and 100k unlabeled data points from SODA10m are distributed across three clients. Despite these variations in conditions, our method consistently outperforms other baselines, confirming its robustness and applicability in diverse environments. \begin{table} \begin{tabular}{c c c c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Algorithm} & \multirow{2}{*}{Method} & \multicolumn{6}{c|}{Labeled} & \multicolumn{6}{c}{Unlabeled} \\ \cline{3-13} & & & \multicolumn{6}{c|}{Categories} & & & \\ \cline{3-13} & & Person & Car & Bus & Truck & Traffic Sign & Person & Car & Bus & Truck & Traffic Sign \\ \hline \multirow{3}{*}{Centralized} & SL & Fully Supervised & 0.569 & 0.778 & 0.530 & 0.307 & 0.500 & 0.560 & 0.788 & 0.571 & 0.283 & 0.510 \\ & & Partially Supervised & 0.380 & 0.683 & 0.193 & 0.302 & 0.246 & 0.358 & 0.648 & 0.343 & 0.138 & 0.255 \\ \cline{2-13} & SSL & \begin{tabular}{c} Unbiased Teacher [25] \\ EMA Teacher [25] \\ \end{tabular} & 0.391 & 0.695 & 0.225 & 0.320 & 0.297 & 0.410 & 0.689 & 0.373 & 0.129 & 0.354 \\ & & EMA Teacher [25] & 0.475 & 0.711 & 0.354 & 0.347 & 0.379 & 0.460 & 0.727 & 0.436 & 0.144 & 0.378 \\ \hline \multirow{3}{*}{Federated} & SFL & Fully Supervised & 0.498 & 0.715 & 0.357 & 0.289 & 0.410 & 0.492 & 0.714 & 0.451 & 0.251 & 0.425 \\ \cline{2-13} & & FedAvg [27] & 0.450 & 0.697 & 0.310 & **0.304** & 0.356 & 0.482 & 0.725 & 0.425 & **0.247** & 0.397 \\ \cline{2-13} & SSFL\({}^{\dagger}\) & FedBN [22] & 0.488 & 0.709 & 0.325 & 0.285 & 0.411 & 0.375 & 0.618 & 0.046 & 0.031 & 0.286 \\ \cline{2-13} & & **FedSTO** & **0.504** & **0.720** & **0.342** & 0.261 & **0.415** & **0.487** & **0.740** & **0.460** & 0.181 & **0.437** \\ \hline \hline \end{tabular} \end{table} Table 4: Performance under random distributed cases of Cityscapes [4]. 
FedSTO exhibits improvements under various object categories, and significantly improves performance for unlabeled clients. \(\dagger\) denotes the SSFL with the local EMA model as a local pseudo label generator. Figure 4: (a) Performance of various methods on the SODA10M dataset in an IID setting, (b) Average performance across different weather conditions using unlabeled data from the SODA10M dataset.

**Varying Number of Clients.** In a non-IID BDD dataset configuration with 1 server and 20 clients, our proposal advances beyond competing methods, scoring 0.455 and 0.458 on labeled and unlabeled data, respectively. This outcome showcases our method's aptitude for tackling intricate real-world circumstances.

**Varying Sampling Ratio.** Table 7 demonstrates the impact of different client sampling ratios on the FedSTO performance using the BDD100k dataset. Notably, even at a lower sampling ratio of 0.1, FedSTO yields commendable results, especially in the unlabeled set for categories like 'Car' (0.738) and 'Bus' (0.573). This underscores that a reduced client sampling can still lead to significant performance improvements, emphasizing the efficiency and adaptability of the FL approach.

**Efficiency on Network Bandwidth.** Table 8 highlights the communication costs over 350 rounds of training involving 100 clients with a 0.5 client sampling ratio per round. By removing the neck component of the YOLOv5L model, its size is reduced from 181.7MB to 107.13MB. This reduction significantly benefits FedSTO in Phase 1, leading to overall bandwidth savings. When comparing with traditional SSFL methods such as FedAvg and FedProx [20], FedSTO utilizes only **2,166.23 GB** - a substantial **20.52%** reduction in network bandwidth.

## 6 Conclusion

This paper introduces a novel Semi-Supervised Federated Object Detection (SSFOD) framework, featuring a distinctive two-stage training strategy known as FedSTO.
Designed to address the challenges of heterogeneous unlabeled data in federated learning, FedSTO employs selective training and orthogonality regularization with personalized pseudo labeling. These mechanisms facilitate robust and diverse feature learning, thereby enhancing object detection performance across multiple weather conditions and data distributions. Empirical results provide compelling evidence of the superiority of FedSTO over established federated and semi-supervised learning methodologies. Notably, despite operating with the challenging constraint where non-IID clients have no labels, FedSTO successfully counteracts domain shift and achieves performance that is comparable to fully supervised centralized models. This accomplishment constitutes significant strides toward realizing more efficient and privacy-preserving learning in realistic FL settings. As we venture ahead, we aim to concentrate our research efforts on refining FedSTO and exploring additional strategies for leveraging unlabeled data with various domains and model architectures. We anticipate the work presented in this paper will stimulate continued progress in this rapidly evolving field. \begin{table} \begin{tabular}{l c|c c c} \hline \hline Type & Centralized & \multicolumn{3}{c}{Federated-SSFL} \\ \hline Method & Partially Supervised & FedAvg & ST & FedSTO \\ \hline Labeled & 0.3768 & 0.405 & 0.405 & **0.455** \\ Unlabeled & 0.3524 & 0.4311 & 0.4322 & **0.458** \\ \hline \hline \end{tabular} \end{table} Table 6: Performance on a non-IID case of the BDD100k dataset with 1 server and 20 clients. 
\begin{table} \begin{tabular}{l c c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{6}{c|}{Labeled} & \multicolumn{6}{c}{Unlabeled} \\ \cline{2-11} & \multicolumn{6}{c}{Categories} \\ \cline{2-11} & Person & Car & Bus & Truck & Traffic Sign & Person & Car & Bus & Truck & Traffic Sign \\ \hline Server Only (i.e., client sampling ratio 0.0) & 0.378 & 0.710 & 0.141 & 0.425 & 0.490 & 0.337 & 0.707 & 0.160 & 0.338 & 0.491 \\ FedSTO with client sampling ratio 0.1 & 0.393 & 0.714 & 0.442 & 0.510 & 0.540 & 0.487 & **0.738** & **0.573** & **0.589** & **0.617** \\ FedSTO with client sampling ratio 0.2 & **0.458** & **0.474** & **0.476** & **0.521** & **0.571** & 0.440 & 0.731 & 0.378 & 0.525 & 0.573 \\ FedSTO with client sampling ratio 0.5 & 0.444 & 0.745 & 0.437 & 0.502 & 0.550 & **0.489** & 0.730 & 0.438 & 0.512 & 0.538 \\ \hline \hline \end{tabular} \end{table} Table 7: Performance ([email protected]) under Non-IID scenarios of BDD100k dataset with 1 server and 100 clients according to the changes of client sampling ratio for implementing FedSTO. The term ‘Server Only’ aligns with the notion of ‘partially supervised’ in CL settings. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Method & \begin{tabular}{c} Warm-up (50 rounds) \\ \end{tabular} & Phase 1 (150 rounds) & Phase 2 (150 rounds) & Total & Reduction \\ \hline FedAvg & 0 & 100 * 0.50 * 150 * 181.7 = 1,362.75 GB & 100 * 0.50 * 150 * 181.7 = 1,362.75 GB & 2,725.50 GB & - \\ \hline FedBN & 0 & 100 * 0.50 * 150 * 181.24 = 1359.30 GB & 100 * 0.50 * 150 * 181.24 = 1359.30 GB & 2,718.60 GB & 0.25 \% \\ FedSTO & 0 & 100 * 0.50 * 150 * 107.13 = 803.48 GB & 100 * 0.50 * 150 * 181.7 = 1,362.75 GB & **2,166.23 GB** & **20.52 \%** \\ \hline \hline \end{tabular} \end{table} Table 8: Communication costs over 350 rounds of training with 100 clients when the client sampling ratio is 0.5 per each round. The total Yolov5L size is 181.7MB while the model without the neck part is 107.13MB. 
Additionally, the model size without BN layers (FedBN [22]) is 181.24 MB. Here, ‘Reduction’ expresses how much communication cost is reduced compared to using vanilla SSFL (FedAvg and FedProx [20]).
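The totals in Table 8 follow directly from clients × sampling ratio × rounds × per-round payload. A quick arithmetic check of the FedAvg and FedSTO rows (`phase_cost_gb` is an illustrative helper; GB here means 1000 MB, as in the table):

```python
def phase_cost_gb(num_clients, sampling_ratio, rounds, model_mb):
    """Upload volume for one phase: each sampled client sends one model per round."""
    return num_clients * sampling_ratio * rounds * model_mb / 1000.0  # MB -> GB

FULL_MB, NO_NECK_MB = 181.7, 107.13  # YOLOv5L with / without the neck part

fedavg = 2 * phase_cost_gb(100, 0.5, 150, FULL_MB)        # full model in both phases
fedsto = (phase_cost_gb(100, 0.5, 150, NO_NECK_MB)        # Phase 1: neck excluded
          + phase_cost_gb(100, 0.5, 150, FULL_MB))        # Phase 2: full model
reduction_pct = 100.0 * (1.0 - fedsto / fedavg)
# fedavg ≈ 2,725.50 GB, fedsto ≈ 2,166.23 GB, reduction ≈ 20.52 %
```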
2306.11300
RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing
Pre-trained Vision-Language Models (VLMs) utilizing extensive image-text paired data have demonstrated unprecedented image-text association capabilities, achieving remarkable results across various downstream tasks. A critical challenge is how to make use of existing large-scale pre-trained VLMs, which are trained on common objects, to perform the domain-specific transfer for accomplishing domain-related downstream tasks. In this paper, we propose a new framework that includes the Domain pre-trained Vision-Language Model (DVLM), bridging the gap between the General Vision-Language Model (GVLM) and domain-specific downstream tasks. Moreover, we present an image-text paired dataset in the field of remote sensing (RS), RS5M, which has 5 million RS images with English descriptions. The dataset is obtained from filtering publicly available image-text paired datasets and captioning label-only RS datasets with pre-trained VLM. These constitute the first large-scale RS image-text paired dataset. Additionally, we fine-tuned the CLIP model and tried several Parameter-Efficient Fine-Tuning methods on RS5M to implement the DVLM. Experimental results show that our proposed dataset is highly effective for various tasks, and our model GeoRSCLIP improves upon the baseline or previous state-of-the-art model by $3\%\sim20\%$ in Zero-shot Classification (ZSC), $3\%\sim6\%$ in Remote Sensing Cross-Modal Text-Image Retrieval (RSCTIR) and $4\%\sim5\%$ in Semantic Localization (SeLo) tasks. Dataset and models have been released in: \url{https://github.com/om-ai-lab/RS5M}.
Zilun Zhang, Tiancheng Zhao, Yulong Guo, Jianwei Yin
2023-06-20T05:30:59Z
http://arxiv.org/abs/2306.11300v5
# RS5M: A Large Scale Vision-Language Dataset for Remote Sensing Vision-Language Foundation Model ###### Abstract Pre-trained Vision-Language Foundation Models utilizing extensive image-text paired data have demonstrated unprecedented image-text association capabilities, achieving remarkable results across various downstream tasks. A critical challenge is how to make use of existing large-scale pre-trained VLMs, which are trained on common objects, to perform the domain-specific transfer for accomplishing domain-related downstream tasks. In this paper, we propose a new framework that includes the Domain Foundation Model (DFM), bridging the gap between the General Foundation Model (GFM) and domain-specific downstream tasks. Moreover, we present an image-text paired dataset in the field of remote sensing (RS), RS5M, which has 5 million RS images with English descriptions. The dataset is obtained from filtering publicly available image-text paired datasets and captioning label-only RS datasets with pre-trained VLM. These constitute the first large-scale RS image-text paired dataset. Additionally, we tried several Parameter-Efficient Fine-Tuning methods on RS5M to implement the DFM. Experimental results show that our proposed dataset is highly effective for various tasks, improving upon the baseline by \(8\%\sim 16\%\) in zero-shot classification tasks, and obtaining good results in both Vision-Language Retrieval and Semantic Localization tasks. [https://github.com/om-ai-lab/RS5M](https://github.com/om-ai-lab/RS5M) ## 1 Introduction Remote sensing (RS) images have been playing an important role in environmental monitoring [6], urban planning [102], and natural disaster management [97], etc. However, the rapid growth of RS images has introduced new challenges in efficiently and effectively processing, analyzing, and understanding the information contained within RS data. 
Over the past decade, supervised deep learning models have become powerful tools for tackling these challenges, demonstrating great success in RS tasks such as scene classification, object detection, semantic segmentation, and change detection. Despite these advances, the performance of deep learning models in RS applications is often constrained by small-scale labeled datasets. The interpretation of RS images typically requires domain expertise, making RS image labeling expensive and creating a bottleneck for further improvement on RS downstream tasks. As a form of natural supervision for RS images, paired text has great potential to help learn better data representations and serve as a proxy for various RS image modalities, such as SAR, hyperspectral, and imagery acquired from different satellites. The rapid development of deep learning models has led to significant progress in both CV and NLP domains, and researchers have begun to explore the potential of combining visual and textual modalities to develop more powerful and versatile models capable of understanding multimodal content. Pre-trained Vision-Language Models (VLMs) ([70], [38], [47], [41], [15], [46], [49], [107], [1], [108], [52], [45], [44], [94], [34]) have been a promising approach to leverage the strengths of natural language's tokenized information and the abundant visual information in images to serve as the General (Vision-Language) Foundation Model. A notable example is CLIP [70], which utilizes a contrastive loss function to connect the two modalities, leading to unprecedented generalizability in many downstream tasks and domain transfer. Another important application for VLMs is generative models such as DALLE [72] and stable-diffusion [73] for AI-generated Content. However, due to the nature of training with common object data, VLMs usually underperform in specialized domains such as remote sensing [70] and medical imaging [98] because of the mismatch between domains. 
To make use of the power of the GFM in the RS domain, it is important to design a DFM capable of leveraging the generalizability of the GFM, incorporating external domain prior knowledge, and transferring this knowledge to a domain-specific Downstream Task Model (DTM) through a suitable learning paradigm to solve downstream tasks, as depicted in Figure 1. Alfassy et al. proposed FETA [2], which specializes a foundation model for expert task applications by directly tuning it with LoRA for retrieval tasks on public car manuals and sales catalogue brochures; however, the \(GFM\xrightarrow{}DFM\xrightarrow{}DTM\) structure and the importance of the DFM were not widely discussed. The amount of training data needed to develop a DFM may not be as large as for a GFM (400M for CLIP [70], 1B for ALIGN [38], 88M for DeCLIP [52], etc.), but it is still the foundation for the success of DFMs. In terms of RS, textual information such as geospatial metadata, land cover annotations, expert descriptions, and image captions provides natural supervision for RS images, offering richer context than class-level labels alone. He et al. improved the CLIP zero-shot image recognition top-1 accuracy by 17.86% on the EuroSAT dataset by using synthetic data from GLIDE to fine-tune the classifier (supervised by cross-entropy) [99], presenting the promising potential of auxiliary in-domain data [29]. Figure 1: Illustration of our proposed Framework. The Domain Foundation Model (DFM) plays a central role in accepting the general knowledge from the General Foundation Model (GFM) and is injected with massive domain-specific knowledge from external data. With the proper learning paradigm, the DFM is able to transfer the general knowledge with the domain-specific prior to the Downstream Task Model (DTM) for domain-specific tasks. A demo for our proposed RS5M dataset is on the left. Several studies have proposed RS image-text paired datasets, including [69][69][58][109][113]. 
However, these datasets contain too few samples to effectively transfer or fine-tune large-scale pre-trained VLMs. Concurrently, there are large-scale RS datasets [55][81][19] containing millions of RS images but with only class-level labels. Overall, large-scale image-text paired datasets are rare in the field of RS; therefore, gathering extensive in-domain data is crucial. The contributions of this paper can be summarized as follows:

- We introduce the first large-scale remote sensing image-text paired dataset, **RS5M**, which is entirely based on filtering large-scale image-text paired datasets and captioning RS datasets with a pre-trained model. Extensive denoising methods are applied. RS5M is nearly 1000 times larger than the existing largest RS image-text paired datasets.
- We propose the concept of the **Domain Foundation Model** (DFM) to better utilize the **General Foundation Model** (GFM) and domain-specific data. In the RS field, we implement the DFM with several Parameter-Efficient Fine-Tuning methods on Vision-Language Models as a strong baseline for RS-related vision-language tasks.
- Through extensive experiments, we demonstrate that our framework, in combination with our proposed RS5M dataset, can **successfully transfer pre-trained VLMs to the RS domain and perform better on related downstream tasks**2.

Footnote 2: Some experiment results are outdated; we will update the figures trained with a newer version of RS5M very soon. ## 2 Related Work Detailed introduction on related works can be found in Appendix A.1. We will introduce RS datasets, pre-trained VLMs, VLMs for RS, pre-trained models for RS, and PEFT for LLMs and VLMs. Commonly used RS image-text paired datasets include **UCM Captions**[69], **Sydney Captions**[69], **RSICD**[58], **RSITMD**[109], and **RSVGD**[113]. These datasets' image sizes span from 224 \(\times\) 224 pixels up to 800 \(\times\) 800 pixels, while spatial resolution varies from 0.5m to 30m. 
Among them, RSVGD holds the largest collection with 38,320 RS image-text pairs, albeit with some image duplication. In addition, there are larger-scale image datasets like **BigEarthNet**[82], **Functional Map of the World** (FMoW) [19], and **MillionAID**[55], consisting of 590,326, 1,047,691, and 1 million RS images respectively. These images contain class-level labels. Large-scale pre-trained VLMs can be categorized based on their pre-training task objectives, such as contrastive vision-text alignment, image-text matching, masked language modeling, etc. [24]. [70], [38], [105], and [52] align textual and visual information in a shared semantic space using contrastive learning task. [15], [46], and [45] employ image-text matching task objectives. Models such as [49], [51], and [94] utilize Masked Language Modeling objectives. Most pre-trained VLMs combine multiple pre-training task objectives and use them to mine fine-grained relationships between modalities. For instance, [46] employs contrastive loss and image-text matching loss, [107] utilizes contrastive loss and captioning loss, and [51] uses contrastive loss and loss from MAE [28]. The success of VLMs is closely linked to the vast amount of paired data. In terms of RS, Zhang et al.[115] provided a comprehensive overview of recent advancements in applying artificial intelligence techniques to remote sensing data analysis. Wen et al. [96] survey the current progress and discuss the future trends of VLMs in the field of RS. Lobry et al. introduced the RSVQA task [54], a system where images can be queried to obtain specific information about their content. Hu et al. presented RSIEval [33], a benchmark consisting of human-annotated captions and visual question-answer pairs, enabling a thorough assessment of VLMs in remote sensing. Yuan et al. [109] introduced an asymmetric multimodal feature matching network for cross-modal RS Vision-Language Retrieval tasks. 
They also proposed the Semantic Localization task [110], a weak visual grounding task enabling semantic-level retrieval with caption-level annotation, and GaLR [112], a method that combined local and global features of RS images. Basso introduced CLIP-RS [5], and Arutiunian et al. fine-tuned CLIP with RSICD, achieving significant improvements in top-1 accuracy for zero-shot classification 3. Wang et al. pre-trained CNN- and ViT-based backbones [90] on Million-AID, evaluating them on various downstream tasks. They also proposed a 100M ViT with rotated varied-size window attention, achieving competitive results for downstream tasks such as classification, object detection, and segmentation. However, their dataset and models are single-modality and therefore cannot utilize the supervision from text, suggesting potential improvements with VLMs. Footnote 3: [https://huggingface.co/blog/fine-tune-clip-rsicd](https://huggingface.co/blog/fine-tune-clip-rsicd) Large Language Models (LLMs) such as BERT and GPT, trained on vast text corpora, have achieved state-of-the-art results across numerous NLP benchmarks. However, their millions or billions of parameters make full fine-tuning for each downstream task unrealistic. Adapters [31] offer an alternative solution for LLM fine-tuning, as they freeze the pre-trained LLM's weights while training only the adapter's parameters, which are significantly fewer. This approach speeds up adaptation while maintaining comparable performance to full fine-tuning. [67, 50, 32] further improve these methods. [27, 116, 84] are introduced for tuning VLMs on visual classification, VQA, and image captioning tasks. Prompt-based learning methods such as [120] and [119] learn prompt tokens for input to the text encoder to assist zero-shot classification. ## 3 Dataset Construction We constructed the RS5M through two sources (see Figure 2). 
First, we gather 11 publicly available image-text paired datasets (PUB11) and filter them using RS-related keywords. We then utilize the URLs and other tools to deduplicate images. Next, we use a pre-trained VLM and an RS image detector to remove non-RS images. Second, we utilize BLIP2 [44] to generate captions for 3 large-scale RS datasets (RS3) that only have class-level labels. We conduct a series of quality assurance methods including a self-supervised one to acquire descriptive and suitable captions for RS images. Finally, we merge the results from both sources. License information is listed in Appendix A.3. ### Filter Large-Scale Image-Text Paired Datasets We have chosen 11 public large-scale English image-text paired datasets to build the PUB11 subset, including LAION2B-en [76], LAION400M [77], LAIONCOCO, COYO700M [7], CC3M [78], CC12M [9], YFCC15M [85], WIT [79], Redcaps [22], SBU [66], and Visual Genome [42]. A brief introduction on them can be found in Appendix A.2.1. We collected **3 million image-text pairs** in this procedure. The aerial view images are predominant, but there are still some satellite images in the collection. Table 7 in the Appendix lists the statistics for each dataset including the number of images that remained in each dataset after filtering. We put most of the processing details in Appendix A.2.5. We establish a set of keywords closely related to RS, which consists of two groups: RS-related nouns and RS-related applications & companies names (Appendix A.2.2). To identify image-text pairs with text containing the keyword patterns, we utilize regular expressions. After downloading all relevant images from the internet, we utilize fastdup 4 for invalid image checking and deduplication. We first filter out corrupted images, and apply deduplication based on URLs. Then, fastdup is used to cluster duplicate images. We keep one image and discard the rest for each cluster of duplicate images. 
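The keyword filtering and URL-based deduplication steps described above can be sketched as follows; the keyword list shown is a small illustrative subset (the full list is in Appendix A.2.2), and `filter_pairs` is a hypothetical helper:

```python
import re

# Small illustrative subset of RS-related keywords.
KEYWORDS = ["satellite", "aerial view", "remote sensing", "landsat"]
PATTERN = re.compile(r"\b(?:" + "|".join(map(re.escape, KEYWORDS)) + r")\b",
                     re.IGNORECASE)

def filter_pairs(pairs):
    """Keep pairs whose caption matches an RS keyword, deduplicated by URL."""
    seen, kept = set(), []
    for url, caption in pairs:
        if PATTERN.search(caption) and url not in seen:
            seen.add(url)
            kept.append((url, caption))
    return kept
```

Image-level (near-duplicate) clustering with fastdup would follow this URL-level pass.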
Footnote 4: [https://visual-layer.readme.io](https://visual-layer.readme.io) Figure 2: Overview of the collection process for RS5M. Circles represent different steps, gears stand for the model utilized, rectangles represent the images, and dash lines connect to the optional step. After checking for invalid images and performing deduplication, we proceeded to clean the dataset using the VLM and the RS image detector. First, we develop a set of \(n\) handcrafted RS-related text prompt templates \(t_{j}\), \(j\in\{1,\ldots,n\}\) (refer to Appendix A.2.3 for details). For each image \(x_{i}\), we select a CNN-based CLIP-ConvNext-XXL model [37] to compute the cosine similarity \(s_{i}\) between the average text feature \(f_{t}=\frac{\sum_{j=1}^{n}f_{text}(t_{j})}{n}\) of the prompt templates and the image feature \(f_{image}(x_{i})\), i.e., \(s_{i}=\frac{f_{t}\cdot f_{image}(x_{i})}{|f_{t}|\cdot|f_{image}(x_{i})|}\), since we will jointly use a ViT-based model later. Then, we construct a classification dataset comprising two classes: RS images (\(c_{RS}\)) and non-RS images (\(c_{nRS}\)). Details on this classification dataset can be found in Appendix A.2.4. Next, we fine-tune a classifier, which is integrated with the ViTAE pre-trained model [91], to serve as an RS image detector. We denote the probability that an image \(x_{i}\) is an RS image by \(c_{i}=P(c_{RS}|x_{i})\). Lastly, we filter the images in RS5M based on the joint score \((s_{i},c_{i})\). We keep images with \(s_{i}\geq m\) and \(c_{i}\geq n\), where \(m\) and \(n\) are thresholds. In practice, we set \(m\) and \(n\) so that only image-text pairs with the top 90% of \(s_{i}\) scores and the top 80% of \(c_{i}\) scores among all image-text pairs are kept. The PUB11 subset we constructed includes both satellite-view and aerial-view images. An analysis of outliers and misfiltered images for PUB11 is provided in Appendix A.2.13. We have 3,007,809 image-text pairs in total. 
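A minimal numpy sketch of the joint \((s_{i},c_{i})\) rule described above, with percentile cutoffs standing in for the thresholds \(m\) and \(n\) (feature extraction by CLIP and the RS detector is assumed to have happened upstream):

```python
import numpy as np

def joint_score_filter(img_feats, prompt_feats, rs_probs, s_keep=0.90, c_keep=0.80):
    """Return indices of images kept by the joint (s_i, c_i) rule: the top
    `s_keep` fraction by CLIP similarity and the top `c_keep` fraction by
    RS-detector probability."""
    f_t = prompt_feats.mean(axis=0)                 # average prompt feature
    f_t = f_t / np.linalg.norm(f_t)
    imgs = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    s = imgs @ f_t                                  # cosine similarities s_i
    m = np.quantile(s, 1.0 - s_keep)                # threshold on s_i
    n = np.quantile(rs_probs, 1.0 - c_keep)         # threshold on c_i
    return np.where((s >= m) & (rs_probs >= n))[0]
```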
### Caption Remote Sensing Image Datasets Although a domain gap exists between RS images and images of common objects, captioning RS images with VLMs pre-trained on common-object images has proven to be effective, as demonstrated in [45] and Appendix A.2.6, Figure 13. We employ the BLIP2 model [44] with the OPT 6.7B checkpoint in half-precision from Huggingface for caption generation. We choose nucleus sampling as it generates more diverse captions (refer to Appendix A.2.7). The selected datasets include BigEarthNet [82], FMoW [19], and MillionAID [55], which are detailed in section 2. We use only the training set for FMoW (727,144 images) and BigEarthNet (344,385 images), as some downstream tasks evaluate on the test set. For the MillionAID dataset, we select the test set (990,848 images). We have 2,062,377 images in total for the RS3 subset. We follow the work of Schuhmann et al. 5 on the LAIONCOCO dataset and refine their approach. We generate 20 candidate captions per image and rank the top 10 results using CLIP ViT-H/14. Then, we re-rank these top 10 results using CLIP Resnet50x64 to obtain the top 5 captions. Moreover, we enhance the dataset by integrating meta-information (geo-meta information, class labels, UTM, UTC, etc.) into readable sentences as part of the image caption. More can be found in Appendix A.2.9. This structured meta-caption, combined with the model-generated caption, offers a more comprehensive view. Appendix A.2.6, Figure 17 highlights several examples of our captioning results (machine-generated part only). By sampling 2,000 captions and evaluating them through human assessment, we found the top captions provide a satisfactory degree of description for the RS images from these datasets (see A.2.11 for experiment details). In the examples provided, objects such as airports, rivers, farmland, bridges, streets, bays, and roundabouts are all present in the images. 
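The two-stage candidate re-ranking can be sketched as below; `score_a` and `score_b` are hypothetical callables standing in for image-text similarity under CLIP ViT-H/14 and CLIP RN50x64 respectively (each would also condition on the image in practice):

```python
import numpy as np

def rerank_captions(captions, score_a, score_b, k1=10, k2=5):
    """Two-stage re-ranking: keep the top-k1 captions under scorer A,
    then re-rank those with scorer B and return the top-k2."""
    a = np.asarray([score_a(c) for c in captions])
    stage1 = [captions[i] for i in np.argsort(-a)[:k1]]
    b = np.asarray([score_b(c) for c in stage1])
    return [stage1[i] for i in np.argsort(-b)[:k2]]
```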
Footnote 5: [https://laion.ai/blog/laion-coco/](https://laion.ai/blog/laion-coco/) ## 4 Dataset Description Figure 3 left shows the frequency statistics of keywords (can be found in Appendix 7) appearing in the image captions. The phrase "aerial view" is predominant in the captions, resulting in a significant number of aerial-view remote sensing images in the RS5M dataset. The middle figure presents a word cloud of words extracted from the RS5M captions. All special characters and numbers have been removed, as well as the majority of prepositions. Frequently occurring words in the captions include "satellite", "field", "building", "road", and "farm". The right figure shows the distribution of caption length in log scale. The distribution is long-tailed, and the average caption length is 40 words (maximum 18,070). The showcase of image-text pairs from PUB11 and RS3 and the statistics for image size can be found in Appendix A.2.12. We then use CLIP's visual encoder (CLIP-ConvNext-XXL) to extract image features from PUB11 and RS3, visualizing the results using PCA. We sampled 1,000 images equally from PUB11 and RS3. Figure 4 left demonstrates the discriminative domain differences between PUB11 and RS3, possibly due to the massive amount of aerial images in PUB11 and satellite images in RS3. Figure 4 middle displays the PCA visualization for 2,200 samples from the 11 datasets in PUB11. Interestingly, no significant domain differences are observed among the RS images from them, as the data points are intermingled. Figure 4 right reveals a clear separation between BigEarthNet and the other two datasets (500 examples for each), which may be attributed to the lower resolution (120 \(\times\) 120) of all BigEarthNet images compared to the higher resolutions of the other two datasets. 
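The PCA projection behind Figure 4 can be sketched with a plain SVD (in practice a library implementation such as scikit-learn's `PCA` would be run on the extracted CLIP features):

```python
import numpy as np

def pca_project(features, k=2):
    """Project feature vectors onto their top-k principal components."""
    x = features - features.mean(axis=0)              # center the features
    _, _, vt = np.linalg.svd(x, full_matrices=False)  # rows of vt: PC directions
    return x @ vt[:k].T
```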
Figure 3: PUB11 Visualization. Figure 4: PCA. **Left**: PUB11 and RS3. **Middle**: 11 public datasets. **Right**: 3 RS datasets. ## 5 Experiment We selected the CLIP ViT-B32 model as the GFM and employed 4 different Parameter-Efficient Fine-Tuning (PEFT) methods as DFM candidates: Pfeiffer adapter [67], LoRA adapter [32], Prefix-tuning adapter [50], and UniPELT adapter [62] (a vanilla adapter, a low-rank-approximated adapter, a prompt-based adapter, and a composite adapter). Since the downstream tasks in this paper only require image and text features, no DTM is needed. Then, for the RS3 subset, we randomly chose the rank 1 caption or rotationally invariant caption. We evaluated the domain generalizability of the DFM tuned on the RS5M dataset on 3 vision-language tasks: zero-shot classification (ZSC), vision-language retrieval (image-to-text and text-to-image, VLR), and semantic localization (SeLo). An introduction to the different tasks can be seen in A.5. For ZSC, the complete AID[101], RESIC45[16], and EuroSAT[30] datasets were selected. The RSICD and RSITMD datasets were chosen for VLR tasks. We adopted the test split given by Yuan et al. [109] to align with the settings of previous works. Lastly, the AIR-SLT dataset was used for the SeLo task. We used top-1 accuracy to assess the ZSC task, recall@1/5/10/mean_recall for evaluating the VLR task, and \(R_{su}\), \(R_{as}\), \(R_{da}\), \(R_{mi}\) for the SeLo task. We utilized the OpenCLIP implementation for the GFM and the AdapterHub 6 implementation of adapters for the DFM with default parameters. The weights for CLIP were frozen, and the AdamW optimizer [56] was employed. Modality interaction was only through the InfoNCE loss [87]. Learning rates were set to \(1e^{-4}\) with weight decay set to \(1e^{-4}\). A linear learning rate scheduler was used, and the batch size was set to 500 for a single RTX 4090. 
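A minimal numpy sketch of the symmetric InfoNCE objective used for modality interaction (illustrative only; actual training would use a differentiable framework with frozen CLIP weights and trainable adapters):

```python
import numpy as np

def info_nce(img_feats, txt_feats, temperature=0.07):
    """Symmetric InfoNCE: matching image/text pairs sit on the diagonal."""
    i = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    t = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    logits = i @ t.T / temperature
    n = logits.shape[0]

    def xent_diag(l):
        l = l - l.max(axis=1, keepdims=True)          # stabilized log-softmax
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # Average the image-to-text and text-to-image cross-entropies.
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```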
The training lasted for 10 epochs, and 5% of the RS5M data (evenly drawn from each subset) was used as the validation set, with the rest becoming the training set. Footnote 6: [https://docs.adapterhub.ml/classes/models/clip.html](https://docs.adapterhub.ml/classes/models/clip.html) ### Main Experiment Result Tables 1 and 2 report the baseline and fine-tuned results for the VLR, ZSC, and SeLo tasks. The "CLIP Baseline" method refers to the untuned ViT-B32 CLIP model, which is used as the GFM-only approach. In Table 1, the "SeLov1" and "SeLov2" approaches [109] were trained in a supervised manner on the RSITMD dataset's training set. In Table 2, the methods "VSE++" [26], "AFMFN" [109], and "KCR" [64] are competitive approaches that were trained from scratch on the training set of the RSICD or RSITMD dataset in a supervised manner and evaluated on the test set. As no existing methods for the ZSC task were available for comparison, we compare the PEFT methods directly with the CLIP baseline. "DFM" in the "Paradigm" column means the Domain Foundation Model is applied, implemented by the adapter tuned on RS5M. Table 1 demonstrates that all PEFT methods yield significant improvements, ranging from **8% to 16%** in accuracy for the ZSC task across three datasets. Remarkably, CLIP with the Pfeiffer adapter attains a 16% increase in accuracy on EuroSAT. Regarding the SeLo task, the CLIP baseline already surpasses the current SOTA methods, with the UniPELT adapter showing the highest \(R_{su}\) value and LoRA the highest \(R_{da}\). However, all PEFT methods result in a degradation of \(R_{as}\) and \(R_{mi}\) compared to the CLIP baseline. This could be attributed to the low quality of model-generated captions in the RS5M dataset, as discussed further in Section 5.2.1. 
Another factor might be the difference in image domain: the test images are all satellite views, whereas RS5M incorporates a significant number of aerial view images, which may interfere with the model's performance on satellite-only tasks. For the image-to-text retrieval task, the prefix-tuning adapter outperforms the previous approaches designed for the retrieval task in RSICD and RSITMD, with a \(\sim 3\%\) increase in recall@1. Furthermore, in the text-to-image retrieval task for the RSICD dataset, the UniPELT adapter has a \(\sim 2\%\) advantage over the SOTA. We believe there is room for improvement in the VLR and SeLo tasks, as we have not yet searched the hyperparameters for the adapters. Also, we have only relied on a weakly supervised contrastive loss instead of a meticulously designed loss for VLR or SeLo tasks like previous methods. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{**Zero-shot Classification**} & \multicolumn{4}{c}{**Semantic Localization**} \\ \hline & & AID & RESIC45 & EuroSAT & \multicolumn{4}{c}{AIR-SLT} \\ \hline **Method** & **Paradigm** & & Top-1 Accuracy & & \(R_{su\uparrow}\) & \(R_{as\downarrow}\) & \(R_{da\uparrow}\) & \(R_{mi\uparrow}\) \\ \hline CLIP Baseline & GFM & 60.84\% & 58.97\% & 45.84\% & 0.7220 & **0.2848** & 0.6880 & **0.7111** \\ SeLov1 & Supervised & - & - & - & 0.6920 & 0.3323 & 0.6667 & 0.6772 \\ SeLov2 & Supervised & - & - & - & 0.7199 & 0.2925 & 0.6658 & 0.7021 \\ **Pfeiffer** & GFM + DFM & 68.37\% & **67.79\%** & 61.24\% & 0.7180 & 0.3166 & 0.6589 & 0.6912 \\ **Prefix-tuning** & GFM + DFM & 69.83\% & 66.74\% & **61.48\%** & 0.7241 & 0.3132 & 0.6867 & 0.7017 \\ **LoRA** & GFM + DFM & 67.38\% & 65.53\% & 53.96\% & 0.7176 & 0.2857 & **0.6911** & 0.7098 \\ **UniPELT** & GFM + DFM & **70.92\%** & 66.61\% & 53.47\% & **0.7292** & 0.3463 & 0.6461 & 0.6820 \\ \hline \hline \end{tabular} \end{table} Table 1: Results for ZSC task and SeLo task. 
"GFM" means only the GFM is used, "Supervised" means the method was trained in a supervised manner on the labeled dataset, and "GFM+DFM" means the DFM is applied on top of the GFM, implemented by the adapter tuned on RS5M. ### Ablation Study #### 5.2.1 Per Subset Analysis The RS5M dataset comprises two components: PUB11 and RS3. Given that PUB11 predominantly contains aerial images and RS3 contains only satellite images, we decided to analyze them separately. This approach allows us to ascertain the individual contributions of each subset, particularly in understanding the potential impact of training the model with a large quantity of aerial images on satellite-image-based downstream tasks. To facilitate this investigation, we tried the CLIP model with the Pfeiffer and UniPELT adapters. We chose the RSITMD, EuroSAT, and AIR-SLT datasets to assess the VLR, ZSC, and SeLo tasks respectively. Table 3 illustrates the performance of models trained with different subsets across various tasks. For the ZSC task, the PUB11 subset has a considerable positive impact on the results, likely due to the extensive and varied corpus sourced from the internet. Intriguingly, the RS5M model outperforms those trained exclusively on either PUB11 or RS3. In the SeLo task, the PUB11 subset contributes positively, as models trained with this subset yield better results than those trained with RS5M in most metrics across different adapters. Moreover, the RS3 subset confers a distinct advantage (\(\sim 5\%\) higher compared to RS5M) in the image-to-text retrieval task. This advantage may be attributed to the abundant in-domain satellite images in RS3. #### 5.2.2 Influence of Noise Level in PUB11 As shown in Table 7 and Figure 10, approximately 1 million image-text pairs were filtered out by the VLM filter and RS image detector using the top 90% \(s_{i}\) and top 80% \(c_{i}\) as thresholding parameters. These parameters play a critical role in regulating the noise level of the PUB11 subset. 
Theoretically, reducing the values of \(s_{i}\) and \(c_{i}\) (i.e., retaining only image-text pairs which have the top 60% \(s_{i}\) and \(c_{i}\) values) should result in lower noise levels. To empirically assess the impact of noise levels in PUB11, we adjusted the \(s_{i}\) and \(c_{i}\) values to generate more PUB11 subsets with varying noise levels. Subsequently, we trained the CLIP model with the Pfeiffer adapter, using these subsets for ZSC, VLR, and SeLo tasks. As Figure 5 illustrates, a decrease in noise level generally led to an enhancement in the performance of PEFT methods. However, the model's performance could potentially deteriorate if the PUB11 subset size is excessively reduced. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{**Image-to-Text Retrieval**} & \multicolumn{3}{c}{**Text-to-Image Retrieval**} \\ \cline{3-10} **Method** & **Paradigm** & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & mR \\ \hline CLIP Baseline \(\dagger\) & GFM & 5.40\% & 15.00\% & 24.06\% & 6.44\% & 19.82\% & 30.28\% & 16.83\% \\ VSE++ \(\dagger\) & Supervised & 3.38\% & 9.51\% & 17.46\% & 2.82\% & 11.32\% & 18.10\% & 10.43\% \\ AFMFN \(\dagger\) & Supervised & 5.39\% & 15.08\% & 23.40\% & 4.90\% & 18.28\% & 31.44\% & 16.42\% \\ KCR \(\dagger\) & Supervised & 5.84\% & 22.31\% & 36.12\% & 4.76\% & 18.59\% & 27.20\% & 19.14\% \\ GaLR \(\dagger\) & Supervised & 6.59\% & 19.85\% & 31.04\% & 4.69\% & 19.48\% & 32.13\% & 18.96\% \\ **Pfeiffer \(\dagger\)** & GFM + DFM & 7.87\% & 18.21\% & 27.26\% & 5.84\% & 20.57\% & 33.14\% & 18.81\% \\ **Prefix-tuning \(\dagger\)** & GFM + DFM & **9.61\%** & 22.05\% & 32.11\% & **6.99\%** & 22.09\% & 33.06\% & 20.99\% \\ **LoRA \(\dagger\)** & GFM + DFM & 7.14\% & 18.48\% & 27.17\% & 6.18\% & 19.05\% & 29.66\% & 17.95\% \\ **UniPELT \(\dagger\)** & GFM + DFM & 8.87\% & 21.04\% & 31.29\% & 6.81\% & 24.01\% & 35.75\% & **21.30\%** \\ \hline CLIP Baseline \(\S\) & GFM & 9.51\% & 23.01\% & 32.74\% & 8.81\% & 27.92\% & 43.23\% & 24.20\% \\ VSE++ \(\S\) & Supervised & 10.38\% & 27.65\% & 39.60\% & 7.79\% & 24.87\% & 38.67\% & 24.83\% \\ AFMFN \(\S\) & Supervised & 11.06\% & 29.20\% & 38.72\% & 9.96\% & 34.03\% & 52.96\% & 29.32\% \\ GaLR \(\S\) & Supervised & **14.82\%** & 31.64\% & 42.48\% & **11.15\%** & 36.68\% & 51.68\% & **31.41\%** \\ **Pfeiffer \(\S\)** & GFM + DFM & 11.50\% & 25.00\% & 36.28\% & 9.65\% & 31.59\% & 46.90\% & 26.82\% \\ **Prefix-tuning \(\S\)** & GFM + DFM & 13.72\% & 30.97\% & 43.14\% & 9.25\% & 30.04\% & 47.26\% & 29.06\% \\ **LoRA \(\S\)** & GFM + DFM & 13.50\% & 28.98\% & 39.38\% & 6.86\% & 26.55\% & 40.53\% & 25.97\% \\ **UniPELT \(\S\)** & GFM + DFM & 13.27\% & 29.20\% & 41.37\% & 9.69\% & 32.57\% & 48.36\% & 29.08\% \\ \hline \hline \end{tabular} \end{table} Table 2: Results for image-to-text and text-to-image retrieval tasks. \(\dagger\) represents results on the RSICD dataset, and \(\S\) stands for results on the RSITMD dataset. Recall@1/5/10 and mean recall are computed. #### 5.2.3 Add Adapter to Text or Image Encoder In Table 3, all PEFT methods resulted in a decrease in performance of \(R_{as}\) and \(R_{mi}\). This may be due to the lower quality of the model-generated captions in the RS5M dataset or the excessive presence of aerial images within the dataset. To gain a deeper understanding, we choose to remove the adapter on either the text encoder or image encoder. As shown in Table 4, we display results for adding the Pfeiffer adapter exclusively to either the image encoder or text encoder (or to both). All models are trained using the RS5M dataset. The results distinctly reveal that removing the adapter from the image encoder improves the performance of \(R_{su}\), \(R_{da}\), and \(R_{mi}\), outperforming both the baseline and the scenario where adapters are added to both image and text encoders.
This suggests that some image-specific knowledge (for instance, style) captured by the image encoder's adapter from the RS5M dataset may be less compatible with the Semantic Localization (SeLo) task. However, in-domain images significantly enhance the performance of the zero-shot classification task. Also, the text-to-image retrieval task benefits from the exclusive addition of an adapter to the text encoder. \begin{table} \begin{tabular}{c l c c c c c c} \hline \hline & \multicolumn{4}{c}{**Zero-shot Classification**} & \multicolumn{4}{c}{**Semantic Localization**} \\ \hline **Method** & **Dataset** & Top-1 Accuracy & \(R_{su\uparrow}\) & \(R_{as\downarrow}\) & \(R_{da\uparrow}\) & \(R_{mi\uparrow}\) \\ \hline \multirow{3}{*}{Pfeiffer} & PUB11 & 57.21\% & **0.7250** & **0.3135** & 0.6489 & **0.6925** \\ & RS3 & 56.83\% & 0.7057 & 0.3614 & 0.6098 & 0.6582 \\ & RS5M & **61.24\%** & 0.7180 & 0.3166 & **0.6589** & 0.6912 \\ \hline \multirow{3}{*}{UniPELT} & PUB11 & **59.37\%** & 0.7104 & **0.3348** & **0.7044** & **0.6931** \\ & RS3 & 56.99\% & 0.7092 & 0.3764 & 0.6254 & 0.6583 \\ \cline{1-1} & RS5M & 53.47\% & **0.7292** & 0.3463 & 0.6461 & 0.6820 \\ \hline \hline \multicolumn{8}{c}{**Image-to-Text Retrieval**} & \multicolumn{4}{c}{**Text-to-Image Retrieval**} \\ \hline **Method** & **Dataset** & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & mR \\ \hline \multirow{3}{*}{Pfeiffer} & PUB11 & 7.74\% & 25.00\% & 38.05\% & **10.66\%** & 30.84\% & 44.51\% & 26.13\% \\ & RS3 & **15.48\%** & 31.20\% & 41.15\% & 9.51\% & 30.84\% & 44.64\% & **28.81\%** \\ & RS5M & 11.50\% & 25.00\% & 36.28\% & 9.65\% & 31.59\% & 46.90\% & 26.82\% \\ \hline \multirow{3}{*}{UniPELT} & PUB11 & 10.40\% & 24.78\% & 35.62\% & 8.81\% & 26.95\% & 43.27\% & 24.97\% \\ & RS3 & **18.36\%** & 33.41\% & 43.36\% & **10.09\%** & 29.51\% & 42.79\% & 27.46\% \\ \cline{1-1} & RS5M & 13.27\% & 29.20\% & 41.37\% & 9.69\% & 32.57\% & 48.36\% & **29.08\%** \\ \hline \hline \end{tabular} \end{table} Table 3: Results for 
ZSC, VLR tasks, and SeLo task with the PUB11, RS3, and RS5M datasets. Figure 5: Influence of noise level in PUB11. #### 5.2.4 Influence of Model Size In this section, we delve into the relationship between model size and performance in downstream tasks. We chose to compare models using CLIP with encoders ViT-B-32, ViT-L-14, ViT-H-14, and ViT-bigG-14, each coupled with the Pfeiffer adapter. All models were trained on the RS5M dataset. As shown in Table 5, increasing the model size does not necessarily guarantee enhanced performance. In fact, the 2.5B parameter ViT-bigG-14 model underperformed in SeLo and ZSC tasks when compared to the ViT-H-14 model, although it significantly outperformed the ViT-L-14 model. Regarding VLR tasks, the largest model delivered the best performance in mean recall, holding an edge of 1%–10% over the others. Surprisingly, the RS5M-tuned ViT-B-32 model, containing 152M parameters, demonstrated remarkable performance in the ZSC task, surpassing even the ViT-H-14 and ViT-bigG-14 models, which are 7 and 17 times larger in number of parameters, respectively. #### 5.2.5 Losses The majority of our experiments utilized solely the contrastive loss (**CL**). In this section, we diversify our approach by incorporating different types of losses. 
Specifically, we selected the \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multicolumn{5}{c}{**Zero-shot Classification**} & \multicolumn{4}{c}{**Semantic Localization**} \\ \hline **Adapter for Encoder** & Top-1 Accuracy & \(R_{su\uparrow}\) & \(R_{as\downarrow}\) & \(R_{da\uparrow}\) & \(R_{mi\uparrow}\) \\ \hline None (Baseline) & 45.84\% & 0.7220 & **0.2848** & 0.6880 & 0.7111 \\ Image Only & 60.07\% & 0.7338 & 0.3372 & 0.6472 & 0.6873 \\ Text Only & 52.57\% & **0.7427** & 0.2989 & **0.6940** & **0.7160** \\ Image + Text & **61.24\%** & 0.7180 & 0.3166 & 0.6589 & 0.6912 \\ \hline \hline \multicolumn{5}{c}{**Image-to-Text Retrieval**} & \multicolumn{4}{c}{**Text-to-Image Retrieval**} \\ \hline **Adapter for Encoder** & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & mR \\ \hline None (Baseline) & 9.51\% & 23.01\% & 32.74\% & 8.81\% & 27.92\% & 43.23\% & 24.20\% \\ Image Only & 10.84\% & 28.98\% & 38.27\% & 9.96\% & 31.28\% & 48.63\% & 27.99\% \\ Text Only & 10.84\% & 27.65\% & 38.94\% & **11.28\%** & 32.30\% & 47.83\% & **28.14\%** \\ Image + Text & **11.50\%** & 25.00\% & 36.28\% & 9.65\% & 31.59\% & 46.90\% & 26.82\% \\ \hline \hline \end{tabular} \end{table} Table 4: Results for ZSC task (EuroSAT), VLR tasks (RSITMD), and SeLo task (AIR-STL) when adding the adapter to the image and/or text encoder. 
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{5}{c}{**Zero-shot Classification**} & \multicolumn{4}{c}{**Semantic Localization**} \\ \hline **Model** & \# Parameter & Top-1 Accuracy & \(R_{su\uparrow}\) & \(R_{as\downarrow}\) & \(R_{da\uparrow}\) & \(R_{mi\uparrow}\) \\ \hline ViT-B-32 (G) & 151M & 45.84\% & 0.7220 & 0.2848 & 0.6880 & 0.7111 \\ ViT-L-14 (G) & 427M & 49.01\% & 0.7532 & **0.2449** & **0.7285** & **0.7477** \\ ViT-H-14 (G) & 986M & 60.22\% & 0.7527 & 0.2642 & 0.7097 & 0.7360 \\ ViT-bigG-14 (G) & 2.5B & 58.74\% & **0.7656** & 0.2849 & 0.7030 & 0.7323 \\ **ViT-B-32 (GD)** & 152M & **61.24\%** & 0.7180 & 0.3166 & 0.6589 & 0.6912 \\ \hline \hline \multicolumn{5}{c}{**Image-to-Text Retrieval**} & \multicolumn{4}{c}{**Text-to-Image Retrieval**} \\ \hline **Model** & \# Parameter & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & mR \\ \hline ViT-B-32 (G) & 151M & 9.51\% & 23.01\% & 32.74\% & 8.81\% & 27.92\% & 43.23\% & 24.20\% \\ ViT-L-14 (G) & 427M & 12.61\% & 29.87\% & 42.48\% & **15.17\%** & 39.20\% & 52.92\% & 32.04\% \\ ViT-H-14 (G) & 986M & 12.61\% & 33.41\% & 44.69\% & 14.20\% & 39.47\% & 55.27\% & 33.27\% \\ ViT-bigG-14 (G) & 2.5B & **13.94\%** & 34.51\% & 45.13\% & 13.98\% & 41.59\% & 56.59\% & **34.29\%** \\ **ViT-B-32 (GD)** & 152M & 11.50\% & 25.00\% & 36.28\% & 9.65\% & 31.59\% & 46.90\% & 26.82\% \\ \hline \hline \end{tabular} \end{table} Table 5: Results for ZSC task (EuroSAT), VLR tasks (RSITMD), and SeLo task (AIR-STL) for different model sizes. In the “**Model**” column, “G” represents the GFM baseline (no adapter tuning), and “GD” indicates that the DFM (Pfeiffer adapter) is applied. image-text matching loss (**ITM**) [46] and self-supervised loss (**SS**) [65][52], both of which have been demonstrated to be effective for downstream tasks. Table 6 illustrates that integrating the image-text matching loss (**ITM**) substantially enhances performance across the ZSC, SeLo, and VLR tasks. 
This improvement is attributed to the enhanced alignment between the image and text content. However, for models employing the self-supervised (**SS**) loss, there's a 4% to 5% performance decrease in the ZSC and VLR tasks. Interestingly, these models excel in the SeLo task, with notable advantages across various metrics (0.06 in \(R_{su}\), 0.04 in \(R_{as}\), 0.05 in \(R_{da}\) and \(R_{mi}\)). The diminished performance of models using the **SS** loss in the ZSC and VLR tasks may be due to insufficient training, since self-supervised learning requires a much longer training process to converge. In future revisions, we will aim to mitigate this by increasing the number of training epochs. ## 6 Geographical Limitations and Negative Societal Implications In our dataset, there are two potential concerns. The first is the overrepresentation and underrepresentation of data in some parts of the world. We analyzed the geolocation information of images in our dataset (based on 1,079,370 images with geo-information from FMoW, BEN, and YFCC). Our analysis reveals a long-tailed distribution for the "number of images per UTM zone" statistics, as shown in Figure 6. In Figure 7, image density (number of images per UTM zone) is sparse in Middle Africa (zones 29Q - 36Q) and Southern Africa (rectangle zone from 33M to 37K). This might be attributed to the presence of the Sahara Desert and the South African Plateau, which are less inhabited regions. Southern Indonesia and Australia (specifically the desert regions spanning a rectangle zone from 49L to 56H) exhibit low image density. However, an exception is observed in Southern Australia, characterized by its flat terrain and heightened human activity. Northern South America (rectangle zone from 19N to 23M) and Central Asia (rectangle zone from 40T to 44S) display a reduced distribution of images. The former is peculiar, as one would expect higher human activity in this region. 
Northern regions of Canada and Russia have a low image density, which is understandable given their proximity to the Arctic Circle. High image density is observed in North America, Europe, and most parts of Asia and South America. The low image density areas overlap with many underdeveloped and uninhabitable areas, which could bring bias into the model trained with RS5M. Second, the RS3 subset may contain wrong captions or misleading information, which could lead to mistakes that might have real-world consequences. ## 7 Conclusion, Limitations and Future Work We introduced a novel framework (\(GFM\xrightarrow{}DFM\xrightarrow{}DTM\)) and constructed the first large-scale RS image-text paired dataset, RS5M. We tried 4 PEFT methods trained with RS5M to play the role of the DFM, and this framework has proven effective in tasks such as ZSC, VLR, and SeLo. However, \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline \multicolumn{6}{c}{**Zero-shot Classification**} & \multicolumn{6}{c}{**Semantic Localization**} \\ \hline **CL** & **ITM** & **SS** & Top-1 Accuracy & \(R_{su\uparrow}\) & \(R_{as\downarrow}\) & \(R_{da\uparrow}\) & \(R_{mi\uparrow}\) \\ \hline ✓ & & & 61.24\% & 0.7180 & 0.3166 & 0.6589 & 0.6912 \\ ✓ & ✓ & & **61.62\%** & 0.7437 & 0.2929 & 0.6984 & 0.7195 \\ ✓ & & ✓ & 56.63\% & 0.7991 & **0.2567** & 0.7174 & 0.7592 \\ ✓ & ✓ & ✓ & 55.92\% & **0.8006** & 0.2571 & **0.7487** & **0.7674** \\ \hline \hline \multicolumn{6}{c}{**Image-to-Text Retrieval**} & \multicolumn{6}{c}{**Text-to-Image Retrieval**} \\ \hline **CL** & **ITM** & **SS** & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & mR \\ \hline ✓ & & & 11.50\% & 25.00\% & 36.28\% & 9.65\% & 31.59\% & 46.90\% & 26.82\% \\ ✓ & ✓ & & **13.05\%** & 30.53\% & 41.15\% & **12.48\%** & 36.81\% & 53.58\% & **31.27\%** \\ ✓ & & ✓ & 9.29\% & 21.68\% & 34.51\% & 8.05\% & 27.74\% & 46.15\% & 24.57\% \\ ✓ & ✓ & ✓ & 9.29\% & 22.35\% & 32.74\% & 8.19\% & 27.96\% & 46.33\% & 24.48\% \\ \hline \hline 
\end{tabular} \end{table} Table 6: Results for ZSC task (EuroSAT), VLR tasks (RSITMD), and SeLo task (AIR-STL) using different losses. Contrastive loss (**CL**), image-text matching loss (**ITM**) [46], and self-supervised loss (**SS**) are compared. Figure 6: The distribution of images per UTM zone. Figure 7: The distribution of geolocation for images in RS5M. most PEFT methods do not account for the interaction between the image and text modalities, as they were initially designed for LLMs. This calls for the creation of more complex DFMs in future work. Moreover, while VLM models were utilized to rank generated captions, we see potential in adopting more sophisticated selection criteria, such as decomposing captions into phrases and mapping them to image content, enabling a fine-grained alignment between an image and its caption. Another consideration pertains to our reliance on several CLIP models in our processing pipeline, which may propagate inherent biases within CLIP. Finally, we believe it is crucial to extend the exploration of advanced DFMs' performance to other RS-related downstream tasks. Examples of these tasks include change detection, object detection, scene classification, semantic segmentation, RSVQA, and geo-localization for UAVs and satellite images. These explorations could offer valuable insights into the RS research domain. Appendix ### A.1 Related Work #### a.1.1 Image-Text Paired Datasets for Remote Sensing **UCM Captions** dataset [69] is derived from the UC Merced Land Use Dataset [104] by Qu et al. The image data is extracted from the USGS National Map Urban Area Imagery collection and consists of 2,100 RGB aerial images from 21 classes. Each image includes 5 captions, with 2,032 unique captions in total. The image resolution is 256 \(\times\) 256, and the spatial resolution is 1 ft. 
**Sydney Captions** dataset [69], a version of the Sydney scene classification dataset proposed in [114], contains 613 RGB images of Sydney, Australia, acquired using Google Earth. Qu et al. provided 3,065 captions, 1,109 of which are non-duplicate. The image size is 500 \(\times\) 500, with 1 ft spatial resolution. **RSICD**[58] is a dataset contributed by Lu et al., containing 10,921 remote sensing RGB images from Google Earth, Baidu Map, MapABC and Tianditu. Each image is annotated with 5 natural language captions, with 18,190 unique ones. The image resolution is 224 \(\times\) 224 pixels. This dataset, along with the UCM Captions and Sydney Captions datasets, contains very repetitive language with little detail. **RSITMD**[109] (Remote Sensing Image-Text Match dataset) is a fine-grained and challenging RS dataset for image-text matching, proposed by Yuan et al. It was originally designed for RS multimodal retrieval tasks and features detailed captions describing object relations compared to other RS image-text paired datasets. Additionally, it contains keyword attributes (1-5 keywords for each image) that can be utilized for RS text retrieval tasks based on keywords. The dataset has a total of 23,715 captions for 4,743 images across 32 scenes, with 21,829 of these being non-duplicate. **RSVGD**[113] is a comprehensive benchmark dataset for Remote Sensing Visual Grounding (RSVG) tasks, introduced by Zhan et al. in 2022. The RSVG task focuses on localizing objects of interest referenced in queries within RS images. The dataset is built upon the DIOR RS image dataset, originally designed for object detection. RSVGD comprises 38,320 RS image-text pairs and 17,402 RS images, with an average expression length of 7.47 and a vocabulary size of 100. The image resolution is 800 \(\times\) 800 pixels, and the spatial resolution ranges from 0.5m to 30m. The text descriptions are synthesized from templates and pre-defined rules. 
#### a.1.2 Large-Scale Image Dataset for Remote Sensing **BigEarthNet**[82] is a large-scale RS dataset comprising 590,326 pairs of Sentinel-1 and Sentinel-2 image patches. The BigEarthNet archive project is supported by the European Research Council. Each image is accompanied by multi-class labels. The data was collected from June 2017 to May 2018 across 10 European countries and has been atmospherically corrected. BigEarthNet with Sentinel-1 image patches has 2 channels (VV and VH), while BigEarthNet with Sentinel-2 image patches includes 12 channels 7. Footnote 7: [https://www.tensorflow.org/datasets/catalog/bigearthnetnet](https://www.tensorflow.org/datasets/catalog/bigearthnetnet) **Functional Map of the World**, a.k.a. FMoW [19], is an RS dataset consisting of 1,047,691 images covering 207 countries, collected by Christie et al. in 2018 from the DigitalGlobe constellation 8. They provide extra information such as location, time, sun angles, physical sizes, etc. For each image, at least one bounding box annotation for 1 of 63 categories is offered. There are two versions of the dataset: fMoW-full includes 4-band and 8-band multi-spectral information and is in tif format, and fMoW-rgb is in JPEG format with RGB channels only. Footnote 8: [https://www.digitalglobe.com/resources/satellite-information](https://www.digitalglobe.com/resources/satellite-information) **Million-AID**[55] is another large-scale RS benchmark dataset containing 1 million RGB images for remote sensing image scene classification tasks. Proposed by Long et al., the dataset extracts aerial images from Google Earth and features a three-level class taxonomy tree with 51 third-level (leaf) nodes, 28 second-level nodes, and 8 first-level nodes. The authors also devised several strategies for manual, automatic, and interactive annotation of RS images. 
#### a.1.3 Vision-Language Model Overview and Application Large-scale pre-trained VLMs can be categorized based on their pre-training task objectives, such as contrastive vision-text alignment, image-text matching, masked language modeling, etc. [24]. CLIP [70], which uses 400 million image-text pairs, demonstrates remarkable generalizability even when faced with distribution shifts. ALIGN [38] further illustrates that increasing dataset size, even with noisy data, can lead to performance improvements. Variants of CLIP either mine fine-grained alignment between image and text tokens [105] or aim to learn better representations through self-supervision [65] and cross-modality supervision [52]. These models align textual and visual information in a shared semantic space using contrastive learning tasks, and their success is closely linked to the vast amount of data. UNITER [15], SOHO [36], ViLBert [57], ALBEF [46], and BLIP [45] employ image-text matching task objectives, allowing them to learn fine-grained alignment between image and text representations. Models such as Oscar [49], VL-bert [80], VisualBert [47], FLIP [51], and BEIT3 [94] utilize Masked Language Modeling objectives, a strategy proven to be not only effective but also efficient [51]. Predictions for masked tokens in these models are based on both unmasked visual and language tokens, leveraging and aligning tokens from both modalities. Various innovative approaches have been introduced to enhance the performance of pre-trained VLMs. These include in-context learning by Flamingo [1], captioning loss by CoCa [107], Language-Image Bootstrapping by BLIP [45], and the Mixture of Experts Framework from VLMo [4]. It is important to note that most pre-trained VLMs combine multiple pre-training task objectives. For instance, ALBEF [46] employs contrastive loss and image-text matching loss, CoCa [107] utilizes contrastive loss and captioning loss, and FLIP [51] uses contrastive loss and loss from MAE [28]. 
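As a concrete illustration of the contrastive objective most of these models build on (and which our **CL** experiments use), below is a minimal NumPy sketch of the symmetric image-text InfoNCE loss; the function name and toy temperature are illustrative, not taken from any particular implementation.

```python
import numpy as np

def clip_contrastive_loss(img, txt, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / temperature     # cosine similarity, temperature-scaled
    labels = np.arange(len(img))           # the i-th image matches the i-th caption

    def xent(l):                           # cross-entropy with diagonal targets
        l = l - l.max(axis=1, keepdims=True)               # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return 0.5 * (xent(logits) + xent(logits.T))   # image->text and text->image
```

With perfectly aligned embeddings the loss is near zero, while mismatched pairs drive it up; minimizing it pulls matching image-text pairs together in the shared semantic space and pushes non-matching pairs apart.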
Pre-trained VLMs have demonstrated their ability to tackle not only general tasks like image-text retrieval, zero-shot classification, image captioning, and VQA, but also more complex vision tasks. Examples include GLIP [48] for visual grounding, MDETR [40], XDETR [8], and RegionClip [118] for cross-modal object detection, and GroupViT [103] for text-supervised image segmentation. Additionally, generative VLMs such as DALLE [72], DALLE2 [25], IMAGINE [93], and stable-diffusion [73] have gained significant attention in recent years. #### a.1.4 Vision-Language Model for Remote Sensing Wen et al. [96] presented a survey on Vision-Language Models in Remote Sensing and concluded with several promising research directions. These include building large-scale datasets and vision-language foundation models, text-based image generation using diffusion models, few-/zero-shot learning with LLMs and VLMs, efficient finetuning on RS data, integrating RS expert knowledge into LLMs, and linking text-based information with RS via geolocation. Zhang et al. [115] provided a comprehensive overview of recent advancements in applying artificial intelligence techniques to remote sensing data analysis. It covers major AI aspects including machine learning, deep learning, computational intelligence, AI explicability, data mining, natural language processing, and AI security. Key topics include CNNs for tasks like classification, detection, and fusion; generative models like GANs; evolutionary algorithms and neural architecture search for optimization; efforts towards interpretable models; mining multimodal data; generating image descriptions; and adversarial threats. Hu et al. [33] built a new high-quality remote sensing image captioning dataset called RSICap, consisting of 2,585 human-annotated image captions with rich scene and object details. The authors also introduce an evaluation benchmark called RSIEval with image captions and visual question-answering pairs. 
Based on these datasets, the authors develop a remote sensing vision-language model called RSGPT by finetuning InstructBLIP on RSICap. Yuan et al. introduced AMFMN [109], an asymmetric multimodal feature matching network using triplet loss with a dynamic variable margin, designed for cross-modal RS text-image retrieval tasks. They later developed LW-MCR [111] and GaLR [112], lightweight cross-modal text-image retrieval methods that outperform AMFMN in speed and performance. Additionally, they proposed Semantic Localization tasks [110], a weak visual grounding task enabling semantic-level retrieval with caption-level annotation. Zhan et al. presented MLCM, a transformer-based multi-level cross-modal feature learning module that adaptively filters irrelevant noise and enhances salient features for the RSVG task [113]. Basso introduced CLIP-RS [5], a cross-modal remote sensing image retrieval platform that combines CLIP with the FAISS library [39] (a library for similarity search), using RS data from Northern Virginia. Arutiunian et al. fine-tuned CLIP with RSICD, achieving significant improvements in top-1 accuracy for zero-shot classification 9. Footnote 9: [https://huggingface.co/blog/fine-tune-clip-rsicd](https://huggingface.co/blog/fine-tune-clip-rsicd) #### a.1.5 Parameter-Efficient Tuning for Large Language Models Large Language Models (LLMs) like BERT [23] and GPT [71], trained on vast text corpora, have achieved state-of-the-art results across numerous NLP benchmarks. However, their millions or billions of parameters make full fine-tuning for each downstream task unrealistic. Adapters offer an alternative solution for LLM fine-tuning, as they freeze the pre-trained LLM's weights while training only the adapter's parameters, which are far fewer in number. This approach speeds up adaptation while maintaining performance comparable to full fine-tuning. Adapter was originally proposed by Houlsby et al. 
[31], adding two MLPs with bottleneck structures and residual connections after the feed-forward layers in every transformer layer. Pfeiffer et al. introduced AdapterFusion [67], which designs different adapters for various tasks and learns a parameterized mixer to combine their information. Inspired by prompt learning, Li et al. proposed Prefix-Tuning [50], which adds a small, continuous, task-specific vector (a.k.a. prefix) before the text for adaptation. Hu et al. developed the well-known LoRA [32], Low-Rank Adaptation, which injects trainable rank decomposition matrices (\(W_{0}x+\Delta Wx=W_{0}x+BAx\)) to approximate adaptation in each transformer layer. Mao et al. proposed UNIPELT [62], a unified framework for parameter-efficient language model tuning (PELT), which combines different PELT methods as submodules (e.g., bottleneck adapter [31], LoRA [32], prefix-tuning [50]) and learns to activate the best-suited ones for the current task through a gating mechanism. #### a.1.6 Parameter-Efficient Tuning for Vision-Language Models Gao et al. proposed CLIP-Adapter [27], which adds a two-layer MLP with a bottleneck structure and residual connection to the text and vision encoder for visual classification tasks. Zhang et al. introduced Tip-Adapter [116], a training-free adapter for CLIP that constructs a key-value cache model from few-shot examples in the training set, allowing cached examples to vote for their labels. In VL-Adapter [84], Sung et al. applied adapter-based parameter-efficient transfer tuning methods (Bottleneck adapter [31], compacter [59], hyperformer [60]) to VLMs, addressing VQA and image captioning tasks. Another approach to parameter-efficient tuning for VLMs is prompt-based learning. CoOp [120] learns prompt tokens for input in the text encoder to assist zero-shot classification. CoCoop [119] extends CoOp by using a lightweight neural network to generate an input-conditional token for each image. 
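To make the low-rank update \(W_{0}x+BAx\) described above concrete, here is a minimal NumPy sketch of the LoRA forward pass with the \(\alpha/r\) scaling; the zero-initialization of \(B\) follows the LoRA formulation, while the variable names and sizes are illustrative.

```python
import numpy as np

def lora_forward(x, W0, A, B, alpha=16, r=4):
    """Frozen weight W0 plus a trainable low-rank update: y = W0 x + (alpha/r) B A x."""
    return W0 @ x + (alpha / r) * (B @ (A @ x))

rng = np.random.default_rng(0)
d_out, d_in, r = 6, 8, 4
W0 = rng.standard_normal((d_out, d_in))    # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection (Gaussian init)
B = np.zeros((d_out, r))                   # zero-init: the update starts at zero
x = rng.standard_normal(d_in)

# At initialization the adapted layer matches the frozen model exactly.
assert np.allclose(lora_forward(x, W0, A, B), W0 @ x)
```

Only \(A\) and \(B\) would be trained, i.e., \(r(d_{in}+d_{out})\) values instead of the \(d_{in}d_{out}\) entries of the full weight, which is what makes the method parameter-efficient.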
Colorful Prompt [106] presents cross-modal prompt tuning, constructing fill-in-the-blank problems in a color-based co-referential manner. #### a.1.7 Pre-trained Models in Remote Sensing Wang et al. trained CNN-based (ResNet) and ViT-based backbones (Swin Transformer and ViTAEv2) [90] on MillionAID [55], examining the impact of RSP on various downstream tasks such as scene recognition, semantic segmentation, object detection, and change detection. Their findings indicate that RSP can improve performance in scene recognition tasks but has limitations in others. Wang et al. later proposed a 100M-parameter model [91], an advanced plain ViT with rotated varied-size window attention, pre-trained with the unsupervised MAE method [28], achieving state-of-the-art performance on the DOTA-V1.0 [100] dataset with 81.24% mAP and competitive results for downstream classification and segmentation tasks. However, their dataset and models are single-modality and therefore cannot utilize supervision from text labels, suggesting potential improvements with VLMs using image-text paired datasets. Their models are the largest in the field of RS so far. Moreover, numerous self-supervised pre-trained models have emerged in the remote sensing field, since labeled data are rare in the RS domain. SeCo [63] utilizes unlabeled data from multiple Earth locations and different times. By leveraging position invariance, the model learns transferable representations for remote sensing in a self-supervised manner. Ayush et al. [3] use spatially aligned images to construct temporal positive pairs for contrastive learning, injecting temporal and geographical information into MoCo-V2 [14]. Vincenzi et al. [88] propose to leverage the high dimensionality of spectral bands to reconstruct visible colors, thus learning good RS features with a self-supervised learning approach. #### a.1.8 GeoQA and GeoAI Mai et al. provided a comprehensive survey of GeoQA [61] (Geographic Question Answering). 
GeoQA focuses on solving problems that require a comprehensive understanding of textual descriptions, visual diagrams, and theorem knowledge in the context of geography. Datasets in this domain, such as the GeoQA dataset [13], GeoSQA [35], Tourism [20], and GADM [68], contain geometric problems with corresponding annotated programs that illustrate the problem-solving process. These datasets are designed to facilitate research on explicit and explainable reasoning in a multimodal context in the geographic domain. GeoAI, geospatial artificial intelligence, is a rapidly emerging field that integrates AI techniques with geographic and geospatial data. GeoAI is particularly effective in harnessing vast amounts of spatial and non-spatial data, offering advantages such as large-scale analytics, automation, high accuracy, sensitivity in detecting subtle changes, noise tolerance, and rapid technological advancement. The field encompasses a wide range of applications, including large-scale image analysis using various types of data like satellite and drone images, street views, and geo-scientific data. GeoAI research aims to provide solutions that are more efficient, accurate, and capable of detecting new patterns and processes in geospatial data. One of the challenges in GeoAI is ensuring that models are interpretable, explainable, and generalizable. GeoQA focuses on the aspect of question answering in the geographic domain, while GeoAI is a broader field that integrates AI techniques with geospatial data for various applications, including analysis, modeling, and prediction. ### A.2 RS5M #### a.2.1 PUB11 The LAION2B-en dataset is an English subset of the well-known LAION5B [76], collected by laion.ai and filtered by CLIP. It has 2.3 billion image-text pairs and is the largest publicly available image-text dataset so far. Similarly, LAION400M contains 400 million image-text pairs. 
LAIONCOCO10 selects 600 million images from Laion2B-en, re-captioned using BLIP and CLIP to generate more descriptive captions. COYO700M collects 700 million informative image-alt-text pairs from HTML documents. Both CC3M and CC12M follow a similar collection process to COYO700M. YFCC15M11 is an English subset of YFCC100M, cleaned by Radford et al. in [70]. WIT is a multilingual Wikipedia-based image-text dataset, from which we only select English data for our dataset. Redcaps is a web-curated image-text dataset, primarily sourced from Reddit. SBU is collected from Flickr using a vast number of queries, with the noisy results then filtered out. Visual Genome is an image dataset containing structured image concepts such as region descriptions, object instances, relationships, and more. Although the CC3M and CC12M datasets are both publicly accessible and designed to be disjoint, some duplicate images may still exist, as is the case for LAION2B and LAION400M. Visualization of images sampled from PUB11 can be found in Figure 8. 
Footnote 10: [https://laion.ai/blog/laion-coco/](https://laion.ai/blog/laion-coco/) #### a.2.2 Keywords for Keyword Filtering **Group 1**: "remote sensing", "earth observ", "aerial imag", "aerial photo", "aerial map", "aerial pic", "aerial view", "aerial scan", "aerial satellite", "satellite imag", "satellite photo", "satellite map", "satellite pic", "satellite view", "satellite scan", "satellite data", "satellite surveillance", "space photo", "spaceborne photo", "space-borne photo", "space image", "spaceborne image", "space-borne imag", "space view", "spaceborne view", "space-borne view", "space surveillance" **Group 2**: "Google Earth", "Freesound", "Sentinel-1", "Sentinel-2", "Gaofen", "USGS", "NAIP", "MODIS", "EOSDIS", "WorldView", "Planet Dove", "ArcGIS", "Maxar", "Landsat", "Geographic Information System" #### a.2.3 Remote Sensing Prompt Template **rs_templates** = [ 'a remote sensing image.', 'a low resolution remote sensing image.', 'a bad remote sensing image.', 'a cropped remote sensing image.', 'a bright remote sensing image.', 'a dark remote sensing image.', 'a close-up remote sensing image.', 'a black and white remote sensing image.', 'a jpeg corrupted remote sensing image.', 'a blurry remote sensing image.', 'a good remote sensing image.', 'an aerial image.', 'a low resolution aerial image.', 'a bad aerial image.', 'a cropped aerial image.', 'a bright aerial image.', 'a dark aerial image.', 'a close-up aerial image.', 'a black and white aerial image.', 'a jpeg corrupted aerial image.', 'a blurry aerial image.', 'a good aerial image.', 'a satellite image.', 'a low resolution satellite image.', 'a bad satellite image.', 'a cropped satellite image.', 'a bright satellite image.', 'a dark satellite image.', 'a close-up satellite image.', 'a black and white satellite image.', 'a jpeg corrupted satellite image.', 'a blurry satellite image.', 'a good satellite image.', ] #### a.2.4 Remote Sensing Binary Classification Dataset We select 2500 satellite images from MillionAID, 2500 aerial images from LAION2B as positive 
data, and 5000 non-RS images from ImageNet-1k as negative data. We split the dataset with a ratio of 7:1:2 for the train, validation, and test sets. Classes are balanced in each split. The trained classifier achieves 99.20% accuracy on the validation set and 97.55% on the test set. #### a.2.5 Details on Filtering Large-Scale Image-Text Paired Datasets When downloading the images from URLs, some images may be missing due to broken links. For invalid image checking and deduplication, we filter out images that cannot be opened or have a zero file size. When utilizing fastdup to detect and cluster duplicate images, we use the cosine similarity distance function and set the number of nearest neighbors to 5. Additionally, we set the "\(min\_distance\)" parameter in the \(connected\_components()\) API to 1. For each cluster of duplicate images, we establish a set of priority rules to select the image we keep. Initially, we discard images from the "laioncoco" dataset, as it contains long and redundant captions (captions for images from the laioncoco dataset are generated by concatenating many BLIP-generated captions [45], and most of them are not informative). Then, we verify if there is an image from the "laion2b", "laion400m," or "coyo700m" datasets, which will constitute the training set of our RS5M dataset. If neither of the previous conditions is met, we randomly select an image. A demonstration of the VLM filtering results can be seen in Figure 9. In each of the nine blocks, we filter the dataset using fixed \(m\) and \(n\) and randomly sample 100 images to show. We apply thresholds to \(s_{i}\) (from top to bottom) and \(c_{i}\) (from left to right) to keep images that have top 100% (no threshold), top 90%, and top 80% \(s_{i}\) and \(c_{i}\). A heatmap showing the number of remaining images for different thresholds is presented in Figure 10. It illustrates the trade-off between dataset noise and the number of images. 
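The quantile-based thresholding on \(s_{i}\) and \(c_{i}\) described above can be sketched as follows; the scores are made up for illustration and the function name is ours, not from the released code.

```python
import numpy as np

def quantile_filter(s, c, keep_s=0.90, keep_c=0.80):
    """Keep pairs whose s_i lies in the top keep_s fraction and c_i in the top keep_c fraction."""
    s_thr = np.quantile(s, 1.0 - keep_s)   # e.g. drop the bottom 10% by s_i
    c_thr = np.quantile(c, 1.0 - keep_c)   # e.g. drop the bottom 20% by c_i
    return (s >= s_thr) & (c >= c_thr)     # boolean mask over the dataset

# Toy scores for ten image-text pairs.
s = np.array([0.10, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.97, 0.99])
c = np.array([0.20, 0.30, 0.50, 0.60, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95])
mask = quantile_filter(s, c)               # pairs surviving both thresholds
```

Tightening `keep_s` and `keep_c` lowers noise at the cost of dataset size, which is exactly the trade-off visible in the heatmap.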
We choose a group of loose thresholds (top 90% \(s_{i}\) and top 80% \(c_{i}\)) to retain more images while addressing the outliers in the next section. After this step, 3,007,809 images remain in the dataset. Samples of filtered images (outliers) processed by Vision-Language Model Filtering and Classifier Filtering can be found in Figure 11. Finally, the numbers of images removed by each processing method are listed in Table 8.

Table 8: The statistics of public image-text paired datasets after each processing step. "RIIC" means "Removed by Invalid Image Checking", "RDIF" denotes "Removed by Duplicate Image Filtering (URL+Fastdup)", "RVLMFRSD" means "Removed by VLM filter and RS Image Detector".

Figure 9: An image sample of thresholding with different quantiles of \(s_{i}\) and \(c_{i}\); top to bottom are the values of \(s_{i}\) in descending order, and left to right are the values of \(c_{i}\) in ascending order. Filtering with top 90% \(s_{i}\) and 80% \(c_{i}\) can already give us a good remote sensing dataset (center block).

Figure 10: A heatmap indicating the number of remaining images for different combinations of thresholds, from top 100% (no threshold) to top 60%, with a decrement of 5%.

Figure 11: Images filtered by VLMFRSD having bottom 10% \(s_{i}\) and bottom 20% \(c_{i}\).

#### a.2.6 RS3

Visualization of images sampled from RS3 can be found in Figure 12.
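Keeping the "top 90% \(s_{i}\) and top 80% \(c_{i}\)" amounts to two quantile cutoffs applied jointly; a minimal pure-Python sketch of that filter (illustrative, not the paper's pipeline code):

```python
def quantile_cutoff(values, keep_frac):
    """Smallest value still inside the top `keep_frac` fraction."""
    ordered = sorted(values)
    drop = round(len(ordered) * (1 - keep_frac))  # how many to drop from the bottom
    return ordered[drop]

def filter_pairs(scores, s_keep=0.9, c_keep=0.8):
    """scores: list of (s_i, c_i) pairs; keep items passing both cutoffs."""
    s_cut = quantile_cutoff([s for s, _ in scores], s_keep)
    c_cut = quantile_cutoff([c for _, c in scores], c_keep)
    return [(s, c) for s, c in scores if s >= s_cut and c >= c_cut]
```

On ten synthetic pairs with anti-correlated scores (`c = 11 - s`), the joint filter drops the bottom decile of \(s_{i}\) and the bottom quintile of \(c_{i}\), keeping seven items; this is the trade-off shown in the Figure 10 heatmap.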
Captioning RS images with VLMs pre-trained on images with common objects has proven to be effective, as demonstrated in [45] and Figure 13. Although some captioning models, such as BLIP-Large and GIT-Large, may generate repetitive or nonsensical captions, other models like CoCa and BLIP2 can produce impressive and meaningful captions for RS images. For instance, these models are capable of recognizing concepts like "church," "airport," and "farm" in satellite views. Image-text pairs collected through this process are primarily satellite images.

Figure 12: Visualization of images sampled from RS3. Almost all of them are satellite images.

Examples of captioning the MillionAID, BigEarthNet, and FMoW datasets with different Vision-Language Models are provided in Figure 14, Figure 15, and Figure 16.

#### a.2.7 Captioning Result with Different Sampling Methods

Image sample:

Figure 16: Huggingface captioning result for the FMoW dataset

Figure 17: Top 5 captioning results using the BLIP2 model for images from MillionAID, FMoW, and BigEarthNet.

Figure 18: An airport in Turkey in the satellite view, selected from the FMoW dataset.
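The two decoding strategies compared below differ in how the next token is chosen: beam search expands the highest-scoring partial sequences (hence the near-duplicate outputs), while nucleus sampling draws from the smallest token set whose cumulative probability reaches \(p\). A toy sketch of the top-p (nucleus) filtering step over a made-up distribution, not the actual BLIP2 decoder:

```python
def top_p_filter(probs, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability >= p,
    then renormalize so the kept probabilities sum to 1."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}
```

For `probs = [0.5, 0.3, 0.15, 0.05]` and `p=0.9`, tokens 0-2 survive and the low-probability tail is cut, which is why nucleus captions are more diverse (and occasionally more erratic) than beam-search ones.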
**beam search caption result**:

['an aerial view of an airport in the middle of a city', 'a satellite image of an airport in the middle of a city', 'a satellite view of an airport in the middle of a city', 'an aerial view of a large airport in the middle of a city', 'a satellite image of a large airport in the middle of a city', 'a satellite image of a large airport in the middle of a city', 'an aerial view of an airport in the middle of a city', 'an aerial view of an airport in the middle of a city', 'an aerial view of an airport in the middle of a city', 'a satellite image of a large airport in the middle of a city', 'an aerial view of an airport in the middle of a city', 'a satellite image of an airport in the middle of a large city', 'a satellite image of an airport in the middle of a large city', 'a satellite image of an airport in the middle of a large city', 'a satellite image of an airport in the middle of a large city', 'a satellite image of an airport in the middle of a field', 'a satellite image of an airport in the middle of a field', 'a satellite image of an airport in the middle of a large city']

**nucleus sampling caption result**:

['a satellite image shows a small airport', 'a satellite image of an airport with several runways and buildings', 'a satellite image of an airport', 'aerial views of airfields in turkey', 'nigeria, zaria - airbase and airforce base, zaria', 'a satellite image of an airport and city in the middle', 'a satellite image of an airport in a city', 'a satellite view showing a runway and airport terminal', 'a satellite image shows an aerial view of airport', 'aerial photograph of small airplane on the runway', 'map aeropuerto de grecia zante aegean', 'a city with a small airport in the background', 'aerial view of an airport in a satellite image', 'this is an aerial image of the airport near town', 'aerial view of a city airport and its airport parking lots', 'a satellite image shows the airport and its surroundings', 'a google
satellite image of an airport', 'gs map of feubire international airport by feubire international airport, istanbul, turkey', 'satellite view of the airport area with some airplanes on it', 'airport ataturk airport, turkey, satellite image'],

['cairo international airport from the air', 'a satellite image shows an airport near a city', 'view of the airport from above', 'a map is displayed as it shows an aerial view of a city', 'the area where planes land in a small town', 'google satellite map image of a airport on a grass field', 'a satellite image of the airport with its runway', 'this is a satellite image of a large airport', 'a picture of an airport with a plane sitting on the ground', 'aerial view of the airport that looks like it has a lot of aircraft on the ground', 'a satellite photo of an airport in the middle of a large city', 'aerial view of an airport next to a city with a couple of large buildings', 'the aerial photo shows an airport from above', 'a satellite view shows many aircraft and airplanes', 'a satellite photo of an airport in a rural area', 'a satellite map shows the location of an airport in Iraq', 'a aerial photo of the airport in the middle of an arid landscape', 'an aerial view of an airport near a village', 'this is an aerial view of an airport in the middle of a field', 'a satellite image of an air base and a road']

#### a.2.8 Rotational Invariance

#### a.2.9 Combine Machine Generated Caption with Image Meta Information

We enhanced the captions in RS5M by integrating meta information (geo-meta, class labels, UTC, etc.) into readable sentences as part of the image caption. This structured meta-caption, combined with the model-generated caption, offers a more comprehensive view. We believe this incorporation of image meta information not only augments our dataset's richness but also aids in drawing more precise insights from it.
The datasets and their utilized meta info are listed below:

Figure 19: Visualization for image rotation in 12 different angles

* **FMoW**
  Included Meta Info: Longitude, latitude, class label, bounding box coordinates, ground sample distance, UTM zone, timestamp, cloud cover rate, scan direction, target azimuth, and off-nadir.
  Geographic Details: city, country.
  Temporal Details: season (color of trees/leaves, snow-covered or not, etc.), timestamp.
  Image Specifics (for objects in the image): class labels, relative location (Top/Centre/Bottom & Left/Centre/Right).
  Additional Details: ground sample distance, UTM zone, cloud cover rate, scan direction, target azimuth, off-nadir.
* **BigEarthNet**
  Included Meta Info: Class labels, timestamp, UTM zone.
  Temporal Details: season, timestamp.
  Image Specifics (for objects in the image): class labels.
  Additional Details: UTM zone.
* **YFCC14M**
  Included Meta Info: Date taken, longitude, latitude.
  Geographic Details: city, country.
  Temporal Details: season, timestamp.
* **CC3M**
  Included Meta Info: Machine tags (aligned with caption).
  Image Specifics: class labels.
* **Redcaps**
  Included Meta Info: Created_utc (UNIX timestamp).
  Note: As the timestamp denotes blog creation and not image capture, we've decided to exclude this meta info.

#### a.2.10 Tuned BLIP2 Details

The BLIP2 model is not as good at the RS captioning task as at the common-objects captioning task (MSCOCO obtains a METEOR score of 0.1506 using BLIP2-vanilla, but RSICD and RSITMD only reach 0.0687 and 0.0625). We enhanced the BLIP2 opt6.7B model by tuning its vision encoder using LoRA with the RS-specific data from the RSITMD dataset (training set). To evaluate the improvement in the quality of captions with respect to Remote Sensing (RS), we assessed the METEOR score across three test sets: the RSVG test set, the RSICD test set, and the RSITMD test set.
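LoRA, used here to tune the vision encoder, freezes a pretrained weight \(W\) and learns a low-rank update, so the effective weight after merging is \(W+(\alpha/r)BA\). A dependency-free sketch of that merge (illustrative only; the actual tuning uses a LoRA implementation applied to BLIP2's layers, with matrices far larger than these):

```python
def matmul(A, B):
    """Naive matrix product, sufficient for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_effective_weight(W, B, A, alpha):
    """W: frozen d_out x d_in weight; B: d_out x r and A: r x d_in low-rank factors."""
    r = len(A)            # LoRA rank
    scale = alpha / r     # standard LoRA scaling
    delta = matmul(B, A)  # low-rank update BA
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]
```

With rank r = 1 the trainable parameters are only d_out + d_in per layer, which is why LoRA tuning on a modest RS dataset like RSITMD is feasible.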
By comparing the performance of the original BLIP2 ("BLIP2-vanilla") and the refined BLIP2 ("BLIP2-RS"), we observed a marked enhancement in the latter's capability to generate RS-related captions.

Table 10: The results of METEOR score for RSVG, RSICD, RSITMD and MSCOCO (test sets)

| Model/Dataset | MSCOCO | RSICD | RSITMD | RSVG |
| --- | --- | --- | --- | --- |
| BLIP2-vanilla | 0.1506 | 0.0687 | 0.0625 | 0.0949 |
| BLIP2-RS | - | 0.1528 | 0.1420 | 0.1301 |

Table 9: Statistics of meta caption per dataset

| Dataset | FMoW | BigEarthNet | YFCC14M | CC3M | RedCaps | Total |
| --- | --- | --- | --- | --- | --- | --- |
| Count | 727,144 | 344,385 | 9,629 | 2,487 | 1,486 | 1,085,131 |

#### a.2.11 Rating System for Model Generated Captions

We designed a 5-level rating system from 3 major perspectives for evaluating the sampled captions (2,000 samples for now; we will continue increasing the sample size)12.

Footnote 12: [https://github.com/om-ai-lab/RS5M/tree/main/rating_app/rating_sys_webdataset](https://github.com/om-ai-lab/RS5M/tree/main/rating_app/rating_sys_webdataset)

**Relevance & Detail**: Evaluates how well the generated caption relates to the image and captures the essential details.

5 - Excellent: The caption perfectly describes the main elements of the image with precise details.

4 - Good: The caption describes most of the main elements accurately but may miss minor details.

3 - Average: The caption captures the general idea but lacks some significant details.

2 - Below Average: The caption misses many important elements and may misrepresent the image.

1 - Poor: The caption is largely unrelated to the image or misses the main point entirely.

**Hallucination**: Assesses the extent to which the caption introduces elements or concepts not present in the image.

5 - Excellent: No hallucinated details; the caption strictly adheres to the image content.
4 - Good: Minor hallucinated details that don't significantly alter the overall meaning.

3 - Average: Some noticeable hallucinations, but the core message remains intact.

2 - Below Average: Many hallucinated details that mislead the viewer.

1 - Poor: The caption is mostly based on hallucinated content, with little to no relation to the actual image.

**Fluency & Conciseness**: Evaluates the linguistic quality of the caption and its brevity.

5 - Excellent: The caption is linguistically flawless and conveys the message in the most concise manner.

4 - Good: The caption is mostly fluent with minor verbosity but remains clear.

3 - Average: Some linguistic errors or unnecessary words, but the message is understandable.

2 - Below Average: Several linguistic issues such as obvious misspellings or grammatical errors and a lack of conciseness, making it harder to grasp.

1 - Poor: The caption is hard to understand due to major linguistic errors and excessive wordiness, such as duplication and broken sentences.

Figure 20: A snapshot of our rating system.

The results are shown below.

#### a.2.12 PUB11 Visualization

Left 4 columns in Figure 21 show satellite images from PUB11, accompanied by their captions. For improved visual presentation, we have truncated longer captions. While not all captions are informative, they do relate to their corresponding images. Middle 4 columns in Figure 21 present images taken from aerial views within PUB11. In contrast to satellite images, aerial images are captured at lower altitudes, offering more detailed views of the ground. Additionally, the shooting angles differ significantly from those of satellite images. Right 4 columns in Figure 21 feature a selection of representative outliers and intriguing images. Most outliers consist of meteorological satellite images, illustrative figures, and space images, which are distinct from conventional aerial and satellite images.
The dataset also includes some interesting outliers, such as an artwork depicting a city's nighttime scene and a photograph of a town model. Although these images do not strictly fall within the realm of remote sensing, they do have delicate connections with RS.

We present the statistics for the width and height of images from PUB11 in Figure 22; the average height and width are 402.87 pixels and 522.53 pixels, respectively.

Figure 21: PUB11 images in satellite view (left 4 columns), aerial view (middle 4 columns), and outliers (right 4 columns).

Figure 22: PUB11 image width and height statistics.

#### a.2.13 Outlier & Misfiltered Image Analysis

Although we used multiple procedures to filter images that are not RS images, some outliers remain in PUB11. Conversely, some RS images are over-filtered by the VLMFRSD process. To analyze this quantitatively, we sampled 5,000 images from PUB11 and 5,000 images from the removed image collection (from the VLMFRSD step) and assessed them one by one; these correspond to 0.1% of RS5M and 0.5% of the removed collection. In Table 15, we list the confusion matrix for further discussion. Samples of outliers in RS5M and misfiltered images from the removed image collection are presented in Figure 23. Around 0.8% of the sampled RS5M images are outliers, and 3.4% of images from the removed collection are RS5M images. Most of the images in the former case are maps, illustrations, and weather imagery, and the images in the latter case are RS images shot from a low altitude.

#### a.2.14 Hardware & Safety Check

Our dataset was processed on a desktop computer equipped with a single NVIDIA RTX 4090 24GB GPU, a 16-core Ryzen 3950x processor, 64 GB of RAM, and a 4TB SSD. The experiments were done with 2 NVIDIA RTX 3090 24GB GPUs (main experiments) for 3 weeks, and 1 NVIDIA A100 80GB GPU (tuning Stable Diffusion) for 1 week. We utilized publicly available remote sensing data, which should not contain any NSFW content.
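The percentages quoted in the outlier analysis of a.2.13 follow directly from the Table 15 counts divided by the 5,000-image samples (this reading of the denominators is our assumption, but it reproduces the reported 0.8% and 3.4%):

```python
def pct(count, total):
    """Percentage of `count` within `total`."""
    return 100.0 * count / total

# 40 of the 5,000 sampled kept images were actually outliers.
outlier_rate_in_rs5m = pct(40, 5000)   # -> 0.8
# 169 of the 5,000 sampled removed images were actually RS images.
misfiltered_rate = pct(169, 5000)      # -> 3.38, i.e. ~3.4%
```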
To ensure safety, we converted the TIFF files to standard JPG format while removing sensitive geographical coordinates.

### License

As shown in Table 16, almost all involved datasets allow redistributing the metadata. We have sent several emails to the authors of the Million-AID dataset, but got no response. For the PUB11 subset, we plan to release the metadata of PUB11 first. For RS3, since BigEarthNet and FMoW allow the redistribution of image data, we plan to release the meta file and image-text tar file in the webdataset format. For Million-AID, we will release the text data with the corresponding image names since it has an unclear license. We claim that our RS5M dataset is only allowed to be used for academic purposes, and we bear all responsibility in case of violation of rights. We will take appropriate action when needed, e.g., to remove data with such issues. We will host our RS5M in Aliyun through OSS for at least 1 year; after that we plan to migrate the data to Google Drive or Dropbox.

### Stable Diffusion Tuned with RS5M

Given the impracticality of training the Stable Diffusion model from scratch with only 5M data, we present a Stable Diffusion model tuned with 1% of the RS5M data, which we refer to as RS-SD. Specifically, we use Dreambooth [75] from a modified Diffusers repository [89]13. The image resolution is set to 512, with a batch size of 50 for 50,000 steps. The text encoder was trained as well.

Table 15: The confusion matrix for RS images removed by VLMFRSD, and outliers in the RS5M dataset.

| | Recognized as RS image | Recognized as Outlier |
| --- | --- | --- |
| _Is RS image_ | 4831 | 169 |
| _Is Outlier_ | 40 | 5960 |

Figure 23: Visualization of outliers in RS5M (left) and misfiltered images from the removed image collection (right).
Footnote 13: [https://github.com/ShivamShrirao/diffusers](https://github.com/ShivamShrirao/diffusers)

We generate 40,000 samples using different queries to calculate the FID of the vanilla Stable Diffusion and the tuned Stable Diffusion (RS-SD, tuned with RS5M). The vanilla Stable Diffusion model yields an FID score of **36.86** for the RS domain generation task, whereas the RS-SD model achieves a significantly improved FID score of **28.32**. Overall, RS-SD outperforms vanilla SD in generating RS images both qualitatively and quantitatively. The RS-SD model is capable of generating more realistic RS images that better match the corresponding captions, regardless of whether the images are in satellite or aerial view. As demonstrated by Figure 24 and Figure 25, for prompts containing "satellite", the vanilla SD tends to generate unrealistic or meteorological images, but RS-SD can generate RS images that are more realistic and in accord with RS images for common RS downstream tasks. Besides, the understanding of "snow-covered land", "building with some snow" and "surrounding fields" by RS-SD is significantly better than that of SD.

### Downstream Tasks for Remote Sensing

We made a summary of benchmark datasets for the Remote Sensing Vision-Language Foundation Model. There are 23 existing datasets from 9 tasks. The dataset summary is shown in Table 17.

#### a.5.1 Zero-shot Classification

Thanks to the CLIP ([https://openai.com/research/clip](https://openai.com/research/clip)) model's strong image-text association capability, this task can be converted from any image classification dataset if the category names are provided. The model selects the most relevant category for a given image. It's termed "zero-shot" because the test categories are unseen during training. The evaluation metric is accuracy.

#### a.5.2 Vision-Language Retrieval

Use text/image to retrieve the paired image/text. Pre-trained VLMs mostly use the MSCOCO and Flickr30k datasets for evaluation.
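The recall@1/5/10 and mean-recall metrics mentioned above can be computed from the rank of the ground-truth match for each query; a short sketch:

```python
def recall_at_k(ranks, k):
    """ranks: 1-based rank of the true match for each query."""
    return sum(r <= k for r in ranks) / len(ranks)

def mean_recall(ranks, ks=(1, 5, 10)):
    """Average of recall@1, recall@5 and recall@10."""
    return sum(recall_at_k(ranks, k) for k in ks) / len(ks)
```

For example, if the true matches of four queries are ranked 1, 3, 7 and 20, recall@1/5/10 are 0.25/0.5/0.75 and the mean recall is 0.5.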
In the field of RS, the UCMCaptions, SydneyCaptions, RSICD and RSITMD ([https://arxiv.org/pdf/2204.09868.pdf](https://arxiv.org/pdf/2204.09868.pdf)) datasets are frequently used to evaluate the model's VLR capability. Common metrics are recall@1/5/10 and mean recall.

Table 16: License for the data sources of RS5M

| Dataset | License | Allow Redistribution |
| --- | --- | --- |
| LAION2B | CC-BY-4.0 License | Yes |
| COYO700M | CC-BY-4.0 License | Yes |
| LAIONCOCO | CC-BY-4.0 License | Yes |
| LAION400M | CC-BY-4.0 License | Yes |
| WIT | CC BY-SA 3.0 Unported License | Yes |
| YFCC15M | relevant Webscope License Agreement | Yes |
| CC12M | The dataset may be freely used for any purpose | Yes |
| Redcaps | Only be used for non-commercial research | Yes |
| CC3M | The dataset may be freely used for any purpose | Yes |
| SBU | Unknown | Unknown |
| VG | CC-BY-4.0 License | Yes |
| BigEarthNet | The Community Data License Agreement – Permissive | Yes |
| FMoW | Functional Map of the World Challenge Public License | Yes |
| Million-AID | Unknown | Unknown |
| RS5M | Only be used for non-commercial research | Yes |

Figure 24: Comparison between images generated by SD and RS-SD with the same text prompts.

Figure 25: Comparison between images generated by SD and RS-SD with the same text prompts.

#### a.5.3 Semantic Localization

Proposed in [https://arxiv.org/abs/2209.06515](https://arxiv.org/abs/2209.06515). The SeLo task is defined as using cross-modal information such as text to locate semantically similar regions in large-scale RS scenes. SeLo **achieves**
**semantic-level retrieval with only caption-level annotation**. It can be considered a weak detection task without the need to label bounding boxes in the training set.

The metrics are \(R_{su}\), \(R_{as}\), \(R_{da}\), and \(R_{mi}\); their detailed mathematical definitions will not be introduced here. \(R_{su}\) calculates the attention ratio of the ground-truth (GT) area to the non-GT area. \(R_{as}\) quantifies the shift distance of the attention from the GT center. \(R_{da}\) evaluates the discreteness of the generated attention from the probability divergence distance and the number of candidate attention regions. \(R_{mi}\) is the comprehensive indicator combining all of the above. All of them range from 0 to 1, and higher is better, except for \(R_{as}\).

#### a.5.4 RSVQA

The task of Remote Sensing Visual Question Answering (RSVQA) aims to extract information from remote sensing images using queries formulated in natural language. The primary goal is to make the vast information contained in remote sensing images accessible to a broader audience, including non-experts, through simple questions. Lobry et al. introduced this task and highlighted the challenges associated with it [54]. They propose a system where images can be queried to obtain specific information about their content or to understand relational dependencies between objects visible in the images. They constructed two datasets (RSVQA_HR and RSVQA_LR [54]) using image/question/answer triplets, with the information for building the questions and answers sourced from OpenStreetMap (OSM). They also came up with a baseline model. Lobry et al. further proposed RSVQAxBigEarthNet [53] for the RSVQA task. This dataset extracts image/question/answer triplets from the BigEarthNet dataset and contains nearly 15 million samples. The authors discuss the dataset's construction procedure, its characteristics, and initial results using a deep-learning-based methodology. Later, Chappuis proposed methods [10][11] to improve model performance on the RSVQA task. Recently, Hu et al.
presented RSIEval [33], a benchmark consisting of human-annotated captions and visual question-answer pairs, enabling a thorough assessment of VLMs in remote sensing.

#### a.5.5 Geo-localization for UAV and satellite images

Geo-localization for UAV and satellite images is a task that aims to determine the precise geographical location of objects or areas captured in the images. This involves mapping the content of the images to real-world coordinates. The University1652 [117] dataset is a benchmark dataset designed for geo-localization tasks. It contains images from both UAVs and satellites, providing a diverse set of data for training and evaluating models. The dataset is named "University1652" because it includes images of 1,652 university campuses from around the world. By using this dataset, researchers can develop and test algorithms that can accurately geo-localize objects or areas in UAV and satellite images, making it a valuable resource for advancements in this field.

Table 17: Datasets for Downstream Tasks of the Remote Sensing Vision-Language Foundation Model. We highlight (in bold) the datasets used in this paper. Benchmark results for the rest of the datasets will be released later.

| Dataset Name | Task | # Image | # Class | Spatial Resolution | Image Type |
| --- | --- | --- | --- | --- | --- |
| **EuroSAT** | Zero-shot Classification | 27000 | 10 | - | RGB |
| **AID** | Zero-shot Classification | 10000 | 30 | 0.5m ~ 8m | RGB |
| **RESISC45** | Zero-shot Classification | 31500 | 45 | 0.2m ~ 30m | RGB |
| DOTA-V1.0 | Object Detection | 2806 | 15 | - | RGB |
| DIOR-R | Object Detection | 23463 | 20 | 0.5m ~ 30m | RGB |
| SODA-A | Object Detection | 2513 | 9 | - | RGB |
| FAIR1M | Object Detection | 15000 | 37 | 0.3m ~ 0.8m | TIF |
| Potsdam | Semantic Segmentation | 38 | 6 | 0.05m | MSI |
| iSAID | Semantic Segmentation | 2806 | 15 | - | RGB |
| LoveDA | Semantic Segmentation | 5987 | 7 | 0.3m | RGB |
| GID15 | Semantic Segmentation | 150 | 15 | 3m | RGB |
| CDD | Change Detection | 16000 | - | 0.03m ~ 1m | RGB |
| LEVIRCD | Change Detection | 10192 | - | 0.5m | RGB |
| HRSCD | Change Detection | 291 | - | 0.5m | TIF |
| UCMCaptions | VL Retrieval | 2100 | 21 | 1ft | TIF |
| SydneyCaptions | VL Retrieval | 555 | 7 | 1ft | RGB |
| **RSICD** | VL Retrieval | 10921 | 30 | - | RGB |
| **RSITMD** | VL Retrieval | 4997 | 32 | - | RGB |
| RSVQALR | RSVQA | 772 | - | 10m | TIF |
| RSVQAHR | RSVQA | 10659 | - | 0.15m | TIF |
| RSVQAxBigEarthNet | RSVQA | 590326 | 35 | - | TIF |
| **AIR-SLT** | Semantic Localization | 22 | - | - | RGB |
| University-1652 | Geo-localization | 146580 | 72 | - | RGB |

#### a.5.6 Scene Classification

Scene classification in remote sensing refers to the task of categorizing a specific area or scene captured by satellite or aerial imagery into one of several predefined categories. These categories typically represent different land cover or land use types, such as urban, agricultural, forest, water bodies, and more. The primary objective is to automatically identify and label the type of terrain or environment depicted in the image. Commonly used datasets are EuroSAT [30], AID [101], RESISC45 [16], etc.

#### a.5.7 Object Detection

Object detection in the context of remote sensing involves identifying and locating specific objects or features within satellite or aerial images. It aims to pinpoint the exact spatial location (e.g., bounding-box coordinates) of particular objects within the image and classify them into predefined categories. Common objects of interest include buildings, vehicles, roads, ships, aircraft, agricultural fields, and more. Commonly used datasets are FAIR1M [83], DOTA-V1.0 [100], DIOR-R [17], SODA-A [18], etc.
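The zero-shot classification protocol of a.5.1 reduces to a nearest-neighbor search in embedding space: each class name is expanded with the prompt templates of a.2.3, the resulting text embeddings are averaged, and the class with the highest cosine similarity to the image embedding wins. A toy sketch with made-up 2-D embeddings (real CLIP embeddings are high-dimensional):

```python
def cosine(u, v):
    """Cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def average(embs):
    """Coordinate-wise mean of a list of equal-length vectors."""
    return [sum(x) / len(embs) for x in zip(*embs)]

def zero_shot_classify(img_emb, class_to_prompt_embs):
    """class_to_prompt_embs: {class_name: [one embedding per prompt template]}."""
    scores = {c: cosine(img_emb, average(e)) for c, e in class_to_prompt_embs.items()}
    return max(scores, key=scores.get)
```

Averaging over the prompt ensemble ("a satellite image of ...", "a blurry aerial image of ...", etc.) smooths out wording-specific noise in the text embeddings.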
#### a.5.8 Semantic Segmentation

Semantic segmentation in remote sensing refers to the process of classifying each pixel in a satellite or aerial image into a specific category or class, resulting in a detailed, pixel-wise labeled map. Commonly used datasets are Potsdam [74], iSAID [95], LoveDA [92], GID-15 [86], etc.

#### a.5.9 Change Detection

Change detection in remote sensing refers to the process of identifying differences in the state of an object or phenomenon by observing it at different times. This task is closely related to the segmentation task. Commonly used datasets are CDD [43], LEVIR [12], HRSCD [21], etc.
2304.10667
Observation of self-oscillating supersonic flow across an acoustic horizon in two dimensions
Understanding the dynamics and stability of transonic flows in quantum fluids, especially for those beyond one spatial dimension, is an outstanding challenge, with applications ranging from nonlinear optics and condensed matter to analogue gravity. One intriguing possibility is that a system with a spatially bounded supersonic flow may evolve into a self-oscillating state that periodically emits solitons, in a process originating from the well-known Landau instability. Here, we report observation of self-oscillating supersonic flows in a two-dimensional atomic superfluid. By imposing a local particle sink with strong loss, we induce a convergent radial flow forming an acoustic analogue of a black-hole horizon and an inner horizon around the sink. The observed superflow appears to be modulated by quasi-periodic bursts of superluminal signals. We measure their frequencies and find agreement with numerical simulations of soliton oscillation frequencies within the black-hole horizon. The presented experiment demonstrates a new method for creating supersonic flows in atomic superfluids, which may find applications in quantum simulations of curved spacetime, supersonic turbulence, and self-oscillating dynamics in dissipative many-body systems.
Hikaru Tamura, Sergei Khlebnikov, Cheng-An Chen, Chen-Lung Hung
2023-04-20T22:34:13Z
http://arxiv.org/abs/2304.10667v2
# Observation of self-oscillating supersonic flow across an acoustic horizon in two dimensions

###### Abstract

Understanding the dynamics and stability of transonic flows in quantum fluids is an outstanding challenge, with applications ranging from nonlinear optics and condensed matter to analogue gravity. One intriguing possibility is that a system with a spatially bounded supersonic flow may evolve into a self-oscillating state that periodically emits solitons, in a process originating from the well-known Landau instability. Here, we report observation of self-oscillating supersonic flows in a two-dimensional atomic superfluid. By imposing a local particle sink with strong loss, we induce a convergent radial flow forming an acoustic analogue of a black-hole horizon and an inner horizon around the sink. The observed superflow appears to be modulated by quasi-periodic bursts of superluminal signals. We measure their frequencies and find surprising agreement with numerical simulations of soliton oscillation frequencies within the black-hole horizon. The presented experiment demonstrates a new method for creating supersonic flows in atomic superfluids, which may find applications in quantum simulations of curved spacetime, supersonic turbulence, and self-oscillating dynamics in dissipative many-body systems.

Footnote †: Current address: Atom Computing, Boulder, CO 80301, USA

According to Landau's criterion of superfluidity [1], a superfluid flowing past an obstacle becomes unstable with respect to production of excitations when the velocity exceeds a certain limit. For phonon excitations in a weakly interacting Bose-Einstein condensate (BEC), the critical velocity coincides with the speed of sound. In experiments, much lower critical velocities are often observed, which have been attributed to production of low-energy vortex excitations [2; 3; 4; 5].
For a one-dimensional (1D) superflow, on the other hand, the critical velocity [6] has been found to depend on the obstacle height [7; 8], and it has been suggested that the Landau criterion is violated when the local flow velocity exceeds the local sound speed [8; 9]. More generally, even without an obstacle, one expects that Landau instability plays a role as long as the translational symmetry is broken. An intriguing, yet unexplored example is a convergent two-dimensional (2D) radial flow, where an increasing flow rate at a small enough radius \(r\) grows larger than the local sound speed, and the flow could become unstable. In many related settings in 1D, Landau instability manifests itself through self-periodic emission of solitons [10; 11; 12; 6; 7; 8]. Here, we explore for the first time the stability of a 2D radial flow and report observation of self-periodic oscillations. Intriguingly, it is precisely the transonic flows in a quantum fluid that has been theorized [13; 14] and broadly pursued (see [15; 16; 17; 18; 19; 20] for examples) as a simulator of an elusive phenomenon--Hawking radiation from a black hole [21], which results from quantum fluctuations near the event horizon. An acoustic black-hole (white-hole) horizon marks the transition of a subsonic flow to (from) a supersonic region that low-frequency sound waves cannot escape (re-enter). A bounded supersonic flow, like those in a penetrable barrier or in a convergent 2D flow, is enclosed by a pair of black-hole and white-hole (inner) horizons, reminiscent of two horizons of a charged black hole in Einstein gravity. In the presence of superluminal (faster than sound) short-wave excitations, a pair of acoustic horizons can act like mirrors that form a laser cavity, further amplifying the out-going Hawking radiation [22]. Recent discussions of this effect in 1D include [23; 24; 25; 9]. In contrast to Hawking radiation, soliton and wave emissions due to Landau instability are entirely classical. 
Testing instability of supersonic flow within two horizons [26; 11; 12; 9] has so far remained an open experimental question. For instance, a recent experiment by the Technion group [15] has generated acoustic horizons by sweeping a potential step along an elongated condensate. This so-called waterfall method has led to a successful observation of Hawking radiation of phonons [27; 15] across a horizon that co-moves with the step potential. Phonons emitted following formation of an inner horizon, however, have been attributed not to the Hawking process but to Cherenkov radiation by a moving obstacle [28; 29; 30]. Here, we address the role of Landau instability in a 2D radial flow free from a moving obstacle.

We create a particle sink at the center of an otherwise homogeneous atomic superfluid trapped in an optical box. The sink induces fast atom number loss and results in a large inward radial flow, forming an acoustic black-hole horizon and an inner horizon around the sink. We control the particle loss rate in the sink through three-body recombination [31], a dissipative process during which three atoms collide to form one bound molecule and one energetic atom, both with kinetic energy large enough to escape a shallow optical trap. Three-body recombination loss scales cubically with atomic density as \(\dot{n}=-L_{3}n^{3}\), where \(L_{3}\approx 4.3\times 10^{-2}~{}\mu\text{m}^{4}/s\) is the loss coefficient in our 2D geometry [32] and \(n\) the 2D density. In our ultracold cesium atomic samples, two-body loss is fully suppressed. We use only conservative potentials, in contrast to a related proposal [33] that utilizes localized one-body loss to generate supersonic flows. As illustrated in Figs.
1(a) and 2, a 2D superfluid is initially trapped inside a circular box of potential height \(\approx k_{\text{B}}\times 60\) nK, with a uniform density \(n_{0}\approx 14~{}\mu\text{m}^{-2}\) and a chemical potential \(\mu_{0}=\hbar^{2}n_{0}g/m\approx k_{\text{B}}\times 21\) nK [32]. Here, \(g\approx 0.42\) is the interaction parameter, \(\hbar\) the reduced Planck constant, \(m\) the atomic mass, and \(k_{B}\) the Boltzmann constant. We introduce the sink by ramping on a Gaussian addressing potential of \(1/e^{2}\) radius \(r_{\mathrm{s}}\approx 6.5\ \mu\)m and depth \(V_{0}\approx k_{\mathrm{B}}\times 200\) nK at the box center. The attractive potential gives rise to a much higher peak density \(>90~{}\mu\)m\({}^{-2}\) in the sink, leading to a more than 250-fold increase in the local three-body loss rate and an estimated total loss rate of \(\Gamma=\int_{\mathrm{s}}|\dot{n}|d^{2}r\gtrsim 6.5\times 10^{5}\)s\({}^{-1}\) in the sink region. Assuming fluid continuity at \(r>r_{\mathrm{s}}\), one can estimate the radial velocity as \(v(r)\approx-\Gamma/[2\pi rn(r)]\lesssim-1\) mm/s, indicating that \(v\) can become supersonic outside the sink.

We first perform theoretical analyses on the stability of this dissipation-induced flow. We model the process through a classical 2D Gross-Pitaevskii equation (GPE) with an additional term accounting for the three-body loss [32]. Assuming rotational symmetry, we have found stationary solutions by allowing inflow of atoms at the boundary. We expect such solutions to be close to _quasi_-stationary states in a large sample without an inflow. Specifically, for \(V_{0}\) below a critical value \(V_{\mathrm{cr}}\) (\(\approx k_{\mathrm{B}}\times 88\ \mathrm{nK}\) for the chosen parameter values), we find a ground state solution and a transition ('droplet') state that, similarly to the saddle-point solution [34, 35] of the Ginzburg-Landau theory, can be interpreted as the fluctuation mediating a phase slip.
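The continuity estimate above, together with the local sound speed \(c=\hbar\sqrt{ng}/m\) used in Fig. 1(b), can be sketched numerically. The uniform density assumed outside the sink below is illustrative, not a measured profile:

```python
import numpy as np

hbar = 1.0545718e-34   # J s
m_cs = 2.2069e-25      # mass of cesium-133, kg
g = 0.42               # dimensionless 2D interaction parameter (from the text)
Gamma = 6.5e5          # estimated total loss rate in the sink, atoms/s

def sound_speed(n):
    """Local sound speed c = hbar*sqrt(n*g)/m for a 2D density n in m^-2."""
    return hbar * np.sqrt(n * g) / m_cs

def radial_velocity(r, n):
    """Continuity estimate v(r) = -Gamma / (2*pi*r*n(r)) outside the sink."""
    return -Gamma / (2 * np.pi * r * n)

# assume a roughly uniform density of ~10 atoms/um^2 outside the sink
n_out = 10e12                       # m^-2
r = np.linspace(5e-6, 26e-6, 200)   # radial positions, m
supersonic = r[np.abs(radial_velocity(r, n_out)) > sound_speed(n_out)]
```

With these numbers the flow is supersonic out to roughly \(r\sim 10~\mu\)m, consistent with the \(\lesssim-1\) mm/s estimate quoted in the text.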
At \(V_{0}=V_{\mathrm{cr}}\), the solutions merge and disappear through a saddle-node bifurcation, in parallel to results obtained for conservative flows over obstacles in 1D [7]. A critical solution for experimentally relevant parameter values is shown in Fig. 1(b). Notice that the ground state develops a small supersonic region when approaching the critical point. We have observed such a correlation also for other parameter values. We therefore interpret disappearance of the static solutions at the critical point as a Landau-type instability.

Analogies between our results and those of Refs. [34, 35, 7] suggest that ramping the potential past the critical value will induce a self-oscillation process, analogous to the soliton train in a conservative 1D flow [7] or a phase-slip center [36] in a superconducting wire. This is supported by numerical integration of a time-dependent GPE [32]. An example is shown in Figs. 1(c-f). After \(V_{0}\) passes through a critical point, supersonic flow forms; see (d, f) for regions with \(\Delta=c+v<0\), where \(v\) (\(c\)) is the local flow velocity (sound speed). Coincidentally, a train of ring-shaped dark solitons [37] (3 clearly visible in this example) is emitted toward the sink center, with time separation \(\lesssim 3\) ms. They appear as left-moving dark dips in the radial plots (c, e), forcing oscillations in the supersonic flow. We find that this process is insensitive to the potential ramp speed and that soliton emission always accompanies formation of supersonic flow near the critical point. Once initiated, a ring soliton's radial motion triggers a multiplication process, a remarkable effect absent in 1D black-hole lasers. We point out that a shrinking ring dark soliton cannot stop at the 'singularity' at \(r=0\) due to conservation of energy [37].
A soliton first transmits through the inner horizon, reaches an inner turning point at \(r\geq 0\), and then expands radially back to the supersonic region [right-moving dark dips in Fig. 1(e)], a process made possible by the superluminal dispersion. Each expanding soliton would split into more solitons--deeper ones with slower radial speeds cannot expand against the supersonic flow and would shrink back to a small radius. Shallower ones with faster speeds can transmit through the outer horizon and become out-going radiation. Those trapped within the black-hole horizon continue to oscillate and multiply, and the system behaves like an amplifying, self-periodic 'soliton laser' mediating oscillating supersonic flows.

Figure 1: Supersonic flow induced by a particle sink. (a) Schematics of a Gaussian attractive potential (depth \(V_{0}\)) addressing a 2D atomic superfluid trapped in a circular box. The high density region with a large three-body recombination loss rate serves as a particle sink, inducing strong radial flow (velocity \(v<0\)). Dotted circles illustrate an acoustic black-hole horizon and an inner horizon. (b) Stationary solutions of \(v\) (black) and local sound speed \(c=\hbar\sqrt{ng}/m\) (red) versus radial position \(r\) in an effectively infinite system at a critical depth \(V_{0}=V_{\mathrm{cr}}\). Shaded region marks the supersonic flow \(|v(r)|>c(r)\). For a finite system, time-dependent simulations suggest that ramping on a sink potential beyond the critical depth triggers soliton emission. (c) Density profiles at 0, 4, and 8 ms (dark to light gray curves) right after supersonic flow forms, calculated using a slow ramp of \(V_{0}\) (\(\Delta t=60\ \mathrm{ms}\) as illustrated in the inset of (e)). (d) Sound (red) and flow velocity (gray) profiles corresponding to those in (c). (e-f) Full time evolution of \(n(r)\) and \(\Delta(r)=c(r)+v(r)\), showing initial soliton emission near the critical depth (marked by dashed lines), radial oscillations of solitons, and multiplication of soliton number following each oscillation cycle. This dynamics results in a self-oscillating supersonic flow.

In the actual experiment, we have adopted a faster ramp speed (\(\Delta t=5\ \mathrm{ms}\)) to be able to observe the instability before losing many atoms. As shown in the in-situ images in Fig. 2(a) and averaged radial density plots in (b), shortly after the addressing potential is ramped on, the atomic density slightly depletes at \(r\lesssim 15~{}\mu\)m outside the sink, showing a strong tendency for the superfluid to flow inwards. As time increases, the density continues to decrease, indicating a continuous flow into the sink region even after the peak density has saturated. Using the radial density profiles, we evaluate the local sound speed \(c(r)\), which is nearly uniform and gradually decreases with time to \(<1\) mm/s as shown in (c), except within the sink where the density is high. In Fig. 2(d), we plot the total atom number \(N(t)\), excluding the central region \(r\leq r_{\rm s}\). Decay of \(N(t)\) is consistent with atoms flowing into the sink to compensate for the loss of atoms due to three-body recombination, as described by a simple theory curve (red dashed line) in Fig. 2(d) [32]. The overall decay rate \(\gamma\approx 27~{}\)s\({}^{-1}\) is determined by a fit. Due to finite resolution of our imaging system (\(\sim 1~{}\mu\)m), we cannot clearly identify ring dark solitons in situ, as the characteristic width of their density dip is \(\xi\approx 1/\sqrt{ng}=0.2-0.4~{}\mu\)m. We also note that, in a superfluid with preexisting density noise [38] or imperfect rotational symmetry [39], a ring soliton suffers strong snaking instability [40] and can quickly decay into vortices [41] (also \(\sim\xi\) wide) that are challenging to measure in situ.
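The quoted dip width can be checked directly from \(\xi\approx 1/\sqrt{ng}\); a minimal sketch, using the densities quoted in the text:

```python
import numpy as np

def healing_length(n, g=0.42):
    """Healing length xi = 1/sqrt(n*g) in micrometers,
    for a 2D density n given in atoms/um^2."""
    return 1.0 / np.sqrt(n * g)

xi_bulk = healing_length(14.0)   # bulk density ~14 um^-2  -> ~0.41 um
xi_sink = healing_length(90.0)   # peak sink density >90 um^-2 -> ~0.16 um
```

Both values sit well below the \(\sim 1~\mu\)m imaging resolution, which is why individual solitons and vortices are hard to resolve in situ.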
This decay has been observed in GPE simulations as well. Generation and decay of ring dark solitons have been reported in our system but with a different experimental setting [38]. The decay of solitons into vortices could also lead to turbulence [42; 43; 44] in the supersonic flow.

We can nevertheless identify key signatures of Landau instability and self-oscillation in the measured flow. To extract this information, we relate the local radial flow velocity to the rate of change of total atom number in an annular region bounded by \((r,r_{\infty})\) via the expression \[v_{\rm exp}(r,t)=\frac{1}{2\pi rn(r,t)}\frac{dN(r,t)}{dt}, \tag{1}\] where \(N(r,t)=\int_{r}^{r_{\infty}}n(r^{\prime},t)d^{2}r^{\prime}\) and \(r_{\infty}\approx 40~{}\mu\)m extends well beyond the edge of the box trap. Figure 2(e) plots the flow velocity evaluated at various times, showing mostly inward flow \(v_{\rm exp}(r)<0\) everywhere for \(r\lesssim 26~{}\mu\)m. The amplitude \(|v_{\rm exp}(r)|\) increases with decreasing radial position, reaching a maximum at around \(r\approx 8~{}\mu\)m. It then greatly decreases when approaching the sink region where the density becomes high, in qualitative agreement with the flow analyses in Fig. 1. Time dependence of the flow, on the other hand, shows an intriguing oscillatory behavior that we now discuss.

To clearly see the evolution of the superflow, we plot the full spatial-temporal dependence of \(\Delta_{\rm exp}(r,t)=c(r,t)+v_{\rm exp}(r,t)\) as shown in Fig. 3(a). Supersonic flow initially appears within a radial interval \((r_{\rm in},r_{\rm out})\approx(5,10)~{}\mu\)m, enlarging to \(\approx(5,15)~{}\mu\)m at later times. This can be viewed as a supersonic flow cavity bounded by a black-hole horizon at \(r=r_{\rm out}\) and an inner white-hole horizon at \(r=r_{\rm in}\). For comparison, we also evaluate the flow velocity in a GPE calculation, \(v_{\rm GPE}\), using the same Eq. (1). The result is shown in Fig. 3(b-c).
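Eq. (1) can be implemented directly on gridded density profiles \(n(r,t)\); the sketch below is an illustration of the estimator, not the paper's analysis code (array shapes and grids are assumptions):

```python
import numpy as np

def flow_velocity(r, t, n):
    """Estimate v_exp(r,t) = [dN(r,t)/dt] / (2*pi*r*n(r,t)) as in Eq. (1),
    where N(r,t) is the atom number in the annulus (r, r_max).
    r: (Nr,) radii; t: (Nt,) times; n: (Nt, Nr) density profiles."""
    f = n * r                                         # integrand n(r')*r'
    seg = 0.5 * (f[:, 1:] + f[:, :-1]) * np.diff(r)   # trapezoid segments
    N = np.zeros_like(n)
    # reverse cumulative sum gives the integral from r_i out to r_max
    N[:, :-1] = np.cumsum(seg[:, ::-1], axis=1)[:, ::-1]
    N *= 2 * np.pi                                    # N(r_i, t_j)
    dNdt = np.gradient(N, t, axis=0)                  # finite-difference d/dt
    return dNdt / (2 * np.pi * r * n)
```

For a spatially uniform density decaying as \(n_0 e^{-\gamma t}\), this estimator reproduces the analytic result \(v=-\gamma(r_\infty^2-r^2)/(2r)\), a useful consistency check.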
The local flow velocity and self-oscillation signatures are nearly the same as those found in \(v(r,t)\), the radial velocity computed directly from the GPE wavefunction. There is, however, a striking difference between the experiment and the simulation results. In the experiment, at around \(t\approx 10\) ms, shortly after the inward flow becomes supersonic, a sudden change to an apparent outward flow is observed (\(v_{\rm exp}>0\)); see also Fig. 3(c). At larger times, \(\Delta_{\rm exp}\) appears to display quasi-periodic short pulses with a primary time period of \(t_{\rm p}\approx 4\) ms. This pulsation behavior stems from the time dependence of the flow velocity, as the sound speed is monotonically decreasing in time. The pulse period is also longer than that of possible collective modes in the sink region, if any is excited. Most surprisingly, these pulses appear to propagate over the entire sample within a small time \(\lesssim 2\) ms, which is much shorter than the time period \(\gtrsim 20\) ms required for sound waves to traverse the sample.

The apparent short pulses could be due to outbursts of atoms from the sink region, traveling at superluminal speeds greater than 1 cm/s (kinetic energy \(>k_{\rm B}\times 1~{}\mu\)K). These energetic atoms may come from three-body recombination [31; 45], with each atom carrying away \(2/3\) of the binding energy of the bound molecular state. The molecular state closest to the continuum, the \(6s\) state of Cs\({}_{2}\) [46], has a binding energy \(E_{\rm b}\approx k_{\rm B}\times 20~{}\mu\)K, thus giving an estimated out-going atom velocity of \(\approx 4\) cm/s along random directions in 3D. Some of these atoms will be imaged in our apparatus. As the recombination loss occurs primarily in the sink region, we expect it to be modulated by self-oscillations. These effects are not captured in our classical GPE calculations. Another source of energetic atoms, although likely much less prominent, may be the dynamical Casimir effect [47; 48; 49], wherein the motion of solitonic defects results in rapid density perturbations in the sink region, possibly capable of exciting short-scale fluctuations (\(\lesssim\xi\)) with superluminal speeds comparable to \(2\pi\hbar/m\xi\lesssim 2\) cm/s.

Figure 2: Realization of radial supersonic flow in a 2D superfluid. (a) Single-shot in-situ density images measured before and at the indicated time \(t\) after the sink is fully ramped on at \(t=5\) ms. Dotted (dashed) circles mark \(r=26~{}\mu\)m (\(5~{}\mu\)m) radius. (b) Radial density profiles \(n(r)\) measured at \(t=7-63\) ms (dark to light gray circles) with a time interval of 8 ms. Initial density profile (blue circles) is plotted for comparison. (c) Local sound speed \(c(r)\) evaluated using profiles in (b). (d) Evolution of integrated atom number \(N(t)\) with (filled circles) and without (open circles) the sink, agreeing with a model assuming three-body recombination loss (red curves). Blue dashed curve is a simple exponential fit, giving the total atom decay rate \(\gamma\). (e) Radial flow velocity evaluated using Eq. (1).

To confirm that these fast pulses are indeed synchronized to possible soliton motion within the horizon and are not from other systematic effects, we calculate the Fourier spectrum of \(v_{\rm exp}(t)\) and compare it with the Fourier spectra of the GPE results averaged over the sink region [32]. As shown in Fig. 3(d), the most prominent frequency peak observed in the experiment is at \(f_{2}\approx 225\) Hz \(\sim t_{\rm p}^{-1}\). This is in very good agreement with the GPE result. We have verified that \(f_{2}\) indeed corresponds to the radial oscillation frequency of tightly trapped solitons, which are reflected upon re-entering the supersonic region. Other frequency peaks appear in the Fourier spectra as well. Notably, a lower frequency peak appears at \(f_{1}\approx 103\) Hz, and a higher tone at around \(f_{3}\approx 310\) Hz.
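A peak extraction of this kind can be sketched with a discrete Fourier transform; the synthetic trace below simply mixes tones at the reported \(f_1\), \(f_2\), \(f_3\) and is not the measured data:

```python
import numpy as np

def peak_frequencies(signal, dt, n_peaks=3):
    """Return the n_peaks strongest frequencies in the Fourier
    spectrum of a uniformly sampled time series (DC bin excluded)."""
    freqs = np.fft.rfftfreq(len(signal), dt)
    amp = np.abs(np.fft.rfft(signal))
    amp[0] = 0.0                      # drop the DC offset
    order = np.argsort(amp)[::-1]     # strongest bins first
    return freqs[order[:n_peaks]]

dt = 1e-3                             # 1 ms sampling, as in Fig. 3(a)
t = np.arange(0, 1.0, dt)
v_synth = (0.3 * np.sin(2 * np.pi * 103 * t)
           + 1.0 * np.sin(2 * np.pi * 225 * t)
           + 0.2 * np.sin(2 * np.pi * 310 * t))
peaks = peak_frequencies(v_synth, dt)   # strongest first: 225, 103, 310 Hz
```

In practice the experimental trace is short and noisy, so peak positions carry a frequency resolution set by the inverse record length.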
We note that the time separation of soliton splitting is shorter than the oscillation time scale \(t_{\rm p}\), and that shallower solitons nearly escaping the horizon could have a longer oscillation period. While these effects could account for the observed higher and lower frequencies, we also point out that, in a nonlinear system, a background oscillating at a frequency \(f_{2}\) may amplify fluctuations with frequencies at integer multiples of \(f_{2}/2\) by parametric resonance.

Self-oscillations in this 2D geometry appear to be robust. We have increased the atomic interaction (top panel of Fig. 4(b)), and have separately adopted a shallower depth \(V_{0}\approx k_{\rm B}\times 125\) nK and a slightly narrower sink (bottom panel). We have observed self-oscillating supersonic flows in all these samples. Multitone frequencies can be identified in their Fourier spectra as well. In general, the oscillation frequencies should depend on the radius of the horizon, the supersonic flow speed, and the local sound speed. We summarize these results using a single parameter, the total atom number decay rate \(\gamma\), which characterizes the overall dissipation rate in the sink. Figure 4 plots the measurement results, where we find the spectra to show good resemblance with GPE calculations.

In summary, by introducing a stationary, conservative local addressing potential, we observe self-induced supersonic flow initiated solely by fast local three-body recombination. Through observing periodic emission of superluminal signals, which we attribute to strong nonlinear effects in the sink, we obtain evidence that the system enters a self-oscillating state. Our setup may be considered as a classical analogue of a black-hole laser, with the black-hole horizon and the 'singularity' at the sink center acting as highly reflective cavity mirrors, in distinction from 1D black-hole lasers.
Our experiment shows a way to generate complex flow patterns and supersonic turbulence in atomic superfluids, through projecting arbitrary sink potentials to initiate localized dissipation processes.

Figure 3: Observation of self-oscillating supersonic flow. (a) Measured time evolution (in 1 ms steps) of \(\Delta_{\rm exp}(r)\), showing supersonic flow (\(\Delta_{\rm exp}<0\)) at \(r>r_{\rm in}\approx 5\)\(\mu\)m and \(r<r_{\rm out}\), where \(r_{\rm out}\gtrsim 10\)\(\mu\)m grows slowly with time. Quasi-periodic pulses of \(\Delta_{\rm exp}>0\) are clearly visible. (b) Calculated \(\Delta_{\rm GPE}(r,t)\) based on flow velocity \(v_{\rm GPE}\) evaluated using Eq. (1). (c) Measured (top panel) and calculated (gray curve, bottom panel) flow velocities at positions as shown in the dashed lines in (a) and (b), respectively. A 1 ms running-average (black curve) is plotted for comparison with experiment. Error bars represent measurement uncertainty. (d) Normalized Fourier spectra of \(v_{\rm exp}\) (black circles), \(v_{\rm GPE}\) (dotted curve), and \(v\) (dashed curve). Vertical dashed lines mark the four lowest frequency peaks in the experimental data.

Figure 4: Self-oscillation frequencies. (a) Peak frequencies \(f_{i}\) (\(i=1,2,3,4\), from low to high) in Fourier spectra of \(v_{\rm exp}\) from samples with two different depths \(V_{0}\approx k_{\rm B}\times 200\) nK (filled symbols) and 125 nK (open symbols), shown in (b) and Fig. 3(d), respectively, versus measured atom number decay rate \(\gamma\). Peak frequencies from the spectra of GPE results for \(v\) (crosses) are plotted for comparison. (b) Qualitative resemblance between the experiment (symbols) and GPE spectra (dashed curves). For (a-b), relevant experimental parameters are \((g,n_{0},r_{s})\approx(0.42,14,6.5)\) (circles), \((0.48,13,6.5)\) (squares), \((0.42,25,5)\) (triangles), and \((0.45,20,5)\) (diamonds); lengths are in \(\mu\)m. Error bars reflect measurement uncertainty.
Our work calls for future studies on self-oscillations in dissipative many-body systems, and can potentially find new applications in quantum simulations of curved spacetime [14; 50].

###### Acknowledgements.

We thank Chris Greene, Martin Kruczenski, and Qi Zhou for discussions. This work is supported by the W. M. Keck Foundation, the NSF (Grant # PHY-1848316), and the DOE QuantISED program through the Fermilab Quantum Consortium.
2305.18112
The European Muon Collaboration effect from short-range correlated nucleons in a nucleon swelling model
The relation between the nuclear EMC effect and the nucleon-nucleon short-range correlation is a hot topic in high-energy nuclear physics, ever since a peculiar linear correlation between these two phenomena was discovered. In this paper, the contribution to the nuclear EMC effect arising from the short-range correlated nucleons is examined in a nucleon-swelling model. We find that the structure modifications of the N-N SRC nucleons reproduce more or less the measured EMC ratios of light nuclei, while they are not enough to explain the measured EMC ratios of heavy nuclei. We speculate that the hypothesis of a causal connection between SRC and the EMC effect is not exact, or the universality of the inner structure of the SRC nucleon is violated noticeably from light to heavy nuclei, or there are other origins for the EMC effect.
Na-Na Ma, Tao-Feng Wang, Rong Wang
2023-05-29T14:26:43Z
http://arxiv.org/abs/2305.18112v2
# The European Muon Collaboration effect from short-range correlated nucleons in a nucleon swelling model

###### Abstract

The relation between the nuclear EMC effect and the nucleon-nucleon short-range correlation is a hot topic in high-energy nuclear physics, ever since a peculiar linear correlation between these two phenomena was discovered. In this paper, the contribution to the nuclear EMC effect arising from the short-range correlated nucleons is examined in a nucleon-swelling model. We find that the structure modifications of the N-N SRC nucleons reproduce more or less the measured EMC ratios of light nuclei, while they are not enough to explain the measured EMC ratios of heavy nuclei. We speculate that the hypothesis of a causal connection between SRC and the EMC effect is not exact, or the universality of the inner structure of the SRC nucleon is violated noticeably from light to heavy nuclei, or the mean-field nucleons are also modified.

## I Introduction

The nuclear EMC effect observed in lepton-nucleus deep inelastic scattering (DIS) [1; 2; 3] proves that the quark degrees of freedom inside the nucleon are influenced by the surrounding nucleons (cold nuclear medium). This phenomenon implies that the nuclear force between nucleons emerges fundamentally from the strong interaction between the quarks inside different nucleons. Before the EMC experiment, the quark degrees of freedom were thought to be frozen and confined in the nucleon, and the nuclear force, at the scale of the nuclear binding energy, was not expected to influence the nucleon inner structure to a sizeable extent. The effect attracted a lot of interest soon after its discovery, and it has remained an intriguing puzzle in high-energy nuclear physics for decades [4; 5; 6; 7; 8]. Understanding the mechanism of the EMC effect from quantum chromodynamics (QCD) remains quite challenging [9; 10].
The nucleon-nucleon short-range correlation (N-N SRC) is one microscopic and quite unusual structure inside an atomic nucleus [11; 12; 13; 14; 15]. Different from the mean-field description of the nuclear interaction and the single-nucleon motion given by the nuclear shell model, the N-N SRC is a special close-proximity structure with a nucleon-nucleon distance of about or even smaller than 1 fm [11; 14]. In an N-N SRC pair, the nucleon-nucleon interaction can reach the repulsive core of the nuclear force. Therefore a nucleon struck out from an N-N SRC pair can have momentum far higher than the nuclear Fermi momentum. Thanks to the clean probe of high-energy electrons, the N-N SRC is observed in inclusive and exclusive processes, identified by the high nucleon momentum and the angular correlation between the high-momentum nucleon partners [16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. Though the short-range correlated nucleons interact extensively and strongly, they are a minority in the nucleus compared to the mean-field nucleons. In heavy nuclei, only about 20% of the nucleons are in the N-N SRC configuration [19; 20; 21; 22].

There is no doubt that nucleons in close proximity interact with each other strongly. Their inner structures can therefore be greatly modified. Naively, the N-N SRC is thus thought to be an important source of the EMC effect. Indeed, with the finding of a linear correlation between the magnitude of the EMC effect and the relative number of N-N SRC pairs [26; 27], more and more physicists suspect that the strong modification of SRC nucleons is the primary origin of the EMC effect. Theoretically, the linear correlation between the EMC effect and the N-N SRC is explained with the scale separation phenomenon [28]. Experimentally, the CLAS collaboration tested the SRC-driven model for the nuclear EMC effect, with simultaneous measurements of DIS and quasi-elastic inclusive processes on the deuteron and some heavier nuclei.
They extracted the modification function of the structure function of the SRC nucleon and found that this modification function is more or less universal for different nuclei [25]. They thus propose that the EMC effect is not a traditional static modification of all the independent nucleons, but a strong dynamical effect arising when two strongly interacting nucleons fluctuate into a temporary high-local-density SRC pair. However, different people have different opinions in explaining the correlation between the EMC effect and the N-N SRC. The relationship between these two phenomena was recently examined in detail with a convolution model which incorporates the nuclear binding and the nucleon off-shell effects [29]. The authors argue that their analysis does not support the hypothesis that there is a causal connection between SRC nucleons and the EMC effect. The EMC effects of the low-momentum nucleons and the high-momentum nucleons are studied separately. They find that the Fermi motion effect overwhelms the off-shell effect for the SRC nucleons with various models for the off-shell correction. Thus they conclude that the SRC nucleons do not give a dominant part of the observed EMC effect [29], compared with the mean-field nucleons. In our previous paper [30], we reached a similar conclusion: modifications of the N-N SRC nucleons alone are not enough to reproduce the measured EMC effect, given our current knowledge about the number of SRC pairs in nuclei [17; 20; 31]. In our previous analysis, the \(x\)-rescaling model was applied for the off-shellness correction of the SRC nucleon, and the effective mass of the SRC nucleon was taken from a recent analysis [31]. In this paper, the hypothesis that the nuclear EMC effect comes entirely from the N-N SRC pairs is examined further at a more fundamental level.
The conventional nuclear models usually take into account the reduced nucleon mass in medium or the nucleon virtuality for the EMC effect, leading to the \(x\)-rescaling models [32; 33; 34; 35; 36; 37] and the off-shellness corrections [38; 39; 40; 41; 42]. Since the EMC effect is measured in the DIS process, it should be explained at the quark level instead of the nucleon level. The QCD-inspired models explaining the EMC effect usually require an increase of the quark confinement size, or a simple picture of nucleon swelling. As the nucleons in an SRC pair are so close to each other that they form a high-local-density cluster, the quarks inside could be deconfined. In the hadron bag picture, we can imagine the two nucleon bags merging into a big di-nucleon bag. If the quarks can move freely from one nucleon to the other in the SRC pair, then the confinement space of the quark could be enlarged by as much as a factor of two. Within the nucleon swelling model, the quark distributions inside the SRC nucleon can be calculated quantitatively [43; 44]. Hence the contribution of the SRC nucleons to the EMC effect can be evaluated.

The organization of the paper is as follows. The hypothesis that the nuclear EMC effect arises dominantly from the N-N SRC pairs and the related formulas are given in Sec. II. The nucleon swelling model for calculating the structure function of the SRC nucleon is discussed in Sec. III. The results of the SRC-driven model for the EMC effect are shown in Sec. IV. A brief summary of the analysis is given in Sec. V.

## II Nuclear EMC effect from N-N SRC

A haunting question we try to answer in this work is whether the N-N SRC is wholly responsible for the nuclear EMC effect. We therefore employ the so-called "SRC-driven model" for the EMC effect. That means: the inner structure of the short-range correlated nucleons is substantially modified, while the inner structure of the nucleons in the mean field is nearly unmodified.
The N-N SRC is the only (or dominant) source of the EMC effect, and the long-range nuclear interaction has no influence on the short-distance structure in the nucleon. Many experiments have revealed that the majority of N-N SRC pairs are proton-neutron correlated pairs [19; 20; 21; 22]. This isophobic property is consistent with theoretical calculations based on the assumption that the medium-range tensor force is primarily responsible for the formation of N-N SRC pairs [45; 46; 47; 24]. In this paper, we study the model which assumes that the N-N SRC is the primary source of the EMC effect. For the simplicity of model calculations, we ignore the p-p and n-n SRC pairs, since together they are a small minority (\(\lesssim\)10%) compared to the p-n SRC pairs. Thus the per-nucleon nuclear structure function is given by, \[\begin{split} F_{2}^{\rm A}=&\left[n_{\rm SRC}^{ \rm A}F_{2}^{\rm p~{}in~{}SRC}+n_{\rm SRC}^{\rm A}F_{2}^{\rm n~{}in~{}SRC} \right.\\ &\left.+(Z-n_{\rm SRC}^{\rm A})F_{2}^{\rm p}+(A-Z-n_{\rm SRC}^{ \rm A})F_{2}^{\rm n}\right]/A,\end{split} \tag{1}\] in which \(n_{\rm SRC}^{\rm A}\) is the number of p-n SRC pairs in nucleus \(A\), \(F_{2}^{\rm p~{}in~{}SRC}\) and \(F_{2}^{\rm n~{}in~{}SRC}\) are the modified nucleon structure functions in the SRC pair, and \(F_{2}^{\rm p}\) and \(F_{2}^{\rm n}\) are the free nucleon structure functions. \(Z\), \(N\) and \(A\) are respectively the proton number, the neutron number and the mass number defining a particular nucleus. Note that the universality of the p-n SRC pair in different nuclei is assumed for Eq. (1). The N-N SRC is a compact and short-lived state arising from fluctuations of the many-body dynamics of the nuclear force. Formation and dissociation of N-N SRC pairs occur continually inside the nucleus. Thus in Eq. (1), the number of SRC pairs \(n_{\rm SRC}\) should be viewed as a mean value in the measurements.
Take the deuteron as an example: the mean number of p-n SRC pairs in the deuteron is much less than one (\(n_{\rm SRC}^{\rm d}\ll 1\)), since the N-N SRC configuration occurs only occasionally. For the SRC-driven model, the number of SRC pairs in a nucleus \(A\) is an indispensable parameter. In experiment, the relative number of N-N SRC pairs is characterized by the SRC scaling ratio \(a_{2}\) in the region \(1.4\lesssim x_{B}\lesssim 1.9\). The number of SRC pairs \(n_{\rm SRC}^{\rm A}\) in nucleus \(A\) is then computed from the measured \(a_{2}\) and the number of SRC pairs \(n_{\rm SRC}^{\rm d}\) in the deuteron, which is written as, \[n_{\rm SRC}^{\rm A}=[A\times a_{2}(A)\times n_{\rm SRC}^{\rm d}]/2. \tag{2}\] The SRC scaling ratio \(a_{2}\) is measured using the high-energy electron inclusive scattering process off nuclear targets [17; 18; 25]. The number of SRC pairs in the deuteron has already been determined in our previous analysis [31]. The other important input for the model of the SRC-induced EMC effect is the modified structure function of the SRC nucleon. The structure function at intermediate \(x_{B}\) is closely related to the valence quark distributions. A model derived from the expansion of quark confinement is employed to estimate the quark distributions and the structure function of the SRC nucleon. We discuss this model in detail in the following section.

## III Swelling effect for SRC nucleons

How we compute the structure functions of the free nucleon and the SRC nucleon is presented in this section. The structure function \(F_{2}\) is directly connected to the parton distribution functions (PDFs). In the calculations, we take the dynamical PDFs, which are generated from the DGLAP evolution equations [48; 49; 50] with the input of three valence quark distributions at an extremely low \(Q_{0}^{2}\).
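As a numerical illustration of the counting in Eqs. (1) and (2) of Sec. II, the sketch below uses placeholder values for \(a_{2}\), \(n_{\rm SRC}^{\rm d}\) and the structure functions; they are not the fitted numbers of Refs. [17; 20; 31]:

```python
def n_src(A, a2, n_src_d):
    """Eq. (2): mean number of p-n SRC pairs in nucleus A, from the
    SRC scaling ratio a2 and the deuteron pair number n_src_d."""
    return A * a2 * n_src_d / 2.0

def f2_per_nucleon(A, Z, pairs, F2p, F2n, F2p_src, F2n_src):
    """Eq. (1): per-nucleon structure function, with `pairs` p-n SRC
    pairs each contributing one modified proton and one modified neutron."""
    return (pairs * (F2p_src + F2n_src)
            + (Z - pairs) * F2p
            + (A - Z - pairs) * F2n) / A

# placeholder inputs for carbon-12 at one x point (illustrative only)
pairs_C12 = n_src(A=12, a2=4.5, n_src_d=0.02)
ratio = (f2_per_nucleon(12, 6, pairs_C12, 0.30, 0.25, 0.28, 0.23)
         / f2_per_nucleon(12, 6, 0.0, 0.30, 0.25, 0.28, 0.23))
```

With \(a_{2}({\rm d})=1\) by definition, Eq. (2) correctly reduces to \(n_{\rm SRC}^{\rm d}\) for the deuteron, and any suppression of the modified structure functions pushes the EMC-type ratio below one.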
The initial three valence quark distributions at \(Q_{0}^{2}\) of the free nucleon are taken from an estimation of the maximum entropy method [51], which produces a structure function consistent with the experimental data at high \(Q^{2}\). In the nucleon swelling model, all the nuclear modifications are reflected in the increase of the quark confinement space. Therefore, to evaluate the structure function of the SRC nucleon, we need only modify the initial three valence quark distributions according to the swelling of the SRC nucleon.

The enlargement of the confinement size of the SRC nucleon can be understood in three different pictures. (1) In the hadron bag model, the high local density reduces the pressure of the vacuum in which the nucleon is embedded, thus resulting in a bigger size of the nucleon bag. (2) If the quarks can exchange between the nucleons in the SRC pair, then the confinement space of the quark is increased. (3) The enlargement of the confinement size is also vividly illustrated with the multiquark cluster model [52; 53; 54; 55; 56]. When two nucleons form a six-quark cluster, the confinement space of this six-quark cluster is naturally larger than that of the three-quark cluster (the nucleon) if the quark density is the same. Moreover, the calculations of the Quark-Meson Coupling (QMC) model [57; 58; 59] and the nuclear potential model [60; 61; 62] also give a small deconfinement of the quarks in nuclei. G. Miller analyzed elastic electron-nucleus scattering under the Ward-Takahashi identity, and found that, with input from lattice QCD, the off-shell nucleon expands in size [63].

There are two ways to apply the nucleon swelling effect to the quark distributions. (1) A bigger nucleon is equivalent to a higher resolution power of the photon probe in DIS. In the language of QCD evolution, a \(Q^{2}\)-rescaling [64; 65; 66; 67; 68] (a higher resolution power) is carried out to interpret the effect.
(2) Due to the change of the quark confinement space, the quark momentum distribution also varies according to the Heisenberg uncertainty principle. If the uncertainty of the spatial distribution becomes larger, the uncertainty of the valence quark momentum distribution shrinks accordingly [43; 44]. The uncertainty of a random variable is quantified by the width of its distribution, which can be taken as the standard deviation. Thus the widths of the valence distributions are given by, \[\begin{split}&\sigma(x_{u})=\sqrt{<x_{u}^{2}>-<x_{u}>^{2}},\\ &\sigma(x_{d})=\sqrt{<x_{d}^{2}>-<x_{d}>^{2}},\\ &<x_{u}>=\int_{0}^{1}x\frac{u_{v}(x,Q_{0}^{2})}{2}dx,\\ &<x_{d}>=\int_{0}^{1}xd_{v}(x,Q_{0}^{2})dx,\\ &<x_{u}^{2}>=\int_{0}^{1}x^{2}\frac{u_{v}(x,Q_{0}^{2})}{2}dx,\\ &<x_{d}^{2}>=\int_{0}^{1}x^{2}d_{v}(x,Q_{0}^{2})dx.\\ \end{split} \tag{3}\] In this work, we apply the second method to evaluate the PDFs and the structure function of the SRC nucleon. The quark confinement space of the six-quark bag from the N-N SRC is twice that of the nucleon bag, assuming that the quark density is the same. If we assume that quarks are exchanged completely freely between the two nucleons in the SRC, the swelling factor of the quark confinement space can also be as large as two. Therefore, in this work we assume that the quark confinement space in the SRC pair is twice that in the free nucleon; the quark confinement radius in the SRC pair is then \(2^{1/3}\) times that in the free nucleon. According to the Heisenberg uncertainty principle, the width of the valence quark distribution in the SRC nucleon is reduced by a factor of \(2^{-1/3}\), which is written as, \[\frac{\sigma(x_{q}^{\text{SRC N}})}{\sigma(x_{q}^{\text{free N}})}=\left(\frac{1}{ 2}\right)^{1/3},\ \ (q=u,d). \tag{4}\] In the calculation, the valence quark distributions of the free nucleon and the SRC nucleon are all parameterized as the Beta function \(Ax^{B}(1-x)^{C}\). The momentum sum rule and the valence sum rules are also required at \(Q_{0}^{2}\), which are written as, \[\int_{0}^{1}x[u_{v}(x,Q_{0}^{2})+d_{v}(x,Q_{0}^{2})]dx =1,\] \[\int_{0}^{1}u_{v}(x,Q_{0}^{2})dx =2, \tag{5}\] \[\int_{0}^{1}d_{v}(x,Q_{0}^{2})dx =1.\] The benchmark valence quark distributions of the free nucleon are taken from Ref. [51]. The valence quark distributions of the SRC nucleon are solved with Eq. (3) and Eq. (4). The input valence quark distributions at \(Q_{0}^{2}\) (\(\sim\) 0.1 GeV\({}^{2}\)) of the free proton and the SRC proton are shown in Fig. 1. One sees that the nuclear modification at \(Q_{0}^{2}\) on the valence quark distributions is strong for the SRC nucleon. With the obtained valence quark distributions at \(Q_{0}^{2}\), the PDFs and the structure function at high \(Q^{2}\) are given by the DGLAP evolution equations [48; 49; 50]. The initial scale \(Q_{0}^{2}\) and the strong coupling \(\alpha_{s}\) are taken from Refs. [51; 69]. The parton-parton recombination correction [70; 71] is included in order to slow down the fast splitting process caused by the large \(\alpha_{s}\) at low \(Q^{2}\). For the calculations of the neutron PDFs and structure function, the isospin symmetry of the nucleon is assumed, i.e., \(u^{n}=d^{p}\) and \(d^{n}=u^{p}\).

Figure 1: (color online) The upper panel shows the valence quark distributions of the free proton and the SRC proton at the initial scale \(Q_{0}^{2}\). The lower panel shows the nuclear modification ratios of the valence quark distributions at the initial scale \(Q_{0}^{2}\). The change of the width of the valence quark distribution in the SRC proton is made according to the Heisenberg uncertainty principle and the swelling of the quark confinement space.

## IV Results and Discussions The predicted EMC ratios based on the assumptions of the SRC-driven EMC effect and the SRC nucleon swelling model are shown in Fig. 2 and Fig. 3, for light nuclei and heavy nuclei respectively.
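Since the valence distributions are parameterized as Beta functions \(Ax^{B}(1-x)^{C}\), the moments entering Eq. (3) have closed forms in terms of the Euler Beta function, which makes the width constraint of Eq. (4) easy to evaluate numerically. The sketch below uses assumed shape parameters for illustration only; they are not the fitted values of this work:

```python
from math import gamma, sqrt

def beta_fn(a, b):
    """Euler Beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def moment(n, B, C):
    """n-th moment <x^n> of the normalized Beta-form density x^B (1-x)^C."""
    return beta_fn(B + n + 1.0, C + 1.0) / beta_fn(B + 1.0, C + 1.0)

def width(B, C):
    """Width sigma(x) = sqrt(<x^2> - <x>^2), as in Eq. (3)."""
    return sqrt(moment(2, B, C) - moment(1, B, C) ** 2)

# Illustrative (assumed) shape parameters for a free-nucleon valence density:
sigma_free = width(0.5, 2.0)
# Target width for the SRC nucleon, reduced per Eq. (4):
sigma_src_target = sigma_free * 0.5 ** (1.0 / 3.0)
```

In the actual fit, the shape parameters \(B\) and \(C\) of the SRC nucleon would be adjusted, subject to the sum rules of Eq. (5), until the computed width matches this target.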
The number of SRC pairs in the deuteron is estimated to be from 0.021 to 0.041. \(n_{\text{SRC}}^{\text{d}}=0.021\) is obtained from the fit to the correlation between the nuclear mass and the SRC scaling ratio \(a_{2}\) [31]. \(n_{\text{SRC}}^{\text{d}}=0.041\) is estimated by counting the nucleons with momentum above \(k_{F}\approx 275\) MeV/c [17; 20]. For light nuclei, one sees that the EMC effect from SRC nucleons can reproduce the experimental data within our nucleon swelling model with \(n_{\text{SRC}}^{\text{d}}=0.041\). However, for the heavy nuclei, our model calculations from the swelling SRC nucleons are not enough to explain the experimental observations, with either \(n_{\text{SRC}}^{\text{d}}=0.021\) or \(n_{\text{SRC}}^{\text{d}}=0.041\). In order to explain the EMC effect of heavy nuclei, the parameter \(n_{\text{SRC}}^{\text{d}}\) in our model would have to be tuned up to 0.08. However, with \(n_{\text{SRC}}^{\text{d}}=0.08\) our model cannot reproduce the EMC effect of light nuclei. More importantly, \(n_{\text{SRC}}^{\text{d}}=0.08\) is not consistent with the previous estimations obtained by counting the high-momentum nucleons above the Fermi motion region. To resolve this contradiction, we speculate that either the universality of the SRC nucleon structure is violated, or there are additional origins of the EMC effect for the heavy nuclei, with these other origins having a nuclear dependence from light nuclei to heavy nuclei. The universal modification function of the SRC nucleon in the deuteron is calculated and shown in Fig. 4, based on the nucleon swelling model discussed in the previous section. The slope of the universal modification function is also evaluated by the CLAS collaboration from the experimental data at SLAC, JLab, and CLAS, which are shown in Fig. 4. The experimental extractions consistently give a slope in the range from about 0.08 to 0.11. Our model predictions, with \(n_{\rm SRC}^{\rm d}=0.021\) and \(n_{\rm SRC}^{\rm d}=0.041\), are weaker than the result from the experimental analysis in terms of the slope of the universal modification function. Therefore, one may conclude that either the assumption that the EMC effect only comes from SRC nucleons is wrong, or the universality of the SRC nucleon structure is violated, or the nucleon swelling model for the SRC nucleon needs improvement.

Figure 2: (color online) Comparisons between our SRC-driven model calculations for the EMC effect and the experimental measurements of light nuclei. The swelling effect of the SRC nucleon is assumed to be the origin of the EMC effect in our calculations. The curves of different styles show the results with different input values for the parameter \(n_{\text{SRC}}^{\text{d}}\). See the main text for more explanations.

Figure 3: (color online) Comparisons between our SRC-driven model calculations for the EMC effect and the experimental measurements of heavy nuclei. The swelling effect of the SRC nucleon is assumed to be the origin of the EMC effect in our calculations. The curves of different styles show the results with different input values for the parameter \(n_{\rm SRC}^{\rm d}\). See the main text for more explanations.

Figure 4: The universal modification function for the structure function of the SRC nucleon inside the deuteron calculated in the nucleon swelling model. The curves of different styles show the results with different input values for the parameter \(n_{\rm SRC}^{\rm d}\). In the right panel, the slopes of the modification functions are shown. See the main text for more explanations.

## V Summary We have tested the hypothesis that the N-N SRC is the dominant source of the nuclear EMC effect. Based on the nucleon swelling model for the SRC nucleon and the assumption that the number of SRC pairs in the deuteron is about 0.041, we find that the nuclear corrections on the SRC nucleons more or less explain the nuclear EMC effect of the light nuclei.
However, with the same model and inputs, the nuclear modifications on the SRC nucleons alone cannot reproduce the nuclear EMC effect of the heavy nuclei. We conjecture that the inner structure of the mean-field nucleon is also modified, or the SRC universality is violated, or there are additional origins of the EMC effect beyond the N-N SRC. Although the SRC universality is favored in experiments, our analysis hints that the modification of the structure function of the SRC nucleon may be stronger in heavy nuclei than in light nuclei. Another explanation is that there are more origins of the EMC effect (such as 3N and 4N SRCs) and that the number of these multi-nucleon SRC pairs does not scale linearly with the number of N-N SRC pairs. Based on the current knowledge of the number of p-n SRC pairs in the deuteron and the nucleon swelling model for the modification of the valence quark distributions, our obtained universal modification function of the SRC nucleon \(n_{\rm SRC}^{\rm d}(\Delta F_{2}^{\rm p}+\Delta F_{2}^{\rm n})/F_{2}^{\rm d}\) is not consistent with the analysis of the experimental data. The experimental extraction of the universal modification function of the SRC nucleon is performed under the assumption that the EMC effect is completely driven by the N-N SRC. Based on the analysis in this work, we conclude that there is a correlation between the N-N SRC strength and the EMC effect, but not a causal relation between the two phenomena. This conclusion is consistent with the recent results from the calculations of the off-shellness correction [29] and the \(x\)-rescaling model [30] for the SRC nucleon. ###### Acknowledgements. This work is supported by the National Natural Science Foundation of China under Grant No. 12005266 and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDB34030301. N.-N. Ma is supported by the National Natural Science Foundation of China under Grant No. 12105128.
2305.11888
Taking Advice from ChatGPT
A growing literature studies how humans incorporate advice from algorithms. This study examines an algorithm with millions of daily users: ChatGPT. In a preregistered study, 118 student participants answer 2,828 multiple-choice questions across 25 academic subjects. Participants receive advice from a GPT model and can update their initial responses. The advisor's identity ("AI chatbot" versus a human "expert"), presence of a written justification, and advice correctness do not significantly affect weight on advice. Instead, participants weigh advice more heavily if they (1) are unfamiliar with the topic, (2) used ChatGPT in the past, or (3) received more accurate advice previously. The last two effects -- algorithm familiarity and experience -- are stronger with an AI chatbot as the advisor. Participants that receive written justifications are able to discern correct advice and update accordingly. Student participants are miscalibrated in their judgements of ChatGPT advice accuracy; one reason is that they significantly misjudge the accuracy of ChatGPT on 11/25 topics. Participants under-weigh advice by over 50% and can score better by trusting ChatGPT more.
Peter Zhang
2023-05-11T15:03:15Z
http://arxiv.org/abs/2305.11888v3
# Taking Advice from ChatGPT ###### Abstract A growing literature studies how humans incorporate advice from algorithms. This study examines an algorithm with millions of daily users: ChatGPT. In a preregistered study, 118 student participants answer 2,828 multiple-choice questions across 25 academic subjects. Participants receive advice from a GPT model and can update their initial responses. The advisor's identity ("AI chatbot" versus a human "expert"), presence of a written justification, and advice correctness do not significantly affect weight on advice. Instead, participants weigh advice more heavily if they (1) are unfamiliar with the topic, (2) used ChatGPT in the past, or (3) received more accurate advice previously. The last two effects--algorithm familiarity and experience--are stronger with an AI chatbot as the advisor. Participants that receive written justifications are able to discern correct advice and update accordingly. Student participants are miscalibrated in their judgements of ChatGPT advice accuracy; one reason is that they significantly misjudge the accuracy of ChatGPT on 11/25 topics. Participants _under-weigh_ advice by over 50% and can score better by trusting ChatGPT more. _Keywords:_ ChatGPT, algorithm aversion, human-computer interaction ## 1 Introduction In late 2022, ChatGPT showed the world the power of large language models (LLMs) [1]. ChatGPT is a generative pretrained language model developed by OpenAI, an AI research lab. AI chatbots like ChatGPT and its cousins (BingChat, Bard, Jasper) achieve "surprisingly superior performance" [2] due to an instruction-tuning process that teaches them to do what humans want [3, 4]. Combined with pre-training at scale, LLMs are powerful interfaces for accessing knowledge [5, 6]. The most recent model GPT-4, which now underlies ChatGPT Plus, is much more powerful [7] and has been rigorously benchmarked on a variety of academic tests.
According to OpenAI's internal testing, GPT-4 outperforms the median human test-taker on SATs, LSATs, GREs, and several AP exams [8]. Other researchers have found that ChatGPT can pass the bar [9], achieve medical certifications [10, 11, 12, 13], and even complete a college physics class [14]. The novel accessibility and broad capabilities of AI chatbots are likely to reshape education [5]. Many educators are scrambling to come to terms with ChatGPT, with responses ranging from outright bans in schools to welcome integration into curricula [15]. Some point towards risks to testing integrity [16] and plagiarism [17], while others argue that it provides personalized and immediate information [18, 19]. A recent meta-analysis finds that "the number of papers that see ChatGPT as a threat is almost equal to the number of those that view it as opportunity" [20]. Others still are rethinking traditional views of academic integrity and encouraging uses such as co-authorship [21, 22, 23]. The multiple choice (MC) exam is particularly ripe for such a rethinking. MC questions continue to be a predominant format for assessing understanding, analysis, and recall [24]. The strength of AI chatbots on multiple choice exams is worrying [25] because students most commonly cheat by consulting online sources [26]. While some have suggested workarounds [17], the fast-paced evolution of the underlying LLMs means that it "may not be long before [these] models become so intelligent that we can no longer exploit their weaknesses" [27]. This study seeks to document how students use information from ChatGPT on MC tests, contributing to a largely qualitative literature on how students empirically interact with AI chatbots [28]. While it takes MC tests as a starting point, the work has implications for broader research on algorithm aversion and appreciation, as well as on human-AI collaboration. The study is guided by two questions: First, what influences the weight humans place on chatbot advice?
Second, are humans good at judging when AI chatbot advice is correct? ## 2 Literature Review A rich literature examines how people take advice from algorithms. Two core competing findings are algorithm aversion [29] (a tendency to disproportionately punish algorithms when they err) and algorithm appreciation [30] (a tendency to prefer algorithm advice prima facie). Numerous studies have explored mediating mechanisms, including task objectivity [31], perceived competence [32], human input [33], learning [34; 35], and time pressure [36], among others [31]. One literature review categorizes these effects into algorithm characteristics (agency, performance, capabilities, and human involvement) and human characteristics (expertise and social distance) [37]. Another analyzes broad themes of expectations and expertise, decision autonomy, incentivization, cognitive compatibility, and divergent rationalities [38]. Five types of explanations are relevant to this study. This study is a direct test of explanations about _social distance_. If algorithm aversion is truly a preference for humans, a natural remedy is to make algorithms more human-like [39]. Both adjacent literature [40] and experimental evidence suggest that people are more likely to accept advice from an anthropomorphized algorithm. In the business world, AI chatbots are now successful consumer-facing assistants [41], and their perceived human-likeness is important to their success [42]. At the same time, other studies suggest that appearing too human can induce aversion if algorithms traverse into an uncanny valley [43]. ChatGPT's ability to provide natural language explanations comparable to humans [44] may cause humans to treat ChatGPT similarly to a human advisor and distinctly from other algorithms. Three other explanations are commonly cited.
The first, _task difficulty_, suggests that increasing task difficulty causes people to rely more heavily on (algorithmic) advice [45; 46] and is supported by real-world evidence on teachers [47]. In this study, familiarity with the question topic is an approximate measure of task difficulty. A second explanation, _algorithm familiarity_, reasons that people who are more familiar with using algorithms for some task will be less averse to the advice [31; 48], an effect that was confirmed in a real-world medical context. This study measures algorithm familiarity by asking questions about past usage. The third explanation, _experience_, argues that participants are rationally updating their beliefs about algorithm competence and that presenting their performance can reduce aversion and develop trust over time [49; 50; 51], although some studies find that accuracy matters less than expected [52]. This study uses a simple model of participant beliefs about advice accuracy as a measure of experience. Finally, a developing literature studies the effect of algorithm _interpretability_ on aversion. Interpretability is theorized as allowing "the user to rapidly calibrate their trust in the system's outputs, spotting flaws in its reasoning or seeing when it is unsure" [53]. Studies have found mixed effects of output interpretability [54; 55; 56; 57] and model transparency [58; 59], although field experiments on physicians find that they benefit from explainable AI advice [60; 61]. In this study, providing the GPT model's text reasoning enables a test of whether interpretability makes a difference. Surprisingly few studies have examined the role of ChatGPT as an adviser. Some studies have explored potential problems with using ChatGPT for advice on health [62; 63; 64; 65], investing [66], and education [19]. Empirical studies have documented a corrupting effect of moral advice generated by GPT-2 [67], GPT-3 [68], and ChatGPT [69].
On Twitter, GPT-3 generated texts appear to be more effective at convincing humans to believe (accurate and inaccurate) information [70]. Finally, humans appear to trust robots more with ChatGPT as an interface [71]. One recent study examines ChatGPT in the algorithm aversion context on an essay-writing task [72]. The authors find that while people may devalue the outputs of ChatGPT relative to a human author, they judge the content equally and are not deterred from sharing. Little is known about how humans judge the accuracy of ChatGPT. A plethora of studies have shown that humans tend to overestimate their own abilities [73] and misjudge the abilities of others [74]. Similarly, studies suggest that LLM calibration is good after pre-training [75; 76; 77] but degrades after learning from human feedback [78]. Yet studies have not evaluated whether humans accurately estimate the accuracy of LLM outputs. One study suggests that humans may become miscalibrated on AI feedback because of misallocation of blame [79], but that study and others fail to explicitly document the level and nature of (mis)calibration. This study seeks to fill that gap by evaluating human confidence in LLM outputs on a broad set of topic areas. ## 3 Procedure **Overview.** The study simulates an environment in which students receive aid from ChatGPT. MC questions are sourced from real academic tests and original outputs are obtained by querying GPT models. Participants attempt to make calibrated guesses before and after seeing the advice that is generated. An overview of the study design is displayed in Figure 1. All code and data except for survey responses are documented in the accompanying GitHub repository. The study methods are preregistered on AsPredicted under predictions #122800 and #126040. **Dataset.** Questions and answers are drawn from the Massive Multitask Language Understanding (MMLU) dataset [80], a widely used benchmark [8] of LLM knowledge understanding that broadly encompasses academics.
The dataset consists entirely of MC questions and draws from real tests such as the Advanced Placement exams. Participants answer questions from only 25 of the original 57 topics, topics on which college students are expected to have a reasonable chance of success. A total of 688 questions are sampled from the topics. See Appendix A.1 for descriptions of the 25 topics and the selection procedure. **Model evaluation.** The advice is generated by GPT-3.5, an LLM by OpenAI fine-tuned to follow human instructions, on the constructed dataset [81]. Specifically, calls are made to the Completions API with text-davinci-003 as the engine. Models are prompted with standard and chain-of-thought (CoT) prompts [82]. CoT prompts yield the same accuracy but better explanations. The advice used in the survey is generated using a zero-shot CoT prompt. See Appendix A.2 for a comparison to standard prompting and an illustration of the prompt text. \begin{table} \begin{tabular}{l l l} \hline Supercategory & Topics & Example Question \\ \hline STEM & Clinical Knowledge, Physics, Elementary Mathematics, Formal Logic, APs (Biology, Chemistry, Comp. Sci, Physics, Statistics), Human Aging & In which situation can the expression 64 + 8 be used? \\ Social Science & APs (Human Geo, Government, Macro/Micro, Psych), Sociology, U.S. Foreign Policy, Global Facts & What does Berger (1963) describe as a metaphor for social reality? \\ Humanities & APs (US/World/European History), Philosophy, Misc. topics & Descartes argues against trusting the senses on the grounds that. \\ \hline \hline \end{tabular} \end{table} Table 1: MMLU topics included in this study. Figure 1: **Overview of study design. Both LLMs and human participants answer questions. The study focuses on how humans take LLM advice.** **Lab experiment.** Participants use this advice in a survey-based lab experiment.4 The setup models the well-studied judge-advisor system [83].
Participants are shown randomly selected questions and report their confidence in each answer choice before and after receiving advice. The advice is manipulated by varying the advisor's identity and selectively providing justifications. Participants receive advice from an advisor randomly identified as a generic "expert" or an "AI chatbot". They are also randomly assigned to receive a justification in addition to the answer. The manipulations are displayed in Figure 2.

Footnote 4: The experiment is approved under UC Berkeley CPHS Protocol #2023-03-16125.

The experiment is administered via a Qualtrics survey. A live link and full printout of the survey are available for readers. Participants

* are assigned to the conditions;
* must pass a simple attention check;
* provide their level of familiarity ("comfortable", "neutral", or "uncomfortable") with 8 topic areas that are constructed by grouping topics, as well as their major(s);
* complete an example that explains the judge-advisor setup, the concept of confidence, and identifies the advice format (advisor identity and presence of justification);
* pass a manipulation check that reinforces the advisor identity;
* complete at least 20 questions in which they:
  * are assigned a random question and provide an initial answer;
  * receive advice and update their answer;
  * discover the correct answer and the points they have earned; and
  * have the opportunity to opt out once they have completed 20;
* fill out a questionnaire about their usage of ChatGPT; and finally
* exit the survey.

The survey flow is displayed in Figure 3.

Figure 2: **Judge-advisor system.** Participants provide judgements about the probability that each answer is true. \(WoA\) is a measure of how advice changes the probability allocated to the advised answer.

**Scoring and compensation.** Participants are scored by a point system that rewards accurate and calibrated answers with cash prizes. The system is based on Brier scores, a widely used scoring rule for encouraging both accuracy and calibration [84]. Let \(f_{X,\mathrm{init}}\) denote the initial confidence and \(f_{X,\mathrm{adj}}\) denote the adjusted confidence for each answer choice \(X\in\{A,B,C,D\}\). Let \(o_{X}\) be an indicator for whether \(X\) is correct. Then, the score is: \[\mathrm{BS}(f)=\sum_{X\in\{A,B,C,D\}}\left[(f_{X}-o_{X})^{2}\right]\] The Brier score is scaled to give 0 points to a uniform (25% across choices) distribution and 750 points for a full-confidence correct answer. The score is asymmetric insofar as it penalizes a full-confidence incorrect answer by -1250 points. The score is centered at 0 so that participants are not able to earn points by merely completing more questions with uniform distributions. The re-scaled scoring rule evenly weights the initial and adjusted forecasts: \[\text{Score}=\sum_{f\in\{f_{X,\mathrm{init}},f_{X,\mathrm{adj}}\}}750-1000 \cdot\mathrm{BS}(f)\] Participants can earn prizes by (1) placing among the top 5 scorers and earning 10 USD or (2) through a random drawing for 50 USD with tickets proportional to score. The former is designed to reward effort5 while the latter ensures that payout remains somewhat proportionate to score [85].

Footnote 5: Anecdotally, scoring dramatically improves participant engagement. Subjects reported feeling more interested and invested in the questions, particularly compared to a setup that does not reveal the correct answer. The effect appears to be even stronger when the reward is deterministic.

**Participants.** A total of 142 undergraduate students at UC Berkeley are recruited through the Research Participant Pool (RPP) at the Haas School of Business. The participants are primarily business majors in their third and fourth years. Six small pilot sessions were conducted from 04/04/2023 to 04/11/2023 to debug the survey.
The 12 sessions comprising the study dataset were administered from 04/13/2023 to 04/25/2023 and included 118 participants. All sessions were conducted at the Experimental Social Science Laboratory. Participants were compensated with course credit and performance-based monetary awards.

**Data processing.** Letting \(\hat{X}\) denote the choice of the advisor, weight on advice \(\mathrm{WoA}\) is computed as \[\mathrm{WoA}=\frac{f_{\hat{X},\mathrm{adj}}-f_{\hat{X},\mathrm{init}}}{1-f_{ \hat{X},\mathrm{init}}}\] The winsorization procedure replaces negative values with zeroes: \[\mathrm{WoA}\leftarrow\max(0,\mathrm{WoA})\] The term "advice confidence" denotes \(\mathrm{AC}=f_{\hat{X},\mathrm{adj}}\), the adjusted confidence in the advisor's answer. Categorical variables (topic familiarity, chatbot usage) are converted to integer values using basic rules. A Beta-Bernoulli process is used to model beliefs in advice correctness [86]. See Appendix A.3 for some limitations of the approach and details of these decisions.

Figure 3: **Qualtrics survey flow.** Survey blocks are color coded by element type and line pattern corresponds to reversibility. The survey begins with a consent notice and ends with a debriefing. Note that participants are _required_ to pass the attention and manipulation checks. Participants may return to previous instruction pages to pass the manipulation check.

## 4 Analysis Participants answered 2,828 questions with an average of 23.97 questions per participant. After computing \(\mathrm{WoA}\), 166 questions (5.87%) with negative weight on advice are winsorized. Descriptive statistics are available in Appendix B.1. Qualitative findings about ChatGPT usage are presented in Appendix B.10. ### Weight on Advice **Hypothesis.** (AsPredicted #126040) predicts that participants place greater weight on advice when the advisor is identified as an "AI chatbot."
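The \(\mathrm{WoA}\) definition and winsorization step described under data processing can be sketched as follows (function and variable names are my own, not from the study's code):

```python
def weight_on_advice(f_init, f_adj):
    """Winsorized weight on advice for the advised choice.

    f_init -- initial confidence in the advisor's answer (assumed < 1)
    f_adj  -- adjusted confidence in the advisor's answer
    """
    woa = (f_adj - f_init) / (1.0 - f_init)
    return max(0.0, woa)  # winsorize: negative values replaced with zero

# Example: confidence rises from 0.25 to 0.70 -> WoA = 0.45 / 0.75 = 0.6
# Example: confidence falls from 0.50 to 0.40 -> raw WoA is negative -> 0.0
```

A value of 1 means the participant fully adopted the advice; 0 means no movement toward it.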
**Method.** Weight on advice is progressively regressed on a broader set of variables in each specification (see Figure 4). Weight on advice is regressed on the advisor identity (advisor), justification (give_justification), and their interaction in A. Controls, including topic familiarity (topic_familiarity), are included in B, past usage (usage_level) in C, advice quality (advice_is_correct) in D, and experiences (advice_accuracy_belief, question_num) in E. (AsPredicted #126040) preregisters regressions A-B; regressions C-E are conducted as a non-preregistered exploratory analysis. All regressions include random effects for participants (participant_id) to account for unobservable differences in participants such as test-taking skill, propensity to trust, calibration skill, etc. Random effects are included for questions (question_id) to account for within-topic differences in question difficulty. For concision, additional analyses are reserved for Appendix B.

Figure 4: **Proposed causal mechanisms.** The study examines several predictors of weight on advice. The random assignment of participant / question, experience over several questions, and varying advice quality permits a decomposition of what predicts weight on advice.

**Results.** The results of the regression are displayed in Table 2. From specification A, there is no support for the initial hypothesis that \(\mathrm{WoA}\) is greater if the advisor is identified as a chatbot. The coefficients are directionally correct but not statistically significant. In this and other specifications, the random effects for participants and questions are highly significant. Appendix B.2 explores whether participant engagement might mediate the effect. After including topic familiarity in specification B, there is a highly significant increase in weight on advice when the participant is uncomfortable with the topic.
Compared to a baseline of comfort in the topic, weight on advice is 6.1%, 95% CI [2.19%, 10.08%], higher when a participant is uncomfortable with the topic. The effect is persistent and similar in size across specifications C-E. Robustness checks are conducted in Appendix B.3. In specification C, past usage of chatbots has a marginally significant effect for participants in the "AI chatbot" advisor condition. The interpretation is that for each step increase in usage level (e.g. having used AI chatbots instead of merely hearing about them), weight on advice increases by 5.0%, 95% CI [0.0%, 9.9%]. For participants in the "expert" advisor condition, the effect is not significant, suggesting that the result is driven by participant understanding of the advisor's capabilities. This effect is persistent and similar in size across specifications D-E. Appendix B.4 suggests that the result may mostly be driven by whether or not participants have used chatbots before. On its face, specification D reveals a surprising absence of a direct effect from the quality of advice, measured as whether the advice is actually correct. The coefficient becomes significant in specification E, suggesting that participants place 2.9% more weight on advice if it is true. Additional exploratory analyses are performed in Appendix B.5 controlling for initial confidence; the results reveal a strong effect that is mediated by giving justifications. Finally, specification E identifies a significant effect of experience that may be mediated by the advisor's identity. For every 10% increase in believed advice accuracy, participants with an AI chatbot advisor place 6.02%, 95% CI [4.29%, 7.75%], greater weight on advice. If the advisor is a generic expert, the coefficient is 5.05%, 95% CI [3.43%, 6.66%], per 10% increase in belief. Participants appear to place less weight on advice over time. For each additional question completed, participants place 0.4%, 95% CI [0.362%, 0.6%], less weight on advice.
Appendix B.6 shows that the deflated coefficient in the human expert condition is robust to beliefs. Moreover, there is a significant effect of the last advice's correctness that is mediated by advisor identity.

### Advice Confidence

**Hypothesis** (AsPredicted #126040) predicts that (1) students' advice confidence will display overconfidence in language model accuracy and that (2) the overconfidence is mitigated by feedback.

**Method** For choice \(X\) and question \(j\), let \(t_{j,X}\) denote whether \(X\) is correct and \(f_{j,X}\) denote the participant's confidence in the choice. Calibration curves are constructed by partitioning advice confidences over \([0,1]\) into 10 equal-width bins and plotting the average advice confidence \(e_{i}\equiv\mathbb{E}_{j\in I}[f_{j,X}]\) and accuracy \(o_{i}\equiv\mathbb{E}_{j\in I}[t_{j,X}]\) for each bin \(i\). This section focuses on the advised choice \(\hat{X}\) with the goal of evaluating how well the participants evaluate advice accuracy. Setting \(e_{i}\equiv\mathbb{E}_{j\in I}[f_{j,\hat{X},\mathrm{adj}}]\) and accuracy \(o_{i}\equiv\mathbb{E}_{j\in I}[t_{j,\hat{X}}]\), a calibration curve is constructed for participant's confidence in the advisor's answers. Miscalibration is measured by expected calibration error (\(\mathrm{ECE}\)), the average deviation from ideal calibration weighted by sample size. Letting \(P(i)\) denote the proportion of samples in bin \(i\), \(\mathrm{ECE}\) is defined as: \[\mathrm{ECE}=\sum_{i=1}^{10}P(i)\cdot|o_{i}-e_{i}|\] To evaluate whether participants become more calibrated over time, \(\mathrm{ECE}\) is computed over groups of 5 question numbers. To calculate standard errors, \(\mathrm{ECE}\) is bootstrapped on 1000 samples within each question group. As preregistered, AC is measured for each topic and compared to the actual accuracy of the model on a larger evaluation set on the topic (discussed in Appendix A.2).
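The \(\mathrm{ECE}\) computation above can be sketched directly from its definition (a minimal implementation with toy data; the study additionally bootstraps within question groups):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE = sum_i P(i) * |o_i - e_i| over equal-width confidence bins."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # assign each confidence to one of n_bins equal-width bins over [0, 1]
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for i in range(n_bins):
        mask = bins == i
        if mask.any():
            # P(i) * |accuracy in bin - mean confidence in bin|
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# perfectly calibrated toy data -> ECE of 0
conf = np.array([0.25, 0.25, 0.25, 0.25, 0.75, 0.75, 0.75, 0.75])
correct = np.array([0, 0, 0, 1, 1, 1, 1, 0])
print(expected_calibration_error(conf, correct))  # → 0.0
```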
Mistaken beliefs about per-topic confidence are identified by Brunner-Munzel [87] comparisons between the mean accuracy \(T=\{t_{j}\}\) and confidence \(F=\{f_{j,X,\mathrm{adj}}\}\). The false discovery rate is controlled to 0.05 by using the Benjamini-Hochberg procedure [88]. Finally, participant performance is compared to a simple, proportionate update under different values of \(\mathrm{WoA}\). The optimal weight on advice \(\mathrm{WoA}^{*}\in[0,1]\) is the value that minimizes the Brier score.

**Results** Consistent with an extensive prior literature [73], participants are overconfident in their own answers. Figure 5(a) reveals (1) significant overconfidence in initial answers (\(\mathrm{ECE}_{\mathrm{init}}=0.183\)) for confidence levels greater than 0.5 and (2) attenuated but persistent overconfidence in adjusted answers (\(\mathrm{ECE}_{\mathrm{adj}}=0.137\)). Receiving advice appears to "lift" the actual accuracy of high-confidence answers and improve calibration. Considering only adjusted confidence in advised choices, participants are significantly more miscalibrated (\(\mathrm{ECE_{advised}}=0.201\)), displaying both overconfidence and underconfidence at different confidence levels. Participants dramatically underestimate the accuracy of advised choices at confidence levels below \(0.5\). For example, when participants place 0 to 10% confidence in the advisor's answer, the answer is actually correct 42.4%, 95% CI [29.2%, 55.5%] of the time. Moreover, the effect applies across both advisor conditions (Figure 5(b)) and is persistent (see Appendix B.8). The remainder of this section considers only participants in the AI chatbot condition. These participants are limited in their ability to predict differences in ChatGPT's advice quality across topics. Participants overestimate advice accuracy on Elementary Mathematics questions and underestimate accuracy for 10 topics displayed in Figure 6.
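The per-topic belief-versus-accuracy test described above can be sketched with SciPy and statsmodels. The samples here are hypothetical stand-ins for the study's per-topic correctness and confidence data:

```python
import numpy as np
from scipy.stats import brunnermunzel
from statsmodels.stats.multitest import multipletests

# hypothetical per-topic samples: t_j (binary correctness) vs
# f_j (adjusted confidence in the advised choice)
rng = np.random.default_rng(1)
topics = {f"topic_{k}": (rng.binomial(1, 0.6, 40),
                         rng.uniform(0.2, 0.5, 40)) for k in range(5)}

# Brunner-Munzel comparison per topic, then Benjamini-Hochberg
# control of the false discovery rate at 0.05
pvals = [brunnermunzel(t, f).pvalue for t, f in topics.values()]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for name, r, p in zip(topics, reject, p_adj):
    print(name, "miscalibrated" if r else "ok", round(p, 4))
```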
Average beliefs in advice accuracy are only moderately correlated (Pearson's \(r\)=0.339, \(p\)=0.097) with actual advice accuracy when grouped by topic (Figure 6(b)). These errors are significant for knowing when to place weight on advice. \(\mathrm{WoA}\) and \(\mathrm{AC}\) are highly correlated at the question level (Pearson's \(r\)=0.639, \(p\)=1.69e-150) and participants place significantly more (Human Sexuality) or less weight (Elementary Mathematics) on some topics compared to others (Figure 7(a)). Overall, participants do not place enough weight on advice (Figure 8). The Brier score is minimized at \(\mathrm{WoA^{*}}=0.61\), achieving an average score of \(\mathrm{BS}=0.556\). The optimal weight on advice is over 50% higher than the average participant weight on advice, \(\overline{\mathrm{WoA}}=0.367\). Notably, participants also score significantly worse in practice (\(\mathrm{BS}=0.673\)) than if they had uniformly applied the same weight on advice with a proportionate update (\(\mathrm{BS}=0.588\)). The poor performance compared to a uniform baseline is due to (1) misallocation, the suboptimal proportioning of confidence to other answer choices, and (2) extremism, participants' tendency to place no weight on advice or too much weight on advice, leading to excessive overconfidence. Appendix B.9 analyzes both effects and concludes that extremism is a larger factor.
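The search for the Brier-optimal weight on advice can be sketched as a grid search over a uniform proportionate update. The data below are synthetic (uninformative initial confidences, a hypothetical advisor that is right about 65% of the time, roughly the study's advice accuracy), so the resulting optimum only illustrates the procedure:

```python
import numpy as np

def brier(p, y):
    """Multi-class Brier score: mean squared distance between the
    confidence vector and the one-hot correct answer."""
    return np.sum((p - y) ** 2, axis=-1).mean()

rng = np.random.default_rng(2)
n, k = 500, 4                                # questions, answer choices
initial = rng.dirichlet(np.ones(k), n)       # initial confidence vectors
answer = rng.integers(k, size=n)             # correct choices
# hypothetical advisor, right ~65% of the time
advice = np.where(rng.random(n) < 0.65, answer, (answer + 1) % k)
Y, A = np.eye(k)[answer], np.eye(k)[advice]

# proportionate update: shift a fraction w of confidence onto the advice
grid = np.linspace(0, 1, 101)
scores = [brier((1 - w) * initial + w * A, Y) for w in grid]
w_star = grid[int(np.argmin(scores))]
print(f"WoA* = {w_star:.2f}, Brier = {min(scores):.3f}")
```

With an advisor accuracy near the study's, the optimum of this toy setup lands in the same region as the paper's \(\mathrm{WoA^{*}}\), i.e. well above the observed average weight on advice.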
\begin{table}
\begin{tabular}{l l l l l l}
\hline \hline
 & A & B & C & D & E \\
\hline
Intercept & 0.329*** & 0.302*** & 0.169** & 0.161** & -0.159* \\
 & (0.044) & (0.045) & (0.081) & (0.082) & (0.096) \\
advice\_accuracy\_belief & & & & & 0.602*** \\
 & & & & & (0.088) \\
advice\_accuracy\_belief:advisor[T.expert] & & & & & -0.097 \\
 & & & & & (0.120) \\
advice\_is\_correct[T.True] & & & & 0.013 & 0.029** \\
 & & & & (0.015) & (0.014) \\
advisor[T.expert] & -0.027 & -0.028 & 0.056 & 0.054 & 0.067 \\
 & (0.059) & (0.059) & (0.118) & (0.118) & (0.137) \\
advisor[T.expert]:give\_justification[T.yes] & -0.027 & -0.021 & -0.025 & -0.025 & -0.023 \\
 & (0.079) & (0.079) & (0.079) & (0.079) & (0.076) \\
give\_justification[T.yes] & 0.057 & 0.056 & 0.064 & 0.064 & 0.074 \\
 & (0.058) & (0.058) & (0.057) & (0.057) & (0.055) \\
participant\_id Var & 0.364*** & 0.362*** & 0.354*** & 0.353*** & 0.337*** \\
 & (0.056) & (0.055) & (0.055) & (0.055) & (0.052) \\
question\_id Var & 0.044** & 0.040** & 0.040** & 0.040** & 0.045** \\
 & (0.018) & (0.018) & (0.018) & (0.018) \\
question\_num & & & & & -0.004*** \\
 & & & & & (0.001) \\
topic\_familiarity[T.Neutral] & & 0.026 & 0.027 & 0.027 & 0.026 \\
 & & (0.018) & (0.018) & (0.018) & (0.017) \\
topic\_familiarity[T.Uncomfortable] & & 0.061*** & 0.062*** & 0.062*** & 0.063*** \\
 & & (0.020) & (0.020) & (0.020) & (0.020) \\
usage\_level & & & 0.050* & 0.049* & 0.045* \\
 & & & (0.025) & (0.025) & (0.024) \\
usage\_level:advisor[T.expert] & & & -0.033 & -0.033 & -0.019 \\
 & & & (0.036) & (0.036) & (0.035) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Results of regression analyses examining weight on advice.

Figure 5: **Calibration curves.** Error bars are 95% confidence intervals. Dotted line corresponds to a theoretically ideal calibration curve in which average predicted probability exactly equals ratio of positives.

Figure 6: **Believed and actual advice accuracy.** Error bars are 95% confidence intervals.
Blue corresponds to the average participant confidence in the correctness of the advised answer. Red corresponds to the results from an evaluation on 50 questions in each subject. **Bolded** topics have significantly different advice correctness beliefs (blue) and advice accuracy (red), suggesting a discrepancy between participant beliefs and reality.

Figure 8: **Brier score-optimal weight on advice.** The dot marks actual participant weight on advice and Brier score. Participants significantly underweight advice (optimal \(61.0\%\) versus actual \(36.7\%\)) and under-perform the baseline even at the sub-optimal weight on advice.

Figure 7: Figure 7(a) examines weight on advice for each topic and Figure 7(b) displays its relationship with advice confidence.

## 5 Discussion

This study contributes to the literature on algorithm aversion and human-computer interaction by performing a study of how students incorporate advice from ChatGPT on MC tests.

**Weight on Advice** Results are summarized in Table 3. Advisor identity, presence of justification, and their interaction do not significantly affect weight on advice. The effect continues to be insignificant at a 95% level after including various controls. This result joins several other studies in finding no effect of the advisor's identity on weight on advice [37] and agrees with another study on ChatGPT [72]. At the participant level, the sample is powered to detect a median effect size comparable to that found previously [30]. At the measured effect size, identifying an effect for advisor identity or presence of justification requires a sample size about four times larger (Appendix B.11). This might suggest that algorithm aversion or appreciation is less significant for ChatGPT than for other algorithms. Alternatively, the null result may be an artifact of the description of the non-algorithm advisor, which is vaguely introduced as an "expert" [32].
Moreover, in Appendix B.2, the effects are recovered by including an interaction term for optional questions, suggesting that participant engagement might be adding noise to the results. Limitations to generalizing the result are discussed in the limitations section below. The analysis of controls confirms several prior results. Task difficulty in the form of topic familiarity mediates advice usage [45]. Harder tasks increase weight on advice more so for AI than human advice, but not significantly so (Appendix B.3). In exploratory analyses, prior usage of chatbots predicts greater weight on advice in the chatbot condition. The result agrees with prior studies showing that familiarity with the algorithm predicts greater usage of algorithms [31]. Given the relatively recent release of ChatGPT, students and educators still have different levels of familiarity with the technology [15]. These results could explain why prior usage of ChatGPT and trust in it are highly correlated [89]: new users underestimate the performance of the tool and gradually learn to trust and use it more. This reinforcing dynamic may predict how different people will use ChatGPT in the classroom and beyond. In a further exploratory analysis, performance on previous questions and corresponding beliefs about advice accuracy are predictive of WoA, agreeing with prior work [49; 50; 51]. The result is stronger for AI chatbots under different models of beliefs, particularly when examining the effect of the last piece of advice (see Appendix B.6). If the result holds, it suggests that people may be more perceptive or critical of AI performance. Finally, there is initially a small effect of advice correctness on weight on advice. Appendix B.5 finds that the effect is much larger after correcting for the participant's initial answer. Moreover, there is a satisfying explanation for how participants identify correct advice: the effect is only significant for participants in the condition that receives justifications.
These results support the theory that interpretability improves human adoption of algorithms and suggest that natural language reasoning is an effective medium for interpretability [53].

\begin{table}
\begin{tabular}{l l c c c c c}
\hline \hline
\multicolumn{2}{c}{_Explanation_} & \multicolumn{3}{c}{_Analysis_} & \multicolumn{2}{c}{_Result_} \\
\hline
Explanation type & Study metric & Prereg? & Spec. & Appendix & WOA Effect & Interaction? \\
\hline
_Social distance_ & Advisor identity & ✓ & A & — & — & — \\
_Task difficulty_ & Topic familiarity & ✓ & B & B.3 & + & — \\
_Algorithm familiarity_ & ChatGPT usage & ✗ & C & B.4 & + & + Advisor identity \\
_Experience_ & Past accuracy & ✗ & E & B.6 & + & + Advisor identity \\
_Interpretability_ & Justification & ✓ & A & B.5 & — & + Advice quality \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Summary of weight on advice results.

**Advice Confidence** While participants are indeed overconfident in their own answers, they err even worse in judging the correctness of AI advice. The calibration error is significant and persistent across 40 rounds of feedback. Exploring one potential explanation, participants misjudge advisor accuracy across topics and underestimate accuracy on 10 out of 25 topics. These misjudgements contribute to calibration error and affect weight on advice. Participants broadly misunderstand which tasks the AI advisor is good at. Participants generally overestimate accuracy on procedural topics such as mathematics and physics, while underestimating accuracy on social science topics such as history, government, geography, and so on. Furthermore, by increasing the average weight of advice by over 50% and uniformly adjusting their answers, participants could have improved their score significantly. Participants don't use advice enough and are inefficient when they do,
providing an important caveat to the results of [90]: in our study, participants were not able to perform better than ChatGPT alone when they were unable to interact with the model (see Appendix B.3). Overall, students do not place enough trust in algorithms like ChatGPT.

**Limitations** There are several limitations to this study. First, the study setup may be inefficient for the attempted regressions. Sampling from hundreds of questions introduces substantial variance to the data collection process that limits the efficiency of the estimators. Reducing randomization, for example by fixing a set of equally-difficult questions, may lead to better estimates. Otherwise, a study with a similar setup may require a larger sample size to identify the same effect. Simultaneously, the lengthy survey required by the design might limit participant engagement. Appendix B.2 suggests the possibility that the effects are real, but only for participants willing to do optional questions. The advice could have been improved. The generated advice may not be representative of "ChatGPT's output," as suggested to participants. Model output is quite sensitive to prompting format, particularly across topics (see Appendix A.2). The study design was constrained to using a similarly powerful InstructGPT model instead of OpenAI's ChatGPT API, which was released after collecting generations. Moreover, the advisor identity could benefit from clarification. Previous work shows that algorithm aversion and appreciation effects are sensitive to the description of the advisor [32]. It may have been worthwhile to more explicitly clarify the identity of the human expert, although Appendix B.7 addresses this criticism. The incentives could also be improved. By providing an additional payout to the top performers, the rewards could encourage excessive confidence for the sake of scoring higher on the leaderboard, a phenomenon documented in forecasting tournaments [91].
Finally, the study population limits the external validity of the results. These findings apply to a set of business students at UC Berkeley who might be much more informed on tests and familiar with ChatGPT compared to even the average student.

**Future Work** Extensions of this study could begin by addressing its limitations. First, introducing more advisor identity conditions could help contrast how people judge ChatGPT's advice compared to other algorithms or other human advisors. For example, identifying an advisor as "a previous participant" or a generic "statistical model" [30] might cause participants to weight the advice less than that of an expert or ChatGPT. The advice can be improved with better prompting techniques such as iterative bootstrapping [92] or self-consistency [93] for chain-of-thought. To further study mediators of weight on advice, the advice might include model output probabilities over answer choices [76] (with varying levels of calibration [94, 53]), enable live user interaction [90], or support modification of prompts [33]. In addition to using the ChatGPT API, a comparison of ChatGPT output and ChatGPT Plus outputs could illuminate the relative impact of different AI feedback. Researchers could study a much wider variety of tasks. Aside from the other 30+ topics in the MMLU benchmark, participants could provide quantitative answers, answer free response problems, or complete other natural language tasks (e.g. from BIG-Bench [95]). Less structured responses would require new and untested metrics of distance between answers such as BLEU scores [96]. Moreover, multi-modal models are able to perform natural language reasoning over images [97, 98] and videos [99, 100], creating new opportunities for study. Further research might document how different populations take advice from LLM-based tools. For example, students from various grade levels or courses of study might take advice in different ways.
Beyond education, many occupations will likely involve increasing collaboration with AI tools [101, 102]. Previous studies have documented differences in algorithm aversion across populations [48]. As LLMs influence a greater number of human decisions, a nuanced understanding of how different people take their advice will be increasingly important. ## Conflicts of Interest The authors declare that they have no conflict of interest. ## Acknowledgments The technical execution of the project was completed in close collaboration with Isabella Borkovic. The project was advised by Professor Don Moore, who provided helpful direction throughout; I placed great weight on his advice. Thanks to Sandy Campbell, Karin Garrett, and Sophia Li for feedback. Karin Garrett and Isabella Borkovic each led laboratory sessions. Members of the Moore Accuracy Lab tested and provided feedback on the survey. Critical support for running experiments was provided by the Research Participant Program and Experimental Social Science Laboratory. Funding for prizes was provided by the Michael and Chris Boskin Scholarship.
2306.08889
Dissecting Multimodality in VideoQA Transformer Models by Impairing Modality Fusion
While VideoQA Transformer models demonstrate competitive performance on standard benchmarks, the reasons behind their success are not fully understood. Do these models capture the rich multimodal structures and dynamics from video and text jointly? Or are they achieving high scores by exploiting biases and spurious features? Hence, to provide insights, we design $\textit{QUAG}$ (QUadrant AveraGe), a lightweight and non-parametric probe, to conduct dataset-model combined representation analysis by impairing modality fusion. We find that the models achieve high performance on many datasets without leveraging multimodal representations. To validate QUAG further, we design $\textit{QUAG-attention}$, a less-expressive replacement of self-attention with restricted token interactions. Models with QUAG-attention achieve similar performance with significantly fewer multiplication operations without any finetuning. Our findings raise doubts about the current models' abilities to learn highly-coupled multimodal representations. Hence, we design the $\textit{CLAVI}$ (Complements in LAnguage and VIdeo) dataset, a stress-test dataset curated by augmenting real-world videos to have high modality coupling. Consistent with the findings of QUAG, we find that most of the models achieve near-trivial performance on CLAVI. This reasserts the limitations of current models for learning highly-coupled multimodal representations, that is not evaluated by the current datasets (project page: https://dissect-videoqa.github.io ).
Ishaan Singh Rawal, Alexander Matyasko, Shantanu Jaiswal, Basura Fernando, Cheston Tan
2023-06-15T06:45:46Z
http://arxiv.org/abs/2306.08889v3
# Revealing the Illusion of Joint Multimodal Understanding in VideoQA Models

###### Abstract

While VideoQA Transformer models demonstrate competitive performance on standard benchmarks, the reasons behind their success remain unclear. Do these models jointly capture and leverage the rich multimodal structures and dynamics from video and text? Or are they merely exploiting shortcuts to achieve high scores? We analyze this with _QUAG_ (QUadrant AverAGe), a lightweight and non-parametric probe that systematically ablates the model's coupled multimodal understanding during inference. Surprisingly, QUAG reveals that the models manage to maintain high performance even when injected with multimodal sub-optimality. Additionally, even after replacing self-attention in multimodal fusion blocks with "QUAG-attention", a simplistic and less-expressive variant of self-attention, the models maintain high performance. This means that current VideoQA benchmarks and their metrics do not penalize shortcuts that discount joint multimodal understanding. Motivated by this, we propose the _CLAVI_ (Counterfactual in LAnguage and VIdeo) benchmark, a diagnostic dataset for benchmarking coupled multimodal understanding in VideoQA through counterfactuals. CLAVI consists of temporal questions and videos that are augmented to curate balanced counterfactuals in language and video domains. Hence, it incentivizes, and identifies the reliability of, learnt multimodal representations. We evaluate CLAVI and find that models achieve high performance on multimodal shortcut instances, but have very poor performance on the counterfactuals. Hence, we position CLAVI as a litmus test to identify, diagnose and improve the sub-optimality of learnt multimodal VideoQA representations which the current benchmarks are unable to assess.

## 1 Introduction

Multimodal learning with videos and text is a challenging task. While both the modalities are sequential in nature, they possess unique underlying structures.
Videos exhibit spatio-temporal dynamics in the pixel space, whereas language representation is composed of the syntax and semantics of word sequences. Hence, tasks like Video Question Answering (VideoQA) [1] present a considerable challenge as they necessitate the model to acquire accurate representations of both modalities, and establish meaningful connections between them. Transformers have demonstrated exceptional performance on VideoQA benchmarks [1]. Since they lack the intrinsic inductive biases for these representations, they must learn them from the data [2; 3]. However, does the good performance of Transformers on current VideoQA benchmarks necessarily mean that they learn to faithfully represent and leverage the modalities? Or do the current benchmarks and metrics fail to robustly evaluate the models for their multimodal understanding? This is a valid concern because deep learning models can learn shortcuts to achieve high performance scores without faithfully learning from the modalities. For example, seemingly spatio-temporal tasks, like some action classification problems, are shown to be solved without focusing much on temporal representations [4; 5]. Similarly, for VideoQA, the questions that seemingly require the model to jointly leverage the multimodal representations can be answered using shortcuts (see Figure 1). This raises the question of whether the models are actually learning to jointly leverage and understand the modalities, or whether the performance on the current benchmarks is an illusion of joint multimodal learning. First, we propose QUadrant AveraGe (QUAG), a lightweight and non-parametric probe to systematically gauge the reliance of a model's performance on joint multimodal representations. We posit that joint multimodal understanding is enabled in the fusion layers by progressively attending to the informative tokens within and between the modalities.
QUAG impairs components of modality fusion by systematic block-averaging of attention weights. We apply QUAG on multiple dataset-model combinations, and consistently find that the models manage to achieve high performance on the benchmarks without relying on specific multimodal interactions. This is concerning because high performance on established benchmarks should ideally be indicative of coupled multimodal understanding. We validate the sub-optimality in multimodal representations by replacing self-attention in pretrained models with simple and less-expressive QUAG-attention. QUAG-attention augmented models manage to maintain the high performance on standard benchmarks. However, this raises a follow-up question: how then can we benchmark coupled multimodal understanding in the models? Thus, we create Counterfactual in LAnguage and VIdeo (CLAVI), a diagnostic benchmark to robustly assess joint multimodal understanding in VideoQA models. Temporal understanding ideally requires coupled multimodal understanding. However, the standard benchmarks do not contain or assess performance on counterfactual instances. CLAVI contains balanced temporal counterfactuals in both question and video domains to accurately test if the models can jointly understand temporal cues in the question (temporal prepositions and adverbs) and the video (order of frames) domains. We develop consistent-accuracy metrics to precisely assess the contributions of shortcuts to circumvent joint multimodal understanding. We find that finetuned models have high accuracy on shortcut instances in CLAVI, but have poor performance on the counterfactual instances that require coupled multimodal understanding. Hence, the performance of a model on CLAVI is indicative of joint multimodal understanding, which is overlooked by the existing benchmarks.
In summary, our contributions are _(i)_ we develop QUAG, a systematic method to identify sub-optimalities in joint multimodal representations, _(ii)_ using QUAG and QUAG-attention, we demonstrate that high performance on established VideoQA benchmarks is not representative of faithful coupled multimodal understanding, and _(iii)_ we develop CLAVI, a new diagnostic benchmark that contains balanced temporal counterfactuals in videos and questions to confidently disambiguate the contributions of shortcuts in joint multimodal learning to benchmark the models.

## 2 Related work

**Dataset Biases**: Works in NLP [6; 7; 8], vision [9] and vision-language [10; 11] demonstrate that models can achieve high performance without even understanding the sequence of the embeddings. This is partly because the current benchmarks have unintended biases that could potentially be exploited by models to learn shortcuts; hence accuracy is not always a faithful metric [5; 10; 12; 13]. For VideoQA, MovieQA [14] and TVQA [15] datasets are biased towards plot understanding or actor dialogue comprehension [16]. Biases are not always immediately apparent; for example, Social-IQ [17] contains sentiment-biased annotations [18]. Moreover, statistical regularities like answer length, answer frequency [19; 20] and co-occurrence [21; 22; 23] introduce spurious features. Overall, these biases allow the models to learn shortcuts [24] that circumvent multimodal reasoning [25; 26].

Figure 1: Does answering this question require joint temporal understanding of the video and question? No. It can be correctly answered by eliminating the incorrect options by performing object detection in each frame independently without even looking at the question.

While synthetic VideoQA benchmarks such as VQuAD [27], CLEVRER [28], and MarioQA [29] have been carefully curated to mitigate many biases, they are unable to capture the intricate dynamics of the real world.
We curate CLAVI by systematically augmenting real-world videos to faithfully represent the complexity of the physical world while controlling the biases to confidently evaluate multimodal temporal understanding. **Shortcut Learning**: Tangential to the bias amelioration methods [30; 31], Lei et al. [32] and Winterbottom et al. [16] achieve state-of-the-art performance with simple models by leveraging VideoQA dataset shortcuts in the model. ATP [33] demonstrates single frame bias by re-training the models with an informative frame-selection module to achieve competitive performance. Perceptual Score [18] quantifies modality bias in terms of relative performance drop under a modality-permutation operation. QUAG combines these ideas to evaluate the dependence of models on shortcuts for circumventing multimodal understanding in terms of performance drop under multimodal representation collapse. Unlike other works, it assists in identifying sub-optimal learnt representations in a combined model-dataset approach. **Leveraging Counterfactuals**: We share our motivation for developing CLAVI with VQA-CP [34]: that iid train-test splits in the presence of strong priors lead to learning via shortcuts. However, rather than reducing the bias by mining new complementary image instances, CLAVI weakens the priors in the first place with synthesized, balanced video-question temporal hard-negatives. Concurrent to our work, Momeni et al. [35] and Wang et al. [36] have employed hard-negatives for improving verb-understanding in VideoQA models. Bagad et al. [37] use both a real-world dataset, created by stitching two unrelated videos, and a synthetic dataset for post-pretraining to improve the temporal understanding of video-language models.
However, unlike CLAVI, which uses synthesized negative video instances from the same video, a stitched-video dataset cannot be a robust diagnostic benchmark for temporal understanding because the incoherent contexts can be exploited as a static bias shortcut [38].

## 3 Do VideoQA models learn to jointly leverage the modalities?

We posit that coupled multimodal understanding is enabled in the fusion layers by progressively attending to the informative tokens within and between the modalities. Hence, we propose QUAG to systematically ablate the effects of multimodal attention. It impairs the joint multimodal representations in the pre-trained model by systematically block-averaging the attention weights to attend to all tokens uniformly. Based on the targeted modality-interactions, we define special cases of QUAG, collectively called short-circuit operations, and analyze the performance drop.

### Video question answering setup

The task is to predict the correct answer given a video-question tuple, \((\mathcal{V},\mathcal{T})\). A VideoQA model consists of a vision encoder \(F_{\mathcal{V}}:\mathcal{V}\rightarrow\mathbb{R}^{L_{\mathcal{V}}\times D}\), text encoder \(F_{\mathcal{T}}:\mathcal{T}\rightarrow\mathbb{R}^{L_{\mathcal{T}}\times D}\), and a multimodal fusion module \(M:(F_{\mathcal{V}}(\mathcal{V}),F_{\mathcal{T}}(\mathcal{T}))\rightarrow \mathbb{R}^{L\times D}\), where \(L_{\mathcal{V}}\) and \(L_{\mathcal{T}}\) are the sequence lengths of video and text respectively and \(D\) is the dimensionality of the fusion model. Consider \(M\) a composition of \(n\) attention-based multimodal fusion blocks, \(M=M_{n}\circ M_{n-1}\circ\cdots M_{1}\). Each fusion block consists of attention, normalization, and token-mixing modules. For our analysis, we consider \(M\) to be composed of self-attention transformer blocks. That is, query, key, and value are the transformations of the same input sequence.
Hence, \(X_{\mathcal{V}\mathcal{T}}=[F_{\mathcal{V}}(\mathcal{V})\parallel F_{\mathcal{T}}(\mathcal{T})]\in\mathbb{R}^{(L_{\mathcal{V}}+L_{\mathcal{T}})\times D}\) is the input for \(M\), where \(\parallel\) is the concatenation operator. Since QUAG operates at inference time, we assume the VideoQA model to be finetuned and frozen.

### QUAG: Ablation of modality interactions

Let \(X_{i-1}\) denote the input of the fusion block \(M_{i}\) and let \((Q_{i},K_{i},V_{i})\) be its query, key, and value transformations and \(X_{0}=X_{\mathcal{V}\mathcal{T}}\). Then, the token-mixing operation is given by \(T_{i}=A_{i}V_{i}\), where \(A_{i}=softmax(Q_{i}K_{i}^{\top})\) is the attention matrix (we omit the scaling factor \(\sqrt{d}\) for readability). Letting \(Q_{1u}\), \(K_{1u}\), and \(V_{1u}\) denote the query, key, and value projections of modality \(u\) for the first fusion block, \(M_{1}\), we can express \(A_{1}\) and \(T_{1}\) in terms of their partition blocks, referred to as quadrants henceforth, as: \[A_{1}=softmax\left(\left[\begin{array}{c|c}Q_{1\mathcal{V}}K_{1\mathcal{V}}^{\top}&Q_{1\mathcal{V}}K_{1\mathcal{T}}^{\top}\\ \hline Q_{1\mathcal{T}}K_{1\mathcal{V}}^{\top}&Q_{1\mathcal{T}}K_{1\mathcal{T}}^{\top}\end{array}\right]\right)\quad\mathrm{and}\quad T_{1}=\left[\begin{array}{c|c}A_{\mathcal{V}\mathcal{V}}^{1}&A_{\mathcal{V}\mathcal{T}}^{1}\\ \hline A_{\mathcal{T}\mathcal{V}}^{1}&A_{\mathcal{T}\mathcal{T}}^{1}\end{array}\right]\left[\begin{array}{c}V_{1\mathcal{V}}\\ \hline V_{1\mathcal{T}}\end{array}\right]\] where \(A_{u_{1}u_{2}}^{1}\) represents the quadrant of \(A_{1}\) corresponding to (\(Q_{1u_{1}}K_{1u_{2}}^{\top}\)). Note that we skip layer normalization layers in the discussion for simplicity.
Hence, we can simplify and write \(T_{1}\) as: \[T_{1}=\begin{bmatrix}A_{\mathcal{V}\mathcal{V}}^{1}V_{1\mathcal{V}}+A_{\mathcal{V}\mathcal{T}}^{1}V_{1\mathcal{T}}\\ A_{\mathcal{T}\mathcal{V}}^{1}V_{1\mathcal{V}}+A_{\mathcal{T}\mathcal{T}}^{1}V_{1\mathcal{T}}\end{bmatrix} \tag{1}\] We follow the same partition quadrants, as defined for \(A_{1}\) in \(M_{1}\), for \(A_{j}\) in the downstream fusion layer \(M_{j}\) and denote the quadrants as \(A_{u_{1}u_{2}}^{j}\). Next, we define the row-wise average-and-replace operator \(\mathcal{R}\), which operates on a quadrant of a matrix to replace the values in the quadrant with the mean value of the respective partitioned row. Note that the values in the other quadrants are unaffected. Given a matrix \(Z\) of size \(p\times q\), let \(W\) be the quadrant of \(Z\) with indices \((p_{1}^{W}\cdots p_{2}^{W})\times(q_{1}^{W}\cdots q_{2}^{W})\). We use \([\,]_{ij}\) to index the element in row \(i\) and column \(j\).
Then, \[[\mathcal{R}(Z,W)]_{ij}=\begin{cases}\dfrac{1}{q_{2}^{W}-q_{1}^{W}+1}\sum\limits_{k=q_{1}^{W}}^{q_{2}^{W}}[Z]_{ik}&\text{if }p_{1}^{W}\leq i\leq p_{2}^{W}\text{ and }q_{1}^{W}\leq j\leq q_{2}^{W}\\ [Z]_{ij}&\text{otherwise}\end{cases} \tag{2}\] We define QUAG as the operator \(\phi\) that successively applies \(\mathcal{R}\) to a list \(\mathbf{S}\) of quadrants of the attention matrix in every fusion block before token-mixing, that is, \(\phi(A_{i},V_{i},\mathbf{S})=\big(\mathcal{R}_{s_{k}}\circ\cdots\circ\mathcal{R}_{s_{1}}(A_{i})\big)V_{i}\) for \(\mathbf{S}=[s_{1},\ldots,s_{k}]\). Based on the choice of quadrants in \(\mathbf{S}\), we define the following short-circuit operations: 1) \(\mathbf{S}=[\mathcal{V}\mathcal{V},\mathcal{T}\mathcal{T}]\): \(\phi(A_{1},V_{1},[\mathcal{V}\mathcal{V},\mathcal{T}\mathcal{T}])\) is equivalent to scaling the average values of \(V_{1\mathcal{V}}\) and \(V_{1\mathcal{T}}\) in the upper and lower blocks of \(T_{1}\) respectively. Crossmodal attention is preserved while intra-modality attention is reduced to uniform attention; we term this **unimodal short-circuiting**, and it assesses the importance of intra-modality token-mixing. 2) \(\mathbf{S}=[\mathcal{V}\mathcal{T},\mathcal{T}\mathcal{V}]\): Parallel to unimodal short-circuiting, \(\phi(A_{1},V_{1},[\mathcal{V}\mathcal{T},\mathcal{T}\mathcal{V}])\) is equivalent to scaling the average values of \(V_{1\mathcal{T}}\) and \(V_{1\mathcal{V}}\) in the upper and lower blocks of \(T_{1}\) respectively. Video and text queries faithfully attend to video and text keys respectively, while crossmodal attention in video-text is reduced to uniform attention.
We term this effect **crossmodal short-circuiting**. It is complementary to unimodal short-circuiting and assesses the importance of inter-modality token-mixing. It probes whether the model actually learns by fusing the information between the two modalities or is largely driven by unimodal biases within the modalities. 3) \(\mathbf{S}=[\mathcal{V}\mathcal{V},\mathcal{T}\mathcal{V}]\): This is equivalent to removing the effect of individual video keys, resulting in averaging the components of the video modality in the upper and lower blocks of all \(T_{i}\). We call this **video short-circuiting**. Similarly, \(\mathbf{S}=[\mathcal{T}\mathcal{T},\mathcal{V}\mathcal{T}]\) leads to **text short-circuiting**. ### QUAG-attention Along with an assessment of multimodal understanding, QUAG enables a detailed analysis of token-mixing for identifying the sub-optimality of learnt representations. Hence, we use QUAG as an inspiration to propose QUAG-attention, a variant of self-attention that calculates similarities on already short-circuited sequences. Consider the case where the performance of \(M\) under the video short-circuiting operation is comparable to its performance without any perturbation. If the input of \(M\) is \(X_{0}=[F_{\mathcal{V}}(\mathcal{V})\,\|\,F_{\mathcal{T}}(\mathcal{T})]\), then during token-mixing we effectively average and scale the components in the upper partition (\([1,\cdots,L_{\mathcal{V}}]\times D\)) of the value matrix in all the fusion blocks. This can be efficiently approximated by replacing the entire upper block with a single row-wise average token using \(\mathcal{R}\) before projecting to the key and value domains. Note that the query remains unchanged. Similar to QUAG, we perform no fine-tuning and only modify the calculation of self-attention. We can generalize this to present new variants of self-attention, collectively known as QUAG-attention.
QUAG-attention operates by consistently averaging the corresponding modality blocks within the input of each fusion block. The averaging occurs prior to the transformation of the input into keys and values. Depending on the sub-optimalities in the representation, QUAG-attention can be applied to only text, only video, or both modalities. It reduces the number of key and value tokens from \((L_{\mathcal{V}}+L_{\mathcal{T}})\) to either \((L_{\mathcal{V}}+1)\) (text-average), \((L_{\mathcal{T}}+1)\) (video-average) or \(2\) (text-video-average). The numbers of tokens in the video and text modalities are generally different; due to block averaging, QUAG-attention reduces the effective number of tokens of the averaged modality in the key and value domains to one. This token-length mismatch would interfere with the softmax operation in attention. Hence, we scale the components of the dot-product similarity scores of the averaged keys by the logarithm of the number of constituting tokens (that is, the original number of tokens in the block). This is similar to the proportional attention used by Bolya et al. [39] for token-merging. ### Experimental setting **Models and Datasets**: We evaluate QUAG and QUAG-attention on the JustAsk [40] and FrozenBiLM [41] models. We evaluate on the following datasets: _(i)_ **ActivityNet-QA** [42]: contains 58K open-ended questions on 5.8K sampled videos from ActivityNet; _(ii)_ **MSRVTT-QA** [43]: contains 244K open-ended questions on 10K MSRVTT videos; _(iii)_ **NeXT-QA** [44]: contains 47K 5-way multiple-choice questions with one correct answer from 5.4K videos. We also report results on the **ATP-Hard** subset of NeXT-QA [33] that contains a higher concentration of temporally challenging data requiring multi-frame understanding. **Implementation Details**: All our experiments were performed on 4 NVIDIA A5000 GPUs. We use the official open-source code of the models on GitHub and modify only the self-attention modules.
We use the official evaluation code and checkpoints. For NeXT-QA, we use the official dataset and fine-tune the models with the default parameters. More details are in the supplementary material. ### Analysis The results are shown in Table 1. For comparison to the unperturbed model, we specify the baseline, language-only (performance without any video input) and video-only (performance without any text input) accuracies. Evidently, the high performance in the language-only setting, relative to the baseline, in most of the cases is indicative of a unimodal bias towards language. The performance of FrozenBiLM on ActivityNet-QA and MSRVTT-QA drops by over 10% (43.6% to 32.3%; 46.6% to 32.8%) with crossmodal short-circuiting, and by over 40% with both unimodal (43.6% to 2.4%; 46.6% to 1.0%) and text short-circuiting (43.6% to 1.4%; 46.6% to 1.0%). Furthermore, the drop is less than 1% under video short-circuiting (43.6% to 43.1%; 46.6% to 45.7%). This means that the model is leveraging unimodal interactions within the text and cross-modal interactions between video (query) and text (key). However, for NeXT-QA and ATP-Hard, since the performance does not drop under crossmodal short-circuiting, the model is not leveraging any crossmodal interactions. The performance drops to chance level, that is 20%, only under text and unimodal short-circuiting operations and not under video short-circuiting, which is indicative of a strong unimodal bias towards the text modality. Similarly, for JustAsk, the performance does not drop by more than 1% on any of the datasets under any short-circuiting operation. This shows that JustAsk achieves competitive performance on the benchmarks without even leveraging the rich representations within and between the modalities. We use the results from QUAG to apply QUAG-attention on FrozenBiLM and JustAsk, which reduces the number of multiplication operations by **13.6%** and **68.0%** respectively, for a less than 1% drop in performance consistently across all the datasets.
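As a rough illustration of where such savings come from, the fraction of attention multiplications retained after video-averaging can be estimated as below. The sequence lengths and dimensionality are made up for the example and are not the actual configurations of either model, so this sketch does not reproduce the reported 13.6%/68.0% figures; it only shows the shape of the computation.

```python
def attention_mults(n_q: int, n_kv: int, d: int) -> int:
    # Q @ K^T and A @ V each cost n_q * n_kv * d multiplications.
    return 2 * n_q * n_kv * d

# Hypothetical token counts: L_V video tokens, L_T text tokens.
L_V, L_T, D = 512, 32, 768

baseline = attention_mults(L_V + L_T, L_V + L_T, D)
# Video-average QUAG-attention keeps all queries but collapses the
# video block of keys/values to a single averaged token.
video_avg = attention_mults(L_V + L_T, L_T + 1, D)

print(f"retained fraction: {video_avg / baseline:.3f}")  # → retained fraction: 0.061
```

The retained fraction depends only on the key/value reduction, \((L_{\mathcal{T}}+1)/(L_{\mathcal{V}}+L_{\mathcal{T}})\) here, which is why the savings differ so much between models with different token budgets.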
However, this raises serious concerns, because models can learn to _hack_ their way around the accuracy metrics by leveraging shortcuts. The supposedly multimodal datasets contain biases, and the evaluation metrics do not penalize shortcut learning, providing false confidence about the abilities of the model. This raises the follow-up question: "**How can we confidently benchmark multimodal understanding in VideoQA models?**" ## 4 CLAVI We propose CLAVI as a diagnostic dataset with balanced counterfactuals in time for benchmarking joint coupled multimodal understanding in VideoQA. CLAVI consists of 6,018 videos and 114,342 questions (72,770 train and 41,572 test). The simple yes-no questions probe the absolute temporal location of a single action (beginning/end) or the occurrence sequence for a pair of non-overlapping actions (before/after). Using yes-no questions with balanced negative instances allows us to have questions that are **unambiguous**, and answers that are **mutually exclusive** and **equally informative**, so that they cannot be eliminated by prior biased knowledge. To create temporal negatives in the question domain, we replace _before_ with _after_ and _beginning_ with _end_, and vice versa. Further, we create temporal negatives in the video domain by swapping only the action segments in the video. We exhaustively consider all the compositions of temporal negatives in the video and question domains to create balanced negative instances for a systematic assessment of temporal understanding in videos.
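The two negation operations can be sketched as follows. This is a toy sketch on word lists and frame-index lists with our own helper names; the real pipeline operates on annotated Charades segments with boundary extensions, as described in the next subsection.

```python
def negate_question(question: str) -> str:
    # Swap the temporal words to create the counterfactual question.
    swaps = {"before": "after", "after": "before",
             "beginning": "end", "end": "beginning"}
    return " ".join(swaps.get(w, w) for w in question.split())

def swap_actions(frames, seg_a, seg_b):
    # Swap two non-overlapping (start, end) action segments, keeping
    # the inter-action region (and everything else) in place.
    (a0, a1), (b0, b1) = sorted([seg_a, seg_b])
    return (frames[:a0] + frames[b0:b1] + frames[a1:b0]
            + frames[a0:a1] + frames[b1:])

print(negate_question("Did running happen before jumping ?"))
# → Did running happen after jumping ?

video = list("xxAAAyyBBzz")  # 'A'/'B' mark the two action segments
print("".join(swap_actions(video, (2, 5), (7, 9))))  # → xxBByyAAAzz
```

Note that the video swap preserves the actions and the inter-action separation and only alters the chronology, so it can be composed freely with question negation to enumerate all four video-question combinations.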
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{JustAsk} & \multicolumn{4}{c}{FrozenBiLM} \\ \cline{2-9} & A-QA & M-QA & N-QA & ATP-H & A-QA & M-QA & N-QA & ATP-H \\ \hline Baseline & 38.7 & 41.8 & 53.8 & 44.0 & 43.6 & 46.6 & 55.8 & 55.7 \\ Language-only & 28.2 & 29.9 & 42.2 & 42.0 & 4.8 & 33.2 & 55.7 & 55.8 \\ Video-only & 2.6 & 6.7 & 39.1 & 23.0 & 0.1 & 0.0 & 20.2 & 20.1 \\ \hline SC: unimodal & 38.5 & 41.5 & 53.6 & 43.6 & 2.4 & 1.0 & 19.8 & 21.4 \\ SC: crossmodal & 38.3 & 41.3 & 53.5 & 44.3 & 32.3 & 32.8 & 56.0 & 55.6 \\ SC: video & 38.2 & 41.3 & 53.4 & 44.3 & 43.1 & 45.7 & 55.8 & 55.7 \\ SC: text & 38.6 & 41.5 & 53.7 & 43.6 & 1.4 & 1.0 & 20.5 & 21.1 \\ \hline QUAG-atten* & 38.0 & 41.0 & 53.5 & 44.1 & 43.0 & 45.8 & 55.6 & 55.9 \\ \hline \hline \end{tabular} \end{table} Table 1: Short-circuit (SC) and QUAG-attention accuracies for JustAsk and FrozenBiLM models on ActivityNet-QA (A-QA), MSRVTT-QA (M-QA), NeXT-QA (N-QA) and ATP-Hard (ATP-H) datasets (*video-average for FrozenBiLM and video-text-average for JustAsk)

### Dataset Creation We curate CLAVI by leveraging Charades-STA [45], containing 9,848 videos of humans performing actions based on a short script written by composing a predefined vocabulary that describes multiple daily actions. The videos are annotated with the start and end times of each action. The action category, the start, and the end of each action segment are referred to as the _action tuple_. Each video may contain more than two action tuples. We select pairs of action tuples based on the uniqueness of the action category and complete exclusivity (that is, no overlap between the occurrences of the actions). In a given selected pair of action tuples, the two actions along with the inter-action region constitute the video segment. We ensure that the two action categories in the pair are distinct.
Additionally, to address temporal boundary ambiguities in the annotations, we filter out segments where either of the selected action classes occurs in close proximity to the segment boundaries. We also extend the boundaries of the two actions in the pair. We define two boundary extensions: out-extension and in-extension. The out-extension encompasses regions that are not a part of the selected segment but extend outwards in both directions into the original video. Similarly, the in-extension extends inwards into the inter-action segment. To avoid temporal position bias [46; 47], the lengths of the extension boundaries are selected randomly. However, since the inter-action separation can affect action recognition [37], we constrain the inter-action separation in the original and the corresponding negative video to be the same. That is, the sum of the out-extension boundaries is always equal to the sum of the in-extension boundaries. We trim each boundary-extended contiguous segment from the original video to curate a positive video instance. To create the counterfactual video, we swap the boundary-extended action regions as shown in Figure 2. Note that the region between the boundary-extended actions is unaffected. The swapping operation preserves the actions but only alters their chronology, and can be applied independently of question negation (unlike manipulations like video reversal [36]). This independence provides fine-grained control to create a balanced benchmark for comprehensive analysis. We create three types of questions using pre-defined templates and action-class annotations: 1) **Existence (E) type**: The E-type questions for both the action classes follow the template _"Was someone <A>?"_, where <A> is one of the two action classes in the video. We use it as a positive control to verify if the model is able to correctly recognize the action classes.
We use the exact same question for the negative video instance as well, totalling 4 control (question, video, answer) instances for a Charades-extracted video segment. 2) **Beginning/End (BE) type**: BE-type questions probe the absolute location of the action in the video. The question is of the form _"Was the person <A> at the [beginning/end]?"_, where <A> is one of the two action classes in the video, and we select one of _beginning_ and _end_. Hence, for a given video and its negative, we have, in total, 8 instances of BE (question, video, answer) tuples combined. Note that the answer for a given BE question is complemented in the negative video. 3) **Before/After (BA) type**: The BA type comprises questions on the relative order of occurrence of the actions. The question is of the form _"Did <A1> happen [after/before] <A2>?"_, where <A1> and <A2> are the selected action classes. We consider all the permutations of action classes. Hence, we have a total of 8 instances of BA-type (question, video, answer) tuples per extracted video. Similar to the BE type, the answer is complemented in the negative video. Figure 2: Illustrative example of the creation of CLAVI. In the original video (V), the action “turning on the light” (Action A) follows “holding on to some clothes” (Action B). To create a counterfactual video (V’), we swap the boundary-extended action segments. The questions (Q), along with their counterfactuals (Q’), are curated for each of the videos. Note that the color of the question panel reflects the correct answer (green for “yes”, pink for “no”). Further, we add negative controls for the E and BA type questions. A negative control action is an action that does not occur in the video. Since we want to probe only for temporal understanding, we keep the negative control action class easy to detect by randomly selecting an action class that does not contain any of the objects or actions in the original video.
Hence, answering the negative control does not require understanding temporal cues in language and video, and it can be answered by object elimination. It serves the dual purpose of a sanity check of learning and a baseline for learning by temporal shortcuts. The answer to the negative control questions is always “no”. This adds two E-type and sixteen BA-type negative control questions for the video and its negative combined. Hence, including the negative control questions, each video in CLAVI is associated with 19 questions: 2 E, 4 BE, 4 BA, 1 E negative control and 8 BA negative controls. The ratio of “yes”:“no” answers is 6:13. We want to evaluate the sensitivity of the model to the temporal cues in language and video independently. Hence, we define consistent accuracies. If the model predicts the answers for a given question correctly for both the video and its counterfactual, the prediction is called video-consistent. Similarly, for a given video, if the model predicts the answers to the question and its counterfactual question correctly, the prediction is called text-consistent. The proportions of video- and question-consistent predictions are reported as **video-consistent accuracy (CAcc\({}_{\mathcal{V}}\))** and **text-consistent accuracy (CAcc\({}_{\mathcal{T}}\))** respectively. ### Experiment We fine-tune and evaluate 4 recent models: JustAsk [40], FrozenBiLM [41], Singularity-Temporal [32] and All-In-One+ [48] on CLAVI using the official fine-tuning instructions. We follow the same experimental settings as discussed in Section 3.5. To account for class imbalance in the answers, we use balanced accuracy for validation and testing. We summarize the results in Table 2. All the models have greater than 70% performance on the balanced accuracy metric. However, the consistent accuracies for both videos and text are lower than the balanced accuracy. We analyze the consistent accuracies on the counterfactual and control subsets.
The text- and video-consistent accuracies are greater than 80% on the control subsets. This is because, unlike the counterfactual subset, performance on the control subset does not require a coupled understanding of time in both the video and text domains. That is, irrespective of the context of the negative control action in the question and the location of the object in the frame sequence, the models can learn to answer it correctly by relying on object detection. However, for achieving high consistent accuracies on the counterfactual subset, the model needs to jointly understand the order of the events and the temporal words in the question, along with the order of the events in the video. We get significantly lower consistent accuracies (less than 15%) for the counterfactual subset, except for FrozenBiLM, which means that the other models are unable to learn and leverage joint multimodal representations on CLAVI. How can we be sure that FrozenBiLM performs well because it learns faithful multimodal representations and not some spurious shortcuts? We find that the video-average variant of QUAG-attention on FrozenBiLM causes CAcc\({}_{\mathcal{T}}\)-counter and CAcc\({}_{\mathcal{V}}\)-counter to drop to 28% and 3.55% respectively, while CAcc\({}_{\mathcal{T}}\)-control and CAcc\({}_{\mathcal{V}}\)-control remain close to their original near-perfect values. Since the other models have very low performance on CLAVI but are able to achieve high performance on standard benchmarks, the reliability of existing VideoQA datasets becomes questionable.
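The consistency metrics defined above can be computed from per-instance correctness as sketched below. This is our own minimal implementation, not the evaluation code of the paper; the `(video_id, question_id)` keys and the trailing-apostrophe convention for counterfactuals are illustrative, and only positive instances seed the pairs for simplicity.

```python
def consistent_accuracy(correct, axis):
    # correct: dict mapping (video_id, question_id) -> bool (prediction correct?).
    # axis='video' pairs each question with (v, v'); axis='text'
    # pairs each video with (q, q'). A pair counts as consistent
    # only if both members are answered correctly.
    pairs = []
    for (v, q), ok in correct.items():
        if v.endswith("'") or q.endswith("'"):
            continue  # count each pair once, from its positive member
        other = (v + "'", q) if axis == "video" else (v, q + "'")
        pairs.append(ok and correct[other])
    return sum(pairs) / len(pairs)

correct = {("v1", "q1"): True, ("v1'", "q1"): True,   # video-consistent
           ("v1", "q2"): True, ("v1'", "q2"): False,  # not video-consistent
           ("v1", "q1'"): False, ("v1", "q2'"): True}
print(consistent_accuracy(correct, axis="video"))  # → 0.5
```

Because a pair only counts when both members are correct, a model that answers every instance "yes" (or any other constant) scores zero on both consistency metrics, which is exactly the property that makes them shortcut-resistant.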
\begin{table} \begin{tabular}{l r r r r} \hline \hline Metric & JustAsk & FrozenBiLM & Singularity-T & All-In-One+ \\ \hline Balanced Acc & 72.2 & 82.0 & 75.8 & 73.8 \\ \hline CAcc\({}_{\mathcal{V}}\) & 56.1 & 75.3 & 51.2 & 54.7 \\ CAcc\({}_{\mathcal{T}}\) & 58.0 & 75.1 & 52.7 & 55.9 \\ \hline CAcc\({}_{\mathcal{V}}\)-control & 94.0 & 90.7 & 87.5 & 93.3 \\ CAcc\({}_{\mathcal{T}}\)-control & 100 & 98.0 & 95.0 & 100 \\ \hline CAcc\({}_{\mathcal{V}}\)-counter & 3.9 & 52.2 & 1.4 & 1.6 \\ CAcc\({}_{\mathcal{T}}\)-counter & 15.9 & 52.8 & 11.4 & 11.9 \\ \hline \hline \end{tabular} \end{table} Table 2: Test performance (% accuracy) on CLAVI after fine-tuning ## 5 Limitations and Future Work Our dataset is intentionally simple, so as to focus the benchmark only on simple temporal sequence understanding, which preempts spatio-temporal referential understanding. We plan to include more complex temporal organizations of action classes, like containment and partial overlap, that are defined using prepositions like _during_ and _while_ in future work. As the current state-of-the-art models catch up to our benchmark, our future plan is to curate a more complex dataset with more natural questions that include temporal referring expressions, with a similar balanced doubly-negative strategy. _Potential Negative Societal Impact_: We do not analyze the Charades videos, and hence neither the Charades-derived CLAVI, for systemic biases against race, gender and age, which might introduce unfair biases in the model. ## 6 Conclusion We introduced QUAG for conducting a systematic analysis of learnt multimodal representations. It provides interesting insights into _how_ the models are able to infer the answers from the videos and questions. The fine-grained analysis of the fusion of the modalities through QUAG helps to identify the sub-optimality in leveraging the multimodal representations of text and videos jointly.
Further, we introduce a new diagnostic benchmark, CLAVI, that penalizes the lack of joint multimodal understanding, which is overlooked by existing datasets. Our methods of probing the multimodal interactions and diagnosing through counterfactuals are generic and can be extended to other multimodal tasks to gain valuable insights. We are positive that CLAVI and QUAG can be employed to systematically evaluate, diagnose and ultimately improve not just the performance but also the representations learnt by VideoQA models. ## Appendix A QUAG ### Toy Example Consider the toy example in Fig 3. The left-most matrix is the input matrix. As per the definition of \(\phi\), we can write \(\phi(Z,[\mathcal{TT},\mathcal{VV}])=\mathcal{R}_{\mathcal{TT}}\circ\mathcal{R}_{\mathcal{VV}}(Z)\). We demonstrate the successive application of the \(\mathcal{R}\) operator in the example. Note that the padding is ignored; this is equivalent to applying \(\mathcal{R}\) to the padding-free sub-partition of the quadrant. Also, as illustrated in the example, since the quadrants cannot overlap, the sequence of application of \(\mathcal{R}\) does not matter. Figure 3: Toy example of \(\phi(Z,[\mathcal{TT},\mathcal{VV}])\), where \(Z\) is the input (left-most matrix), \(\mathcal{R}\) is the row-wise average-and-replace operator, and hatching denotes padding. The quadrants that are operated on are highlighted in a bright yellow box. Note that \(L_{\mathcal{V}}=3\) and \(L_{\mathcal{T}}=2\), such that video embeddings are pre-concatenated to question embeddings (as in the main manuscript). The cells are colored as per their quadrants (\(\mathcal{VV}:\text{red},\mathcal{VT}:\text{yellow},\mathcal{TV}:\text{blue},\mathcal{TT}:\text{green}\)) ### Code Below is the implementation of QUAG as an augmentation of the existing self-attention function. We use the row-wise average-and-replace operation in each if-clause, while ignoring the padding, to ablate the effect of the corresponding quadrant.
```
def self_attention(inputs, mask, dim_model, l_v, l_t, quads):
    # Inputs:
    #   inputs: Tensor of shape (batch_size, sequence_length, dim_model)
    #   mask: Tensor of shape (batch_size, sequence_length)
    #   dim_model: dimension of the model (e.g., 512)
    #   l_v: int, maximum length of video tokens
    #   l_t: int, maximum length of question tokens
    #   quads: list containing elements from {'VV', 'VT', 'TV', 'TT'}
    query = linear_transform_query(inputs)
    key = linear_transform_key(inputs)
    value = linear_transform_value(inputs)
    attention_scores = compute_attention_scores(query, key, mask)
    apply_quag(attention_scores, mask, l_v, l_t, quads)
    attended_output = apply_attention_scores(attention_scores, value)
    return attended_output

def compute_attention_scores(query, key, mask):
    scaled_dot_product = dot_product(query, key) / sqrt(dim_model)
    attention_scores = softmax(scaled_dot_product + (1 - mask) * -1e9)
    return attention_scores

def apply_quag(attention_scores, mask, l_v, l_t, quads):
    if 'VV' in quads:
        replace_with_rowwise_average(attention_scores[:, :l_v, :l_v], mask[:, :l_v, :l_v])
    if 'VT' in quads:
        replace_with_rowwise_average(attention_scores[:, :l_v, -l_t:], mask[:, :l_v, -l_t:])
    if 'TV' in quads:
        replace_with_rowwise_average(attention_scores[:, -l_t:, :l_v], mask[:, -l_t:, :l_v])
    if 'TT' in quads:
        replace_with_rowwise_average(attention_scores[:, -l_t:, -l_t:], mask[:, -l_t:, -l_t:])

def replace_with_rowwise_average(scores, mask):
    # Replace every unmasked entry of the quadrant with the mean of its row.
    rowwise_sum = sum(scores, axis=-1)
    rowwise_mean = rowwise_sum / sum(mask, axis=-1)
    expanded_rowwise_mean = expand_dims(rowwise_mean, axis=-1)
    replace_elements(scores, expanded_rowwise_mean)
```
Next, we provide the code for QUAG-attention. QUAG-attention modifies the existing self-attention block in the fusion module by replacing the block with the block average. We also demonstrate normalizing the softmax function so that each single averaged sequence is representative of the constituent sequences.
```
def quag_attention(inputs, mask, dim_model, l_v, l_t, type):
    # Inputs:
    #   inputs: Tensor of shape (batch_size, sequence_length, dim_model)
    #   mask: Tensor of shape (batch_size, sequence_length)
    #   dim_model: dimension of the model (e.g., 512)
    #   l_v: int, maximum length of video tokens
    #   l_t: int, maximum length of question tokens
    #   type: one of 'text', 'video', 'text-video'
    query = linear_transform_query(inputs)
    avg_input = compute_avg_input(inputs, l_v, l_t, type)
    key = linear_transform_key(avg_input)
    value = linear_transform_value(avg_input)
    mask = apply_mask(mask, l_v, l_t, type)
    scaled_dot_product = compute_scaled_dot_product(query, key, dim_model, mask)
    attention_scores = softmax(scaled_dot_product)
    attended_output = apply_attention_scores(attention_scores, value)
    return attended_output

def compute_avg_input(inputs, l_v, l_t, type):
    if type == "video":
        avg_upper_block = sum(inputs[:, :l_v, :], axis=-2)
        avg_upper_block = expand_dims(avg_upper_block, axis=1)
        avg_input = concatenate((avg_upper_block, inputs[:, -l_t:, :]), axis=1)
    elif type == "text":
        avg_lower_block = sum(inputs[:, -l_t:, :], axis=-2)
        avg_lower_block = expand_dims(avg_lower_block, axis=1)
        avg_input = concatenate((inputs[:, :l_v, :], avg_lower_block), axis=1)
    elif type == "text-video":
        avg_upper_block = sum(inputs[:, :l_v, :], axis=-2)
        avg_upper_block = expand_dims(avg_upper_block, axis=1)
        avg_lower_block = sum(inputs[:, -l_t:, :], axis=-2)
        avg_lower_block = expand_dims(avg_lower_block, axis=1)
        avg_input = concatenate((avg_upper_block, avg_lower_block), axis=1)
    return avg_input

def apply_mask(mask, l_v, l_t, type):
    mask = expand_dims(mask, axis=-1)
    mask = tile(mask, [1, 1, sequence_length])
    if "video" in type:
        # Scale the averaged video key's similarity by log of its token count.
        video_length = sum(mask[:, :l_v, 0], axis=1)
        video_length = expand_dims(video_length, axis=-1)
        scaled_dot_product[:, :, 0] = scaled_dot_product[:, :, 0] * log(video_length)
        upper_mask = ones(mask.shape[0], mask.shape[1], 1)
        mask = concatenate((upper_mask, mask[:, :, l_v:]), axis=-1)
    if "text" in type:
        # Scale the averaged text key's similarity by log of its token count.
        text_length = sum(mask[:, -l_t:, 0], axis=1)
        text_length = expand_dims(text_length, axis=-1)
        scaled_dot_product[:, :, -1] = scaled_dot_product[:, :, -1] * log(text_length)
        lower_mask = ones(mask.shape[0], mask.shape[1], 1)
        mask = concatenate((mask[:, :, :-l_t], lower_mask), axis=-1)
    return mask

def compute_scaled_dot_product(query, key, dim_model, mask):
    scaled_dot_product = dot_product(query, key) / sqrt(dim_model)
    return scaled_dot_product

def apply_attention_scores(attention_scores, value):
    attended_output = dot_product(attention_scores, value)
    return attended_output
```
### Experiment Details As mentioned in the main manuscript, we use the official checkpoints and code of JustAsk [website] and FrozenBiLM [website]. For all the experiments with JustAsk, we use the checkpoints of the model pretrained on HowToVQA69M and WebVidVQA3M. For FrozenBiLM, we use the WebVid10M-pretrained checkpoint for all our experiments. Since QUAG operates at inference time, we do not need to perform any training. Since the model owners do not report results on NeXT-QA, we fine-tune the models with the official recipe to achieve performance similar to that independently reported by others [49]. While FrozenBiLM can also take subtitles as input, for fair comparison, we do not pass them in any of the experiments. We provide the hardware details in the main manuscript. ## Appendix B CLAVI ### Comprehensive List of Questions We provide a comprehensive list of the questions for the example presented in Fig 2 of the main paper. We define the actions as: **A**: _turning on a light_, **B**: _holding some clothes_, **C**: _washing a mirror_, where action A occurs after action B in the original video and action C does not occur anywhere in the original video. Enlisted below are the questions and their negatives (Q and Q' respectively) for the video (V) (that is, event A occurs after event B). Note that the color of the panel is representative of the answer of the question (red: "yes", green: "no"). **E-Type**: **Q:** Was someone turning on a light? **Q:** Was someone holding some clothes?
**E-Type (negative control)**: **Q:** Was someone washing a mirror? **BE-Type**: **Q:** Was the person turning on a light at the **beginning**? **Q':** Was the person turning on a light at the **end**? **Q:** Was the person holding some clothes at the **end**? **Q':** Was the person holding some clothes at the **beginning**? **BA-Type**: **Q:** Did turning on a light happen **before** holding some clothes? **Q':** Did turning on a light happen **after** holding some clothes? **Q:** Did holding some clothes happen **after** turning on a light? **Q':** Did holding some clothes happen **before** turning on a light? **BA-Type (negative control)**: **Q:** Did washing a mirror happen **before** turning on a light? **Q':** Did washing a mirror happen **after** turning on a light? **Q:** Did turning on a light happen **before** washing a mirror? **Q':** Did turning on a light happen **after** washing a mirror? ### Dataset Metrics The duration of an individual action in CLAVI lies in the range [4.0 sec, 36.0 sec]; the average length of an action is \(\mathbf{7.7\pm 3.42}\) sec. The average video length is \(\mathbf{19.95\pm 7.34}\) sec and the range is [8.67 sec, 65.73 sec]. We plot the distributions of the action and video durations in Fig. 4. CLAVI consists of **141** unique action classes. Each action class is composed of a noun (object) and a verb. There are **37** unique noun classes and **28** unique verb classes. We show the frequency distributions of the action, verb and noun classes in Fig 5. ### Experiment Details As mentioned in the main manuscript, we use the official checkpoints, fine-tuning code and hyperparameters of JustAsk [website], FrozenBiLM [website], Singularity-Temporal [website], and All-in-one+ [website]. For JustAsk, we use the checkpoint of the model pretrained on HowToVQA69M and WebVidVQA3M. For FrozenBiLM, we use the WebVid10M-pretrained checkpoint.
All-in-one+ is pretrained on eight datasets comprising both images and videos (videos: WebVid, YT-Temporal-180M, HowTo100M and images: CC3M, CC12M, COCO, Visual Genome, SBU Captions). Singularity-Temporal is pretrained on a 17.28M image and video subset (images: COCO, Visual Genome, SBU Captions, CC3M, CC12M and videos: WebVid).

Figure 4: Distribution of length of (a) action and (b) video durations

Figure 5: Metrics of the dataset: (a) distribution of question types (same for training and testing set), (b) histogram plot of frequencies of action classes, (c) histogram plot of frequencies of verb classes, (d) histogram plot of frequencies of noun classes.
2307.07893
Anomaly Detection in Automated Fibre Placement: Learning with Data Limitations
Conventional defect detection systems in Automated Fibre Placement (AFP) typically rely on end-to-end supervised learning, necessitating a substantial number of labelled defective samples for effective training. However, the scarcity of such labelled data poses a challenge. To overcome this limitation, we present a comprehensive framework for defect detection and localization in Automated Fibre Placement. Our approach combines unsupervised deep learning and classical computer vision algorithms, eliminating the need for labelled data or manufacturing defect samples. It efficiently detects various surface issues while requiring fewer images of composite parts for training. Our framework employs an innovative sample extraction method leveraging AFP's inherent symmetry to expand the dataset. By inputting a depth map of the fibre layup surface, we extract local samples aligned with each composite strip (tow). These samples are processed through an autoencoder, trained on normal samples for precise reconstructions, highlighting anomalies through reconstruction errors. Aggregated values form an anomaly map for insightful visualization. The framework employs blob detection on this map to locate manufacturing defects. The experimental findings reveal that despite training the autoencoder with a limited number of images, our proposed method exhibits satisfactory detection accuracy and accurately identifies defect locations. Our framework demonstrates comparable performance to existing methods, while also offering the advantage of detecting all types of anomalies without relying on an extensive labelled dataset of defects.
Assef Ghamisi, Todd Charter, Li Ji, Maxime Rivard, Gil Lund, Homayoun Najjaran
2023-07-15T22:13:36Z
http://arxiv.org/abs/2307.07893v2
# Anomaly Detection in Automated Fibre Placement: Learning with Data Limitations

###### Abstract

Conventional defect detection systems in Automated Fibre Placement (AFP) typically rely on end-to-end supervised learning, necessitating a substantial number of labelled defective samples for effective training. However, the scarcity of such labelled data poses a challenge. To overcome this limitation, we present a comprehensive framework for defect detection and localization in Automated Fibre Placement. Our approach combines unsupervised deep learning and classical computer vision algorithms, eliminating the need for labelled data or manufacturing defect samples. It efficiently detects various surface issues while requiring fewer images of composite parts for training. Our framework employs an innovative sample extraction method leveraging AFP's inherent symmetry to expand the dataset. By inputting a depth map of the fibre layup surface, we extract local samples aligned with each composite strip (tow). These samples are processed through an autoencoder, trained on normal samples for precise reconstructions, highlighting anomalies through reconstruction errors. Aggregated values form an anomaly map for insightful visualization. The framework employs blob detection on this map to locate manufacturing defects. The experimental findings reveal that despite training the autoencoder with a limited number of images, our proposed method exhibits satisfactory detection accuracy and accurately identifies defect locations. Our framework demonstrates comparable performance to existing methods, while also offering the advantage of detecting all types of anomalies without relying on an extensive labelled dataset of defects.
Automated Fibre Placement, Anomaly Detection, Computer Vision, Unsupervised Learning, Convolutional Autoencoder

## 1 Introduction

Automated Fibre Placement (AFP) is an advanced composite manufacturing method for forming strong and lightweight components from strips of reinforced fibres known as tows. It is commonly used in quality-critical industries such as aerospace, where quality inspection and assurance are paramount [1, 2]. Most existing inspection techniques are implemented via manual human examination strip by strip, which is time-consuming, and thus a major production bottleneck [3]. To address this problem, recent research seeks to automate defect detection by using artificial intelligence (AI), computer vision (CV) and deep learning (DL) methodologies, reducing manual effort in AFP inspection and expediting production [4]. As a widely adopted methodology, supervised learning with explicit labelling has been applied for inspection in AFP and similar industries. These methods leverage convolutional neural networks (CNNs) to train from extensive labelled datasets of surface imaging of manufactured parts. These surface images take a variety of forms, including photographs [5], thermal images [6], and depth maps from many types of profilometry sensors [7, 8]. Sacco et al. (2020) [7] review the applications of machine learning in composite manufacturing processes and present a case study of state-of-the-art inspection software for AFP processes. The presented inspection method uses a deep convolutional neural network for semantic segmentation to classify defects on a per-pixel basis. They use about 800 scans, which is a relatively large dataset in this domain, yet the results show their method often misses some defects. Object detection is a well-developed subfield of computer vision in which models learn to recognize specific objects from a large dataset of labelled bounding boxes. Zhang et al. (2022) [8] offer an alternate approach using object detection.
This work implements a modified YOLOv5 network, which is a popular and commercially available object detection model. With a large dataset of 3000 images containing five different defect types, their proposed model demonstrated effective performance in detecting those five defect types. To achieve better real-time inspection, Meister et al. [9] evaluate the use of convolutional and recurrent neural network architecture for analyzing laser-scanned surfaces line by line as 1D signals. The different network structures are assessed on both real and synthetic datasets, demonstrating sufficient performance. Through experimentation, the authors evaluate the effects of training and testing on differing data types (real or synthetic), realizing that deviations between the training and testing domain have a greater potential to impact the results of their proposed 1D analysis methodology. These supervised learning methods, however, can be impractical in industrial projects because they require large, unambiguously labelled training datasets which are not typically available. There are three key reasons for the lack of labelled training data. First, collecting real-world data from production machines is expensive and disruptive to existing production schedules. Second, defects and anomalies in real-world production are rare. To collect enough defect and anomaly samples for the models to learn from, one must collect a very large amount of data, adding to the training cost. Third, real-world defects can take many different forms, and there is no universally accepted standard of how a human inspector should delineate and record anomalies and defects, not to mention how to label them for machine learning [10]. Industrial practitioners of AFP manufacturing typically rely on organizational-specific standards and individual professional practices to identify and correct AFP anomalies. 
These separate standards and practices cannot be easily translated into a well-defined labelling strategy for training sets. One solution is to create synthetic datasets which can be utilized for training supervised models. Using AI models for synthetic dataset creation has been explored in many applications with varying success. To address the limited defect data in AFP, [11] compares different data synthesis techniques for generating defect data useful to AFP inspection applications. The paper compares synthetic image datasets generated by various GAN-based models, even implementing a CNN-based defect classifier for analysis. However, there is a lack of comparison of the generated datasets to real-world data regarding image diversity and realism [11]. In a similar problem, for the task of machine fault detection where faulty data is scarce and normal data is abundant, [12] implemented a Conditional-GAN for generating fault data in different conditions from normal data samples. This paper provides a more in-depth analysis comparing generated data to real ground truth fault data, showcasing that generated feature distributions are similar to those of real faults. Such data generation approaches have demonstrated effectiveness, though they still require sufficiently large and representative datasets, and cannot generalize to unseen defects or anomalies. Circumventing the need for large datasets with labelled defects, unsupervised anomaly detection methods focus on learning the high-level representations of non-defective, normal data to identify outlying anomalies. In AFP manufacturing, normal data is typically well-defined thanks to the simple, invariant structure of the tows (narrow strips of composite material) and the limited layup patterns used. In this study, we utilize the non-defective samples, which constitute the majority of any real-world AFP dataset, to train a classifier capable of discerning normal and abnormal composite structures.
Autoencoders, known for their capability of reconstructing input data, have emerged as potent tools for detecting anomalies within images [13]. An autoencoder works by learning to encode normal input samples into a lower dimensional latent vector that can be decoded to reconstruct the original sample. The reconstruction is compared with the original, and a reconstruction error metric is calculated. When an abnormal sample is provided, the reconstruction errors will be high since the autoencoder was never trained with similar images. A threshold is then applied to the reconstruction errors to determine if the sample is normal, with a higher error indicating the sample is more likely to be abnormal or defective. There is a lack of research applying these methods in the AFP industry; however, the approach has demonstrated success in other similar defect detection tasks. Ulger et al. (2021) [14] employ convolutional and variational autoencoders (CAE and VAE) for solder joint defect detection. Reconstruction errors guide classification, applying a threshold to differentiate between normal and abnormal inputs. Tsai et al. (2021) [15] also employ CAE and VAE for textured surface defect detection, favouring the CAE in a Receiver Operating Characteristic (ROC) analysis. The proposed anomaly detector is tested on various textured and patterned surface types, including wood, liquid crystal displays, and fibreglass. Additionally, Chow et al. (2020) [16] implement a CAE for concrete defect detection, introducing a window-based approach for high-resolution images. Their window-based implementation enables pixel-wise anomaly maps to provide localization and contextual understanding of anomalies. We propose a comprehensive framework for anomaly detection in AFP based on the autoencoder methodology. Compared to the existing methods that require a large labelled dataset including manufacturing defects, our approach is compatible with a small training dataset of normal samples.
Our autoencoder-based anomaly detector uses data collected from the AFP setup shown in Figure 1(a). The autoencoder is trained on a collection of local samples taken from depth images of the composite carbon fibre surfaces. The depth images are obtained with an Optical Coherence Tomography (OCT) sensor installed on the AFP layup head, which captures high-resolution point clouds of the layup tow surfaces [17, 18]. A picture of the AFP head with the OCT sensor installed on it is provided in Figure 1(b). To simplify and enhance efficiency in processing information, the 3D point clouds are converted into 2D depth maps. Since the point cloud is a measurement of the surface elevation, it can be projected onto a 2D depth map without loss of information. These depth maps are presented as grayscale images, where the brightness of each pixel corresponds to the surface elevation on the composite part. Fig. 2 provides various representations of a sample composite part, highlighting that defects are less discernible in the photograph due to reflections and low visual contrast. However, defects become more evident in the depth map, facilitating defect detection. The work offers the following contributions.

1. We introduce a novel, end-to-end framework for anomaly detection and localization in Automated Fibre Placement, circumventing the data limitations in this industry. Our proposed framework has several advantages compared to the existing methods. It detects all types of anomalies, without the need for manual data labeling or defect samples. Also, it works with a small number of composite images.
2. Another major contribution of this work is an efficient data extraction methodology that can convert a limited number of composite images into a large dataset of local samples. Utilizing classical computer vision algorithms to detect the boundaries of composite tapes, this method generates a dataset that exploits the inherent symmetry of AFP composite materials.
3. We design and validate an autoencoder with the optimal size of the latent domain that can identify the best distinctive features to differentiate between normal and defective samples.
4. The proposed framework generates a map representing the local anomaly score of AFP-manufactured parts and visualizes this map on the original composite scan. This visual representation serves as a valuable tool for AFP technicians, aiding them in the identification and resolution of anomalies within the composite structure.

Figure 1: The industrial AFP setup is shown in two separate views. On the left is an overall view of the fibre placement machine (a), and on the right is a close-up shot of the robotic tool applying carbon fibre tows. The OCT sensor is visible to the upper left of the roller.

Figure 2: Different representations of a composite part manufactured with an AFP machine. Above (a) is a 3D point cloud measured using OCT technology. The bottom left (b) shows the depth map generated from the 3D point cloud, and a real photograph of the same composite part is shown in the bottom right (c) for comparison.

The organization of this manuscript is as follows: Section 2 describes the whole procedure of anomaly detection, including data preprocessing, training of the AI model, and implementation details. In Section 3, the evaluation results of both the anomaly detector and the localization system are presented. Finally, concluding remarks of this work are provided in Section 4.

## 2 Methodology

Figure 3 summarizes our anomaly detection framework. First, a composite scan is processed and local samples are extracted from the images. Then, the trained autoencoder generates an anomaly map, used to detect and locate the defects in the image.

### Data Preparation

The raw depth maps contain impulse artifacts, also known as salt and pepper noise, which can be detrimental.
To remove the noise, we used a median filter with the kernel size \(3\times 3\) applied to the whole depth image [19]. Compared to a Gaussian filter with a small radius (a low-pass filter) [20], a median filter has less risk of losing high-frequency features. Another data preparation step is needed because different raw depth-map images have inconsistent ranges of values depending on the distance of the laser origin to the composite surface. This effect is commonly caused when the OCT sensor is mounted to a fixed location behind the AFP head while scanning a contoured surface. These variations can cause undesired behaviour in the defect detection methodology. To address this, all the images undergo a min-max normalization so that the minimum depth value is mapped to zero and the maximum value is mapped to one. By applying this linear transformation, the visual contrast of the images is improved while keeping the original depth ratio. The normalization function is provided in Eq. 1, in which \(z_{i,j}\) is the original depth value, \(p_{i,j}\) is the normalized pixel value, and \(Z\) is the whole depth map matrix.

\[p_{i,j}=\frac{z_{i,j}-\min(Z)}{\max(Z)-\min(Z)} \tag{1}\]

Figure 3: An overview of the defect detection process shows the necessary steps.

### Local Sample Extraction

The training dataset used in this work is composed of depth maps from 42 non-defective composite surfaces. Developing an effective end-to-end network for defect detection using such a limited number of scans presents a significant challenge. However, we address this issue by leveraging the consistent uniformity along the composite tows and extracting cropped windows of the scans to form a dataset with many more localized samples. This is possible under the assumption that each cropped section conforms to a similar distribution, given that defect-free tows should exhibit minimal to no disparity across their segments.
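As a concrete illustration, the two preprocessing steps described above (a \(3\times 3\) median filter followed by the min-max normalization of Eq. 1) might be sketched as follows; `preprocess_depth_map` is an illustrative name rather than code from the paper, and SciPy's `median_filter` is assumed to be available:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_depth_map(Z):
    """Denoise a raw depth map with a 3x3 median filter, then apply
    the min-max normalization of Eq. 1 so values span [0, 1]."""
    Z = median_filter(np.asarray(Z, dtype=float), size=3)
    return (Z - Z.min()) / (Z.max() - Z.min())
```

Because the transformation is linear, depth ratios within a scan are preserved while the contrast is stretched to the full range.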
Consequently, the analysis of smaller regions allows us to employ a more compact neural network to learn from a broader spectrum of local samples rather than relying on a larger network to process full scans. Moreover, by extracting localized samples along the tows, our network gains exposure to a greater variation of tow structure. One of the most basic methods to detect local objects in an image is to move a window over the image and classify the smaller region inside the window [21]. This method is known as the sliding window in computer vision literature and has its own limitations. For example, the scale of the object may vary depending on how close the object is to the camera. Consequently, multiple sizes of sliding windows are required, which can be computationally expensive. In our current dataset, on the other hand, most of the defects are localized to one tow, and therefore approximately the same relative size, and there is no wide variation in perspective or orientation of the objects. Consequently, only one scale of sliding window is sufficient for this use case. This also helps to keep computational complexity relatively low for this approach. Besides, there is preliminary knowledge of the composite part scans, like the number of tows and the general direction they follow. This enables a customized sliding window method that makes use of the known information. Moreover, the depth maps generated from our OCT scans have a specific structure. For example, all tows are placed straight and horizontal in the images, the number of tows is known, and their width is also known. To incorporate this predetermined knowledge, a line detector algorithm based on the Hough Transform [22, 23] detects the vertical and horizontal edges of the tows. After detecting the boundaries of the tows, their centers (centerlines) are calculated by averaging each two consecutive horizontal lines, bounded within the detected vertical lines.
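In code, the averaging step might look as follows; `estimate_centerlines` is a hypothetical helper (the paper's implementation is not public), and the y-coordinates of the horizontal tow boundaries detected by the Hough transform are assumed to be given:

```python
import numpy as np

def estimate_centerlines(boundary_ys):
    """Given the y-coordinates of horizontal tow boundary lines,
    return one centerline per tow as the average of every two
    consecutive boundaries."""
    ys = np.sort(np.asarray(boundary_ys, dtype=float))
    return (ys[:-1] + ys[1:]) / 2.0
```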
This process of centerline detection is illustrated in Figure 4. Finding the center of the tows makes it possible to directly focus on the regions that are candidates for defects instead of scanning the whole image. In other words, it creates a skeleton that directs and constrains the region of interest. This can reduce additional effort on the classifier side. Based on the detected centerlines, a square window slides across each tow to extract cropped regions. We select a window size of \(32\times 32\) pixels to cover approximately \(1.5\) times the width of the composite tapes. In this implementation, a stride of \(8\) pixels is used to move the window and sample the information cropped inside. This combination of window size and stride allows enough overlap between nearby samples while keeping the samples sufficiently distinctive. Some of the extracted samples are presented in Figure 5. At the time of inference, each window that is detected as an anomaly is considered to contain a defect, while the windows that are not anomalies are assumed to have normal tow structures. The next sections explain the approach to distinguishing between normal and abnormal samples.

Figure 4: The centerline detection procedure contains two main steps: detecting horizontal and vertical lines (a) and estimating tow centerlines from the detected lines (b).

### Anomaly Detection

As mentioned in Section 1, autoencoders have shown great success at identifying anomalies in images. An autoencoder is an unsupervised learning model that reconstructs the given input by learning to minimize the error between the input and reconstructed output. They do this by encoding the input to a vector of latent features, also known as the bottleneck, and then decoding those latent features to reconstruct the input. Convolutional Autoencoders (CAEs) are a group of autoencoders that use convolution layers in their network structure.
Convolutional neural networks (CNNs) are more popular for image-based autoencoders than basic fully-connected networks. This is because CNNs incorporate receptive fields using kernels that maintain the spatial relationships of the data. CNNs are also computationally efficient with sparse connectivity of neurons. If only normal samples are used to train the autoencoder, it will be able to reconstruct similar normal samples accurately, and the reconstruction results for abnormal samples will be poor. Therefore, reconstruction error can be used as an indicator of how anomalous each input is. For inference, each cropped window of a composite material depth map is fed into the trained autoencoder. The reconstruction error of each window is then used as an anomaly score to create an anomaly map for the entire image. The reconstruction error of a window centred at \((x,y)\) is calculated using Eq. 2, in which \(p_{i,j}\) and \(\hat{p}_{i,j}\) are the pixel values of the input and reconstructed output, respectively, and \(b\) is half of the size of each window.

\[m_{x,y}=\frac{1}{(2b)^{2}}\sum_{i=x-b}^{x+b-1}\sum_{j=y-b}^{y+b-1}[p_{i,j}-\hat{p}_{i,j}]^{2} \tag{2}\]

In this work, a CAE is designed and used as the anomaly detector. The design incorporates symmetric encoder and decoder structures shown in Figures 6(a) and 6(b), respectively. For training the model, mean squared error is employed as the loss function. Although the full method uses continuous-valued anomaly maps to identify the defects rather than a binary prediction, binary classification can still be useful in validating the model performance. For this, a threshold parameter is introduced to classify samples based on their reconstruction error. To select the threshold value, a Receiver Operating Characteristic (ROC) curve is applied. The ROC curve plots the true positive rate against the false positive rate while varying the threshold value.
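A minimal sketch of this window-wise scoring (Eq. 2) is given below; the \(32\times 32\) window (\(b=16\)) and stride of 8 follow the paper, while `autoencoder` stands for any callable that reconstructs a patch, which is an assumption of this sketch rather than the paper's actual model:

```python
import numpy as np

def window_mse(patch, recon):
    """Eq. 2: mean squared reconstruction error over a (2b)x(2b) window."""
    patch = np.asarray(patch, dtype=float)
    recon = np.asarray(recon, dtype=float)
    return float(np.mean((patch - recon) ** 2))

def tow_anomaly_scores(image, centerline_y, autoencoder, b=16, stride=8):
    """Slide a 2b x 2b window along one tow centerline and score each crop."""
    scores = []
    for x in range(b, image.shape[1] - b + 1, stride):
        patch = image[centerline_y - b:centerline_y + b, x - b:x + b]
        scores.append(window_mse(patch, autoencoder(patch)))
    return np.array(scores)
```

Concatenating the per-tow score arrays yields the anomaly map that is later used for blob detection.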
In an ideal case, the selected threshold would give a true positive rate of 1 and a false positive rate of 0. In the ROC plot, this corresponds to the upper left corner, and hence the best threshold value is selected from the curve at the point closest to that corner.

### Defect Localization

The anomaly detection generates an array of anomaly scores for each tow, which can be considered as a 1D digital signal. Any area of this signal with a concentration of high values indicates the presence of a defect. In the computer vision literature, these areas are called blobs [24, 25]. For detecting the blobs, we use the Difference of Gaussians (DoG) method [26]. In this approach, the signal \(f(x)\) is filtered using Gaussian kernels with increasing values of the standard deviation \(\sigma\), as described in Eq. 3. Then, the subtractions of each two successively filtered signals are calculated. The local maxima of \(g(\sigma,x)\) represent the blobs. In such maxima points, \(x\) and \(\sigma\) correspond to the location and characteristic scale (size) of the blob, respectively.

\[g(\sigma,x)=\sigma^{2}\frac{\partial^{2}n_{\sigma}}{\partial x^{2}}*f(x) \tag{3}\]

For each defect, two parameters are detected: radius and center. With this information, the detected blobs can be transferred from the anomaly map to image space.

Figure 5: A dataset is created from cropped sections of the depth maps, using the sliding window method. Normal samples are shown on top and abnormal samples are shown below.

## 3 Results and Discussion

The performance of the anomaly detection and localization system depends on two factors. First is the number of samples the anomaly detector correctly classifies as normal or abnormal. Second is the size and location accuracy of the predicted defects. This section evaluates these two aspects using a test dataset with an additional two composite surfaces containing defects.
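The 1D Difference-of-Gaussians detection behind Eq. 3 can be approximated as below; the sigma values and threshold here are illustrative choices for this sketch, not the paper's settings, and SciPy's `gaussian_filter1d` is assumed:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_blobs_1d(signal, sigmas=(1, 2, 4, 8), threshold=0.1):
    """Approximate Eq. 3 with differences of Gaussian-smoothed signals.

    Returns (position, sigma) pairs at local maxima of the DoG response."""
    signal = np.asarray(signal, dtype=float)
    smoothed = np.stack([gaussian_filter1d(signal, s) for s in sigmas])
    dog = smoothed[:-1] - smoothed[1:]  # finer scale minus coarser scale
    blobs = []
    for k in range(dog.shape[0]):
        row = dog[k]
        for x in range(1, len(row) - 1):
            # keep local maxima with a sufficiently strong response
            if row[x] > threshold and row[x] >= row[x - 1] and row[x] >= row[x + 1]:
                blobs.append((x, sigmas[k]))
    return blobs
```

The position gives the blob center along the tow, and the sigma at which the response peaks indicates the defect's extent.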
### Anomaly Detection

The network structure proposed in Figure 6 is implemented using three different latent dimensions, 2, 16, and 128, for comparison. Each network is trained on a dataset consisting of 27,406 normal samples only. An Adam optimizer is employed with an MSE loss function to train the network. The batch size is set to 128. Each autoencoder undergoes training for 50 epochs, completing in under 5 minutes on a computer with the following specifications:

* **Processor (CPU):** Intel(R) Xeon(R) E5-1607 v4 @ 3.10GHz
* **Graphics (GPU):** NVIDIA GeForce GTX 1080
* **Memory (RAM):** 32.0 GB

Figure 6: A graphic depicting the network structure of the proposed autoencoder. Above is the encoder structure (a), and below is the decoder structure (b).

The curves in Figure 7 demonstrate the training losses of each autoencoder. Comparing the training loss curves shows that the models' reconstruction ability improves with a higher dimensional latent space. The curves also show that the models learn relatively quickly, with tiny improvements in the later epochs. Reconstruction results for the autoencoders are demonstrated in Figure 8. The original samples are randomly selected from normal and abnormal classes in the test set. These results clearly show the improved reconstruction performance with a higher dimensional latent space. It shows that the autoencoder with a 128-dimensional latent vector is able to produce good reconstructions for both normal and abnormal samples. The 16-dimensional autoencoder, on the other hand, produces relatively good reconstructions for normal samples and poorer reconstructions of defect samples. This is ideal for the classification method to distinguish anomalies. Finally, the autoencoder with only a 2-dimensional latent space is unable to make good reconstructions for any of the input samples. Figure 9 shows comparisons of the reconstruction error.
In Figure 9(a) the distributions of mean squared error are shown for the 16-dimensional autoencoder on the training set and test set. Note that the training set only includes normal samples, whereas the test set contains both normal and abnormal samples, separated accordingly. As the figure suggests, the normal samples have a similar distribution in both the training and test sets. On the other hand, the abnormal samples have a generally higher MSE with a slight overlap with the normal sample distribution. In an ideal case, if there were no overlap between these two distributions we could find a perfect threshold as the decision boundary to classify the samples into normal and abnormal categories. With the existing overlap, however, an ROC curve can help to select the decision boundary that makes the best trade-off between true and false positive rates. Taking a closer look at the difference between the three autoencoders, Figure 9(b) shows boxplots of the MSE for normal and abnormal samples in the test set, using different numbers of latent features. Here it shows how separable the two classes are based on reconstruction error alone. For the 2-dimensional autoencoder, the interquartile ranges are separable, but there is a significant overlap when considering the whiskers. For the 16-dimensional autoencoder, the separation is greatly improved with minimal overlap between the whiskers. The 128-dimensional autoencoder, however, does not show significant separation, and it would be impossible to accurately classify the two classes based on MSE alone. Figure 10 shows the ROC curves for each of the three autoencoders and the selected best threshold points shown as stars.

Figure 7: The training MSE losses of the three autoencoders are plotted in comparison over 50 epochs.

Figure 8: The resulting reconstructions from the autoencoders with various latent sizes are compared for both normal and abnormal test samples.
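The corner-distance rule for picking those threshold points could be implemented as follows; this is a plain NumPy sketch rather than the authors' code, with abnormal samples labelled 1 and normal samples labelled 0:

```python
import numpy as np

def best_threshold(errors, labels):
    """Return the reconstruction-error threshold whose ROC point
    (FPR, TPR) lies closest to the ideal corner (0, 1)."""
    errors = np.asarray(errors, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_t, best_dist = None, np.inf
    for t in np.unique(errors):
        pred = errors >= t                 # flag as abnormal
        tpr = pred[labels == 1].mean()     # true positive rate
        fpr = pred[labels == 0].mean()     # false positive rate
        dist = np.hypot(fpr, 1.0 - tpr)
        if dist < best_dist:
            best_t, best_dist = t, dist
    return best_t
```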
These results further demonstrate that classification performance does not correspond with reconstruction performance, as the 2D and 128D autoencoders' ROC curves show worse classification performance than the 16D model. The best-performing model is the autoencoder with a 16-dimensional latent space, achieving a high true positive rate with a low false positive rate. Classification results of each autoencoder with the threshold values selected using the ROC curves are summarised in Table 1. The table reports precision, recall, F1 score, area under the ROC curve (AUC), and the selected threshold for each classifier. Figure 11 shows the effect of the latent vector size on the performance of the model. Figure 11(a) suggests that larger latent dimensions produce lower reconstruction errors. However, an accurate classification model does not require the lowest reconstruction error, but a moderate reconstruction that leads to better classification performance. In Figure 11(b) the best latent dimension is found by maximizing the AUC of the ROC curve while varying the latent dimension. Figure 12 shows the classification confusion matrix when using the optimal latent size. The off-diagonal values in this matrix are low, which shows that most of the samples from both normal and abnormal classes are classified correctly.

Figure 9: Distributions of MSE for different input types and latent sizes are presented for comparison.

Figure 10: ROC curves of the test set are plotted for the three autoencoder classifiers.

### Defect Localization

Figure 13 illustrates the results of the anomaly detector on a 2D depth map. The colour of each point indicates the normalized MSE for reconstructing a small window around the point with the anomaly detector. As can be seen, the defective areas have a large density of points with a higher MSE. This information can be used to detect these areas while ignoring individual outlier values.
In Figure 14 the process of detecting the defects from the anomaly map is illustrated. The elevation in each curve represents the MSE values for one tow (represented by colour in the previous figure). The arrows show the detected blobs after applying the Difference of Gaussians method. It is observed that only the areas with an extended length of high MSE values are detected as blobs. Figure 15 shows the final output of the computer vision pipeline, comparing the annotated defect bounding boxes (ground truth) with the predicted bounding boxes.

Figure 13: An anomaly map is generated from the MSE of individual cropped windows.

Figure 14: Anomaly scores are visualized as 1D signals for blob detection.

### Qualitative Comparison

The proposed framework is unique and, to the best of our knowledge, no other studies have implemented an end-to-end unsupervised defect detection method for AFP inspection. Unfortunately, there are no publicly available datasets in this domain to serve as a benchmark for AFP inspection tasks. Additionally, the dataset used in this work is insufficient for training supervised learning models, which constitute the majority of current studies in this field. For these reasons, an explicit quantitative comparison of our method with other state-of-the-art approaches is not possible. However, a qualitative comparison of the most relevant studies is presented in Table 2, showcasing the advantages of our framework. In regard to other defect detection methods, the main advantages of our proposed approach stem from the unsupervised learning process, which enables learning with data limitations. Foremost, our method detects all types of surface anomalies in AFP, whereas existing methods are limited to specific defect types. Additionally, unlike other methods, ours does not require labelling, which is time-consuming and prone to errors. Moreover, our proposed framework works with fewer composite scans and does not need any samples of defects.
Some methods implement semantic segmentation, which requires explicit pixel-wise labelling. This is not necessarily needed, as our method provides sufficient localization with bounding boxes and anomaly maps. Unlike other methods, ours does not classify the defects; however, a separate classification module could easily be integrated using the detected bounding boxes. Besides, in an industry where the majority of the inspected parts are non-defective, directly detecting the defective parts can save most of the inspection effort. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline Reference & & & & & & & & & & Results & \\ \cline{5-12} & & & & & & & & & & & & & & \\ \hline Meister and Wermes (2023) [9] & ✓ & ✓ & - & - & - & - & - & 469 & Acc\({}^{1}\): 95.12\% & - \\ Zhang et al. (2022) [8] & ✓ & ✓ & ✓ & - & - & - & - & 3,000 & - & mAP\({}^{4}\): 93.1\% \\ Tang et al. (2022) [27] & ✓ & ✓ & ✓ & ✓ & - & - & - & 43 & Pr\({}^{2}\): 84.4\%, Re\({}^{3}\): 77.1\% & mIoU\({}^{5}\): 0.776 \\ Sacco et al. (2020) [7] & ✓ & ✓ & ✓ & ✓ & - & - & - & 800 & Average Acc: 49.1\% & - \\ Schmidt et al. (2019) [6] & ✓ & ✓ & - & - & - & - & - & 12,000 & Acc: \(\langle 92^{2}\%\) & - \\ This work & ✓ & - & ✓ & - & ✓ & ✓ & ✓ & 44 & Acc: 98.7\% & mIoU: 0.708 \\ \hline \hline \end{tabular} 1: Accuracy, 2: Precision, 3: Recall, 4: mean Average Precision, 5: mean Intersection over Union \end{table} Table 2: A Comparison of Existing Learning-based AFP Defect Detection Methods with the Proposed Framework Figure 15: Predicted bounding boxes are displayed on the original depth map in comparison with the ground truth bounding boxes. Also, the values of Intersection over Union are displayed. ## 4 Conclusions This paper introduces a practical and novel method for the inspection of composite materials manufactured by Automated Fibre Placement (AFP).
The AFP process is susceptible to various types of defects which can significantly impact the final product's quality, necessitating thorough inspection of the composite parts. Manual human inspection has traditionally been employed for this purpose, but it is time-consuming, labour-intensive, and prone to human errors. To enhance the efficiency, accuracy and reliability of AFP, the development of an automated inspection system is crucial. Current inspection procedures mostly utilize profilometry technologies like laser scanning, thermal imaging, and optical sensors to generate visual measurements of the part's surface. The data used in this work is obtained from a laser scanner that operates based on OCT technology, though the framework presented is general and can be adapted to other types of profilometry data. In AFP inspection, robust and generalized supervised learning methods are infeasible due to limitations in available labelled data. Anomaly detection methods, on the other hand, can circumvent this challenge by focusing on learning the structure of normal samples to identify any abnormalities. The proposed computer vision framework detects individual tows in AFP composites and creates a dataset of sub-images by sliding a window along the center of each tow. The extracted data is then used to train an autoencoder designed to detect anomalies. Using the same sliding window procedure, the autoencoder produces anomaly scores for local regions of the composite part. These scores are aggregated to form an anomaly map of the full image. This anomaly map can then be used as an explicit indication tool for an operator. We further process this anomaly map using a 1D blob detection algorithm to generate bounding boxes around defects. Compared to other state-of-the-art automated inspection methods in AFP, our approach offers several advantages.
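The sliding-window scoring and aggregation just summarized can be sketched as follows. The window size, stride, and the stand-in scoring function are illustrative assumptions; in the real pipeline the score would be the autoencoder's reconstruction MSE for each window.

```python
import numpy as np

def anomaly_map(tow, score_fn, window=16, stride=4):
    """Slide a window along a 1D tow profile, score each window,
    and aggregate overlapping scores into a per-sample anomaly map."""
    scores = np.zeros_like(tow, dtype=float)
    counts = np.zeros_like(tow, dtype=float)
    for start in range(0, len(tow) - window + 1, stride):
        s = score_fn(tow[start:start + window])
        scores[start:start + window] += s
        counts[start:start + window] += 1
    return scores / np.maximum(counts, 1)   # mean score where windows overlap

# Stand-in for the autoencoder: flag windows deviating from a flat profile.
def mock_mse(w):
    return float(np.mean((w - w.mean()) ** 2))

tow = np.zeros(128)
tow[40:56] = 1.0            # a surface deviation along the tow
amap = anomaly_map(tow, mock_mse)
print(int(np.argmax(amap)))  # index of the strongest anomaly response
```

Averaging overlapping window scores smooths the map and suppresses single-window noise, which is what lets the subsequent blob detection ignore isolated outliers.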
Since the autoencoder learns the inherent structure of normal tows, it is capable of detecting all anomalies, unlike other methods which can only detect defects specific to their training data. Furthermore, our suggested framework operates with fewer composite scans and eliminates the requirement for defect samples. For verification purposes, the autoencoder is evaluated in a classification task, achieving over 98% classification accuracy. Additionally, the overall framework is implemented on a set of test samples where bounding boxes generated by the method achieve an Intersection over Union (IoU) of 0.708. This demonstrates sufficient accuracy in the localization of detected defects. This paper outlines a novel defect detection approach for AFP and emphasizes its practicality, particularly in addressing data limitations. There are several potential directions for future research to enhance the inspection system's capabilities and extend its relevance to broader domains. To improve dataset quality and quantity, we suggest investigating data engineering techniques like data augmentation and synthetic data generation. Additionally, while the current system identifies anomalies, it lacks the ability to classify specific defect types. To address this, we recommend utilizing the generated bounding boxes to collect training data for developing a classification model. We also recommend adapting our framework for use in industries that share similar tape-by-tape structures. This can lead to enhancements in defect identification and quality assurance across different sectors. ## Statements and Declarations ### Acknowledgements We would like to acknowledge the financial support of LlamaZOO Interactive Inc. and Natural Sciences and Engineering Research Council (NSERC) Canada under the Alliance Grant ALLRP 567583 - 21.
In addition, we would like to recognize the research collaboration of LlamaZOO Interactive Inc., the National Research Council of Canada (NRC), and Fives Lund LLC. ### Conflict of Interest The authors declare that they have no conflicting interests or financial motives related to the work in this article. ### Data Availability The data generated and analyzed in this paper is proprietary and, due to its commercialization potential, is not made publicly available. The information is protected to maintain its commercial value. We regret any inconvenience this may cause and appreciate your understanding of the importance of preserving its proprietary nature.
2308.00979
Fully Dynamic Maximum Independent Sets of Disks in Polylogarithmic Update Time
A fundamental question is whether one can maintain a maximum independent set in polylogarithmic update time for a dynamic collection of geometric objects in Euclidean space. Already, for a set of intervals, it is known that no dynamic algorithm can maintain an exact maximum independent set in sublinear update time. Therefore, the typical objective is to explore the trade-off between update time and solution size. Substantial efforts have been made in recent years to understand this question for various families of geometric objects, such as intervals, hypercubes, hyperrectangles, and fat objects. We present the first fully dynamic approximation algorithm for disks of arbitrary radii in the plane that maintains a constant-factor approximate maximum independent set in polylogarithmic expected amortized update time. Moreover, for a fully dynamic set of $n$ disks of unit radius in the plane, we show that a $12$-approximate maximum independent set can be maintained with worst-case update time $O(\log n)$, and optimal output-sensitive reporting. This result generalizes to fat objects of comparable sizes in any fixed dimension $d$, where the approximation ratio depends on the dimension and the fatness parameter. Further, we note that, even for a dynamic set of disks of unit radius in the plane, it is impossible to maintain $O(1+\varepsilon)$-approximate maximum independent set in truly sublinear update time, under standard complexity assumptions.
Sujoy Bhore, Martin Nöllenburg, Csaba D. Tóth, Jules Wulms
2023-08-02T07:22:21Z
http://arxiv.org/abs/2308.00979v2
# Fully Dynamic Maximum Independent Sets of Disks ###### Abstract A fundamental question in computational geometry is whether, for a dynamic collection of geometric objects in Euclidean space, it is possible to maintain a maximum independent set in polylogarithmic update time. Already, for a set of intervals, it is known that no dynamic algorithm can maintain an exact maximum independent set with sublinear update time. Therefore, the typical objective is to explore the trade-off between update time and solution size. Substantial efforts have been made in recent years to understand this question for various families of geometric objects, such as intervals, hypercubes, hyperrectangles, and fat objects. We present the first fully dynamic approximation algorithm for disks of arbitrary radii in the plane that maintains a constant-factor approximate maximum independent set in polylogarithmic update time. First, we show that for a fully dynamic set of \(n\) unit disks in the plane, a \(12\)-approximate maximum independent set can be maintained with worst-case update time \(O(\log^{2}n)\), and optimal output-sensitive reporting. Moreover, this result generalizes to fat objects of comparable sizes in any fixed dimension \(d\), where the approximation ratio depends on the dimension and the fatness parameter. Our main result is that for a fully dynamic set of disks of arbitrary radii in the plane, an \(O(1)\)-approximate maximum independent set can be maintained in polylogarithmic expected amortized update time. Our results build on two recent technical tools: (i) The MIX algorithm by Cardinal et al. (ESA 2021) that can smoothly transition from one independent set to another; hence it suffices to maintain a family of independent sets where the largest one is a constant-factor approximation of a maximum independent set. (ii) A dynamic nearest/farthest neighbor data structure for disks by Kaplan et al.
(DCG 2020) and Liu (SICOMP 2022), which generalizes the dynamic convex hull data structure by Chan (JACM 2010), and allows us to quickly find a "replacement" disk (if any) when a disk in one of our independent sets is deleted. ## 1 Introduction The maximum independent set (MIS) problem is one of the most fundamental problems in theoretical computer science, and it is one of Karp's 21 classical NP-complete problems [14]. In the MIS problem, we are given a graph \(G=(V,E)\), and the objective is to choose a subset of the vertices \(S\subseteq V\) of maximum cardinality such that no two vertices in \(S\) are adjacent. The intractability of MIS carries over even under strong algorithmic paradigms. For instance, it is known to be hard to approximate, i.e., no polynomial-time algorithm can achieve an approximation factor of \(n^{1-\varepsilon}\) (for \(|V|=n\) and a constant \(\varepsilon>0\)) unless P = ZPP [20]. In fact, even if the maximum degree of the input graph is bounded by \(3\), no polynomial-time approximation scheme (PTAS) is possible [1]. **Geometric Independent Set.** In geometric settings, the input to the MIS problem is a collection \(\mathcal{L}=\{\ell_{1},\ldots,\ell_{n}\}\) of geometric objects, e.g., intervals, disks, squares, rectangles, etc., and we wish to compute a maximum independent set in their intersection graph \(G\). That is, each vertex in \(G\) corresponds to an object in \(\mathcal{L}\), and two vertices form an edge if and only if the two corresponding objects intersect. The objective is to choose a maximum-cardinality subset \(\mathcal{L}^{\prime}\subseteq\mathcal{L}\) of independent (i.e., pairwise disjoint) objects. A large body of work has been devoted to the MIS problem in geometric settings, due to its wide range of applications in scheduling [1], VLSI design [14], map labeling [1], data mining [11, 1], and many others. Stronger theoretical results are known for the MIS problem in the geometric setting, in comparison to general graphs.
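The static notions above (disk intersection graph, independent set) can be illustrated in a few lines. The smallest-disk-first greedy below is a well-known constant-factor approximation for disks in the static setting; it is only an illustration of the problem, not the dynamic algorithm of this paper.

```python
import math

def intersects(d1, d2):
    """Two closed disks (x, y, r) intersect iff the distance between
    their centers is at most the sum of their radii."""
    (x1, y1, r1), (x2, y2, r2) = d1, d2
    return math.hypot(x1 - x2, y1 - y2) <= r1 + r2

def greedy_mis(disks):
    """Repeatedly take the smallest remaining disk and discard everything
    it intersects -- a classic constant-factor approximation for disks."""
    chosen = []
    for d in sorted(disks, key=lambda d: d[2]):   # by radius, ascending
        if all(not intersects(d, c) for c in chosen):
            chosen.append(d)
    return chosen

disks = [(0, 0, 1), (1.5, 0, 1), (5, 0, 2), (5, 0, 0.5), (10, 10, 1)]
print(len(greedy_mis(disks)))
```

The greedy output is an independent set by construction; the dynamic algorithms in this paper have to maintain such a set under insertions and deletions without recomputing it from scratch.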
For instance, even for unit disks in the plane, the problem remains NP-hard [13] and W[1]-hard [15], but it admits a PTAS[14]. Later, PTASs were also developed for arbitrary disks, squares, and more generally hypercubes and fat objects in constant dimensions [14, 1, 15]. In a seminal work, Chan and Har-Peled [11] showed that for an arrangement of pseudo-disks1, a local-search-based approach yields a PTAS. However, for non-fat objects, the scenario is quite different. For instance, it had been a long-standing open problem to find a constant-factor approximation algorithm for the MIS problem on axis-aligned rectangles. In a recent breakthrough, Mitchell [13] answered this question in the affirmative. Through a refined analysis of the recursive partitioning scheme, a dynamic programming algorithm finds a constant-factor approximation. Subsequently, Galvez et al. [1] improved the approximation ratio to \(3\). Footnote 1: A set of objects is an arrangement of pseudo-disks if the boundaries of every pair of them intersect at most twice. **Dynamic Geometric Independent Set.** In dynamic settings, objects are inserted into or deleted from the collection \(\mathcal{L}\) over time. The typical objective is to achieve (almost) the same approximation ratio as in the offline (static) case while keeping the update time (i.e., the time to update the solution after insertion/deletion) as small as possible. We call it the _Dynamic Geometric Maximum Independent Set_ problem (for short, DGMIS); see Section 1 for a formal problem statement. Henzinger et al. [14] studied DGMIS for various geometric objects, such as intervals, hypercubes, and hyperrectangles. Many of their results extend to the weighted version of DGMIS, as well. 
Based on a lower bound of Marx [15] for the offline problem, they showed that any dynamic \((1+\varepsilon)\)-approximation for squares in the plane requires \(\Omega(n^{1/\varepsilon})\) update time for any \(\varepsilon>0\), ruling out the possibility of sub-polynomial time dynamic approximation schemes. On the positive side, they obtained dynamic algorithms with update time polylogarithmic in both \(n\) and \(N\), where the corners of the objects are in a \([0,N]^{d}\) integer grid, for any constant dimension \(d\) (therefore their aspect ratio is also bounded by \(N\)). Gavruskin et al. [1] studied DGMIS for intervals in \(\mathbb{R}\) under the assumption that no interval is contained in another interval and obtained an optimal solution with \(O(\log n)\) amortized update time. Bhore et al. [1] presented the first fully dynamic algorithms with polylogarithmic update time for DGMIS, where the input objects are intervals and axis-aligned squares. For intervals, they presented a fully dynamic \((1+\varepsilon)\)-approximation algorithm with logarithmic update time. Later, Compton et al. [1] achieved a faster update time for intervals, by using a new partitioning scheme. Recently, Bhore et al. [1] studied the MIS problem for intervals in the streaming settings, and obtained lower bounds. For squares in the plane, Bhore et al. [1] presented a randomized algorithm with an expected approximation ratio of roughly \(2^{12}\) (generalizing to roughly \(2^{2d+5}\) for \(d\)-dimensional hypercubes) with amortized update time \(O(\log^{5}n)\) (generalizing to \(O(\log^{2d+1}n)\) for hypercubes). Moreover, Bhore et al. [1] studied the DGMIS problem in the context of dynamic map labeling and presented dynamic algorithms for several subfamilies of rectangles that also perform well in practice. Cardinal et al. [1] designed dynamic algorithms for fat objects in fixed dimension \(d\) with sublinear worst-case update time. 
Specifically, they achieved \(\tilde{O}(n^{3/4})\) update time2 for disks in the plane, and \(\tilde{O}(n^{1-\frac{1}{d+2}})\) for Euclidean balls in \(\mathbb{R}^{d}\). Footnote 2: The \(\tilde{O}(\cdot)\) notation ignores logarithmic factors. However, in spite of the remarkable progress on the DGMIS problem in recent years, the following question remained unanswered. **Question 1**.: _Does there exist an algorithm that, for a given dynamic set of disks in the plane, maintains a constant-factor approximate maximum independent set in polylogarithmic update time?_ In what follows, we define the problem formally and summarize our contributions. Problem Description. We focus on the intersection graphs of a collection \(\mathcal{L}=\{\ell_{1},\ldots,\ell_{n}\}\) of geometric objects. In the intersection graph \(G_{\mathcal{L}}=(V_{\mathcal{L}},E_{\mathcal{L}})\) of \(\mathcal{L}\), each object \(\ell_{i}\in\mathcal{L}\) is represented by a vertex \(v_{i}\in V_{\mathcal{L}}\), and two objects \(\ell_{i},\ell_{j}\in\mathcal{L}\) intersect (i.e., \(\ell_{i}\cap\ell_{j}\neq\varnothing\)) if and only if \((v_{i},v_{j})\in E_{\mathcal{L}}\). An independent set in \(G_{\mathcal{L}}\) corresponds to a set of pairwise disjoint objects in \(\mathcal{L}\). We work in a fully dynamic model, where objects are inserted into and deleted from \(\mathcal{L}\) over time. We aim to design a data structure that efficiently maintains an independent set \(\mathcal{S}\subset\mathcal{L}\) whose size is a constant-factor approximation of the MIS of \(\mathcal{L}\) at any point in time, with polylogarithmic update times for insertions and deletions. ### Our Contributions In this paper, we answer Question 1 in the affirmative (Theorems 1-3). As a first step, we address the case of unit disks in the plane.
**Theorem 1**.: _For a fully dynamic set of unit disks in the plane, a 12-approximate MIS can be maintained with worst-case update time \(O(\log^{2}n)\), and optimal output-sensitive reporting._ We prove Theorem 1 in Section 3. Similarly to classical approximation algorithms for the static version [13], we lay out four shifted grids such that any unit disk lies in a grid cell for at least one of the grids. For each grid, we maintain an independent set that contains at most one disk from each grid cell, thus we obtain four independent sets \(S_{1},\ldots,S_{4}\) at all times. Moreover, the largest of \(S_{1},\ldots,S_{4}\) is a constant-factor approximation of the MIS (Lemma 4). Using the MIX algorithm for unit disks, introduced by Cardinal et al. [1], we can maintain an independent set \(S\subset\bigcup_{i=1}^{4}S_{i}\) of size \(\Omega(\max\{|S_{1}|,|S_{2}|,|S_{3}|,|S_{4}|\})\) at all times, which is a constant-factor approximation of the MIS. Moreover, our dynamic data structure for unit disks easily generalizes to fat objects of comparable sizes in \(\mathbb{R}^{d}\) for any constant dimension \(d\in\mathbb{N}\), as explained in Section 4. **Theorem 2**.: _For every \(d,f\in\mathbb{N}\) and real parameters \(0<r_{1}<r_{2}\), there exists a constant \(C\) with the following property: For a fully dynamic collection of \(f\)-fat sets in \(\mathbb{R}^{d}\), each of size between \(r_{1}\) and \(r_{2}\), a \(C\)-approximate MIS can be maintained with worst-case update time \(O(\log^{d}n)\), and optimal output-sensitive reporting._ Our main result is a dynamic data structure for MIS over disks of arbitrary radii in the plane. **Theorem 3**.: _For a fully dynamic set of disks of arbitrary radii in the plane, a constant-factor approximate maximum independent set can be maintained in polylogarithmic expected amortized update time._ We extend the core ideas developed for unit disks with several new ideas, in Section 5. 
Specifically, we still maintain a constant number of "grids" such that every disk lies in one of the grid cells. For each "grid", we maintain an independent set \(S_{i}\) that contains at most one disk from each cell. Then we use the MIX algorithm for disks in the plane [10] to maintain a single independent set \(S\subset\bigcup_{i}S_{i}\), which is a constant-factor approximation of MIS. However, we need to address several challenges that we briefly review here. 1. First, each disk should be associated with a grid cell of comparable size. This requires several scales in each shifted grid. The cells of a standard quadtree would be the standard tool for this purpose (where each cell is a square, recursively subdivided into four congruent sub-squares). Unfortunately, shifted quadtrees do not have the property that every disk lies in a cell of comparable size. Instead we subdivide each square into \(3\times 3\) congruent sub-squares, and obtain a _nonatree_. The crux of the proof is that 2 and 3 are relatively prime, and a shift by \(\frac{1}{2}\) and a subdivision by \(\frac{1}{3}\) are compatible (see Lemma 6). 2. For the subset of disks compatible with a nonatree, we can find an \(O(1)\)-approximate MIS using bottom-up tree traversal of the nonatree (using the well-known greedy strategy [11]). We can also dynamically update the greedy solution by traversing an ascending path to the root in the nonatree. However, the height of the nonatree (even a compressed nonatree) may be \(\Theta(n)\) for \(n\) disks, and we cannot afford to traverse such a path in polylogarithmic time, since our total complexity budget is polylogarithmic. We address this challenge with the following four ideas. 1. We split each nonatree into two trees, combining alternating levels in the same tree and increasing the indegree from \(3\cdot 3=9\) to \(9^{2}=81\). This ensures that for any two disks in cells that are in ancestor-descendant relation, the radii differ by a factor of at least 3. 2. 
We maintain a "clearance" around each disk in our independent set, in the sense that if we add a disk \(d\) of radius \(r\) to our independent set in a cell \(c\), then we require that the disk \(3d\) (of the same center and radius \(3r\)) is disjoint from all larger disks that we add in any ancestor cell \(c^{\prime}\) of \(c\). This "clearance" ensures that when a new disk is inserted, it intersects _at most one_ larger disk that is already in our independent set (Lemma 14). 3. When we traverse an ascending path of the (odd or even levels of the) nonatree, we might encounter an alternating sequence of insertions and deletions: We call this a _cascade sequence_. We stop each cascade sequence after a constant number of changes in our independent set and show that we still maintain a constant-factor approximation of a MIS. 4. Finally, when we traverse an ascending path in the (odd or even levels of the) nonatree, we need a data structure to find the next required change: When we insert a disk \(d\), we need to find the next level where \(d\) intersects a larger disk in the current independent set; when we delete a disk, we need to find the next level when we can add another disk of the same or larger size instead. For this purpose, we use a dynamic nearest/farthest neighbor data structure by Kaplan et al. [20] (which generalizes Chan's famous dynamic convex hull data structure [14, 15]), that supports polylogarithmic query time and polylogarithmic expected amortized update time. One bottleneck in this framework is the nearest/farthest neighbor data structure [20, 11]. This provides only _expected amortized_ polylogarithmic update time, and it works only for families of "nice" objects in the plane (such as disks or homothets of a convex polygon, etc.). This is the only reason why our algorithm does not guarantee deterministic update time, and it does not extend to balls in \(\mathbb{R}^{d}\) for \(d\geq 3\), or to arbitrary fat objects in the plane. 
All other steps of our machinery support deterministic polylogarithmic (albeit amortized) update time, as well as balls in \(\mathbb{R}^{d}\) for any constant dimension \(d\in\mathbb{N}\), and fat objects in the plane. Another limitation for generalizing our framework is the MIX function, which smoothly transitions from one independent set to another. Cardinal et al. [20] established MIX functions for fat objects in \(\mathbb{R}^{d}\) for any constant \(d\in\mathbb{N}\), and their proof heavily relies on separator theorems. However, they show, for example, that a sublinear MIX algorithm is impossible for rectangles in the plane. Finally, in Section 5.5, we note that, even for a dynamic set of unit disks in the plane, it is impossible to maintain a \((1+\varepsilon)\)-approximate MIS with amortized update time \(n^{O((1/\varepsilon)^{1-\delta})}\) for any \(\varepsilon\), \(\delta>0\), unless the Exponential Time Hypothesis (ETH) fails. This follows from a reduction to a result by Marx [13]. ## 2 Preliminaries Fat Objects. Intuitively, _fat_ objects approximate balls in \(\mathbb{R}^{d}\). Many different definitions have been used for fatness; we use the definition due to Chan [14], as a MIX function (defined below) has been designed for fat objects using this notion of fatness. The _size_ of an object in \(\mathbb{R}^{d}\) is the side length of its smallest enclosing axis-aligned hypercube. A collection of (connected) sets in \(\mathbb{R}^{d}\) is _\(f\)-fat_ for a constant \(f>0\), if in any size-\(r\) hypercube \(R\), one can choose \(f\) points such that if any object in the collection of size at least \(r\) intersects \(R\), then it contains one of the chosen points. In particular, note that every size-\(r\) hypercube \(R\) intersects at most \(f\) disjoint objects of size at least \(r\) from the collection. A collection of (connected) sets in \(\mathbb{R}^{d}\) is _fat_ if it is \(f\)-fat for some constant \(f>0\).
MIX Algorithm.A general strategy for computing an MIS is to maintain a small number of _candidate_ independent sets \(S_{1},\ldots,S_{k}\) with a guarantee that the largest set is a good approximation of an MIS, and each insertion and deletion incurs only constantly many changes in \(S_{i}\) for all \(i=1,\ldots,k\). To answer a query about the size of the MIS, we can simply report \(\max\{|S_{1}|,\ldots,|S_{k}|\}\) in \(O(k)\) time. Similarly, we can report an entire (approximate) MIS by returning a largest candidate set. However, if we need to maintain a single (approximate) MIS at all times, we need to smoothly switch from one candidate to another. This challenge has recently been addressed by the MIX algorithm introduced by Cardinal et al. [20]: **MIX algorithm**: The algorithm receives two independent sets \(S_{1}\) and \(S_{2}\) whose sizes sum to \(n\) as input, and smoothly transitions from \(S_{1}\) to \(S_{2}\) by adding or removing one element at a time such that at all times the intermediate sets are independent sets of size at least \(\min\{|S_{1}|,|S_{2}|\}-o(n)\). Cardinal et al. [2] constructed an \(O(n\log n)\)-time MIX algorithm for fat objects in \(\mathbb{R}^{d}\), for constant dimension \(d\in\mathbb{N}\). Assume that \(\mathcal{D}\) is a fully dynamic set of disks in the plane, and we are given candidate independent sets \(S_{1},\ldots,S_{k}\) with the guarantee that \(\max\{|S_{1}|,\ldots,|S_{k}|\}\geq c\cdot\mathrm{OPT}\) at all times, where \(\mathrm{OPT}\) is the size of the MIS and \(c\) is a constant; further assume that the size of \(S_{i}\), \(i\in\{1,\ldots,k\}\), changes by at most a constant \(u\) for each insertion or deletion in \(\mathcal{D}\). We wish to maintain a single approximate MIS \(S\) at all times, where we are allowed to make up to \(10u\) changes in \(S\) for each insertion or deletion in \(\mathcal{D}\). Initially, we let \(S\) be the largest candidate, say \(S=S_{i}\). 
While \(|S_{i}|\geq\frac{1}{2}\max\{|S_{1}|,\ldots,|S_{k}|\}\), we can keep \(S=S_{i}\), and it remains a \(\frac{c}{2}\)-approximation. When \(2|S_{i}|<|S_{j}|\), where \(|S_{j}|=\max\{|S_{1}|,\ldots,|S_{k}|\}\), we start switching from \(S=S_{i}\) to \(S=S_{j}\). Let \(\alpha=|S_{i}|\) (hence \(|S_{j}|>2\alpha\)) at the start of this process. We apply the MIX algorithm for the current candidates \(S_{i}\) and \(S_{j}\), which replaces \(S_{i}\) with \(S_{j}\) in \(O(\alpha\log\alpha)\) update time distributed over the next \(\frac{1}{10u}\,\alpha\) dynamic updates in \(\mathcal{D}\), and it maintains an independent set \(S_{\text{MIX}}\) of size \(|S_{\text{MIX}}|\geq(1-o(1))\,\alpha\) [2]. If \(|S_{i}|\leq 5u\), we can swap \(S_{i}\) to \(S_{j}\) in a single step, so we may assume \(|S_{i}|>5u\) and \(|S_{\text{MIX}}|\geq(1-o(1))\,\alpha\geq\frac{1}{2}\,\alpha\) for a sufficiently large constant \(u\). In general, we would like to maintain \(S=S_{\text{MIX}}\). Note, however, that while running the MIX algorithm, the dynamic changes in \(\mathcal{D}\) may include up to \(\alpha/10\) deletions from each of \(S_{i}\) and \(S_{j}\). Furthermore, \(\mathrm{OPT}\) may also increase by at most \(\alpha/10\). We perform all deletions directly in \(S\); create a FIFO queue for all insertions into \(S_{j}\), and add these elements to \(S\) after the completion of the MIX algorithm. Overall, we have \(\mathrm{OPT}\leq\frac{1}{c}\,2\alpha+\frac{1}{10}\alpha\leq\frac{21}{10c}\,\alpha\) at all times, and we maintain an independent set \(S\) of size \(|S|\geq|S_{\text{MIX}}|-\frac{2}{10}\alpha\geq\left(\frac{1}{2}-\frac{1}{5}\right)\alpha=\frac{3}{10}\,\alpha\geq\frac{3}{10}\cdot\frac{10c}{21}\,\mathrm{OPT}=\frac{c}{7}\,\mathrm{OPT}\) at all times, and so \(S\) remains a \(\frac{c}{7}\)-approximate MIS at all times.
Overall, we switch from \(S=S_{i}\) to \(S=S_{j}\) in two phases (i.e., the MIX algorithm followed by adding any new elements of \(S_{j}\) to \(S\)), spread over \(\frac{1}{10u}\,(\alpha+2\alpha)\cdot\frac{1}{1-1/10}=\frac{1}{3u}\alpha\) dynamic updates in \(\mathcal{D}\). When this process terminates, we have \(S=S_{j}\) with \(|S_{j}|\geq 2\alpha-\frac{1}{3}\,\alpha=\frac{5}{3}\,\alpha\) and \(\max\{|S_{1}|,\ldots,|S_{k}|\}\leq 2\alpha+\frac{1}{3}\alpha=\frac{7}{3}\,\alpha\). That is, we have \(|S_{j}|\geq\frac{5}{7}\,\max\{|S_{1}|,\ldots,|S_{k}|\}\), which means that there is no need to switch \(S_{j}\) to another independent set at that time. We can summarize our result as follows. **Lemma 1**.: _For a collection of candidate independent sets \(S_{1},\ldots,S_{k}\), the largest of which is a \(c\)-approximate MIS at all times, we can dynamically maintain a \(\frac{c}{7}\)-approximation \(S\) with \(O(1)\) changes in \(S\) per update._ Dynamic Nearest/Farthest Neighbor Data Structures.Given a set of functions \(\mathcal{F}=\{f_{1},\ldots,f_{n}\}\), \(f_{i}:\mathbb{R}^{2}\to\mathbb{R}\) for \(i=1,\ldots,n\), the _lower envelope_ of \(\mathcal{F}\) is the graph of the function \(L:\mathbb{R}^{2}\to\mathbb{R}\), \(L(q)=\min\{f_{i}(q)\mid 1\leq i\leq n\}\). Similarly, the _upper envelope_ is the graph of \(U:\mathbb{R}^{2}\to\mathbb{R}\), \(U(q)=\max\{f_{i}(q)\mid 1\leq i\leq n\}\). A _vertical stabbing query_ with respect to the lower (resp., upper) envelope, for query point \(q\in\mathbb{R}^{2}\), asks for the function \(f_{i}\) such that \(L(q)=f_{i}(q)\) (resp., \(U(q)=f_{i}(q)\)). Given a set \(\mathcal{D}\) of \(n\) disks in the plane, we can use this machinery to find, for a query disk \(d_{q}\), the disk in \(\mathcal{D}\) that is closest (farthest) from \(d_{q}\). Specifically, for each disk \(d\in\mathcal{D}\) centered at \(c_{d}\) with radius \(r_{d}\), define the function \(f_{d}:\mathbb{R}^{2}\to\mathbb{R}\), \(f_{d}(p)=|pc_{d}|-r_{d}\). 
Note that \(f_{d}(p)\) is the _signed_ Euclidean distance between \(p\in\mathbb{R}^{2}\) and the disk \(d\); that is, \(f_{d}(p)=0\) if and only if \(p\) is on the boundary of \(d\), \(f_{d}(p)<0\) if \(p\) is in the interior of \(d\), and \(f_{d}(p)>0\) equals the Euclidean distance between \(p\) and \(d\) if \(p\) is in the exterior of \(d\). For a query point \(p\in\mathbb{R}^{2}\), \(L(p)=f_{d}(p)\) for a disk \(d\in\mathcal{D}\) closest to \(p\) (note that this holds even if \(p\) lies in the interior of some disks \(d\in\mathcal{D}\), where the Euclidean distance to \(d\) is zero but \(f_{d}(p)<0\)). Similarly, we have \(U(p)=f_{d}(p)\) for a disk \(d\in\mathcal{D}\) farthest from \(p\). Importantly, for a query disk \(d_{q}\), we can find a closest (farthest) disk from \(d_{q}\) by querying its center. In the fully dynamic setting, functions are inserted into and deleted from \(\mathcal{F}\), and we wish to maintain a data structure that supports vertical stabbing queries w.r.t. the lower or upper envelope of \(\mathcal{F}\). For linear functions \(f_{i}\) (i.e., hyperplanes in \(\mathbb{R}^{3}\)), Chan [10] devised a fully dynamic randomized data structure with polylogarithmic query time and polylogarithmic amortized expected update time; this is equivalent to a _dynamic convex hull_ data structure in the dual setting (with the standard point-hyperplane duality). After several incremental improvements, the current best version is a deterministic data structure for \(n\) hyperplanes in \(\mathbb{R}^{3}\) with \(O(n\log n)\) preprocessing time, \(O(\log^{4}n)\) amortized update time, and \(O(\log^{2}n)\) worst-case query time [10]. Kaplan et al. [14] generalized Chan's data structure for dynamic sets of functions \(\mathcal{F}\), where the lower (resp., upper) envelope of any \(k\) functions has \(O(k)\) combinatorial complexity. This includes, in particular, the signed distance functions from disks [1].
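The query semantics of these envelopes can be stated in a few lines: a vertical stabbing query of the lower (upper) envelope at a point returns the nearest (farthest) disk under the signed distance. The brute-force linear scan below is only a reference implementation of that semantics, not the polylogarithmic data structure of Kaplan et al.

```python
import math

def signed_dist(p, disk):
    """f_d(p) = |p c_d| - r_d: negative inside d, zero on the boundary,
    and the Euclidean distance to d when p lies outside d."""
    cx, cy, r = disk
    return math.hypot(p[0] - cx, p[1] - cy) - r

def nearest_disk(p, disks):
    """Vertical stabbing of the lower envelope L at p."""
    return min(disks, key=lambda d: signed_dist(p, d))

def farthest_disk(p, disks):
    """Vertical stabbing of the upper envelope U at p."""
    return max(disks, key=lambda d: signed_dist(p, d))

disks = [(0.0, 0.0, 1.0), (4.0, 0.0, 1.0), (0.0, 5.0, 2.0)]
q = (3.0, 0.0)
print(nearest_disk(q, disks))   # the unit disk centered at (4, 0)
print(farthest_disk(q, disks))
```

Querying at the center of a query disk \(d_q\), as in the text, then decides disjointness and intersection: \(d_q\) intersects its nearest disk iff the signed distance at its center is at most its radius.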
In this case, the orthogonal projection of the lower envelope of \(\mathcal{F}\) (i.e., the so-called _minimization diagram_) is the Voronoi diagram of the disks. Their result is the following. **Theorem 4**.: _([14, Theorem 8.3]) The lower envelope of a set of \(n\) totally defined continuous bivariate functions of constant description complexity in three dimensions, such that the lower envelope of any subset of the functions has linear complexity, can be maintained dynamically, so as to support insertions, deletions, and queries, so that each insertion takes \(O(\lambda_{s}(\log n)\log^{5}n)\) amortized expected time, each deletion takes \(O(\lambda_{s}(\log n)\log^{9}n)\) amortized expected time, and each query takes \(O(\log^{2}n)\) worst-case deterministic time, where \(n\) is the number of functions currently in the data structure. The data structure requires \(O(n\log^{3}n)\) storage in expectation._ Subsequently, Liu [11] improved the deletion time to \(O(\lambda_{s}(\log n)\log^{7}n)\) amortized expected time. Here \(\lambda_{s}(t)\) is the maximum length of a Davenport-Schinzel sequence [1] on \(t\) symbols of order \(s\). For signed Euclidean distances of disks, we have \(s=6\) [14] and \(\lambda_{6}(t)\ll O(t\log t)\ll O(t^{2})\). For simplicity, we assume \(O(\log^{9}n)\) expected amortized update time and \(O(\log^{2}n)\) worst-case query time. Overall, we obtain the following for disks of arbitrary radii. **Lemma 2**.: _For a dynamic set \(\mathcal{D}\) of \(n\) disks in the plane, there is a randomized data structure that supports disk insertion in \(O(\log^{7}n)\) amortized expected time, disk deletion in \(O(\log^{9}n)\) amortized expected time; and the following queries in \(O(\log^{2}n)\) worst-case time. **Disjointness query**: For a query disk \(d_{q}\), find a disk in \(\mathcal{D}\) disjoint from \(d_{q}\), or report that all disks in \(\mathcal{D}\) intersect \(d_{q}\).
**Intersection query**: For a query disk \(d_{q}\), find a disk in \(\mathcal{D}\) that intersects \(d_{q}\), or report that all disks in \(\mathcal{D}\) are disjoint from \(d_{q}\)._ Proof.: We use the dynamic data structure in [14, Theorem 8.3] with the update time improvements in [11] for the signed Euclidean distance from the disks in \(\mathcal{D}\). Given a disk \(d_{q}\) centered at \(c_{q}\), we can answer disjointness and intersection queries as follows. For disjointness, the vertical stabbing query for the upper envelope at point \(c_{q}\) returns a disk \(d\in\mathcal{D}\) farthest from \(c_{q}\). If \(d_{q}\cap d=\varnothing\), then return \(d\), otherwise report that all disks in \(\mathcal{D}\) intersect \(d_{q}\). For intersection, the vertical stabbing query for the lower envelope at point \(c_{q}\) returns a disk \(d\in\mathcal{D}\) closest to \(c_{q}\). If \(d_{q}\cap d\neq\varnothing\), then return \(d\), otherwise report that all disks in \(\mathcal{D}\) are disjoint from \(d_{q}\). We refer to the data structure in Lemma 2 as _dynamic nearest neighbor_ or _dynamic farthest neighbor_ data structure, for short, DNN or DFN data structures, respectively. We remark that Chan [1] improved the update time when the functions \(\mathcal{F}=\{f_{1},\ldots,f_{n}\}\) are distances from \(n\)_point sites_ in the plane. De Berg and Staals [1] generalized these results to dynamic \(k\)-nearest neighbor data structures for \(n\) point sites in the plane.

## 3 Unit Disks in the Plane

We first consider the case where the fully dynamic set \(\mathcal{D}\) consists of disks of the same size, namely unit disks (with radius \(r=1\)). Intuitively, our data structure maintains multiple grids, each with its own potential solution. For each grid, disks whose interior is disjoint from the grid lines contribute to a potential solution. We show that at any point in time, the grid that finds the largest solution holds a constant-factor approximation of MIS.
Shifted grids.We define four axis-aligned square grids \(G_{1},\ldots,G_{4}\), in which each grid cell has side length 4. For \(G_{1}\) the grid lines are \(\{x=4i\}\) and \(\{y=4i\}\) for all \(i\in\mathbb{Z}\). For \(G_{2}\) and \(G_{3}\), respectively, the vertical and horizontal grid lines are shifted with respect to \(G_{1}\): for \(G_{2}\) the vertical lines are \(\{x=4i+2\}\), while for \(G_{3}\) the horizontal lines are \(\{y=4i+2\}\), again for all \(i\in\mathbb{Z}\). Finally, \(G_{4}\) is both horizontally and vertically shifted, having lines \(\{x=4i+2\}\) and \(\{y=4i+2\}\) for all \(i\in\mathbb{Z}\) (see Figure 1(a)). **Lemma 3**.: _Every unit disk in \(\mathbb{R}^{2}\) is contained in a grid cell of at least one of the shifted grids \(G_{1},\ldots,G_{4}\). Consequently, for a set \(S\) of unit disks, the cells of one of the grids jointly contain at least \(|S|/4\) disks from \(S\)._ Proof.: The distance between two vertical lines \(\{x=4i\}\) and \(\{x=4j+2\}\), for any \(i,j\in\mathbb{Z}\), is at least two. A unit disk \(d\) has diameter 2, so its interior cannot intersect two such lines. Consequently, the vertical strip \(\{4i\leq x\leq 4i+4\}\) or \(\{4i-2\leq x\leq 4i+2\}\) contains \(d\) for some \(i\in\mathbb{Z}\). Similarly, the horizontal strip \(\{4j\leq y\leq 4j+4\}\) or \(\{4j-2\leq y\leq 4j+2\}\) contains \(d\) for some \(j\in\mathbb{Z}\). The intersection of these strips is a cell in one of the grids, which contains \(d\). This proves the first claim; the second claim follows from the pigeonhole principle. Figure 1: **(a)** The four shifted grids \(G_{1}\), \(G_{2}\), \(G_{3}\), and \(G_{4}\), which respectively do not intersect the blue, green, yellow, and red disks. **(b)** The radius-1 squares inside grid cells of the four grids, along with the center points of the disks that lie completely inside grid cells, as crosses.
In the bottom right, besides red squares for \(G_{4}\), the squares of all other grids are added to show that the squares together partition the plane. Because of Lemma 3 we know that one of the grids contains at least a constant fraction of an optimum solution \(\mathrm{OPT}\), namely at least \(\frac{1}{4}\left|\mathrm{OPT}\right|\) disks. To maintain an approximate MIS over time, we want to store information about the disks, such that we can efficiently determine the disks inside a particular grid cell, and given a disk, which grid cell(s) it is contained in. Each disk \(d\in\mathcal{D}\) is represented by its center \(p\) and we determine whether \(d\) is inside a cell by checking whether \(p\) is inside the \(2\times 2\) square centered inside each grid cell (see Figure 1(b)): Since we deal with unit disks, when a center is inside a grid cell and at least unit distance from the boundary, the corresponding disk is completely inside the grid cell. By making these \(2\times 2\) square regions closed on the top and left, and open on the bottom and right, we can ensure that the union of these regions, over all cells of all four grids, partitions the plane; see Figure 1(b) (bottom right). As a result, every disk is assigned to exactly one cell of exactly one grid. For each grid cell that contains at least one disk, we add an arbitrary disk to the independent set of that grid. This yields an independent set \(S_{i}\) for each grid \(G_{i}\). **Lemma 4**.: _Let \(S_{1},\ldots,S_{4}\) be the independent sets in the set \(\mathcal{D}\) of unit disks computed for \(G_{1},\ldots,G_{4}\), respectively. The largest of \(S_{1},\ldots,S_{4}\) is a 12-approximation of a MIS for \(\mathcal{D}\)._ Proof.: Let \(\mathrm{OPT}\subset\mathcal{D}\) be a MIS. By Lemma 3, there is a grid \(G_{i}\) whose cells jointly contain a subset \(\mathrm{OPT}_{i}\subset\mathrm{OPT}\) of size \(\left|\mathrm{OPT}_{i}\right|\geq\frac{1}{4}\left|\mathrm{OPT}\right|\).
Two unit disks are disjoint if the distance between their centers is more than 2. Consider one of the \(2\times 2\) squares inside a cell of \(G_{i}\). Recall that it is open on the bottom and right, and hence the \(x\)- or \(y\)-coordinates of two centers in this square differ by less than 2. Thus, at most three centers fit in a \(2\times 2\) square and each grid cell can therefore contain at most three unit disks from \(\mathrm{OPT}_{i}\). Consequently, at least \(\frac{1}{3}\left|\mathrm{OPT}_{i}\right|\geq\frac{1}{12}\left|\mathrm{OPT}\right|\) cells of the grid \(G_{i}\) each contain at least one disk of \(\mathcal{D}\). Therefore our algorithm returns an independent set of size at least \(\frac{1}{12}\left|\mathrm{OPT}\right|\), as required. We previously considered only how to compute a constant-factor approximation of a MIS of \(\mathcal{D}\). Next we focus on how to store the centers, such that we can efficiently deal with dynamic changes to the set \(\mathcal{D}\). Our dynamic data structure consists of five 2D range trees: \(T_{\mathcal{D}}\), \(T_{1}\), \(T_{2}\), \(T_{3}\), and \(T_{4}\). The first stores the centers of all disks in \(\mathcal{D}\), while the other four store only the centers of disks in \(S_{1},\ldots,S_{4}\), respectively. When a unit disk \(d\) is added to or deleted from \(\mathcal{D}\), we use these trees to update the sets \(S_{1},\ldots,S_{4}\). **Lemma 5**.: _Using 2D range trees \(T_{\mathcal{D}}\) and \(T_{1},\ldots,T_{4}\), containing at most \(n\) elements, we can maintain the independent sets \(S_{1},\ldots,S_{4}\) in grids \(G_{1},\ldots,G_{4}\), in \(O(\log^{2}n)\) update time per insertion or deletion._ Proof.: For the insertion of a disk \(d\), we find the unique \(2\times 2\) square \(s\) inside a cell \(c\in G_{i}\), for \(i\in\{1,2,3,4\}\), that contains \(d\), using the coordinates of the center of \(d\).
We do an orthogonal range query in the range \(s\) on \(T_{i}\) (or equivalently, on \(T_{\mathcal{D}}\)), and report whether a single point is found. Since we only need to find a single point, this takes \(O(\log^{2}n+1)\) time. In case a point is found, we already have selected a disk for cell \(c\) in the potential solution \(S_{i}\). We therefore insert the center of \(d\) only in \(T_{\mathcal{D}}\). However, if no point is found, \(c\) must be empty and the independent set \(S_{i}\) can grow by one by adding \(d\) to this set. We hence insert \(d\) into both \(T_{\mathcal{D}}\) and \(T_{i}\). For the deletion of a disk \(d\), we again find the square \(s\) inside some cell \(c\in G_{i}\) that contains \(d\), and do an orthogonal range query in the range \(s\) on \(T_{i}\). Regardless of whether we found \(d\), we now delete \(d\) from both \(T_{\mathcal{D}}\) and \(T_{i}\). If we found \(d\) in \(T_{i}\), then we have to check whether we can replace it with another disk in \(c\). We do this by another orthogonal range query with the range \(s\) on \(T_{\mathcal{D}}\). If we find any points, we insert the first such point \(p\) into \(T_{i}\), so that the corresponding disk replaces \(d\) in \(S_{i}\). All insertions and deletions into the 2D range trees take \(O(\log^{2}n)\) time, while all orthogonal range queries take \(O(\log^{2}n+1)\) time, since we always report at most one point. Thus we never spend more than \(O(\log^{2}n)\) time per update. If we need to report the (approximate) size of the MIS, we simply report \(\max\{|S_{1}|,\ldots,|S_{4}|\}\), which is a 12-approximation. To output an (approximate) maximum independent set, we can simply choose a largest solution out of \(S_{1},\ldots,S_{4}\) and output all disks in the corresponding range tree \(T_{i}\) in time linear in the number of disks in this solution.
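The bookkeeping in the proof of Lemma 5 can be sketched with a hash map keyed by (grid, cell) as a stand-in for the range trees \(T_{\mathcal{D}}\) and \(T_{1},\ldots,T_{4}\). The square convention below (half-open in both coordinates) is one possible choice and differs slightly from the closed-top-left convention in the text; the helper names are hypothetical.

```python
import math
from collections import defaultdict

def locate(p):
    """Unique 2x2 square containing center p: the squares [2m+1, 2m+3) x
    [2n+1, 2n+3), taken over all four shifted grids, partition the plane."""
    m = math.floor((p[0] - 1) / 2)
    n = math.floor((p[1] - 1) / 2)
    grid = 1 + (m % 2) + 2 * (n % 2)  # G1: no shift, G2: x-, G3: y-, G4: both
    return grid, m, n

cells = defaultdict(set)  # all centers per cell (stand-in for T_D)
reps = {}                 # one disk per nonempty cell: the sets S_1, ..., S_4

def insert(p):
    key = locate(p)
    cells[key].add(p)
    reps.setdefault(key, p)          # cell was empty: S_i grows by one

def delete(p):
    key = locate(p)
    cells[key].discard(p)
    if reps.get(key) == p:           # deleted the representative:
        if cells[key]:               # replace it by any remaining disk
            reps[key] = next(iter(cells[key]))
        else:
            del reps[key]

insert((1.5, 1.5)); insert((2.0, 1.5)); insert((3.5, 3.5))
delete((1.5, 1.5))                   # representative replaced by (2.0, 1.5)
```

Each operation touches a single key, mirroring the constant number of range-tree queries per update in Lemma 5 (with hashing in place of the \(O(\log^{2}n)\) orthogonal range queries).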
Thus, Lemmata 4 and 5 together show that our dynamic data structure can handle dynamic changes in worst-case polylogarithmic update time, and report a solution in optimal output-sensitive time. **Theorem 1**.: _For a fully dynamic set of unit disks in the plane, a 12-approximate MIS can be maintained with worst-case update time \(O(\log^{2}n)\), and optimal output-sensitive reporting._ To explicitly maintain an independent set \(S\) of size \(\Omega(|\mathrm{OPT}|)\) at all times, we can use the MIX function for unit disks [10] to (smoothly) switch between the sets \(S_{1},\ldots,S_{4}\). In particular, \(S\) is a subset of \(S_{1}\cup\ldots\cup S_{4}\), and \(|S|\geq\Omega(\max\{|S_{1}|,\ldots,|S_{4}|\})\) by Lemma 1. Using the MIX function for unit disks [10], we can hence explicitly maintain a constant-factor approximation of a MIS.

## 4 Fat Objects of Comparable Size in Higher Dimensions

Our algorithm to maintain a constant-factor approximation of a MIS of unit disks readily extends to maintaining such an MIS approximation for fat objects of comparable size in any constant dimension \(d\). Remember that the size of a (fat) object is determined by the side length of its smallest enclosing (axis-aligned) hypercube. We define fat objects to be of comparable size if the side length of their smallest enclosing (axis-aligned) hypercube is between real values \(r_{1}\) and \(r_{2}\).
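Anticipating the grid construction in the proof that follows, the assignment of an object's bounding box to one of the \(2^{d}\) shifted grids with cells of side \(2r_{2}\) can be sketched as follows. The helper is hypothetical; we assume the box has side length at most \(r_{2}\) (here `R2`) in every dimension, so per dimension one of the two shifts in \(\{0,r_{2}\}\) must succeed.

```python
import math

R2 = 1.0  # assumed upper bound r_2 on object size

def shifted_grid_cell(box):
    """box: list of (lo, hi) intervals per dimension with hi - lo <= R2.
    Returns (shift vector in {0, R2}^d, cell index) of a cell of side 2*R2
    containing the box. Grid lines of shifts 0 and R2 interleave at distance
    R2, so an interval of length <= R2 fits between consecutive lines of one
    of the two shifts in every dimension."""
    shift, cell = [], []
    for lo, hi in box:
        for s in (0.0, R2):
            m = math.floor((lo - s) / (2 * R2))
            if hi <= 2 * R2 * (m + 1) + s:  # fits in [2*R2*m + s, 2*R2*(m+1) + s)
                shift.append(s)
                cell.append(m)
                break
    return tuple(shift), tuple(cell)

print(shifted_grid_cell([(1.5, 2.5), (0.2, 1.0)]))  # ((1.0, 0.0), (0, 0))
```

Collecting objects per (shift, cell) key then gives the \(2^{d}\) candidate independent sets, one per shifted grid.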
**Theorem 2**.: _For every \(d,f\in\mathbb{N}\) and real parameters \(0<r_{1}<r_{2}\), there exists a constant \(C\) with the following property: For a fully dynamic collection of \(f\)-fat sets in \(\mathbb{R}^{d}\), each of size between \(r_{1}\) and \(r_{2}\), a \(C\)-approximate MIS can be maintained with worst-case update time \(O(\log^{d}n)\), and optimal output-sensitive reporting._ Proof.: Similar to the unit-disk case, we define \(2^{d}\) \(d\)-dimensional shifted (axis-aligned and square) grids \(G_{1},\ldots,G_{2^{d}}\) with side length \(2\cdot r_{2}\): one base grid \(G_{1}\) and \(2^{d}-1\) grids that (distinctly) shift the base grid in (at least one of) the \(d\) dimensions by \(r_{2}\). Since each object \(o\) has a size of at most \(r_{2}\), and grid lines defined by the union of all grids are at distance \(r_{2}\) from one another in every dimension, there is a grid cell in one of the grids that contains \(o\). By the pigeonhole principle, one grid \(G_{i}\) must therefore contain at least \(2^{-d}\) of all objects, and hence the same fraction of a MIS (analogously to Lemma 3). Furthermore, as the objects are of size at least \(r_{1}\), there is some constant \(c\), for which it holds that no more than \(c\) fat objects fit in a single grid cell, and observe that \(c>\lfloor(\frac{r_{2}}{r_{1}})^{d}\rfloor\). Following the unit-disk algorithm, we take a single object per grid cell in our independent set \(S_{i}\) of a grid \(G_{i}\), and hence each \(S_{i}\) is of size at least \(\frac{1}{c}\) times the size of the MIS in \(G_{i}\). Since each \(S_{i}\) maintains a \(c\)-approximation for the MIS of the set of disks in \(G_{i}\), and at least one of the \(2^{d}\) grids holds a \(2^{d}\)-approximation of the global MIS, we get a \(C\)-approximation of MIS, with \(C=c\cdot 2^{d}\) (analogously to Lemma 4).
We again use (\(d\)-dimensional) range trees, in which we store the center points of the bounding hypercube of each object, for the solution of each grid and for the union of objects, analogously to the unit-disk case: Insertions, deletions, and reporting are handled exactly as in the unit-disk case, and hence we can prove, similarly to Lemma 5, that insertions and deletions are handled in \(O(\log^{d}n+1)\) time and reporting is done in time linear in the size of the reported set \(S_{i}\). As in the unit-disk case, we can use the MIX function [11] (which works for fat objects) to switch between the independent sets \(S_{1},\ldots,S_{2^{d}}\) of the individual grids, and hence explicitly maintain an approximate MIS at all times. We know by Lemma 1 that this results in an independent set of size \(\Omega(\max\{|S_{1}|,\ldots,|S_{2^{d}}|\})\). **Remark 1**.: There exist efficient data structures for dynamic orthogonal range searching that can speed up the update and query times in Theorems 1 and 2. However, these techniques require amortization: using dynamic fractional cascading [14] in our range trees, we can replace one \(\log n\) factor by \(\log\log n\) in the update and query time complexity, by accepting amortized update time. The current state-of-the-art [10] in dynamic orthogonal range searching improves the time complexities further, but requires amortization for both update and query times.

## 5 Disks of Arbitrary Radii in the Plane

In this section, we study the DGMIS problem for a set of disks of arbitrary radii. The general idea of our new data structure is to break the set of disks \(\mathcal{D}\) into subsets of disks of comparable radius. We will use several instances of the shifted grids \(G^{i}_{1},\ldots,G^{i}_{4}\), as we used in the unit disk case, where the grid cells have side length \(3^{i}\), and are shifted by \(\frac{3^{i}}{2}\), for \(i\in\mathbb{Z}\).
We say that the grids \(G^{i}_{1},\ldots,G^{i}_{4}\) form the set \(\mathcal{G}_{i}\). In Section 5.1, we explain how hierarchical grids can be used for computing a constant-factor approximation for static instances. Then, in Section 5.2, we make several changes in the static data structures, to support efficient updates, while maintaining a constant factor approximation. In Section 5.3, we describe hierarchical nearest/farthest neighbor data structures. Finally, in Section 5.4, we stitch all these ingredients together to show how to maintain a constant-factor approximate maximum independent set in a fully dynamic setting, with expected amortized polylogarithmic update time.

### Static Hierarchical Data Structures

Dividing disks over buckets.In the grids of set \(\mathcal{G}_{i}\) we store disks with radius \(r\), where \(\frac{3^{i-1}}{4}<r\leq\frac{3^{i}}{4}\). We refer to the data structures associated with one value \(i\) as the _bucket_ \(i\). Compared to the unit disk case, where we considered only disks of radius \(\frac{1}{4}\) times the side length of the grid cells, we now have to deal with disks of varying sizes even in one set \(\mathcal{G}_{i}\) of shifted grids. However, every disk is still completely inside at least one grid cell. To see this, observe that no two vertical or two horizontal grid lines in one grid of bucket \(i\) can intersect a single disk with a radius lying in the range \((\frac{3^{i-1}}{4},\frac{3^{i}}{4}]\). Indeed, such disks have a diameter of at most \(\frac{3^{i}}{2}\), while grid lines are at least \(3^{i}\) apart. Furthermore, our choice for side length \(3^{i}\) for bucket \(i\) was not arbitrary: Consider also adjacent bucket \(i-1\) and observe that each cell \(c\) of grid \(G^{i}_{1}\) is further subdivided into nine cells of grid \(G^{i-1}_{1}\), in a \(3\times 3\) formation. We say that \(c\) is _aligned_ with the nine cells in bucket \(i-1\).
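The bucket of a disk is determined by its radius alone: bucket \(i\) stores radii in \((3^{i-1}/4,\,3^{i}/4]\), i.e., \(i\) is the smallest integer with \(4r\leq 3^{i}\). A small sketch of this computation (the correction loops guard against floating-point error in the logarithm):

```python
import math

def bucket(r):
    """Bucket index i with 3**(i-1)/4 < r <= 3**i/4, for radius r > 0."""
    i = math.ceil(math.log(4 * r, 3))
    while 4 * r > 3 ** i:          # log rounded down too far
        i += 1
    while 4 * r <= 3 ** (i - 1):   # log rounded up too far
        i -= 1
    return i

print(bucket(0.25))  # 4r = 1 = 3**0, so bucket 0
print(bucket(1.0))   # 3 < 4r = 4 <= 9, so bucket 2
```

Note that \(i\) may be negative for small radii, matching the buckets defined for all \(i\in\mathbb{Z}\).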
We define the same parent-child relations as in a quadtree: If a grid cell \(c\) in a lower bucket is inside a cell \(c_{p}\) of an adjacent higher bucket, we say that \(c\) is a child (cell) of \(c_{p}\), or that \(c_{p}\) is the parent (cell) of \(c\). In general, we write \(c_{1}\prec c_{2}\) if cell \(c_{1}\) is a descendant of cell \(c_{2}\); \(c_{1}\preceq c_{2}\) if equality is allowed. We call the resulting structure a _nonatree_, and we will refer to the nonatree that relates all grids \(G_{1}^{j}\) as \(N_{1}\). Crucially, all grids \(G_{2}^{j}\) also align, and the same holds for \(G_{3}^{j}\) and \(G_{4}^{j}\). This happens because horizontally and vertically, grid cells are subdivided into an odd number of cells (three in our case), and the shifted grids are displaced by half the side length of the grid cells. Thus, for \(G_{2}^{j}\) and \(G_{4}^{j}\), the horizontal shift in buckets \(i\) and \(i-1\) ensures that every third vertical grid line of bucket \(i-1\) aligns with a vertical grid line of bucket \(i\). The exact same happens for the horizontal grid lines of \(G_{3}^{j}\) and \(G_{4}^{j}\), due to the vertical shift. Thus, the horizontally shifted grids also form a nonatree \(N_{2}\), and similarly, we define \(N_{3}\) and \(N_{4}\). For each bucket \(i\), we maintain five 2D range trees: Let \(\mathcal{D}_{i}\subseteq\mathcal{D}\) be the subset of disks stored in \(\mathcal{G}_{i}\) and let \(S_{1},\ldots,S_{4}\) be the independent sets in \(G_{1}^{i},\ldots,G_{4}^{i}\); then we maintain in \(T_{\mathcal{D}}^{i}\) all disks in \(\mathcal{D}_{i}\) and in \(T_{1}^{i},\ldots,T_{4}^{i}\) the disks in \(S_{1},\ldots,S_{4}\). Approximating a maximum independent set.We will now use the data structures to compute an approximate MIS for disks with arbitrary radii. Note that we defined buckets for \(i\in\mathbb{Z}\), but we will use only those buckets that store any disks, which we call _relevant_ buckets.
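The aligned parent–child structure of the nonatrees can be sketched for the unshifted tree \(N_{1}\), whose grids are anchored at the origin; the shifted trees behave analogously because the shift is half the cell side and the branching is \(3\times 3\). The helper names are hypothetical.

```python
import math

def cell(p, i):
    """Cell of grid G_1^i (side length 3**i) containing point p."""
    return (math.floor(p[0] / 3 ** i), math.floor(p[1] / 3 ** i))

def parent(c):
    """Parent cell one bucket up: each cell of G_1^(i+1) is subdivided
    into the 3x3 block of aligned cells of G_1^i."""
    return (c[0] // 3, c[1] // 3)   # floor division handles negative indices

p = (10.0, -2.0)
print(cell(p, 1))          # (3, -1)
print(parent(cell(p, 1)))  # (1, -1), which equals cell(p, 2)
```

Iterating `parent` climbs the nonatree bucket by bucket; a compressed nonatree would skip runs of cells that are neither relevant nor branching.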
Within these buckets, we call grid cells that contain disks the _relevant_ grid cells. Let \(B\) be the sequence of relevant buckets, ordered on their parameter \(i\). To compute a solution, we will consider the buckets in \(B\) in ascending order, starting from the lowest bucket, which holds the smallest disk, and has grids with the smallest side length, up to the highest bucket with the largest disks, and largest side lengths. We follow a greedy bottom-up strategy for finding a constant-factor approximation of an MIS of disks. To prevent computational overhead in this approach, our nonatrees are _compressed_, similar to compressed quadtrees [11, Chapter 2]: Each nonatree consists of a root cell, all relevant grid cells, and all cells that have relevant grid cells in at least two subtrees. As such, each (non-root) internal cell of our nonatrees either contains a disk, or merges at least two subtrees that contain disks, and hence the total number of cells in a compressed nonatree is linear in the number of disks it stores, which is upper bounded by \(O(n)\). Specifically, two high-level steps can be distinguished in our approach: 1. In the lowest relevant bucket, we simply select an arbitrary disk from each relevant grid cell. In other relevant buckets, we consider for each grid cell \(c\in G_{k}^{i}\) the subdivision of \(c\) in \(G_{k}^{j}\) in the preceding relevant bucket \(j<i\). We try to combine the independent set from the relevant child(ren) of \(c\) (that we call _obstacle disk(s)_) with at most one additional disk in \(c\). Once all relevant cells have been handled, we output the largest independent set among the four sets computed for the shifted nonatrees \(N_{1},\ldots,N_{4}\). This produces a constant-factor approximation, as shown in Lemmata 6-8. 2. The obstacle disk in the previous step may cover more area than the disks in the independent set of the children of \(c\). 
Hence, we consider computing the obstacle disk only for independent sets originating from a single child cell. In this case, we choose as the obstacle the smallest disk covering the contributing child cell in question. The obstacle will then be of comparable size to that child cell, and hence also comparable to the contributed disk, intersecting at most a constant number of disks in the parent cell \(c\). Otherwise, if the independent set of the children originates from more than one child, we simply do not add a disk from \(c\), even if that may be possible. Lemmata 9 and 10 show that we still obtain a constant-factor approximate MIS under these constraints. We will now elaborate on the high-level steps, and provide a sequence of lemmas that can be combined to prove the approximation ratio of the computed independent set. In the first step, we deviate from an optimal solution in three ways: We follow a greedy bottom-up approach, we take at most one disk per grid cell, and we do not combine the solutions of the shifted nonatrees. Focusing on the latter concern first, we extend Lemma 3 to prove the same bound for our shifted nonatrees. Before we can prove this lemma, we first define the intersection between a disk and a nonatree, as follows. We say that a disk \(d\) intersects (the grid lines of) a nonatree \(N_{k}\) if and only if it intersects grid lines of \(G_{k}^{i}\) for the bucket \(i\) with \(r_{d}\in(\frac{3^{i-1}}{4},\frac{3^{i}}{4}]\), where \(r_{d}\) is the radius of \(d\). **Lemma 6**.: _For a set \(S\) of disks in \(\mathbb{R}^{2}\), the grid lines of at least one nonatree, out of the shifted nonatrees \(N_{1},\ldots,N_{4}\), do not intersect at least \(|S|/4\) disks._ Proof.: Consider the subset \(D_{1}\subseteq S\) of disks intersecting \(N_{1}\). If \(|D_{1}|<\frac{3|S|}{4}\), then at least \(|S|/4\) disks are not intersected by \(N_{1}\), and the lemma trivially holds.
Now assume that \(|D_{1}|\geq\frac{3|S|}{4}\) and consider the partitioning of \(D_{1}\) into \(D_{2}\subseteq D_{1}\), \(D_{3}\subseteq D_{1}\), and \(D_{4}\subseteq D_{1}\) which respectively intersect only vertical lines, only horizontal lines, or both vertical and horizontal lines of \(N_{1}\). By definition of the grids that make up the nonatrees \(N_{2},N_{3},N_{4}\), the disks in \(D_{2}\) do not intersect \(N_{2}\), and similarly \(D_{3}\) and \(D_{4}\) do not intersect \(N_{3}\) and \(N_{4}\), respectively. Let \(D^{*}\) be the largest set out of \(D_{2}\), \(D_{3}\), and \(D_{4}\). Since \(|D_{1}|=|D_{2}|+|D_{3}|+|D_{4}|\) and \(|D_{1}|\geq\frac{3|S|}{4}\), \(D^{*}\) must have size at least \(|S|/4\). Hence, the nonatree corresponding to \(D^{*}\) does not intersect at least \(|S|/4\) disks in \(S\). Similarly, we can generalize Lemma 4 to work for the newly defined grids in \(\mathcal{G}_{i}\), that is, for disks with different radii in a certain range. We show that taking only a single disk per grid cell into our solution is a \(35\)-approximation of a MIS. **Lemma 7**.: _If \(S\) is a maximum independent set of the disks in a grid cell of a nonatree \(N_{k}\), then \(|S|\leq 35\)._ Proof.: Since we store disks of smaller radius compared to the unit disk case, the largest independent set inside a single grid cell increases from three to \(35\): In a bucket \(i\) the disks have radius \(r>\frac{3^{i-1}}{4}\) and the grid cells have side length \(3^{i}\). The grid cells are therefore just too small to fit \(3\cdot 4=12\) times the smallest disk radius horizontally or vertically. Hence, we cannot fit a grid of \(6\times 6=36\) disjoint disks in one grid cell (which is the tightest packing for a square with side length \(3^{i}\) and disks with radius \(r=\frac{3^{i-1}}{4}\)[12]; see also [13]). To round out the first step, we prove that our greedy strategy contributes at most a factor \(5\) to our approximation factor. 
**Lemma 8**.: _Let \(S\) be a maximum independent set of the disks in a nonatree \(N_{k}\) such that each grid cell in \(N_{k}\) contributes at most one disk. An algorithm that considers the grid cells in \(N_{k}\) in bottom-up fashion, and computes an independent set \(S^{\prime}\) by greedily adding at most one non-overlapping disk per grid cell to \(S^{\prime}\), is a \(5\)-approximation of \(S\)._ Proof.: Every disk \(d\) can intersect at most 5 pairwise disjoint disks that have a radius at least as large as the radius of \(d\). Thus, a greedily selected disk \(d\in S^{\prime}\setminus S\) can overlap with at most five larger disks in \(S\). These five disks are necessarily located in higher buckets (or one disk can be located in the same cell as \(d\)), since all grid cells of one bucket in \(N_{k}\) are disjoint, and lower buckets contain smaller disks. As such, the greedy algorithm will not find these five disks before considering \(d\), and cannot add them after greedily adding \(d\) to \(S^{\prime}\). Thus, \(S^{\prime}\) is a \(5\)-approximation of \(S\). For the second step, we use several data structures and algorithmic steps that help us achieve polylogarithmic update and query times in the dynamic setting. For now we analyze solely the approximation factor incurred by these techniques. We start by analyzing the approximation ratio for not taking any disk from a cell \(c\), if multiple of its children contribute disks to the computed independent set. **Lemma 9**.: _Let \(S\) be a maximum independent set of the disks in a nonatree \(N_{k}\), such that each grid cell in \(N_{k}\) contributes at most one disk. The independent set \(S^{\prime}\), which contains all disks in \(S\) except disks from cells that have at least two relevant child cells, is a 2-approximation of \(S\)._ Proof.: Consider the tree structure \(T\) of nonatree \(N_{k}\). Every cell that is a leaf of \(T\) contributes its smallest disk to both \(S\) and \(S^{\prime}\).
Contract every edge of \(T\) that connects a cell that does not contribute a disk to \(S\) to its parent. The remaining structure \(T^{\prime}\) is still a tree, and every node of the tree corresponds to a cell that contributes exactly one disk to \(S\), and hence \(|S|=|T^{\prime}|\). Internal nodes of \(T^{\prime}\) either have at least two children, in which case they do not contribute a disk to \(S^{\prime}\), or they have one child, in which case they do contribute a disk to \(S^{\prime}\). If we add an additional leaf to every internal node of \(T^{\prime}\) that has only one child, then we get a tree \(T^{\prime}_{2}\), where every internal node has at least two children, and every leaf corresponds to a disk in \(S^{\prime}\): For internal nodes that contribute a disk, the newly added leaf corresponds to the contributed disk. Since every internal node has at least two children, the number of leaves of \(T^{\prime}_{2}\) is strictly larger than \(|T^{\prime}_{2}|/2\). It follows that \(|S^{\prime}|>|T^{\prime}_{2}|/2\). Finally, the size of \(T^{\prime}_{2}\) is at least as large as \(T^{\prime}\), meaning \(|T^{\prime}_{2}|\geq|T^{\prime}|\). The approximation ratio of \(S^{\prime}\) compared to \(S\) is then \(\frac{|S^{\prime}|}{|S|}>\frac{|T^{\prime}_{2}|/2}{|T^{\prime}|}\geq\frac{1}{2}\). Next we consider the obstacle disk that we compute when only one child cell contributes disks to the independent set. Before we elaborate on the approximation ratio of this algorithmic procedure, we first explain the steps in more detail. For the leaf cells of a nonatree, it is unnecessary to compute an obstacle disk, since these cells contribute at most a single disk, which can act as its own obstacle disk. For a cell \(c\) that is an internal node of the nonatree, with at most one relevant child that contributes to the independent set, we have two options for the obstacle disk of \(c\).
We use the obstacle disk of the child cell to determine whether there is a disk in \(c\) disjoint from the child obstacle, and either find such a disjoint disk \(d\) or not. If we find such a disk \(d\), we compute a new obstacle disk for \(c\), by taking the smallest enclosing disk of \(c\). If there is no such disk \(d\), then we use the obstacle disk of the child as the obstacle disk for \(c\). This ensures that the obstacle disk does not grow unnecessarily, which is relevant when proving the following approximation factor. **Lemma 10**.: _Let \(c\) be a cell in bucket \(i\) of nonatree \(N_{k}\) that contributes a disk to an independent set. The computed obstacle disk \(d_{o}\) can overlap with no more than \(23\) pairwise disjoint disks in higher buckets._ Proof.: Let \(c\) be an obstacle cell at level \(i\in\mathbb{Z}\). Then \(c\) has side length \(3^{i}\), and the disks associated with \(c\) have radii in the range \((\frac{1}{4}\,3^{i-1},\frac{1}{4}\,3^{i}]\). Obstacle disk \(d_{o}\) therefore has a radius of \(r=\frac{\sqrt{2}}{2}\cdot 3^{i}\), and \(\operatorname{area}(d_{o})=\pi r^{2}=\frac{\pi}{2}\,3^{2i}\) (see Figure 2(a)). The radius of any disk \(d\in\mathcal{D}_{k}\) in a higher bucket is at least \(\frac{1}{4}\,3^{i}\). Let \(\mathcal{A}\) be the set of pairwise disjoint disks in \(\mathcal{D}_{k}\) in higher buckets that intersect \(d_{o}\). Scale down every disk \(d\in\mathcal{A}\) from a point in \(d\cap d_{o}\) to a disk \(\widehat{d}\) of radius \(\widehat{r}=\frac{1}{4}\,3^{i}\); and let \(\widehat{\mathcal{A}}\) be the set of resulting disks. Note that \(|\widehat{\mathcal{A}}|=|\mathcal{A}|\), the disks in \(\widehat{\mathcal{A}}\) are pairwise disjoint, and they all intersect \(d_{o}\). Let \(D\) be a disk concentric with \(d_{o}\), of radius \(R=3^{i}/\sqrt{2}+\frac{1}{2}\,3^{i}=\frac{1+\sqrt{2}}{2}\,3^{i}\). By the triangle inequality, \(D\) contains all (scaled) disks in \(\widehat{\mathcal{A}}\) (see Figure 2(b)).
Since the disks in \(\widehat{\mathcal{A}}\) are pairwise disjoint, we have \(|\widehat{\mathcal{A}}|\cdot\pi\widehat{r}^{2}=\sum_{\widehat{d}\in\widehat{ \mathcal{A}}}\operatorname{area}(\widehat{d})\leq\operatorname{area}(D)=\pi R ^{2}\). This yields \(|\mathcal{A}|=|\widehat{\mathcal{A}}|\leq\operatorname{area}(D)/\operatorname {area}(\widehat{d})=R^{2}/\widehat{r}^{2}=(2+2\sqrt{2})^{2}\approx 23.31\), as claimed.

**Lemma 11**.: _For a set of disks in the plane, one of our shifted nonatrees \(N_{1},\dots,N_{4}\) maintains an independent set of size \(\Omega(|\mathrm{OPT}|)\), where \(\mathrm{OPT}\) is a MIS._ Proof.: By Lemma 6 we know that at least \(|\mathrm{OPT}|/4\) disks of \(\mathrm{OPT}\) are stored in one of the four nonatrees, say \(N_{\mathrm{OPT}}\). Lemma 7 tells us that at most 35 disks in \(\mathrm{OPT}\) can be together in a single cell of such a nonatree. Since the maintained independent set takes at most a single disk from each cell, it is a \(4\cdot 35=140\)-approximation of \(\mathrm{OPT}\). By considering the cells in bottom-up fashion when constructing the independent set, Lemma 8 shows that a \(5\)-approximation of the \(140\)-approximation will be found, leading to an approximation factor of \(5\cdot 140=700\). Lemma 9 allows us to remove those disks in cells that have two relevant child cells, to find a \(2\)-approximation of the independent set before removing the disks, leading to a \(2\cdot 700=1400\)-approximation. Finally, we use an obstacle disk, instead of the actual disks in the independent set of a child cell, to check for overlap with disks in the parent cell. Lemma 10 tells us that we disregard at most 23 disks in higher buckets for overlapping with the obstacle.
Since it is unclear whether these 23 disks really overlap with disks in the independent set of the child cell, and since the obstacle disk is computed only when a child contributes at least 2 disks to the independent set, this leads to an approximation factor of \(\frac{25}{2}\). The maintained solution in \(N_{\mathrm{OPT}}\) is hence a \(\frac{25}{2}\)-approximation of a \(1400\)-approximation.

Figure 2: **(a)** A yellow obstacle disk for a cell in bucket \(i\) along with disks in bucket \(i+1\). The grid lines for bucket \(i\) are drawn in grey, except for the cell with the obstacle disk. **(b)** The dashed disks of radius larger than \(3^{i}/4\) are scaled down such that all white disks have radius \(3^{i}/4\) and intersect the yellow obstacle disk \(d_{o}\). All white disks are contained in the blue disk \(D\) with a radius \(3^{i}/2\) larger than \(d_{o}\).

### Modifications to Support Dynamic Maintenance

In Section 5.1, we defined four hierarchical grids (nonatrees) \(N_{1},\ldots,N_{4}\), described a greedy algorithm that computes independent sets \(S_{1},\ldots,S_{4}\) that are consistent with the grids, and showed that the largest of the four independent sets is a constant-factor approximation of the MIS. In this section, we make several changes to the static data structures to support efficient updates while maintaining a constant-factor approximation. Then in Section 5.4, we show that the modified data structures can be maintained dynamically in expected amortized polylogarithmic update time. We start with a summary of the modifications:

* **Sparsification.** We split each nonatree \(N_{i}\), \(i\in\{1,\ldots,4\}\), into two trees \(N_{i}^{\mathrm{odd}}\) and \(N_{i}^{\mathrm{even}}\), one containing the odd levels and the other containing the even levels. As a result, the radii of disks at different (nonempty) levels differ by at least a factor of 3.
* **Clearance.** For a disk \(d\) of radius \(r\), let \(3d\) denote the concentric disk of radius \(3r\). Recall that our greedy strategy adds disks to an independent set \(S\) in a bottom-up traversal of a nonatree. When we add a disk \(d\in\mathcal{D}\) to \(S\), we require that we do not add any larger disk to \(S\) that intersects \(3d\). A simple volume argument shows that this modification still yields a constant-factor approximation. As a result, if a new disk is inserted, it intersects at most one larger disk in \(S\), which simplifies the update operation in Section 5.4.
* **Candidate Disks.** The naive approach for a dynamic update of the independent set \(S\) in a nonatree \(N\) would work as follows: When a new disk \(d\) is inserted or deleted, we find a nonatree \(N\) and a cell \(c\in N\) associated with \(d\); and then, on an ascending path of \(N\) from \(c\) to the root, we re-compute the disks in \(S\) associated with the cells. Unfortunately, the height of the nonatree may be linear, and we cannot afford to traverse an ascending path from \(c\) to the root. Instead, we run the greedy process only locally, on an ascending path of \(N\) between two cells \(c_{1}\prec c_{2}\) that contain disks \(s_{1},s_{2}\in S\), respectively. The greedy process guarantees that new disks added to \(S\) are disjoint from any smaller disk in \(S\), including \(s_{1}\). However, the new disks might intersect the larger disk \(s_{2}\in S\). In this case, we remove \(s_{2}\) from \(S\), keep it as a "placeholder" in the set \(B\), and ensure that \(S\cup B\) remains a dominating set of \(\mathcal{D}\).

**Sparsification.** Recall that for a set \(\mathcal{D}\) of \(n\) disks, \(\mathcal{D}_{i}\) denoted the subset of disks of radius \(r\), where \(\frac{3^{i-1}}{4}<r\leq\frac{3^{i}}{4}\), for all \(i\in\mathbb{Z}\). Let \(N_{1},\ldots,N_{4}\) be the four nonatrees defined in Section 5.1.
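The bucket index of a disk depends only on its radius: \(r\in(3^{i-1}/4,\,3^{i}/4]\) is equivalent to \(i=\lceil\log_{3}(4r)\rceil\). A minimal sketch of the bucket computation and the odd/even routing used by the sparsification (the function names are ours, not from the paper):

```python
import math

def bucket(r: float) -> int:
    """Bucket index i with 3**(i-1)/4 < r <= 3**i/4, i.e. i = ceil(log3(4r))."""
    i = math.ceil(math.log(4 * r, 3))
    # guard against floating-point error at bucket boundaries
    while 4 * r <= 3 ** (i - 1):
        i -= 1
    while 4 * r > 3 ** i:
        i += 1
    return i

def tree_copy(r: float) -> str:
    """Sparsification: route a disk to the odd- or even-level copy of its nonatree."""
    return "even" if bucket(r) % 2 == 0 else "odd"
```

For example, a disk of radius \(3/4\) lands in bucket 1 (an odd level), while a disk of radius \(9/4\) lands in bucket 2 (an even level), so the two disks end up in different copies of the same nonatree.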
For every \(k\in\{1,\ldots,4\}\), we create two copies of \(N_{k}\), denoted \(N_{k}^{\mathrm{even}}\) and \(N_{k}^{\mathrm{odd}}\). For \(i\) even (resp., odd), we associate the disks in \(\mathcal{D}_{i}\) to the nonatree \(N_{k}^{\mathrm{even}}\) (resp., \(N_{k}^{\mathrm{odd}}\)). For simplicity, we denote the eight nonatrees \(N_{k}^{\mathrm{odd}}\) and \(N_{k}^{\mathrm{even}}\) as \(N_{1},\ldots,N_{8}\). We state a simple corollary to Lemma 11.

**Lemma 12**.: _For a set of disks in the plane, one of our shifted nonatrees \(N_{1},\dots,N_{8}\) maintains an independent set of size \(|\mathrm{OPT}|/C\), where \(\mathrm{OPT}\) is a MIS and \(C\) is an absolute constant._ Proof.: Let \(S\subset\mathcal{D}\) be a MIS of a set \(\mathcal{D}\) of disks. We can partition \(\mathcal{D}\) into \(\mathcal{D}^{\mathrm{even}}=\bigcup_{i\text{ even}}\mathcal{D}_{i}\) and \(\mathcal{D}^{\mathrm{odd}}=\bigcup_{i\text{ odd}}\mathcal{D}_{i}\). Let \(S^{\mathrm{even}}=S\cap\mathcal{D}^{\mathrm{even}}\) and \(S^{\mathrm{odd}}=S\cap\mathcal{D}^{\mathrm{odd}}\). Clearly, \(\mathrm{OPT}^{\mathrm{even}}\geq|S^{\mathrm{even}}|\) and \(\mathrm{OPT}^{\mathrm{odd}}\geq|S^{\mathrm{odd}}|\), where \(\mathrm{OPT}^{\mathrm{even}}\) and \(\mathrm{OPT}^{\mathrm{odd}}\) denote maximum independent sets of \(\mathcal{D}^{\mathrm{even}}\) and \(\mathcal{D}^{\mathrm{odd}}\), and \(\max\{|S^{\mathrm{even}}|,|S^{\mathrm{odd}}|\}\geq\frac{1}{2}\,|S|=\frac{1}{2} \,\mathrm{OPT}\). Now Lemma 11 completes the proof.

The advantage of partitioning the nonatrees into odd and even levels is the following. **Lemma 13**.: _Let \(d_{1},d_{2}\in\mathcal{D}\) be disks of radii \(r_{1},r_{2}>0\), respectively, associated with cells \(c_{1}\) and \(c_{2}\) in a nonatree \(N_{k}\), \(k\in\{1,\ldots,8\}\). If \(c_{1}\prec c_{2}\), then \(3\,r_{1}<r_{2}\)._ Proof.: By construction, \(N_{k}\) contains disks at odd levels only or at even levels only. Then \(3^{i-1}/4<r_{1}\leq 3^{i}/4\) and \(3^{i^{\prime}-1}/4<r_{2}\leq 3^{i^{\prime}}/4\) for some integers \(i<i^{\prime}\) of the same parity.
Since \(i\) and \(i^{\prime}\) have the same parity, we get \(i+2\leq i^{\prime}\), which gives \(r_{1}\leq 3^{i}/4<3^{i+1}/4<r_{2}\), hence \(r_{2}/r_{1}>3\).

**Clearance.** The guiding principle of the greedy strategy is that if we add a disk \(d\) to the independent set, we exclude all larger disks that intersect \(d\). For our dynamic algorithm, we wish to maintain a stronger property:

**Definition 1**.: _Let \(S\) be an independent set of the disks in a nonatree \(N_{k}\) such that each grid cell in \(N_{k}\) contributes at most one disk. For \(\lambda\geq 1\), we say that \(S\) has \(\lambda\)**-clearance** if the following holds: If \(d_{1},d_{2}\in S\) are associated with cells \(c_{1}\) and \(c_{2}\), resp., and \(c_{1}\prec c_{2}\), then \(\lambda d_{1}\) is disjoint from \(d_{2}\)._

An easy volume argument shows that a modified greedy algorithm that maintains 3-clearance still returns a constant-factor approximate MIS (Lemma 15). The key advantage of an independent set with 3-clearance is the following property, which will be helpful for our dynamic algorithm:

**Lemma 14**.: _Let \(S\) be an independent set of the disks in a nonatree \(N_{k}\) such that each grid cell in \(N_{k}\) contributes at most one disk; and assume that \(S\) has 3-clearance. Then every disk that lies in a cell of \(N_{k}\) intersects at most one larger disk in \(S\)._ Proof.: Let \(d_{0}\) be an arbitrary disk in a cell \(c_{0}\) of \(N_{k}\), and assume that \(d_{0}\) intersects two or more larger disks in \(S\). Let \(d_{1}\) and \(d_{2}\) be the smallest and the second smallest disks in \(S\) that (i) intersect \(d_{0}\) and (ii) are larger than \(d_{0}\). Clearly, if \(d_{1}\) and \(d_{2}\) are associated with cells \(c_{1}\) and \(c_{2}\), then we have \(c_{0}\prec c_{1}\prec c_{2}\). Since \(d_{0}\) intersects the larger disk \(d_{1}\), we have \(d_{0}\subset 3d_{1}\). Since \(S\) has 3-clearance, \(d_{2}\) is disjoint from \(3d_{1}\).
Consequently, \(d_{2}\) cannot intersect \(d_{0}\): a contradiction that completes the proof.

**Candidate Disks.** For a set of disks \(\mathcal{D}\), we will maintain an independent set \(S\subset\mathcal{D}\), and a set \(B\subset\mathcal{D}\) of _candidate disks_. When a disk \(d\) associated with a cell \(c\in N_{k}\) is inserted or deleted from \(\mathcal{D}\), we re-run the greedy process on the nonatree locally, between the cells \(c_{1}\preceq c\prec c_{2}\) that contain disks \(s_{1},s_{2}\in S\). If any of the new disks added to \(S\) intersects \(s_{2}\), then we remove \(s_{2}\) from \(S\), and add it to \(B\) as a _candidate disk_. The clearance (defined above) guarantees that the new disks added to \(S\) in this process do not intersect any disk in \(S\) larger than \(s_{2}\). Importantly, we maintain the properties that (i) the union \(S\cup B\) is a dominating set for \(\mathcal{D}\), that is, all disks in \(\mathcal{D}\) intersect a neighborhood (clearance) of a disk in \(S\) or \(B\); and (ii) the number of candidate disks is bounded by \(|B|\leq O(|S|)\). We maintain the upper bound \(|B|\leq O(|S|)\) using a simple load balancing strategy: We maintain a partition of the candidate disks \(B=\bigcup_{d\in S}B(d)\). All candidate disks in \(B(d)\) lie in cells of the nonatree along an ascending path \(P(d)\) from the cell of \(d\) to the cell of the next higher disk in \(S\). The size of the sets \(B(d)\), \(d\in S\), may change during the dynamic updates. We can control the average size of \(B(d)\), \(d\in S\), by a _cleanup_ subroutine: if we re-run the greedy algorithm locally on a path \(P(d)\), we eliminate all candidate disks in \(B(d)\) and increase \(B(d^{\prime})\) by one for another disk \(d^{\prime}\in S\). The bound \(|B|\leq O(|S|)\) can be maintained by applying \(O(1)\) cleanup steps after each disk insertion and deletion, as shown in Section 5.4.
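As a sanity check, the \(\lambda\)-clearance condition of Definition 1 and the at-most-one-larger-disk property of Lemma 14 can be tested with elementary distance predicates. A minimal sketch with disks as \((x,y,r)\) triples, where the cell order of the definition is replaced by a radius comparison (names and representation are ours, not from the paper):

```python
import math

def dist(d1, d2):
    return math.hypot(d1[0] - d2[0], d1[1] - d2[1])

def intersects(d1, d2):
    return dist(d1, d2) <= d1[2] + d2[2]

def has_clearance(S, lam=3.0):
    """lam*d1 must be disjoint from d2 for all d1, d2 in S with r1 < r2."""
    return all(dist(d1, d2) > lam * d1[2] + d2[2]
               for d1 in S for d2 in S if d1[2] < d2[2])

# With 3-clearance, a small disk meets at most one larger disk of S (Lemma 14):
# here d0 touches the unit disk but stays well away from the radius-4 disk.
S = [(0.0, 0.0, 1.0), (10.0, 0.0, 4.0)]   # dist 10 > 3*1 + 4: 3-clearance holds
d0 = (1.2, 0.0, 0.3)
larger = [d for d in S if d[2] > d0[2] and intersects(d0, d)]
assert has_clearance(S) and len(larger) == 1
```

By contrast, the pair \((0,0,1)\) and \((4,0,2)\) is independent but violates 3-clearance, since \(4\leq 3\cdot 1+2\).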
**Invariants.** We are now ready to formulate invariants that guarantee that one of eight possible independent sets is a constant-factor approximation of a MIS. In Section 5.4, we show how to maintain these independent sets and the invariants in polylogarithmic time. For a set of disks \(\mathcal{D}\), we maintain eight nonatrees \(N_{1},\ldots,N_{8}\), and for each \(k\in\{1,\ldots,8\}\) we maintain two sets of disks \(S_{k}\) and \(B_{k}\) that satisfy the following invariants.

1. Every disk \(d\in\mathcal{D}\) is associated with a cell of at least one nonatree \(N_{k}\), \(k\in\{1,\ldots,8\}\).
2. In each nonatree \(N_{k}\), only odd or only even levels are associated with disks in \(\mathcal{D}\). Let \(\mathcal{D}_{k}\) be the set of disks associated with the cells in \(N_{k}\).
3. For every \(k\in\{1,\ldots,8\}\),
   a. \(S_{k}\) and \(B_{k}\) are disjoint subsets of \(\mathcal{D}_{k}\);
   b. \(S_{k}\) is an independent set with 3-clearance; and
   c. each cell of \(N_{k}\) contributes at most one disk to \(S_{k}\).
4. For every \(d\in S_{k}\), there is a set \(B_{k}(d)\subset B_{k}\) such that
   a. \(B_{k}=\bigcup_{d\in S_{k}}B_{k}(d)\);
   b. \(|B_{k}|\leq 2\,|S_{k}|\);
   c. for each \(b\in B_{k}(d)\), we have \(c_{d}\prec c_{b}\), where \(c_{b}\) and \(c_{d}\) are the cells in \(N_{k}\) associated with \(b\) and \(d\), resp., and the cells \(c\), \(c_{d}\prec c\prec c_{b}\), are not associated with any disk in \(S_{k}\).
5. For every \(k\in\{1,\ldots,8\}\),
   a. a cell \(c\in N_{k}\) is an _obstacle cell_ if it is associated with a disk in \(S_{k}\) (a _true obstacle_), or it has at least two children that each contain a disk in \(S_{k}\) (a _merge obstacle_).
   b. For every obstacle cell \(c\), we define an _obstacle disk_ as \(o(c)=3d^{\prime}\), where \(d^{\prime}\) is the smallest enclosing disk of the cell \(c\).
6. If \(d\in\mathcal{D}_{k}\) is associated with a cell \(c(d)\in N_{k}\) but \(d\notin S_{k}\), then
   a. there exists a disk \(d^{\prime}\in S_{k}\cup B_{k}\) associated with the cell \(c(d)\), or
   b. \(d\) intersects the obstacle disk \(o(c^{\prime})\) for some cell \(c^{\prime}\) with \(c^{\prime}\preceq c(d)\), or
   c. \(d\) intersects a _candidate obstacle disk_ \(3d_{b}\), where \(d_{b}\) is the smallest enclosing disk of the cell \(c_{b}\preceq c(d)\) associated with some \(b\in B_{k}\).

We show (Lemma 16 below) that invariants 1-6 guarantee that the largest of the eight independent sets \(S_{1},\ldots,S_{8}\) is a constant-factor approximate MIS of \(\mathcal{D}\). As we use larger obstacle disks than in Section 5.1, to ensure 3-clearance, we need to adapt Lemma 10 to the new setting. We prove the following with an easy volume argument.

**Lemma 15**.: _Let \(c\) be an obstacle cell in a nonatree \(N_{k}\), \(k\in\{1,\ldots,8\}\). Then the obstacle disk \(o(c)\) intersects at most \(O(1)\) pairwise disjoint disks in higher buckets of \(\mathcal{D}_{k}\)._ Proof.: Let \(c\) be an obstacle cell at level \(i\in\mathbb{Z}\). Then \(c\) has side length \(3^{i}\), and the disks associated with \(c\) have radii in the range \((\frac{1}{4}\,3^{i-1},\frac{1}{4}\,3^{i}]\). By invariant 5b, the obstacle disk of \(c\) is \(o(c)=3d^{\prime}\), where \(d^{\prime}\) is the smallest enclosing disk of \(c\). That is, the radius of \(o(c)\) is \(r=3\cdot 3^{i}/\sqrt{2}=3^{i+1}/\sqrt{2}\), and \(\operatorname{area}(o(c))=\pi r^{2}=\frac{9\pi}{2}\,3^{2i}\). By Lemma 13, the radius of any disk \(d\in\mathcal{D}_{k}\) in a higher bucket is at least \(3\cdot\frac{1}{4}\,3^{i}=\frac{1}{4}\,3^{i+1}\). Let \(\mathcal{A}\) be the set of pairwise disjoint disks in \(\mathcal{D}_{k}\) in higher buckets that intersect \(o(c)\). Scale down every disk \(d\in\mathcal{A}\) from a point in \(d\cap o(c)\) to a disk \(\widehat{d}\) of radius \(\widehat{r}=\frac{1}{4}\,3^{i+1}\); and let \(\widehat{\mathcal{A}}\) be the set of resulting disks.
Note that \(|\widehat{\mathcal{A}}|=|\mathcal{A}|\), the disks in \(\widehat{\mathcal{A}}\) are pairwise disjoint, and they all intersect \(o(c)\). Let \(D\) be a disk concentric with \(o(c)\), of radius \(R=3^{i+1}/\sqrt{2}+\frac{1}{2}\,3^{i+1}=\frac{1+\sqrt{2}}{2}\,3^{i+1}\). By the triangle inequality, \(D\) contains all (scaled) disks in \(\widehat{\mathcal{A}}\) (see Figure 2(b) for an analogous example). Since the disks in \(\widehat{\mathcal{A}}\) are pairwise disjoint, we have \(|\widehat{\mathcal{A}}|\cdot\pi\widehat{r}^{2}=\sum_{\widehat{d}\in\widehat{ \mathcal{A}}}\operatorname{area}(\widehat{d})\leq\operatorname{area}(D)=\pi R^{2}\). This yields \(|\mathcal{A}|=|\widehat{\mathcal{A}}|\leq\operatorname{area}(D)/\operatorname {area}(\widehat{d})=R^{2}/\widehat{r}^{2}=O(1)\), as claimed.

We are now ready to prove that invariants 1-6 ensure that one of the independent sets \(S_{1},\dots,S_{8}\) is a constant-factor approximate MIS of \(\mathcal{D}\).

**Lemma 16**.: _Let \(\mathcal{D}\) be a set of disks with the data structures described above, satisfying invariants 1-6. Then \(\max_{1\leq k\leq 8}|S_{k}|\geq\Omega(|S^{*}|)\), where \(S^{*}\) is a MIS of \(\mathcal{D}\)._ Proof.: Let \(S^{*}\subset\mathcal{D}\) be a MIS, and let \(S^{*}_{k}=S^{*}\cap\mathcal{D}_{k}\) for \(k=1,\dots,8\). By invariant 1, we have \(|S^{*}_{k}|\geq\frac{1}{8}\,|S^{*}|\) for some \(k\in\{1,\dots,8\}\). Fix this value of \(k\) for the remainder of the proof. Let \(S^{**}_{k}\subset\mathcal{D}_{k}\) be a maximum independent set subject to the constraints that (i) each grid cell in \(N_{k}\) contributes at most one disk to \(S^{**}_{k}\), and (ii) any grid cell in \(N_{k}\) that has two or more relevant children does not contribute. By Lemmata 8 and 12, we have \(|S^{**}_{k}|\geq\Omega(|S^{*}_{k}|)\geq\Omega(|S^{*}|)\). We claim that \[|S^{**}_{k}|\leq O(|S_{k}\cup B_{k}|). \tag{1}\] This will complete the proof: Invariant 4b guarantees \(|B_{k}|\leq 2\,|S_{k}|\).
Since \(S_{k}\) and \(B_{k}\) are disjoint by invariant 3a, this yields \(|S_{k}|\geq\frac{1}{3}\,(|S_{k}|+|B_{k}|)=\frac{1}{3}\,|S_{k}\cup B_{k}|\geq \Omega(|S^{**}_{k}|)\geq\Omega(|S^{*}|)\), as required.

**Charging Scheme.** We prove (1) using a charging scheme. Specifically, each disk \(d^{*}\in S^{**}_{k}\) is worth one unit. We _charge_ every disk \(d^{*}\in S^{**}_{k}\) to either a disk in \(S_{k}\cup B_{k}\) or an obstacle cell, using invariant 6. Note that the number of obstacle cells is at most \(2\,|S_{k}|\) by Lemma 9. Then we show that the total number of charges received is \(O(|S_{k}\cup B_{k}|)\), which implies \(|S^{**}_{k}|\leq O(|S_{k}\cup B_{k}|)\), as required. In a bottom-up traversal, we consider the cells of the nonatree \(N_{k}\). Consider each cell \(c\) that contributes a disk to \(S^{**}_{k}\), and consider the disk \(d^{*}\in S^{**}_{k}\) associated with \(c\). If \(d^{*}\in S_{k}\cup B_{k}\), then we charge \(d^{*}\) to itself. Otherwise, invariant 6 provides three possible reasons why \(d^{*}\) is not in \(S_{k}\). We describe our charging scheme in each case separately:

1. If there exists a disk \(d^{\prime}\in S_{k}\cup B_{k}\) associated with \(c\), then we charge \(d^{*}\) to such a disk \(d^{\prime}\).
2. Else if \(d^{*}\) intersects the obstacle disk \(o(c^{\prime})\) for some cell \(c^{\prime}\) with \(c^{\prime}\preceq c\), then we first show that \(c^{\prime}\prec c\). Suppose, to the contrary, that \(c=c^{\prime}\). Since \(c\) is not associated with any disk in \(S_{k}\cup B_{k}\), but it is an obstacle cell, \(c\) has two or more relevant children; consequently, no disk in \(S^{**}_{k}\) is associated with \(c\): a contradiction. We may now assume \(c^{\prime}\prec c\). By invariant 5, there is a unique maximal obstacle disk \(o(c^{\prime})\) for some cell \(c^{\prime}\), \(c^{\prime}\prec c\), and we charge \(d^{*}\) to the cell \(c^{\prime}\).
3.
Else \(d^{*}\) intersects the candidate obstacle disk \(3d_{b}\), where \(d_{b}\) is the smallest enclosing disk of the cell \(c_{b}\preceq c\) associated with some candidate disk \(b\in B_{k}\). We charge \(d^{*}\) to \(b\in B_{k}\).

We claim that each disk \(d^{\prime}\in S_{k}\cup B_{k}\) and each obstacle cell \(c^{\prime}\) receives \(O(1)\) charges. By the choice of the independent set \(S_{k}^{**}\), each cell \(c^{\prime}\) contributes at most one disk to \(S_{k}^{**}\). Consequently, at most one disk \(d^{*}\in S_{k}^{**}\) is charged to \(d^{\prime}\) using invariant 6a. Consider now an obstacle cell \(c^{\prime}\). By Lemma 15, an obstacle disk \(o(c^{\prime})\) intersects \(O(1)\) pairwise disjoint disks of larger radii. Consequently, \(O(1)\) disks \(d^{*}\in S_{k}^{**}\) are charged to \(c^{\prime}\) using invariant 6b. Similarly, a candidate obstacle disk \(3d_{b}\), \(b\in B_{k}\), intersects \(O(1)\) pairwise disjoint disks of larger radii, and so at most \(O(1)\) disks \(d^{*}\in S_{k}^{**}\) are charged to any candidate disk \(b\in B_{k}\) using invariant 6c. Overall, \(|S_{k}^{**}|\leq|S_{k}\cup B_{k}|+O(|S_{k}|)+O(|B_{k}|)\leq O(|S_{k}\cup B_{k}|)\), as claimed.

### Hierarchical Dynamic Nearest/Farthest Neighbor Data Structures

For each nonatree \(N_{k}\), \(k=1,\ldots,8\), we construct two point location (more precisely, cell location) data structures, \(F_{c}\) and \(F_{o}\), and two additional dynamic data structures, \(T_{\cup}\) and \(T_{\Sigma}\), described in this section. These data structures help navigate the nonatree: The cell location data structure \(F_{c}\) allows us to efficiently locate a cell in the nonatree, while \(F_{o}\) returns for a given cell \(c\) the obstacle cell \(c_{o}\), \(c_{o}\preceq c\), closest to \(c\).
Furthermore, after deleting a disk associated with a cell \(c_{q}\) from a current independent set \(S_{k}\), the data structure \(T_{\cup}\) helps find a cell \(c\), \(c_{q}\preceq c\), in which a new disk can be added to \(S_{k}\). After adding a new disk \(d_{q}\) associated with a cell \(c_{q}\), \(T_{\Sigma}\) helps find the closest cell \(c\), \(c_{q}\prec c\), in which \(d_{q}\) intersects some disk \(d\in S_{k}\) (which then has to be deleted from \(S_{k}\)). Let \(k\in\{1,\ldots,8\}\) be fixed. Assume that \(N_{k}\) is a nonatree, \(\mathcal{D}_{k}\) is a set of disks associated with cells in \(N_{k}\); and the sets \(S_{k}\) and \(B_{k}\) satisfy invariants 1-6.

* The data structure \(F_{c}\) is a point location data structure for \(N_{k}\) analogous to a \(\mathcal{Q}\)-order for compressed quadtrees [11, Chapter 2], which can be implemented in any ordered set data structure. We use \(F_{c}\) only to locate cells, and hence we refer to it as a cell location data structure. The \(\mathcal{Q}\)-order corresponds to a depth-first search of \(N_{k}\), where sibling cells are ordered (geometrically) according to a \(\mathcal{Z}\)-order (see Figure 3). When a cell \(c\) is not found, a pointer (_finger_) to the closest ancestor is returned (i.e., the cell that would be the closest ancestor of \(c\) if \(c\) existed in \(N_{k}\)).
* The data structure \(F_{o}\) mimics \(F_{c}\) but consists of only obstacle cells. Whereas the DFS order of \(F_{c}\) corresponds to a pre-order tree walk of \(N_{k}\), such that a parent cell comes before its children in the ordering, \(F_{o}\) corresponds to a post-order tree walk, and hence for a given cell \(c\), the closest obstacle cell \(c_{o}\) of \(N_{k}\), such that \(c_{o}\preceq c\), is easily found.
* The data structure \(T_{\cup}\) is for all disks in \(\mathcal{D}_{k}\).
It supports insertions and deletions to/from \(\mathcal{D}_{k}\), as well as the following query: Given a query cell \(c_{q}\) of \(N_{k}\) and an obstacle disk \(o_{q}\), find the lowest cell \(c\) such that \(c_{q}\preceq c\) and there exists a disk \(d\in\mathcal{D}_{k}\) associated with \(c\) and disjoint from \(o_{q}\), or report that no such cell exists. The data structure \(T_{\cup}\) is a hierarchical version of the DFN data structure (cf. Lemma 2).
* The data structure \(T_{\Sigma}\) is for the disks in \(S_{k}\). It supports insertions and deletions to/from \(S_{k}\), as well as the following query: Given a query disk \(d_{q}\) and a cell \(c_{q}\) of \(N_{k}\), find the lowest cell \(c\) such that \(c_{q}\preceq c\) and there exists a disk \(d\in S_{k}\) associated with \(c\) where \(d\) intersects \(d_{q}\), or report that no such cell exists. The data structure \(T_{\Sigma}\) is a hierarchical version of the DNN data structure (cf. Lemma 2).

**Data Structures \(F_{c}\) and \(F_{o}\).** Since our (compressed) nonatree \(N_{k}\) is analogous to a compressed quadtree, the cell location data structure \(F_{c}\) works exactly like a \(\mathcal{Q}\)-order for compressed quadtrees: Insertion, deletion, and cell-location queries are therefore supported in \(O(\log n)\) time [12, Chapter 2]. For completeness we show how to extend the quadtree \(\mathcal{Q}\)-order to nonatrees. The location of each cell of \(N_{k}\) is encoded in a binary number, and all cells are ordered according to their encoding. Let \(L\) denote the list of levels of the nonatree \(N_{k}\) in decreasing order. For the single cell on the top level of \(N_{k}\) we use an encoding of only zero bits, and on the second level we use the four most significant bits to encode the nine cells of \(N_{k}\), as shown in Figure 3(a). Each subsequent level \(\ell\in L\) uses the next four bits to encode the \(3\times 3\) subdivision inside the cell encoded by the previous bits, as in Figure 3(c).
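The parent-first order of \(F_{c}\) and the parent-last order of \(F_{o}\) can both be realized by comparing sequences of child indices. A minimal sketch, assuming a cell is identified by its path of child indices \(0,\ldots,8\) from the root (our representation, not the paper's bit encoding):

```python
def preorder_key(path):
    """F_c-style order: lexicographic comparison of child-index paths
    puts a parent (a proper prefix) before all of its descendants."""
    return tuple(path)

def postorder_key(path):
    """F_o-style order: appending a sentinel digit 9 (larger than any
    child index 0..8) makes a parent sort after all of its descendants."""
    return tuple(path) + (9,)

# root, a cell, two of its children, and a sibling cell
cells = [(), (4,), (4, 0), (4, 8), (7,)]
assert sorted(cells, key=preorder_key) == [(), (4,), (4, 0), (4, 8), (7,)]
assert sorted(cells, key=postorder_key) == [(4, 0), (4, 8), (4,), (7,), ()]
```

In the second order, a predecessor query on a cell returns a cell inside its subtree (if any), which is the behavior needed for locating the closest obstacle cell below a given cell.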
Finally, the data structure \(F_{o}\) works exactly like \(F_{c}\) but contains only obstacle cells and uses a slightly different encoding that allows a parent cell to be ordered after its descendants, instead of before, as shown in Figure 3(b). This property is crucial for our usage of \(F_{o}\): We will query \(F_{o}\) with a cell \(c\) to find the closest obstacle cell \(c_{o}\), \(c_{o}\preceq c\), whose obstacle helps us determine which disks in higher levels of \(N_{k}\) (at least as high as \(c\)) can be added to \(S_{k}\) without overlapping (the 3-clearance of) disks in \(S_{k}\) in the levels below \(c\). The returned obstacle cell is always uniquely defined because we maintain invariant 5a. Either \(c\) has multiple subtrees in which an obstacle is defined, in which case \(c\) must be an obstacle cell itself, or the closest obstacle cell \(c_{o}\) below (and including) \(c\) is located in the single subtree rooted at \(c\) containing all relevant cells contained in \(c\). In both cases, a cell location query with \(c\) will either find \(c_{o}=c\) as the obstacle cell we are looking for, or returns the predecessor of \(c\), which is the closest obstacle cell \(c_{o}\) in the single subtree rooted at \(c\).

**Data Structure \(T_{\cup}\).** Let \(L\) denote the list of levels of the nonatree \(N_{k}\) in increasing order. The _weight_ \(w(\ell)\) of a level \(\ell\in L\) is the number of disks in \(\mathcal{D}_{k}\) associated with cells in level \(\ell\). In particular, the sum of weights is \(\sum_{\ell\in L}w(\ell)=|\mathcal{D}_{k}|\). Let \(T_{\cup}\) be a _weight-balanced binary search tree_ on \(L\) [1, Sec. 3.2]; see Figure 4(a). That is, \(T_{\cup}\) is a rooted tree, where the leaves correspond to the elements of \(L\), and each internal node corresponds to a sequence of consecutive leaves in \(L\).
The _weight_ of a subtree \(T_{\cup}(v)\) rooted at a node \(v\), denoted \(w(T_{\cup}(v))\), is the sum of the weights of the leaves in \(T_{\cup}(v)\). The weight-balance is specified by a parameter \(\alpha\approx 0.29\), as follows: For each subtree, the left and right sub-subtrees each have at least an \(\alpha\) fraction of the total weight of the subtree, or are singletons (i.e., leaves) of arbitrary weight. It is known that a weight-balanced tree with total weight \(n\) has height \(O(\log n)\), and supports _insertion_ and _deletion_ of leaves using \(O(\log n)\) rotations (Figure 4(b)). Furthermore, between one rotation at a node \(v\) and the next rotation at \(v\), a positive fraction of all leaves below \(v\) are changed [1, Sec. 3.2].

Figure 3: \(\mathcal{Q}\)-orders in nonatrees. **(a)** The encoding used in \(F_{c}\), ordering the parent before its descendants. **(b)** The encoding used in \(F_{o}\), ordering the parent after its descendants. **(c)** The recursive orders in \(F_{c}\): The blue arrow is encoded by \(1010xxxx\); if \(xxxx=0000\) then the middle cell \(c\) on the second level is located, and otherwise the four trailing bits determine the child of \(c\).

Recall that each node \(v\) of \(T_{\cup}\) corresponds to a sequence of consecutive levels of the nonatree \(N_{k}\). Let \(\mathcal{D}_{k}(v)\) denote the set of all disks of \(\mathcal{D}_{k}\) on these levels; and let \(G(v)\) denote the grid corresponding to the highest of these levels. For each cell \(c\in G(v)\), let \(\mathcal{D}_{k}(v,c)\) denote the set of disks in \(\mathcal{D}_{k}(v)\) that lie in the cell \(c\). Now node \(v\) of the tree \(T_{\cup}\) stores, for each nonempty cell \(c\in G(v)\), the DFN data structure for \(\mathcal{D}_{k}(v,c)\).
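A heavily simplified sketch of the \(T_{\cup}\) idea may help. Below, each level is treated as a single cell, a linear scan stands in for the DFN data structure at each node, and plain balanced recursion stands in for the weight-balanced tree; only the query interface (find the lowest level at or above a query level that holds a disk disjoint from a query disk) is retained. All names are ours, and the real structure replaces the \(O(n)\) scans with polylogarithmic DFN queries:

```python
import math

def disjoint(d1, d2):
    return math.hypot(d1[0] - d2[0], d1[1] - d2[1]) > d1[2] + d2[2]

class LevelTree:
    """Sorted (level, disks) pairs; the recursion over entries[lo:hi]
    plays the role of a tree node storing a DFN structure for its range."""
    def __init__(self, entries):
        self.entries = sorted(entries)            # list of (level, [disks])
    def _any_disjoint(self, lo, hi, q):
        # stand-in for one DFN query on the node covering entries[lo:hi]
        return any(disjoint(d, q) for _, ds in self.entries[lo:hi] for d in ds)
    def _lowest(self, lo, hi, q):
        if lo >= hi or not self._any_disjoint(lo, hi, q):
            return None
        if hi - lo == 1:
            level, ds = self.entries[lo]
            return level, next(d for d in ds if disjoint(d, q))
        mid = (lo + hi) // 2
        return self._lowest(lo, mid, q) or self._lowest(mid, hi, q)
    def query(self, level_q, q):
        """Lowest level >= level_q holding a disk disjoint from q, plus that disk."""
        lo = next((i for i, (lvl, _) in enumerate(self.entries) if lvl >= level_q),
                  len(self.entries))
        return self._lowest(lo, len(self.entries), q)

T = LevelTree([(0, [(0.0, 0.0, 0.2)]), (2, [(0.1, 0.0, 1.5)]),
               (4, [(20.0, 0.0, 10.0)])])
assert T.query(0, (0.0, 0.0, 0.5)) == (4, (20.0, 0.0, 10.0))
assert T.query(5, (0.0, 0.0, 0.5)) is None
```

The descent into the lowest subtree whose aggregate answers "yes" mirrors the binary search used in the query procedure of the actual structure.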
**Lemma 17**.: _The data structure \(T_{\cup}\) supports insertions and deletions of disks in \(\mathcal{D}_{k}\) in \(O(\log^{10}n)\) expected amortized time, as well as the following query in \(O(\log^{3}n)\) worst-case time: Given a query cell \(c_{q}\) of \(N_{k}\) and an obstacle disk \(d_{q}\), find the lowest cell \(c\) such that \(c_{q}\leq c\) and there exists a disk \(d\in\mathcal{D}_{k}\) associated with \(c\) and disjoint from \(d_{q}\), or report that no such cell exists._ Proof.: As noted above, for each internal node \(v\), the weight of the two subtrees must change by \(\Omega(w(T_{\cup}(v)))\) between two consecutive rotations. For a rotation at node \(v\), we recompute all DFN data structures at the new child of \(v\). Specifically, consider w.l.o.g. a right rotation at \(v\) (see Figure 4(b)), where the left child \(u\) is removed and the new right child \(u^{\prime}\) is created. The DFN data structures at \(v\), \(x\), \(y\), and \(z\) remain valid, and we need to compute a new DFN data structure for \(u^{\prime}\). By Lemma 2, the expected preprocessing time of the DFN data structure for a set of \(m\) disks is \(O(m\log^{9}m)\). This means that the update of the data structure \(T_{\cup}\) due to a rotation at a node \(v\) of weight \(m=w(u^{\prime})\leq O(n)\) takes \(O(m\log^{9}m)\) time. Consequently, rotations at a node \(v\) of weight \(w(v)\) can be done in \(O(\log^{9}w(v))\leq O(\log^{9}n)\) expected amortized time. Thus, rotations on each level of \(T_{\cup}\) take \(O(\log^{9}n)\) expected amortized time per insertion or deletion. Summation over \(O(\log n)\) levels implies that \(O(\log^{10}n)\) expected amortized update time is devoted to rotations. Note also that for a disk insertion or deletion, we also update \(O(\log n)\) DFN data structures (one on each level of \(T_{\cup}\)), each of which takes \(O(\log^{9}n)\) expected amortized update time.
Overall, the data structure \(T_{\cup}\) supports disk insertion and deletion in \(O(\log^{10}n)\) amortized expected time. For a query cell \(c_{q}\) and disk \(d_{q}\), consider the ascending path in \(T_{\cup}\) from the level of \(c_{q}\) to the root. Consider the right siblings (if any) of all the nodes in this path. For each right sibling \(v\), there is a unique cell \(c_{v}\in G(v)\) such that \(c_{q}\subseteq c_{v}\). We query the DFN data structure for \(\mathcal{D}_{k}(v,c_{v})\). If none of these DFN data structures finds any disk in \(\mathcal{D}_{k}(v,c_{v})\) disjoint from \(d_{q}\), then we report that all disks associated with the ancestor cells of \(c_{q}\) intersect \(d_{q}\). Otherwise, let \(v\) be the first (i.e., lowest) right sibling in which the DFN data structure returns a disk \(d_{v}\in\mathcal{D}_{k}(v,c_{v})\) disjoint from \(d_{q}\). By a binary search in the subtree \(T_{\cup}(v)\), we find a leaf node \(\ell\in L\) in which the DFN data structure returns a disk \(d_{\ell}\in\mathcal{D}_{k}(\ell,c_{\ell})\) disjoint from \(d_{q}\). In this case, we return the cell \(c_{\ell}\) and the disk \(d_{\ell}\). By Lemma 2, we answer the query correctly, based on \(O(\log n)\) queries to DFN data structures, which takes \(O(\log n)\cdot O(\log^{2}n)=O(\log^{3}n)\) worst-case time.

Figure 4: **(a)** An example of a weight-balanced binary tree over the levels \(L\) of the nonatree \(N_{k}\), and the weights of the leaves. **(b)** A right rotation at node \(v\) of \(T_{\cup}\).

**Data Structure \(T_{\Sigma}\).** Let \(L\) denote the list of levels of the nonatree \(N_{k}\) in increasing order. This time, the _weight_ \(w(\ell)\) of a level \(\ell\in L\) is the number of disks in \(S_{k}\) associated with cells in level \(\ell\). Let \(T_{\Sigma}\) be a _weight-balanced binary search tree_ with these weights. Recall that each node \(v\) of \(T_{\Sigma}\) corresponds to a sequence of consecutive levels of the nonatree \(N_{k}\).
Let \(S_{k}(v)\) denote the set of all disks of \(S_{k}\) on these levels; and let \(G(v)\) denote the grid corresponding to the highest of these levels. For each cell \(c\in G(v)\), let \(S_{k}(v,c)\) be the set of disks in \(S_{k}(v)\) that lie in the cell \(c\). Node \(v\) of the tree \(T_{\Sigma}\) stores, for each nonempty cell \(c\in G(v)\), the DNN data structure for \(S_{k}(v,c)\). **Lemma 18**.: _The data structure \(T_{\Sigma}\) supports insertions and deletions of disks in \(S_{k}\) in \(O(\log^{10}n)\) expected amortized time, as well as the following query in \(O(\log^{3}n)\) worst-case time: Given a query disk \(d_{q}\) in a cell \(c_{q}\) of \(N_{k}\), find the lowest cell \(c\) such that \(c_{q}\leq c\) and there exists a disk \(d\in S_{k}\) associated with \(c\) where \(d\) intersects \(d_{q}\), or report that no such cell exists._ Proof.: The analysis for the update time is analogous to the proof of Lemma 17. For a query cell \(c_{q}\) and disk \(d_{q}\), consider the ascending path in \(T_{\Sigma}\) from the level of \(c_{q}\) to the root. Consider the right siblings of the nodes in this path. For each right sibling \(v\), there is a unique cell \(c_{v}\in G(v)\) such that \(c_{q}\subseteq c_{v}\), and we query the DNN data structure for \(S_{k}(v,c_{v})\). If none of these DNN data structures finds a disk in \(S_{k}(v,c_{v})\) that intersects \(d_{q}\), then report that all disks associated with the ancestor cells of \(c_{q}\) are disjoint from \(d_{q}\). Otherwise, let \(v\) be the first (i.e., lowest) right sibling in which the DNN data structure returns a disk \(d_{v}\in S_{k}(v,c_{v})\) that intersects \(d_{q}\). By a binary search in the subtree \(T_{\Sigma}(v)\), we find a leaf node \(\ell\in L\) in which the DNN data structure returns a disk \(d_{\ell}\in S_{k}(\ell,c_{\ell})\) that intersects \(d_{q}\). In this case, we return the cell \(c_{\ell}\) and the disk \(d_{\ell}\). 
By Lemma 2, we answer the query correctly, based on \(O(\log n)\) queries to DNN data structures, which takes \(O(\log n)\cdot O(\log^{2}n)=O(\log^{3}n)\) worst-case time.

### Dynamic Maintenance Using Dynamic Nearest/Farthest Neighbor Data Structures

To maintain an approximate maximum independent set of disks, we now consider how our data structures are affected by updates: Disks are inserted and deleted into an initially empty set of disks, and our goal is to maintain the data structures described in Section 5.2 and Section 5.3. On a high level, for a dynamic set of disks \(\mathcal{D}\), we maintain eight nonatrees \(N_{1},\ldots,N_{8}\), and for each \(k\in\{1,\ldots,8\}\), we maintain the cell location data structures \(F_{c}\) and \(F_{o}\), and two sets of disks: an independent set \(S_{k}\) and a set of candidate disks \(B_{k}\). In this section, we show how to maintain these data structures with polylogarithmic update times while maintaining invariants 1-6 described in Section 5.2. For that, we may use the additional data structures \(T_{\cup}\) and \(T_{\Sigma}\), as defined in Section 5.3, to efficiently query the nonatrees, their independent sets, and their candidate disks. Before we explain our algorithm, we first explain the combinatorial structure of the nonatrees on a higher level. Because of invariant 5, all nonatree cells with relevant cells in at least two subtrees will be merge obstacle cells. Such cells decompose the nonatree into ascending paths in which each cell has relevant descendants in only a single subtree (see Figure 5(a)). Inside an ascending path, disks either intersect the obstacle disk of the (closest) obstacle cell below them, or are part of \(S_{k}\) and therefore define a (true) obstacle cell (see Figure 5(b)).
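This decomposition into ascending paths can be sketched in a few lines. The following is an illustrative stand-in, not the paper's implementation: it assumes the nonatree is given by parent pointers and that the merge obstacle cells and relevant leaves are already known (`parent`, `merge_cells`, `leaves`, and `root` are hypothetical names).

```python
def ascending_paths(parent, merge_cells, leaves, root):
    """Decompose a tree into ascending paths: each path starts at a
    relevant leaf or a merge obstacle cell and climbs until it reaches
    the next merge obstacle cell (or the root) above it."""
    paths = []
    for bottom in set(leaves) | set(merge_cells):
        if bottom == root:
            continue
        path = [bottom]
        cell = parent[bottom]
        # climb until the next merge obstacle cell or the root
        while cell != root and cell not in merge_cells:
            path.append(cell)
            cell = parent[cell]
        path.append(cell)  # top end of the ascending path
        paths.append(path)
    return paths
```

Each cell then lies on exactly one ascending path (its top endpoint excepted), which is what allows the updates below to be handled path by path.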
We allow for certain disks on an ascending path to not be intersected by an obstacle disk, but in that case we can identify a candidate disk (as introduced in Section 5.2), which defines a candidate obstacle disk, to intersect the disks that do not intersect a (merge or true) obstacle disk (see Figure 5(c)). Note that dynamic changes can influence the greedy property (invariant 2(b)) of our (partial) solution \(S_{k}\): namely, if a small disk \(d\) in cell \(c\) is inserted into \(\mathcal{D}\), then we want \(d\) to be part of \(S_{k}\) instead of a larger disk \(d^{\prime}\in S_{k}\), because \(d^{\prime}\) intersects the obstacle \(o(c)\) defined by the cell of \(d\). Thus, when a disk is added to \(S_{k}\), we may find an overlap in a bucket containing larger disks, in which case we have to delete such a disk from \(S_{k}\). Such a deletion may allow us to insert an even larger disk into \(S_{k}\), and this process propagates upwards through the buckets. We stop this propagation of inserting larger disks after a constant number \(C_{P}\) of iterations, where \(C_{P}\) is an amortized constant. Similarly, when initially a disk is deleted from \(S_{k}\), we may be able to insert a larger disk into \(S_{k}\), resulting in an equivalent propagation upwards. We stop this propagation after \(C_{P}\) iterations as well. A crucial property of the decomposition of a nonatree into ascending paths, for maintaining the invariants of our solution, is that we can deal with each ascending path independently: If a disk in cell \(c\) on an ascending path is added to \(S_{k}\), then we create a new (true) obstacle cell with obstacle \(o(c)=3d\), where \(d\) is the smallest enclosing disk of \(c\). Observe that this disk is a subset of the (merge) obstacle disk at the top end of the ascending path, since the obstacle cell at the top has strictly larger side length. We ensure that every disk in \(S_{k}\) does not intersect an obstacle disk below it in the nonatree, and hence if a disk above the ascending path would intersect \(o(c)\), then it would also intersect the obstacle of the merge cell at the top of the ascending path. Thus when adding disks to \(S_{k}\), changes are contained within an ascending path.

Figure 5: **(a)** Decomposition of a nonatree into ascending paths between merge obstacle cells. Only relevant leaves are drawn and hence all leaves are obstacle cells (disks) as well. The (square) root may or may not be an obstacle cell, but forms the top end of the highest ascending path(s). One ascending path between merge nodes is highlighted in orange. **(b)** The structure of the highlighted ascending path: The merge obstacle cells at the top and bottom (with dark green obstacle disks) each have no disk associated with them. Each other obstacle cell on the path also defines a light green obstacle disk. Each such cell contains a (blue) disk, which is disjoint from the (closest) obstacle disk below it (red cross indicates where the blue disk cannot reside) and which will be part of \(S_{k}\). All disks on the ascending path above an obstacle cell (in red) are intersected by the obstacle below it. **(c)** It may happen that certain disks on an ascending path are not intersected by the obstacle below it. The lowest disk for which this holds (in light purple) will be part of \(B_{k}\), and hence defines a candidate obstacle disk (in dark purple). The candidate disk will intersect all disks on the ascending path until the first cell containing a disk in \(S_{k}\cup B_{k}\) above it (in brown). Disks in \(S_{k}\) above a candidate disk may or may not intersect the candidate obstacle disk (dashed arrows). Furthermore, a candidate disk is mapped to the disk in \(S_{k}\) below it (purple arrow).
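This containment of changes within one ascending path is what the greedy pass exploits: starting from the obstacle below the path, it repeatedly adds the lowest disk disjoint from the current obstacle, and raises the obstacle to three times the smallest enclosing disk of that disk's cell. A minimal sketch of such a pass, under assumed conventions (disks as `(x, y, r)` triples, square cells given by center and side length; all names are illustrative, not the paper's code):

```python
import math

def disjoint(d1, d2):
    """Two disks (x, y, r) are disjoint iff their centers are farther
    apart than the sum of their radii."""
    (x1, y1, r1), (x2, y2, r2) = d1, d2
    return math.hypot(x1 - x2, y1 - y2) > r1 + r2

def greedy_on_path(cells, obstacle=None):
    """cells: bottom-to-top list of ((cx, cy), side, disks) along one
    ascending path; obstacle: the obstacle disk below the path, or None
    for an empty obstacle. Returns the disks greedily added to S_k."""
    S = []
    for (cx, cy), side, disks in cells:
        for d in sorted(disks, key=lambda t: t[2]):  # smallest disk first
            if obstacle is None or disjoint(d, obstacle):
                S.append(d)
                # new obstacle: 3x the smallest enclosing disk of the cell
                obstacle = (cx, cy, 3 * side * math.sqrt(2) / 2)
                break  # at most one disk per cell contributes to S_k
    return S
```

A disk in a higher cell is added only if it clears the obstacle raised by the last added disk below it, mirroring the behavior described above.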
More specifically, when a disk \(d\) associated with a cell \(c\in N_{k}\) is inserted or deleted, then \(c\) lies in an ascending path \(P(d)\) between two obstacle cells, say \(c_{1}\preceq c\prec c_{2}\). To update the independent set \(S_{k}\) and the candidate disks \(B_{k}\), we run the greedy algorithm in this path. The greedy process guarantees that these disks are disjoint from any smaller disk in \(S_{k}\). However, the newly added disks in \(S_{k}\) may intersect the disk \(s_{2}\in S_{k}\) associated with \(c_{2}\): If this is the case, we delete \(s_{2}\) from \(S_{k}\), insert it into \(B_{k}\), and reassign \(B_{k}(s_{2})\cup\{s_{2}\}\) to the highest disk in \(S_{k}\) in \(P(d)\) below \(s_{2}\); this highest disk in \(P(d)\) is necessarily the disk added last to \(S_{k}\), causing the intersection with \(s_{2}\). We are now ready to explain, for each insertion/deletion of a disk into \(\mathcal{D}\), how we update our data structures: Insertion.Let \(d_{q}\) be a disk that is inserted in step \(q\); we make the following updates: First, insert \(d_{q}\) into our data structures. 1. Find the bucket \(i\) that \(d_{q}\) belongs to based on its radius, and find a unique cell \(c\) in bucket \(i\) that fully contains \(d_{q}\), using the center point of \(d_{q}\), similar to the unit disk case (see Figure 1). 2. Determine the nonatree \(N_{k}\) that \(d_{q}\) will be inserted into: The bucket \(i\) determines whether we insert into a nonatree consisting of odd or even buckets, and the cell \(c\) determines which shifted grid, and hence which of the four nonatrees of the appropriate parity we insert into. 3. Add \(d_{q}\) to \(N_{k}\). Remember that \(N_{k}\) is compressed, and hence we need to first locate \(c\) in \(N_{k}\) (in \(O(\log n)\) time). If \(c\) does not exist, then the cell location query with \(c\) finds the lowest ancestor \(c_{a}\) of \(c\).
Analogous to compressed quadtrees, inserting \(c\) as a descendant of \(c_{a}\) requires at most a constant number of other cells to be updated; these are either split or updated in terms of parent-child relations. We now make the following changes: 1. If \(c\) is a leaf that is not in \(F_{o}\) yet, then set \(o(c)\) to be an empty obstacle, and add \(c\) to \(F_{o}\). 2. For any cell \(c^{\prime}\) of the \(O(1)\) cells that may get new children by updated parent-child relationships, we query \(F_{o}\) with the child cells, to find out whether there are obstacles (and hence cells that contribute disks) in at least two subtrees. If so, we set the obstacle disk \(o(c^{\prime})=3d^{\prime}\), where \(d^{\prime}\) is the smallest enclosing disk of \(c^{\prime}\), and insert \(c^{\prime}\) into \(F_{o}\) (if it was not in \(F_{o}\) yet). To finalize this step, insert \(d_{q}\) into \(T_{\mathcal{D}}^{i}\), such that we can find \(d_{q}\) by querying \(T_{\mathcal{D}}^{i}\) for cell \(c\). 4. Insert \(d_{q}\) into \(T_{\cup}\) of \(N_{k}\). Precisely, \(d_{q}\) is inserted into the DFN data structure of cell \(c\) in the leaf \(t\) of \(T_{\cup}\) corresponding to bucket \(i\). Note that if there was no leaf for bucket \(i\) yet, then this node is created and \(T_{\cup}\) may be rebalanced (all in \(O(\log^{9}n)\) expected amortized time). Additionally, if \(c\) did not exist yet, then the DFN data structure is initialized in this step. Subsequently, \(d_{q}\) is added to all \(O(\log n)\) nodes on the path from \(t\) to the root of \(T_{\cup}\). In particular, \(d_{q}\) is inserted in all DFN data structures of these nodes corresponding to the cell (of a coarser grid) that overlaps \(c\). 5. Call subroutine updateIS on cell \(c\). 6. Call subroutine cleanupCD. Updating Independent Sets.Second, subroutine updateIS for a cell \(c\) in \(N_{k}\) works as follows. 1. Query \(F_{o}\) with \(c\) to find an obstacle cell \(c_{o}\), with \(c_{o}\preceq c\). 2.
If there is a disk \(d\in S_{k}\) in \(c_{o}\), then set \(B_{k}(d)=\emptyset\). 3. Query \(T_{\cup}\) with \(c\) and \(o(c_{o})\) to either find the lowest cell \(c^{*}\) (in bucket \(i\) of \(N_{k}\)), such that \(c\preceq c^{*}\), together with a disk \(d\in\mathcal{D}_{k}\) associated with \(c^{*}\) and disjoint from \(o(c_{o})\), or find that no such cell and disk exist. 4. While \(c^{*}\) exists and querying \(F_{o}\) with \(c^{*}\) results in \(c_{o}\), repeat the following steps: 1. Add \(d\) to \(S_{k}\) by inserting it into \(T_{k}^{i}\). 2. Insert \(d\) into \(T_{\Sigma}\). Insertion is analogous to the insertion into \(T_{\cup}\) (as described above), except that \(T_{\Sigma}\) holds DNN data structures, as opposed to DFN data structures in \(T_{\cup}\), and \(d\) should reside in the DNN data structures of the cell \(c^{*}\). Then, also set the obstacle \(o(c^{*})=3d^{*}\), where \(d^{*}\) is the smallest enclosing disk of \(c^{*}\), and insert \(c^{*}\) into \(F_{o}\) (if it is not in \(F_{o}\) yet). 3. Rename \(c^{*}\) to \(c_{o}\). 4. Query \(T_{\cup}\) with \(c_{o}\) and \(o(c_{o})\) to either find the lowest cell \(c^{*}\) (in bucket \(i\) of \(N_{k}\)), such that \(c_{o}\preceq c^{*}\), together with a disk \(d\in\mathcal{D}_{k}\) associated with \(c^{*}\) and disjoint from \(o(c_{o})\), or find that no such cell and disk exist. 5. Query \(T_{\Sigma}\) with the parent of \(c^{*}\) and with \(3d^{*}\), where \(d^{*}\) is the smallest enclosing disk of \(c^{*}\), to find the lowest cell \(c^{-}\), such that \(c^{*}\prec c^{-}\), with a disk \(d^{-}\in S_{k}\) associated with \(c^{-}\) that intersects \(3d^{*}\), or we find that no such cell and disk exist. Remember that there can only be one such disk \(d^{-}\in S_{k}\) by Lemma 14. 6. If a cell \(c^{-}\) and disk \(d^{-}\) are found, then 1. Delete \(d^{-}\) from \(T_{k}^{i}\) and from \(T_{\Sigma}\). The latter deletion is analogous to the insertion into \(T_{\Sigma}\). 2. Assign \(B_{k}(d)=B_{k}(d^{-})\cup\{d^{-}\}\). 3.
Remove the obstacle \(o(c^{-})\) and delete \(c^{-}\) from \(F_{o}\). Cleaning up Candidate Disks.Third, the subroutine cleanupCD calls updateIS on the disks in \(S_{k}\) with the most assigned candidate disks. 1. For \(x=0\) to \(2\): 1. Let \(d_{x}\) be a disk in \(S_{k}\) that maximizes \(|B_{k}(d_{x})|\). 2. Find the bucket \(i\) that \(d_{x}\) belongs to based on its radius, and find a unique cell \(c_{x}\) in bucket \(i\) that fully contains \(d_{x}\), using the center point of \(d_{x}\). 3. Call subroutine updateIS on \(c_{x}\). Deletion.Let \(d_{q}\) be a disk that is deleted in step \(q\); we make the following updates: We delete \(d_{q}\) from our data structures, which again relies on the subroutine updateIS. 1. Find the bucket \(i\) that \(d_{q}\) belongs to based on its radius, and find a unique cell \(c\) in bucket \(i\) that fully contains \(d_{q}\), using the center point of \(d_{q}\), similar to the unit disk case (see Figure 1). 2. Determine the nonatree \(N_{k}\) that \(d_{q}\) is located in: The bucket \(i\) determines whether we insert into a nonatree consisting of odd or even buckets, and the cell \(c\) determines which shifted grid, and hence which of the four nonatrees of the appropriate parity we insert into. 3. Remove \(d_{q}\) from \(N_{k}\). We first locate \(c\) in \(N_{k}\) (in \(O(\log n)\) time). If \(c\) does not exist, we are done (since \(d_{q}\) does not exist in \(N_{k}\) either). If \(c\) exists, let \(c_{p}\) be the parent cell of \(c\) (if such a parent exists). We delete \(d_{q}\) from \(T^{i}_{\mathcal{D}}\), and query it with \(c\) to check if the cell is now empty. If so, we delete \(c\) from \(N_{k}\), which requires at most a constant number of other cells to be updated; these are either merged or updated in terms of parent-child relations. Note that these merges can merge \(c_{p}\), in case it is empty, with other (empty) siblings, and we consider \(c_{p}\) to be the lowest existing non-empty ancestor of \(c\).
We now make the following changes: 1. If \(c_{p}\) is a leaf that is not in \(F_{o}\) yet, then set \(o(c_{p})\) to be an empty obstacle, and add \(c_{p}\) to \(F_{o}\). 2. For any cell \(c^{\prime}\) of these \(O(1)\) cells that may get new children by updated parent-child relations, we query \(F_{o}\) with the child cells, to find out whether there are obstacles (and hence cells that contribute disks) in at least two subtrees. If so, we set the obstacle disk \(o(c^{\prime})=3d^{\prime}\), where \(d^{\prime}\) is the smallest enclosing disk of \(c^{\prime}\), and insert \(c^{\prime}\) into \(F_{o}\) (if it was not in \(F_{o}\) yet). 3. If a cell \(c^{\prime}\) has an obstacle but no longer has two subtrees with obstacles, we remove the obstacle \(o(c^{\prime})\) and delete \(c^{\prime}\) from \(F_{o}\). To finalize this step, delete \(d_{q}\) from \(T^{i}_{\mathcal{D}}\). 4. Delete \(d_{q}\) from \(T_{\cup}\) of \(N_{k}\). Precisely, \(d_{q}\) is deleted from the DFN data structure of cell \(c\) in the leaf \(t\) of \(T_{\cup}\) corresponding to bucket \(i\). Note that if \(c\) is now empty, then the DFN data structure for it can be removed. Additionally, if the leaf for bucket \(i\) is now empty, then this node is deleted and \(T_{\cup}\) may be rebalanced (in \(O(\log^{9}n)\) expected amortized time). Subsequently, \(d_{q}\) is removed from all \(O(\log n)\) nodes on the path from \(t\) to the root of \(T_{\cup}\). In particular, \(d_{q}\) is deleted from all DFN data structures of these nodes corresponding to the cell (of a coarser grid) that overlaps \(c\), again removing DFN data structures when empty. 5. Query \(T^{i}_{k}\) with \(c\) to find whether \(d_{q}\in S_{k}\). If so, delete \(d_{q}\) from \(T^{i}_{k}\) and from \(T_{\Sigma}\). The latter deletion is analogous to the deletion from \(T_{\cup}\), except that \(T_{\Sigma}\) holds DNN data structures, as opposed to DFN data structures in \(T_{\cup}\).
Then, query \(F_{o}\) with the child cells of \(c\), to find out whether there are obstacles (and hence cells that contribute disks) in at least two subtrees. If not, also remove the obstacle \(o(c)\) and delete \(c\) from \(F_{o}\). 6. Call subroutine updateIS on cell \(c\), if it is not deleted. Otherwise call updateIS on cell \(c_{p}\). 7. Call subroutine cleanupCD. Bounded Propagation.Subroutine updateIS runs a greedy algorithm on an ascending path \(P\) of \(N_{k}\) between two consecutive obstacle cells, \(c_{o}\prec c_{o}^{\prime}\). We first show that the number of iterations in the while loop of updateIS (step 4) is bounded by \(|B_{k}(d_{o})|\), i.e., the number of candidate disks assigned to the disk \(d_{o}\in S_{k}\) associated with cell \(c_{o}\) at the bottom of ascending path \(P(d_{o})\). **Lemma 19**.: _Let \(c_{o}\) be an obstacle cell in \(N_{k}\) associated with an obstacle disk \(d_{o}\). Then the while loop of updateIS on \(c_{o}\) terminates after \(O(|B_{k}(d_{o})|)\) iterations._ Proof.: Let \(S^{*}\) be the set of disks added to \(S_{k}\) in the while loop of updateIS. Then the while loop has \(|S^{*}|+1\) iterations. We need to show that \(|S^{*}|\leq O(|B_{k}(d_{o})|)\). Let \(c_{o}^{\prime}\) be the lowest obstacle cell in \(N_{k}\) with \(c_{o}\prec c_{o}^{\prime}\); and consider the ascending path \(P(d_{o})\) in \(N_{k}\) between \(c_{o}\) and \(c_{o}^{\prime}\). The candidate disks in \(B_{k}(d_{o})\) lie in the cells along \(P(d_{o})\) by invariant 4c. Let \(\mathcal{D}_{P}\) be the set of disks in \(\mathcal{D}\) associated with the cells in \(P(d_{o})\). By invariant 6, every disk in \(\mathcal{D}_{P}\) intersects \(o(c_{o})\) or the candidate obstacle disk of a smaller disk in \(B_{k}(d_{o})\). The greedy process in updateIS adds disks to \(S^{*}\) that are disjoint from \(o(c_{o})\), and each disk \(d\in S^{*}\) is also disjoint from the obstacle disk defined by the cell of any smaller disk \(d^{\prime}\in S^{*}\).
Consequently, each disk in \(S^{*}\) intersects the candidate obstacle disk of a smaller disk in \(B_{k}(d_{o})\). By Lemma 14, each candidate disk in \(B_{k}(d_{o})\) intersects at most one disk in \(S^{*}\) in larger buckets. This yields \(|S^{*}|\leq|B_{k}(d_{o})|\), as required. **Lemma 20**.: _Dynamic insertion of a disk takes polylogarithmic expected amortized update time._ Proof.: Steps 1 and 2 of the insert routine take \(O(1)\) time. Step 3 takes \(O(\log^{2}n)\) time, since cell location using \(F_{c}\) and insertion into \(N_{k}\) are handled in logarithmic time, while insertion into range tree \(T_{\mathcal{D}}^{i}\) takes \(O(\log^{2}n)\) time. Similarly, the \(O(1)\) interactions with \(F_{c}\) in Steps 3a and 3b take logarithmic time as well. Step 4 takes polylogarithmic expected amortized time by Lemma 17. Finally, step 5 calls the subroutine updateIS, which we analyze next, followed by a call to subroutine cleanupCD in step 6 with three additional calls to updateIS. Steps 1 and 3 of updateIS again interact with \(F_{o}\) and \(T_{\cup}\) in polylogarithmic time. Step 5 also interacts with \(T_{k}^{i}\) in \(O(\log^{2}n)\) time, with \(T_{\Sigma}\) in polylogarithmic expected amortized time, by Lemma 18, and with \(F_{o}\) in \(O(\log n)\) time. In particular, in step 5 only a single disk can be found (Lemma 14), and hence step 6 has interaction with \(T_{k}^{i}\) and \(T_{\Sigma}\) for only a single disk. Finally, to update the obstacles, a constant number of children is queried in \(F_{o}\), in \(O(\log n)\) time each. In step 4, the while loop is repeated \(O(|B_{k}(d_{o})|)\) times by Lemma 19. In each iteration, a disk may be added to \(S_{k}\) in step 4a. Recall that \(|B_{k}(d_{o})|\) increases by at most one for each dynamic update, and we can amortize \(|B_{k}(d_{o})|\) to the dynamic updates that contributed to \(|B_{k}(d_{o})|\). Overall, updateIS runs in polylogarithmic expected amortized time.
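The \(T_{\cup}\) and \(T_{\Sigma}\) queries that dominate these bounds share one pattern: ascend from the start level while checking aggregate structures, then binary-search down into the first subtree whose aggregate succeeds. A toy version over a tournament tree of set unions, under illustrative assumptions: the `any(...)` scan over `node.items` stands in for an \(O(\log^{2}n)\) DFN/DNN query, and all names are hypothetical.

```python
class Node:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi      # range of levels covered
        self.items = set()             # union of the levels' item sets
        self.left = self.right = None

def build(levels, lo, hi):
    """Tournament tree over levels[lo..hi]; each node stores the union
    of its levels' items (a stand-in for a DFN/DNN data structure)."""
    node = Node(lo, hi)
    if lo == hi:
        node.items = set(levels[lo])
    else:
        mid = (lo + hi) // 2
        node.left = build(levels, lo, mid)
        node.right = build(levels, mid + 1, hi)
        node.items = node.left.items | node.right.items
    return node

def first_at_or_above(node, start, pred):
    """Lowest level >= start containing an item satisfying pred, or
    None; mirrors the ascend-then-descend search in the balanced trees."""
    if node.hi < start or not any(pred(x) for x in node.items):
        return None                    # prune: out of range or no hit
    if node.lo == node.hi:
        return node.lo                 # leaf: the lowest qualifying level
    res = first_at_or_above(node.left, start, pred)
    if res is None:
        res = first_at_or_above(node.right, start, pred)
    return res
```

With an \(O(1)\)-time aggregate test the search visits \(O(\log n)\) nodes; in the actual structures the aggregate test is itself a DFN/DNN query, giving the \(O(\log^{3}n)\) bound of Lemmas 17 and 18.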
**Lemma 21**.: _Dynamic deletion of a disk takes polylogarithmic expected amortized update time._ Proof.: Steps 1-4 of delete have the same asymptotic running time as the corresponding steps in insert, since (asymptotically) the same number of interactions take place with the same data structures. In step 5 we delete from a range tree \(T_{k}^{i}\) in \(O(\log^{2}n)\) time, delete from \(T_{\Sigma}\) in polylogarithmic expected amortized time, and delete from \(F_{o}\) in \(O(\log n)\) time. Finally, the subroutine updateIS is called, followed by the subroutine cleanupCD (with three additional calls to updateIS). The runtime analysis of each iteration of the while loop in updateIS is equivalent to the analysis in the proof of Lemma 20. **Lemma 22**.: _Dynamic insertion or deletion of a disk maintains invariants 1-6._ Proof.: For this proof we assume that the invariants hold before a dynamic update, and show that after an insertion or deletion, the invariants still hold. Invariants 1 and 2.Steps 1 and 2 of both insert and delete ensure that invariants 1 and 2 are satisfied, by identifying exactly one cell \(c\) in one nonatree \(N_{k}\) (with the right level parity), for the inserted/deleted disk. Invariant 3.We start with the observation that invariant 3a is always maintained: When new disks are added to \(S_{k}\) in updateIS, we first remove all disks in \(B_{k}\) that could be added to \(S_{k}\), in step 2, and only when a disk is deleted from \(S_{k}\) can it become a candidate disk, in step 6b. Furthermore, invariant 3 already holds for all disks in \(S_{k}\), and deleting disks cannot violate the invariant. We therefore have to prove only for a newly added disk \(d\) that the invariant still holds. Whenever we query \(T_{\cup}\) (in step 3 or 4d of updateIS) to find a disk \(d\) that we add to \(S_{k}\), we know the following properties of \(d\). * We query \(T_{\cup}\) with a cell \(c\) and an obstacle \(o(c_{o})\) to find \(d\) in cell \(c^{*}\). 
The obstacle \(o(c_{o})\) was found by querying \(F_{o}\) with \(c\), and hence \(c_{o}\) is the closest obstacle cell below \(c\). Any obstacle cell below \(c_{o}\) would have an obstacle with smaller radius (being empty, or defined by a cell \(c^{\prime}\subset c_{o}\)), which would also be completely contained in \(o(c_{o})\) (because \(c^{\prime}\subset c_{o}\)). Thus, disk \(d\) found by \(T_{\cup}\) must be disjoint from any obstacle cell below \(c\), satisfying invariant 3b for the disk of \(S_{k}\) in those obstacle cells below \(c\). * For the disks above \(c\), we know the following. In step 3 of updateIS we also check whether the closest obstacle disk below the found cell \(c^{*}\) is still \(o(c_{o})\). If this is not the case, then a disk is found above the next obstacle disk on the ascending path from \(c\) to the root. Adding such a disk is problematic, since it will intersect with the obstacle disk above \(c\). Thus we add \(d\) only if it is located between \(c_{o}\) and the obstacle cell above it. All other disks in \(S_{k}\) on the ascending path from \(c\) to the root are therefore located above \(d\). We can then argue that at most one disk in \(S_{k}\) can intersect the obstacle of cell \(c^{*}\) (Lemma 14). If such a disk exists, it is deleted in step 6 of updateIS, and hence invariant 3b also holds for all disks above \(d\). * In case a cell \(c^{*}\) is found, we check in step 3 whether the closest obstacle disk below the found cell \(c^{*}\) is still \(o(c_{o})\). If so, then invariant 3c must hold: The cell \(c^{*}\) cannot be contributing a disk to \(S_{k}\) already, when \(T_{\cup}\) found disk \(d\) in \(c^{*}\): the obstacle disk \(o(c^{*})\) would exist and prevent \(d\subset o(c^{*})\) from being reported by the \(T_{\cup}\) query. The only exception is when \(o(c_{o})\) is an empty obstacle.
This happens only for newly added leaves, which do not contribute a disk to \(S_{k}\) yet; thus in this case invariant 3c is satisfied, even though \(d\) is located in \(c_{o}\). Invariant 4.The first invariant 4a holds by definition: We consider only candidate disks that are assigned to a disk in \(S_{k}\) and hence the set \(\bigcup_{d\in S_{k}}B_{k}(d)\) defines all our candidate disks. Next consider invariant 4c. Observe that candidate disks are modified only in updateIS, and only in two ways: Candidate disks can be removed in step 2 and added in step 6b. Removing candidate disks cannot cause a violation of invariant 4c, hence consider addition of a candidate disk in step 6b. Since invariant 4c holds before a dynamic update, no disk in \(S_{k}\) lies between the removed disk \(d^{-}\) and the disks in \(B_{k}(d^{-})\). All these candidate disks, along with \(d^{-}\), are assigned to the last disk added in step 4, which must therefore be the closest disk in \(S_{k}\) below \(d^{-}\). Therefore, invariant 4c must hold. Finally, observe that removing all candidate disks in step 2 before adding new disks to \(S_{k}\) prevents the new disks in \(S_{k}\) from being added between disk \(d\) in cell \(c_{o}\) and a disk in \(B_{k}(d)\). Thus any new disks added to \(S_{k}\) in updateIS will not violate invariant 4c, either. Lastly, consider invariant 4b. Recall that an obstacle cell is either a _true obstacle_ associated with an obstacle disk, or a _merge obstacle_ with at least two children that each contain a disk in \(S_{k}\) (invariant 5a). The obstacle cells decompose the nonatree \(N_{k}\) into a set \(\mathcal{P}\) of ascending paths between obstacle cells. This defines a parent-child relation between paths in \(\mathcal{P}\). For each path \(P\in\mathcal{P}\), let \(B_{k}(P)\) denote the candidate disks associated with the cells in \(P\).
In particular, if the bottom cell of \(P\) is a true obstacle, associated with a disk \(d\in S_{k}\), then \(B_{k}(d)=B_{k}(P)\); otherwise it is a merge obstacle and \(B_{k}(P)=\varnothing\). Invariant 3c tells us that \(S_{k}\) is an independent set, so step 5 finds an overlap in \(S_{k}\) only after a new disk is added to \(S_{k}\). Thus, when step 6b is executed, we always deal with an ascending path that starts from a true obstacle. We use the fact that updateIS is called four times after each dynamic update; once on the cell affected by the dynamic change, and thrice in cleanupCD on cells that are assigned the most candidate cells. Each call to updateIS runs a greedy algorithm on an ascending path \(P\in\mathcal{P}\) between two consecutive obstacle cells \(c_{1}\prec c_{2}\). If \(c_{2}\) is a merge obstacle, then the greedy algorithm decomposes \(P\) into new paths, each such path \(P^{\prime}\) having \(|B_{k}(P^{\prime})|=0\). Assume that \(c_{2}\) is a true obstacle, and the bottom cell of a path \(Q\in\mathcal{P}\) between \(c_{2}\) and \(c_{3}\). Then the greedy algorithm decomposes \(P\cup Q\) into two or more paths, where each such path \(P^{\prime}\) has \(|B_{k}(P^{\prime})|=0\), except for the highest new path \(P_{\max}\), for which we may get \(|B_{k}(P_{\max})|=|B_{k}(Q)|+1\) (in step 6b). That is, the greedy algorithm creates at most one new path \(P_{\max}\in\mathcal{P}\) with nonempty \(B_{k}(P_{\max})\), by incrementing \(B_{k}(Q)\) by one for a previous path \(Q\). We assume that \(|B_{k}|\leq 2\,|S_{k}|\) holds before a dynamic change and consider how the size of \(B_{k}\) and \(S_{k}\) changes in updateIS.
In the first call to updateIS on a path \(P\), a deletion of a disk from \(\mathcal{D}\) may delete a disk \(d\) from \(S_{k}\), thereby reducing the size of \(S_{k}\) by one, since updateIS may not be able to find the same number of disjoint disks as before \(d\) was deleted: In particular, \(d\) may have been the only disk in \(P\). In the worst case \(|S_{k}|\) decreases by one, while \(|B_{k}|\) may increase by one (in step 6b). Consequently, after the first call to updateIS, the inequality \(|B_{k}|\leq 2\,|S_{k}|+3\) holds. We now show that the three subsequent calls to updateIS fix the discrepancy between \(|S_{k}|\) and \(|B_{k}|\). While \(|B_{k}|>2\,|S_{k}|\), we have \(\max_{d\in S_{k}}|B_{k}(d)|\geq 3\) by the pigeonhole principle. Subroutine cleanupCD calls updateIS on a cell associated with a disk \(d\in S_{k}\) where \(|B_{k}(d)|\geq 3\). In each call, \(|B_{k}|\) decreases by at least three, but \(|S_{k}|\) cannot decrease since no disk is deleted. Consequently, three repetitions of updateIS in cleanupCD restore the inequality \(|B_{k}|\leq 2\,|S_{k}|\). Invariant 5.To maintain invariant 5 we take the following steps: * In step 3 of both insert and delete the structure of \(N_{k}\) can change, and hence we query \(F_{o}\) and update it to make sure that invariant 5a remains satisfied, in all cases but one: A newly created leaf is given an empty obstacle in step 3a of insert and delete. This ensures that in step 1 of the subsequent updateIS call (in both insert and delete), the empty obstacle is found, and in step 3 a disk in the leaf is found to be added to \(S_{k}\). Thus after the updateIS call, invariant 5a is satisfied. Furthermore, when a disk is inserted into \(S_{k}\) in step 4b of updateIS, or when a disk is deleted from \(S_{k}\) in step 6 of updateIS or step 5 of delete, \(F_{o}\) is updated to ensure that invariant 5a is satisfied.
* In most cases, whenever an obstacle \(o(c)\) is set, in the steps mentioned above, we set \(o(c)\) to three times the smallest enclosing disk of \(c\), satisfying invariant 5b. Only for newly created leaves do we set an empty obstacle in step 3a of insert and delete. However, as we argued above, this ensures that a disk from the leaf is added to \(S_{k}\) in the subsequent updateIS call, where in step 4b the empty obstacle is set to the correct size to satisfy invariant 5b. Invariant 6.Finally, we prove that invariant 6 is also satisfied after a dynamic update. After every insertion or deletion of a disk \(d\), the updateIS subroutine is called. * After an insertion into \(\mathcal{D}\), only the new disk \(d\) can violate invariant 6. In step 3 of updateIS we can hence find the cell \(c^{*}\) of the newly inserted disk. If we do not find the cell \(c^{*}\) of \(d\), then \(T_{\cup}\) did not find disk \(d\) to be disjoint from the obstacle of the closest obstacle cell below \(d\). Thus invariant 6b holds. Note that the closest obstacle cell may be \(c^{*}\) (and the obstacle \(o(c^{*})\) is non-empty); then \(c^{*}\) already contributes a disk to \(S_{k}\), and hence invariant 6a holds. If disk \(d\) is disjoint from the (possibly empty) obstacle, then \(d\) will be added to \(S_{k}\) in the next steps, again satisfying invariant 6a. Adding disk \(d\) to \(S_{k}\) may result in an intersection between \(o(c^{*})\) and a disk \(d^{\prime}\in S_{k}\) in cell \(c^{\prime}\), such that \(c^{*}\prec c^{\prime}\). Disk \(d^{\prime}\) will be deleted from \(S_{k}\) in step 6 of updateIS, which may violate invariant 6. We consider this case next. * After a deletion from \(\mathcal{D}\), we can add a disk to \(S_{k}\) only if the deleted disk was in \(S_{k}\).
If no disk in \(S_{k}\) was deleted, then invariant 6 still holds for all disks, and step 4 of updateIS will loop until no more disks are found to be disjoint from obstacle disks and invariants 6a and 6b hold. Thus consider the case where a disk from \(S_{k}\) was deleted, which was located in cell \(c\): In step 3 of updateIS we may now find a disk for which invariant 6b is violated, as there may be multiple disks that no longer intersect an obstacle below them. Adding the lowest of those violating disks, \(d\) in cell \(c^{*}\), in the next step of updateIS restores invariant 6b: First, observe that \(d\) is the lowest of the violating disks, and hence all other violating disks are on the ascending path closer to the root. Next, observe that \(c\preceq c^{*}\), since \(d\) is located in a cell \(c^{*}\) on the ascending path from \(c\) to the root; hence \(o(c)\subseteq o(c^{*})\). Thus all disks intersected by \(o(c)\) also intersect \(o(c^{*})\). Invariant 6b was satisfied before the removal of the disk in \(c\), and hence it is satisfied again by the existence of \(o(c^{*})\). * At the end of the updateIS routine, multiple disks can be added to \(S_{k}\). Observe that the obstacle disks for these newly-added disks are all contained in the obstacle disk for the last added disk \(d\in S_{k}\). The obstacle disk for \(d\) may overlap with at most one larger disk \(d^{-}\in S_{k}\) in cell \(c^{-}\) (Lemma 14), which will be deleted in step 6 of updateIS. Observe also that \(d^{-}\) is necessarily the disk that defines the closest obstacle cell above \(d\): If the obstacle disk for \(d\) intersected a disk \(d^{*}\in S\) above \(d^{-}\), then \(d^{-}\) would also intersect \(d^{*}\), since the obstacle disk for \(d^{-}\) completely contains the obstacle disk of \(d\). The deletion of \(d^{-}\) from \(S_{k}\) may again result in a violation of invariant 6b. 
To maintain invariant 6 we hence add \(d^{-}\) to \(B_{k}\) in step 6b. This ensures that all disks that intersected the (true) obstacle disk \(o(c^{-})\) now intersect the same candidate obstacle disk, and hence invariant 6c is satisfied. * Finally, after the updateIS routine at the end of every insertion or deletion, cleanupCD is called. It simply calls updateIS on three cells that are assigned many candidate sets. Since invariant 6 holds initially, and updateIS starts by removing all candidate sets assigned to the disk in the obstacle cell below, invariant 6c may now be violated. Observe the following two facts. By invariant 4c, the removed candidate disks must lie below the obstacle cell above, and the while loop in step 4 adds disks to \(S_{k}\) until \(T_{\cup}\) finds no more disks that are disjoint from an obstacle below, or until the obstacle cell above is reached. As such, all disks between these two obstacles that could violate invariant 6c must now intersect a (non-candidate) obstacle disk and hence satisfy invariant 6b (or even 6a). We have described how to maintain the independent sets \(S_{1},\ldots,S_{8}\) satisfying invariants 1-6 in polylogarithmic expected amortized update time. By Lemma 16, the largest of \(S_{1},\ldots,S_{8}\) is a constant-factor approximate MIS of \(\mathcal{D}\). For each dynamic update in \(\mathcal{D}\), we call updateIS four times, and each call can add an amortized \(O(1)\) disks to \(S_{k}\), for some \(k\in\{1,\ldots,8\}\). However, \(S_{k}\) changes incrementally (by one disk at a time), while the size of the MIS changes by at most one; consequently, the largest of \(S_{1},\ldots,S_{8}\) remains a constant-factor approximate MIS at all times. Thus, by Lemma 1, we can smoothly transition from one independent set to another using the MIX algorithm, with amortized \(O(1)\) changes in the ultimate independent set per update in \(\mathcal{D}\), and conclude the following theorem. 
**Theorem 3**.: _For a fully dynamic set of disks of arbitrary radii in the plane, a constant-factor approximate maximum independent set can be maintained in polylogarithmic expected amortized update time._ ### Lower Bound We state a lower bound for the DGMIS problem for a set of disks of uniform radii. Our argument is similar to the argument of Henzinger et al. [10], who gave such lower bounds for hypercubes. For the sake of completeness, we state this result. **Theorem 5**.: _For a dynamic set of unit disks in the plane, there is no algorithm for DGMIS with approximation ratio \(1+\varepsilon\) and amortized update time \(n^{O((1/\varepsilon)^{1-\delta})}\), for any \(\varepsilon,\delta>0\), unless the ETH fails._ Proof.: Marx [14] showed that, assuming ETH, there is no \(\delta>0\) such that a \(2^{(1/\varepsilon)^{O(1)}}\cdot n^{O((1/\varepsilon)^{1-\delta})}\) time PTAS exists for MIS for unit disks. Suppose we have an algorithm that maintains a \((1+\varepsilon)\)-approximate solution with an amortized update time of \(n^{O((1/\varepsilon)^{1-\delta})}\). Then we could transform the input instance of MIS to a dynamic instance by inserting the disks one by one, in overall \(n^{O((1/\varepsilon)^{1-\delta})}\) time. This contradicts the result of Marx [14]. ## 6 Conclusions We studied the dynamic geometric independent set problem for a collection of disks in the plane and presented the first fully dynamic algorithm with polylogarithmic update time. First, we showed that for a fully dynamic set of unit disks in the plane, a constant-factor approximate maximum independent set can be maintained in polylogarithmic update time. Moreover, we showed that this result generalizes to fat objects in any fixed dimension. Our main result is a dynamic algorithm that maintains a constant-factor approximate maximum independent set in polylogarithmic amortized update time. 
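The reduction in the proof of Theorem 5 is generic: any dynamic structure that maintains an approximate MIS under insertions also solves the static problem, by inserting the \(n\) disks one by one and reading off the final set. A minimal sketch of this wrapper; `NaiveDynamicMIS` is a deliberately naive stand-in (greedy recomputation on every insertion, nowhere near polylogarithmic), not the data structure of this paper:

```python
class NaiveDynamicMIS:
    """Stand-in for a dynamic approximate-MIS structure: it recomputes a
    greedy independent set after every insertion. (A real structure would
    support updates in polylogarithmic amortized time.)"""

    def __init__(self):
        self.disks = []        # disks as (x, y, radius) triples
        self.independent = []  # currently maintained independent set

    @staticmethod
    def _disjoint(a, b):
        (ax, ay, ar), (bx, by, br) = a, b
        return (ax - bx) ** 2 + (ay - by) ** 2 > (ar + br) ** 2

    def insert(self, disk):
        self.disks.append(disk)
        # Greedy: scan disks by decreasing radius, keep pairwise-disjoint ones.
        chosen = []
        for d in sorted(self.disks, key=lambda t: -t[2]):
            if all(self._disjoint(d, c) for c in chosen):
                chosen.append(d)
        self.independent = chosen


def static_mis_via_dynamic(disks, structure=NaiveDynamicMIS):
    """Static MIS via the dynamic interface, as in the proof of Theorem 5:
    insert the disks one by one and read off the maintained set."""
    ds = structure()
    for d in disks:
        ds.insert(d)
    return ds.independent
```

With a genuinely polylogarithmic structure in place of `NaiveDynamicMIS`, this loop runs in \(n\cdot n^{O((1/\varepsilon)^{1-\delta})}=n^{O((1/\varepsilon)^{1-\delta})}\) total time, which is what contradicts Marx's lower bound.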
One bottleneck in our framework is the nearest/farthest neighbor data structure [11, 12] (as discussed in Section 1), which provides only _expected_ _amortized_ polylogarithmic update time. This is the only reason why our algorithm does not guarantee deterministic update time, and it does not extend to balls in \(\mathbb{R}^{d}\) for \(d\geq 3\), or to arbitrary fat objects in the plane. It remains an open problem whether there is a dynamic nearest/farthest neighbor data structure in constant dimensions \(d\geq 2\) with a worst-case polylogarithmic update and query time: Any such result would immediately carry over to a fully dynamic algorithm for an approximate MIS for balls in higher dimensions. Beyond Fatness.While there have been several attempts to obtain constant-factor dynamic approximation schemes for various sub-families of rectangles, it is not known if, for a dynamic collection of axis-aligned rectangles in the plane, there exists an algorithm that maintains a constant-factor approximate maximum independent set in sublinear update time. On the one hand, due to Henzinger et al. [10], we know that it is not possible to maintain a \((1+\varepsilon)\)-approximate solution in \(n^{O((1/\varepsilon)^{1-\delta})}\) amortized update time, for any \(\delta>0\), unless the ETH fails. On the other hand, recent progress on MIS for a static set of axis-parallel rectangles resulted in several constant-factor approximations [1, 13]. However, these algorithms are based on dynamic programming, and hence it is not clear how to naturally extend them into the dynamic realm.
2306.08651
Toward Grounded Commonsense Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not appropriate to disassemble the sports car and put it away as part of the "tidying." How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable commonsense reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and actively gather information from the environment that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded commonsense reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/grounded_commonsense_reasoning.
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
2023-06-14T17:30:57Z
http://arxiv.org/abs/2306.08651v2
# Toward Grounded Social Reasoning ###### Abstract Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying." How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and _actively gather information from the environment_ that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of \(70\) real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on \(2\) carefully designed surfaces. We find an average \(12.9\%\) improvement on the MessySurfaces benchmark and an average \(15\%\) improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at [https://minaek.github.io/groundedsocialreasoning/](https://minaek.github.io/groundedsocialreasoning/). Keywords:Social Reasoning, Human-Robot Interaction ## 1 Introduction Imagine you are asked to clean up a desk and you see a meticulously constructed Lego sports car on it. You might immediately recognize that the socially appropriate behavior is to leave the car be, rather than taking it apart and putting it away as part of the "cleaning". 
But how would a robot in that same position know that's the right thing to do? Traditionally, we would expect this information to be specified in the robot's objective - either learned from demonstrations [1, 2, 3] or from human feedback [4, 5, 6, 7, 8]. However, Lego sports cars are not common, and it is challenging for humans to specify a priori what objects a robot might encounter [9, 10]. While a robot could expensively query a human for what to do during these circumstances, we explore a different question in this work: _how can we enrich robots with the social commonsense reasoning necessary to know what to do, without any human intervention?_ Recent work has demonstrated that large language models (LLMs) trained on internet data have enough context for commonsense reasoning [11], making moral judgements [12, 13], or acting as a proxy reward function capturing human preferences [14]. Rather than explicitly asking a human for the answer, the robot could instead ask an LLM whether it would be appropriate to clean up the car. But in real-world environments, this is easier said than done. Tapping into an LLM's social reasoning skills in the real world requires the ability to _ground language in the robot's perception of the world_ - an ability that might be afforded by powerful vision-and-language models (VLMs). Unfortunately, we find that today's VLMs cannot reliably provide all the relevant information for social reasoning. For instance, a VLM may not describe that the sports car is constructed from Legos, or that it contains over \(1000\) pieces - details that are key to making decisions. While advanced multi-modal models might alleviate this problem, a fundamental limitation is that the image itself might not contain all the relevant information. If the sports car is partially occluded by a bag (as in Fig. 1), no VLM could provide the necessary context for reasoning over what actions to take. 
Such a system would instead need the ability to move the bag - or move _itself_ - to actively gather the necessary information. Thus, in order to perform "grounded social reasoning" robots must go beyond passively querying LLMs and VLMs to obtain action plans and instead _directly interact with the environment._ Our insight is that robots must reason about what additional information they need to make socially appropriate decisions, _and then actively perceive the environment to gather that information_. Acting on this insight, we propose a framework to enable a robot to perform grounded social reasoning by iteratively identifying details it still needs to clarify about the scene before it can make a decision (e.g. is the model car made out of intricate Lego pieces or MEGA Bloks?) and actively gathering new observations to help answer those questions (e.g. getting a close up of the car from a better angle). In this paper, we focus on the task of cleaning up real-world surfaces in a socially appropriate manner. Our framework is shown in Fig. 1. Given a textual description of the desk, an LLM asks follow-up questions about the state of each object that it needs in order to make a decision of what the robot should do with that object. The robot actively perceives the scene by taking close-up photos of each object from angles suggested by the LLM. The follow-up questions and close-up photos are then given to a VLM so that it can provide more information about the scene. This process can be repeated multiple times. The LLM then decides on an action the robot should take to clean the object in a socially appropriate manner. For example, our robot leaves the Lego sports car intact, throws a browning half-eaten banana in the trash, but keeps an unopened can of Yerba Mate on the desk. 
Furthermore, we release the MessySurfaces dataset containing images of \(70\) surfaces as well as an evaluation benchmark that assesses how well a robot can clean up each surface in a socially appropriate manner. The dataset is available here. We evaluate our framework on our benchmark dataset as well as on a real-world robotic system. We examine each component of our framework, asking whether the robot asks useful follow-up questions, whether the robot chooses informative close-up images, and whether the images actually help a VLM more accurately answer questions. We find an average \(12.9\%\) improvement on the MessySurfaces benchmark and an average \(15\%\) improvement on the robot experiments over baselines that do not use active perception. ## 2 Related Work Social Reasoning. Large language models are trained on internet-scale data, making them effective commonsense reasoners [15; 16; 17; 18]. Prior works have studied whether LLMs' commonsense enables _social_ reasoning aligned with human values [12; 13; 19; 14]. There is evidence that when LLMs make moral or social judgements, they align with the normative beliefs of the population that generated their training data [20]. In addition, prior work shows social reasoning models can align with conventional beliefs [21; 22; 23; 24]. _Our approach is in line with normative social reasoning; instead of adapting to individual preferences, we show we can take commonsense, socially-appropriate actions to clean up a scene._ Figure 1: **Grounded Social Reasoning Framework. We demonstrate our framework using the sports car. Blue boxes indicate the model and yellow boxes indicate its output. Our framework takes an image of the scene and an instruction as input. 1) The VLM outputs an initial description of the scene \(\mathcal{C}^{0}\) from the initial image \(im^{0}\). 2) The LLM asks follow-up questions about each object in the scene, \(\mathcal{Q}^{i}\). 3) The robot takes a close-up image \(im^{i}_{k}\) of each object \(k\). 
It is guided by an LLM that chooses the best angle that would help answer the question. 4) We pair the close-up images with the follow-up questions and ask the VLM to answer them. Answers are appended to the context. We repeat steps 1-4 to gather more information. 5) Finally, we query an LLM to choose the most socially appropriate way to tidy the object.** Learning Human Preferences. Past work on aligning with human preferences has focused on using human feedback to infer rewards and policies by designing queries for active preference learning [25, 4, 6, 26], performing inverse reinforcement learning [27, 28], or recovering reward signals from language feedback [14, 29, 30, 31, 32]. Policies defined via LLMs have also been directly tuned with language feedback by approaches like RLHF [33]. Instead of querying humans, we leverage normative values from pre-trained models. While some works use normative values from LLMs in negotiations and games [34], these are not grounded in the real world. _In this work, we do not focus on particular human preferences, though the normative responses of LLMs could be fine-tuned for particular applications._ Active Perception. When robots must reason socially like humans, active information gathering may be important [35]. Approaches like TidyBot actively zoom in on objects to better categorize them [36]. Other approaches such as Inner Monologue seek out additional environment information, but need aid from a human annotator or assume access to simulators [37, 38]. VLMs have also been used for active perception in navigation [39, 40, 41]. _In this work, we show that active perception is necessary for grounded social reasoning, enabled by the semantic knowledge in an LLM._ LLMs for Robotics. Past work uses semantic knowledge in LLMs for task planning. Methods like SayCan decompose natural language tasks into primitive action plans [42, 43, 44]. 
In addition, approaches such as Code as Policies [45, 46] use LLMs to write Python programs that plan with executable robot policy code. Other approaches use multimodal sequence models to reason about language-conditioned manipulation [47, 48, 49, 50]. _We use the semantic awareness of an LLM to reason about action plans. Unlike the above works, an LLM interactively queries an off-the-shelf VLM to obtain a grounded understanding of the scene._ ## 3 Grounding Social Reasoning We propose a framework that combines existing foundation models in a novel way to enable active information gathering, shown in Fig. 1. Our framework makes multiple calls to an LLM and VLM to gather information. The LLM plays a number of distinct roles in our framework that we distinguish below: generating informative follow-up questions, guiding active perception, and choosing an action plan. In every call, the LLM takes in and outputs a string \(\texttt{LLM}\colon A^{*}\to A^{*}\), and the VLM takes in an image, string pair and outputs a string \(\texttt{VLM}\colon\mathcal{I}\times A^{*}\to A^{*}\), where \(A^{*}\) is the set of all strings and \(\mathcal{I}\) is the set of all images. The context \(\mathcal{C}^{i}\in A^{*}\) contains information about the scene that the robot has gathered up to iteration \(i\) of the framework. Initially, the inputs to our framework are an image of the scene \(im^{0}\in\mathcal{I}\) (i.e., an unblurred image from Fig. 1) and an instruction (e.g., "clean the surface"). **VLM Describes the Scene.** Our framework starts with the VLM producing an initial description \(\mathcal{C}^{0}\) of the scene from the scene image \(im^{0}\). Depending on the VLM, the description can contain varying amounts of information -- in the most uninformative case, it may simply list the objects that are present. In our experiments, this is the description that we use. 
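With the interfaces above (\(\texttt{LLM}\colon A^{*}\to A^{*}\), \(\texttt{VLM}\colon\mathcal{I}\times A^{*}\to A^{*}\)), the iterative loop of Fig. 1 can be sketched in a few lines of Python. The `llm`, `vlm`, and `take_photo` callables below are stand-ins for GPT-4, the VLM, and the robot's photo primitive, and the prompt strings are illustrative rather than the paper's actual prompts (which are in the supplementary):

```python
def take_photo(obj, angle):
    """Hypothetical robot primitive: return a close-up image of `obj`
    taken from `angle` (here just a placeholder string)."""
    return f"<photo of {obj} from {angle}>"


def grounded_reasoning(llm, vlm, scene_image, objects, num_iters=1):
    """Sketch of the framework loop. llm: str -> str, vlm: (image, str) -> str.
    The context string accumulates everything the robot has learned."""
    # Step 1: the VLM produces an initial description of the scene.
    context = vlm(scene_image, "Describe the objects on the surface.")
    for _ in range(num_iters):
        for obj in objects:
            # Step 2: the LLM asks a follow-up question about the object.
            question = llm(f"Context: {context}\nAsk one follow-up question about the {obj}.")
            # Step 3: the LLM picks a close-up angle; the robot takes the photo.
            angle = llm(f"Pick an angle from FRONT/BACK/LEFT/RIGHT/TOP to answer: {question}")
            close_up = take_photo(obj, angle)
            # Step 4: the VLM answers from the close-up; append to the context.
            context += f"\n{obj}: " + vlm(close_up, question)
    # Step 5: the LLM chooses the most appropriate action for each object.
    return {obj: llm(f"Context: {context}\nHow should the robot tidy the {obj}?")
            for obj in objects}
```

Here `num_iters` corresponds to the tunable number of repetitions of steps 1-4 described below.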
**LLM Generates Follow-Up Questions.** To identify what information is missing from \(\mathcal{C}^{0}\), we use an LLM to generate informative follow-up questions as shown in stage (2) of Fig. 1. We prompt an LLM with \(\mathcal{C}^{0}\) and ask the LLM to produce a set of follow-up questions \(\mathcal{Q}^{i}=\{q_{1}^{i},\ldots,q_{K}^{i}\}\) for the \(K\) objects. LLMs are apt for this task because of their commonsense reasoning abilities. We use Chain-of-Thought prompting [51] where we first ask the LLM to reason about the socially appropriate way to tidy each object before producing a follow-up question (see examples in the supplementary). For example, the LLM could reason that the sports car should be put away if it is a toy but left on display if someone built it. The resulting follow-up question asks whether the sports car is built with Lego blocks. We assume that the information in \(\mathcal{C}^{0}\) is accurate (i.e., correctly lists the names of all the objects) to prevent the LLM from generating questions based on inaccurate information. **Robot Actively Perceives the Scene.** At this stage, one might normally query the VLM with the original scene image \(im^{0}\). However, if the object in question is obstructed or too small to see, the scene image might not provide enough information for the VLM to answer the follow-up question accurately (e.g., the sports car is obstructed in Fig. 1). Instead, we would like to provide an unobstructed close-up image \(im_{k}^{i}\in\mathcal{I}\) of the object \(k\) to "help" the VLM accurately answer the generated questions. Taking informative close-up images requires interaction with the environment -- something we can use a robot for. To actively gather information, the robot should proceed based on some notion of "informativeness" of camera angles. To determine "informativeness", we can again rely on the commonsense knowledge of LLMs. 
Although LLMs don't have detailed visual information about the object, they can suggest reasonable angles that will be, on average, informative. For instance, an LLM will choose to take a photo from the top of an opaque mug, instead of its sides, to see its content. In practice, we find that this approach works well and can improve the informativeness of an image by \(8\%\). We query an LLM to choose a close-up angle of the object from a set of angles {<FRONT>, <BACK>, <LEFT>, <RIGHT>, <TOP>} that would give an unobstructed view. We then pair the close-up images with their questions \(\{(im_{1}^{i},q_{1}^{i}),\ldots,(im_{K}^{i},q_{K}^{i})\}\) and query the VLM for answers to these questions in step (4) of our framework. We concatenate the VLM's answers for each object and append them to our context \(\mathcal{C}^{i}\) to complete the iteration. To gather more information about each object, steps \(1-4\) can be repeated, where the number of iterations is a tunable parameter. **LLM Chooses an Action Plan.** In the final step, for each object, we prompt the LLM with the context \(\mathcal{C}^{i}\) and a multiple choice question that lists different ways to tidy an object. The LLM is then instructed to choose the most socially appropriate option. The multiple choice options come from the MessySurfaces benchmark questions, a bank of \(308\) multiple-choice questions about how to clean up real-life objects found on messy surfaces. For example, in Fig. 1, the LLM chooses to leave the sports car as is because it infers that the sports car must be on display. To map the natural language action to robot behavior, we implement a series of hand-coded programmatic skill primitives that define an API the LLM can call into. See §5 for more details. ## 4 The MessySurfaces Dataset To assess a robot's ability to reason socially in grounded environments, we introduce the MessySurfaces dataset. 
The dataset consists of images of \(308\) objects across \(70\) real-world surfaces that need to be cleaned. An average of \(68\%\) of objects are occluded in scene-level images1, so we also provide \(5\) close-up images as a way for the robot to "actively perceive" the object; see Fig. 2 for an example. MessySurfaces also includes a benchmark evaluation of multiple choice questions for each object where each option corresponds to different ways to tidy the object. Through a consensus of \(5\) human annotators, we determine which one of the choices is the most socially appropriate. To do well, a robot must reason about the socially appropriate way to clean each object from the images alone. Since no human preferences are given, the robot must identify relevant attributes of each object from the images (e.g., is the sports car built out of Legos or MEGA Bloks?) and then reason about how to tidy the object using this information. MessySurfaces contains \(45\) office desks, \(4\) bathroom counters, \(5\) bedroom tables, \(8\) kitchen counters, \(4\) living room tables and \(4\) dining tables. Footnote 1: Computed as the average number of times annotators indicated a question cannot be answered by the scene image. **Data Collection Process.** We recruited \(51\) participants to provide images of cluttered surfaces. Each participant was asked to pick \(4-6\) objects on a surface. They were then asked to take a photo of the scene-level view as well as close-up photos of each object from the top, right, left, front, and back angles - the offline equivalent of having a robot actively navigate a scene. The task took approximately \(15-30\) minutes. After receiving the photos, we post-processed each image and cropped out any identifiable information. Figure 2: **MessySurfaces Example.** Each object in MessySurfaces is represented by a scene image and \(5\) close-up images. 
Each object also has a benchmark question that presents \(5\) options to tidy the object; each option is constructed by producing a cleaning action conditioned on a hypothetical object state. **Benchmark Evaluation.** The benchmark questions consist of \(5\) LLM-generated multiple choice options about how to manipulate each object to clean the surface in a socially appropriate manner. To make the options diverse, we asked the LLM to first identify \(5\) states the object could be in and then queried it to come up with a cleaning action for each of those states (see Fig. 2 for an example). For each question, we recruited \(5\) annotators to choose the correct state-action pair based on the scene and close-up images of the object. Annotators were also given an option to indicate if none of the choices were a good fit. We used the majority label as our answer and omitted \(16\) questions (out of \(324\)) where a majority thought none of the choices were a good fit. For questions that had two equally popular answers, we counted both as correct. Our annotators agreed on average \(67\%\) of the time. To evaluate the quality of our multiple choice options, we asked annotators to rate how appropriate each cleaning action is for each object state. Annotators gave each option an average rating of \(4.1\) out of \(5\). The average rating for the correct option was \(4.4\) out of \(5\). _Annotators_. In total, we recruited \(350\) annotators from Prolific. Each annotator was an English-speaker based in the U.S. or U.K. and had an approval rating of at least \(98\%\). Our study is IRB-approved. ## 5 Experiments We examine how well our approach can perform grounded social reasoning on the MessySurfaces dataset as well as a real-world robotic system. **Primary Metric.** We use accuracy on the benchmark questions as our primary metric. 
Each benchmark question presents \(5\) options on how to tidy the object, with accuracy defined as the percentage by which our framework selects the most appropriate option (as indicated by our annotators). **Baselines.** Key to our approach (**Ours-LLM**) is the ability to supplement missing information by asking questions and actively perceiving the environment. To evaluate this, we compare the following: * **Oracle.** We ask a human annotator to answer the benchmark questions where they can actively perceive the scene using all angles. * **Ours-LLM.** Our approach as described in §3. * **Front.** Inspired by TidyBot [36], this is a variant of our approach wherein we simulate "zooming" into the image, using the "front" angle image as input to the VLM. The "front" angle is often the most informative angle, making it an effective heuristic. * **Baseline Questions.** This baseline evaluates the need for socially-motivated questions in our framework by asking more factoid-based questions (e.g., "What color is the cup?"). * **No Active Perception.** This baseline evaluates the need for active perception in our framework by allowing the robot to ask questions _that are answered solely from the scene image_. * **No Questions.** This baseline requires the robot to perform grounded social reasoning from an initial description of the scene. The robot does not ask questions or actively perceive the environment, instead operating in an open-loop fashion akin to methods like SayCan [42]. **Implementation Details.** We use GPT-4 with temperature \(0\) as our LLM and InstructBLIP [52] (Flan-T5-XXL) as our VLM. We also report "oracle" results where a human answers questions instead of the VLM to simulate results our approach could achieve if the VLM were near-perfect (denoted as the "Oracle VLM"). Further implementation details (e.g., prompts, model usage) are in the supplementary. Figure 3: **MessySurfaces Benchmark Accuracy. For both the Oracle VLM and InstructBLIP, on average, our approach outperforms all baselines on the MessySurfaces benchmark. Accuracy is given by the percentage by which our framework selects the most appropriate (as indicated by our annotators) way to tidy each object.** 
### Evaluation on MessySurfaces We evaluate our method on the \(308\) benchmark questions across \(5\) iterations of our framework. After each iteration, the robot is evaluated on the information it has accumulated up until that point. We measure accuracy on each question and report results using both the Oracle VLM and zero-shot performance on InstructBLIP. Although **No Question** and **Oracle** are "open-loop" methods that do not require iteration, we plot their results as a constant across iterations for comparison. **After \(5\) iterations, for both the Oracle VLM and InstructBLIP, our approaches outperform all baselines**: **No Question**, **No Active Perception**, and **Baseline Questions**. Notably, **Ours-LLM** significantly outperforms **No Question** by an average of \(27.7\%\) across the two VLM types, \(p\!<\!0.01\). **Ours-LLM** also outperforms **Baseline Questions** by an average of \(5\%\) across the VLM types, \(p\!>\!0.05\), and outperforms **No Active Perception** by an average of \(6\%\), \(p\!>\!0.05\). Using an Oracle VLM allows **Ours-LLM** to close the gap with the **Oracle** by an average of \(5\%\) more than using InstructBLIP. Although our approach outperforms baselines using both VLMs, we suspect that InstructBLIP gives lower accuracies because the MessySurfaces images - especially the close-up images - are out of distribution. For this reason, we presume that our approach gives a smaller advantage over other baseline methods when using InstructBLIP. 
These results suggest that asking questions and actively perceiving the environment can enable grounded social reasoning; with better VLMs, we can reach close to human-level performance. However, we were puzzled why the human **Oracle** was not more accurate. We hypothesize that in some situations, it is unclear what the most appropriate way to clean an object would be - our annotators agreed \(67\%\) of the time. To obtain higher accuracy, commonsense social reasoning may sometimes not be enough and we must query user preferences to personalize the cleaning action; we explore this further in §6 and the supplementary. For the rest of this section, we analyze each component of our framework. **Does the LLM Ask Good Follow-Up Questions?** We first evaluate the LLM's follow-up questions and the reasoning used to produce those questions. On average, \(82\%\) of users agreed that the reasoning was valid and \(87\%\) agreed that the reasoning was socially appropriate. To evaluate the follow-up questions, we asked users to rate each question's usefulness and relevance for tidying the surface on a \(5\)-point Likert scale. We compared against **Baseline Questions**, where we removed the constraint that LLM-generated questions must be relevant for tidying surfaces in a socially appropriate manner. An example baseline question is, "Does the cup have a logo?". All prompts and example questions are in the supplementary. **Users rated our questions to be significantly more useful and relevant for tidying surfaces compared to the baseline** (\(p\!<\!0.01\), Fig. 4). However, across iterations, the average usefulness and relevance of our questions decreased. This result may be due to the fact that there are not many useful and relevant questions to ask about simple objects such as a keyboard without interacting with them or people in the room. **Does the LLM Suggest Informative Close-Up Angles?** We next focus on whether the close-up angles suggested by the LLM are informative. 
Figure 4: **How Good are the Follow-Up Questions?** Users rated our questions to be significantly more useful and relevant compared to baseline questions, \(p\!<\!0.01\). However, the average usefulness and relevance of questions decreased over iterations.

For each object, we asked users whether the object's follow-up question is answerable from the close-up angle chosen by the LLM by showing them the corresponding close-up image. We also do this for the "front" angle. As our main baseline, we ask users whether questions are answerable from the scene-level view. Additionally, we compare against angles that the LLM did not choose ("Non-LLM Angles"), as well as non-front angles. **Across \(5\) iterations we find that, on average, \(35.5\%\) more questions are answerable by LLM-chosen angles and \(31\%\) more questions are answerable by the front angles compared to the scene, \(p\!<\!0.01\). The LLM-chosen angles and front angle are also significantly more informative than the non-LLM-chosen angles and non-front angles respectively.** This trend holds consistently for each iteration (Fig. 5). **Do Our Close-Up Angles Improve VLM Accuracy?** Using VLMs for grounded social reasoning poses challenges when there are obstructions in the image (e.g., a bag blocking the sports car) or when they are not able to describe relevant details. We hypothesized that providing a close-up image would "help" a VLM answer follow-up questions more accurately. We evaluate whether close-up images can actually improve VLM accuracy on follow-up questions. From the results in Table 1, we see that **having access to close-up angles greatly improves the zero-shot prediction accuracy for both VLM variants.** More importantly, the front angles and the LLM proposed angles generally outperform other angles. These results show that it is beneficial to have both active perception and correct angles for our tasks.
### Evaluation on Real-World Robotic System To assess the performance of our system on a real-world robot (Fig. 6), we assemble \(2\) surfaces with \(11\) objects that require complex social reasoning to tidy up. Importantly, we design these surfaces so that the socially appropriate way to tidy each object would be unambiguous. The first surface resembles a child's play area, with various toys of ranging complexities (e.g., a MEGA Bloks structure, a partially built toy train set, and a to-scale Lego model of an Italian sports car). The robot must understand which toys to clean up and which toys should be left on display. The second surface, shown in Fig. 6, consists of trash that a robot must sort through. Here, the robot must determine which objects to recycle, put in landfill, or keep on the desk (further visualizations of each surface are in the supplementary). **Grounding Language in Robot Behavior.** Following the active perception component of our framework, we use a robot arm (equipped with a wrist camera) to servo to angles produced by the LLM and take photos. To map the LLM-produced angles and natural-language action plans to robot behavior, we implement a series of programmatic skill primitives (e.g., relocate('block'')). In this work, each "view" and "action" primitive is defined assuming access to the ground-truth object class and position. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & Scene & Non-front Angles & Front Angle & Non-LLM Angles & LLM Angle \\ \hline InstructBLIP (Vicuna) & 47.98 & 51.06 & **52.64** & 50.94 & **53.21** \\ InstructBLIP (Flan-T5) & 51.95 & 53.99 & **56.74** & 54.08 & **56.30** \\ \hline \hline \end{tabular} \end{table} Table 1: VLM prediction accuracy (zero-shot) under different angles over all 5 iterations. We formulate the prediction problem as a multiple choice answering task, reporting the answer that has the highest likelihood under the VLM. 
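The mapping from LLM action plans to open-loop primitive sequences can be sketched as below; the class, the primitive names other than `relocate`, and the plan format are illustrative assumptions rather than the paper's actual implementation:

```python
# Sketch of a programmatic skill-primitive API that an LLM-generated action
# plan can call into. Names besides `relocate` and the plan format are
# hypothetical; the real system grounds these in robot motion code.

class SkillPrimitives:
    """Executes LLM-produced plans as sequences of skills, open loop.

    Assumes access to ground-truth object classes and positions,
    as in the paper's setup.
    """

    def __init__(self, object_positions):
        self.object_positions = object_positions  # object name -> (x, y, z)
        self.log = []                             # record of executed skills

    def view(self, obj, angle):
        # Servo the wrist camera to an LLM-chosen angle and take a photo.
        self.log.append(("view", obj, angle))

    def relocate(self, obj, destination):
        # Pick up the object and move it to a destination.
        self.log.append(("relocate", obj, destination))

    def leave_alone(self, obj):
        # Explicit no-op: the object should stay on display.
        self.log.append(("leave_alone", obj))

    def execute(self, plan):
        # A plan is a sequence of (primitive_name, args) pairs.
        for name, args in plan:
            getattr(self, name)(*args)


robot = SkillPrimitives({"soda_can": (0.4, 0.1, 0.0), "lego_car": (0.2, 0.3, 0.0)})
robot.execute([
    ("view", ("soda_can", "top")),
    ("relocate", ("soda_can", "recycling")),
    ("leave_alone", ("lego_car",)),
])
```

Because execution is open loop, the plan is fully determined before any motion begins; a failed grasp would not be detected or retried in this sketch.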
Figure 5: **Do We Choose Informative Close-Up Angles?** An average of \(33.25\%\) more questions are answerable by the LLM-chosen angles and front angles compared to the scene, \(p\!<\!0.01\). The LLM-chosen angles and front angle are also significantly more informative than the non-LLM-chosen angles and non-front angles respectively.

These programmatic skill primitives define an API that the LLM can call into, similar to the process introduced by Liang et al. [45]. Each action plan is translated to a sequence of these programmatic skills, which are then executed in an open loop (further implementation details are in the supplementary). **Benchmark Evaluation Results.** To evaluate our method, we designed benchmark questions for each of the \(11\) objects in a similar manner to that outlined in §4. We recruited \(5\) annotators on Prolific to choose the correct answer and took the majority label. We report results for both the Oracle VLM and InstructBLIP after running \(5\) iterations of our framework (see Figure in the supplementary). **Across both types of VLMs, Ours-LLM beats Baseline Questions by an average of \(13.5\%\), beats No Active Perception by an average of \(18\%\), and beats No Questions by an average of \(13.5\%\).** With the Oracle VLM, we achieve **Oracle** performance. With InstructBLIP, our method produces a smaller advantage over baselines.

## 6 Discussion

The purpose of this work is to equip robots with basic grounded social reasoning skills while reducing the need for human specification. These reasoning skills can later be personalized towards an individual's preferences. To this end, we conduct a preliminary study to explore how we can add personalization on top of our framework. We analyzed questions that the human **Oracle** got incorrect in §5 and found that object attributes such as "dirtiness" can indeed be subjective. This may have caused the **Oracle** to incorrectly answer some questions.
We experimented with adding personalization information to \(8\) questions where both the **Oracle** and our framework chose the same incorrect answer. **We found an average \(86\%\) improvement in accuracy, lending support to the hypothesis that preference information helps further enable grounded social reasoning.** See the supplementary for more details. **Limitations and Future Work.** While our work presents a first step towards actively grounded social reasoning, there are key limitations that we need to address. One such limitation is our reliance on heuristics to guide our active perception pipeline - while the five specified angles are enough for most of the questions in the MessySurfaces dataset, there are many cases where objects may be occluded, or otherwise require more granular views to answer questions; future work might explore learned approaches for guiding perception based on uncertainty, or developing multi-view, queryable scene representations [53; 54]. Our approach is similarly limited by its current inability to _interact with objects dynamically_ - opening boxes, removing clutter - to better get a sense of the properties of objects in the environment. Finally, while we focus on normative, commonsense behaviors, there are times, as we have alluded to, where the "right" thing for a robot to do is to ask for preferences or other identifying information; this requires developing a model of when the robot should or should not act in an environment. This work takes an exciting step towards building competent robot assistants that reduce the need for human specification, especially when it comes to socially reasonable commonsense behavior. We hope that future work can build on our framework and grow the various types of reasoning we want of our robots, enabling richer modes of human-robot interaction.

Figure 6: **Real-World Social Reasoning.** We outline the steps of our framework with a robot.
Notably, the LLM generates questions and “angles” for the arm to servo to (e.g., _right of the banana_). We also use the LLM to generate an _action plan_ for each object – each plan is converted to a sequence of skill primitives that are then executed by the robot.
2305.00985
Attention-based Spatial-Temporal Graph Neural ODE for Traffic Prediction
Traffic forecasting is an important issue in intelligent traffic systems (ITS). Graph neural networks (GNNs) are effective deep learning models to capture the complex spatio-temporal dependency of traffic data, achieving ideal prediction performance. In this paper, we propose attention-based graph neural ODE (ASTGODE) that explicitly learns the dynamics of the traffic system, which makes the prediction of our machine learning model more explainable. Our model aggregates traffic patterns of different periods and has satisfactory performance on two real-world traffic data sets. The results show that our model achieves the highest accuracy of the root mean square error metric among all the existing GNN models in our experiments.
Weiheng Zhong, Hadi Meidani, Jane Macfarlane
2023-05-01T00:58:48Z
http://arxiv.org/abs/2305.00985v1
# Attention-based Spatial-Temporal Graph Neural ODE for Traffic Prediction

###### Abstract

Traffic forecasting is an important issue in intelligent traffic systems (ITS). Graph neural networks (GNNs) are effective deep learning models to capture the complex spatio-temporal dependency of traffic data, achieving ideal prediction performance. In this paper, we propose attention-based graph neural ODE (ASTGODE) that explicitly learns the dynamics of the traffic system, which makes the prediction of our machine learning model more explainable. Our model aggregates traffic patterns of different periods and has satisfactory performance on two real-world traffic data sets. The results show that our model achieves the highest accuracy of the root mean square error metric among all the existing GNN models in our experiments.

**Keywords:** Graph neural ODE, attention mechanism, traffic dynamics

## 1 Introduction

Accurate traffic prediction is of significant importance for the intelligent transportation system, especially on highways where massive traffic flows move at high speed. Traffic forecasting is a long-standing challenge due to the complexity of spatio-temporal dependencies of traffic data. On the one hand, traffic time series have strong nonlinear temporal dynamics due to the varying traffic demand at different time points. On the other hand, the correlation between different locations in a network changes under different traffic conditions. Traffic accidents also cause non-stationary traffic dynamics, making the system even more unpredictable. Graph neural networks (GNNs) have attracted tremendous attention as a popular class of deep learning models for traffic prediction. Unlike fully-connected neural networks and convolutional neural networks, GNNs can deal with graph-structured data. GNNs outperform most traditional methods by aggregating traffic information of neighbor locations, achieving satisfactory predictions of future traffic conditions.
There are many excellent GNN models, such as DCRNN Li et al. (2017), STGCN Yu et al. (2017), ASTGCN Guo et al. (2019), GraphWaveNet Sun et al. (2018), STSGCN Song et al. (2020), Graph Autoformer Zhong et al. (2022), etc. These models each have a specific framework to aggregate the spatial and temporal traffic features to predict future traffic data. Neural ODE Chen et al. (2018) appears as an inspiration for us to predict future traffic based on the dynamics of the traffic system. Spatial-temporal neural ODE Zhou et al. (2021) combines neural ODE and convolutional neural networks for urban area traffic prediction, leveraging spatial-temporal data from multiple sources. Spatial-temporal graph neural ODE (STGODE) Fang et al. (2021) combines graph neural networks and neural ODE to extract a longer range of spatio-temporal correlation without being affected by the over-smoothing problem. However, to the best of our knowledge, there is not much literature that explicitly uses a deep learning model to learn the dynamics of the traffic system and predicts future traffic data through the system's evolution. In this paper, we focus on using neural ODE to imitate the dynamics of the traffic system by performing supervised learning on the output of the neural ODE at each time step. We utilize the attention mechanism Feng et al. (2017) to model the dynamics of the traffic system, using spatial and temporal attention to estimate the trend of traffic under different traffic conditions. We also show the advantage of aggregating traffic trends of different periods for higher prediction accuracy. The effects of adjoint training Chen et al. (2018) are also discussed in this paper.

## 2 Problem Setup

In this study, we define a traffic network as an undirected graph \(G=(V,E,S)\), where \(V\) is the set of all vertices and \(E\) is the set of all edges of the network; \(S\in\mathbb{R}^{N\times N}\) is the adjacency matrix of the graph and \(N=|V|\) is the number of vertices.
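As a concrete toy instance of this setup (the 4-node road network and its edges are illustrative, not taken from the paper's data):

```python
# A toy instance of the traffic graph G = (V, E, S) from the problem setup.
# The 4-node network below is purely illustrative.

V = [0, 1, 2, 3]                      # sensor locations (vertices)
E = [(0, 1), (1, 2), (2, 3), (0, 2)]  # undirected road links

N = len(V)                            # N = |V|
S = [[0.0] * N for _ in range(N)]     # adjacency matrix, S in R^{N x N}
for i, j in E:
    S[i][j] = 1.0
    S[j][i] = 1.0                     # undirected graph: S is symmetric
```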
We use sensors at the vertices to collect measurements of the traffic features with a fixed sampling frequency within a period. The number of traffic features each sensor collects is \(F\). The collected traffic data sequence is \(\mathbf{X}=\{x_{1},x_{2},...,x_{t},x_{t+1},...,x_{T}\}\), where \(x_{t}\in\mathbb{R}^{N\times F}\) denotes the record of all the traffic features of all nodes at time \(t\). We denote \(T_{h}\) as the time length of one hour, \(T_{d}\) as the time length of one day, and \(T_{w}\) as the time length of one week. Then we define four types of time series segments of length \(T_{h}\):

* The **predicted segment** \(\mathbf{\chi}^{t_{p}}\) is defined as \(\{x_{t_{p}+1},x_{t_{p}+2},...,x_{t_{p}+T_{h}}\}\), where \(x_{t_{p}+1}\) is the starting point of the predicted segment. We will use historical traffic data segments to predict this traffic segment.
* The **recent segment** \(\mathbf{\chi}^{t_{r}}\) is defined as \(\{x_{t_{p}-T_{h}+1},x_{t_{p}-T_{h}+2},...,x_{t_{p}}\}\), where \(t_{r}=t_{p}-T_{h}\). It is the time segment that is temporally closest to the predicted segment, which provides the most important information about the traffic conditions of the next hour.
* The **daily period segment** \(\mathbf{\chi}^{t_{d}}\) is defined as \(\{x_{t_{p}-T_{d}+1},x_{t_{p}-T_{d}+2},...,x_{t_{p}-T_{d}+T_{h}}\}\), where \(t_{d}=t_{p}-T_{d}\). This is the traffic data at the same time of day as the predicted segment, and we use it to capture the daily traffic pattern of the system, such as the morning and evening rush hours.
* The **weekly period segment** \(\mathbf{\chi}^{t_{w}}\) is defined as \(\{x_{t_{p}-T_{w}+1},x_{t_{p}-T_{w}+2},...,x_{t_{p}-T_{w}+T_{h}}\}\), where \(t_{w}=t_{p}-T_{w}\). This is the traffic data at the same time of the same day of the week as the predicted segment, and we use it to capture the weekly traffic pattern of the system. Using this time segment, we aim to detect the dynamics of the traffic system within the period of one week.
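With the \(5\)-minute sampling used by the data sets in our experiments, \(T_{h}=12\), \(T_{d}=288\), and \(T_{w}=2016\) steps, and the four index ranges can be sketched as follows (the value of \(t_{p}\) is illustrative):

```python
# Sketch of constructing the four segments' index ranges from a traffic
# sequence sampled every 5 minutes: T_h = 12, T_d = 288, T_w = 2016 steps.

T_h, T_d, T_w = 12, 288, 2016

def segment_indices(t_p):
    """Return (start, end) index pairs, end exclusive, for each segment.

    The predicted segment is x_{t_p + 1}, ..., x_{t_p + T_h}; with 0-based
    array indexing that is the half-open range [t_p, t_p + T_h).
    """
    return {
        "predicted": (t_p, t_p + T_h),
        "recent":    (t_p - T_h, t_p),
        "daily":     (t_p - T_d, t_p - T_d + T_h),
        "weekly":    (t_p - T_w, t_p - T_w + T_h),
    }

idx = segment_indices(t_p=5000)
```

A training sample is then the three historical slices `X[start:end]` as inputs and the predicted slice as the target.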
We intend to build a deep learning model using the historical traffic data \(\{\mathbf{\chi}^{t_{r}},\mathbf{\chi}^{t_{d}},\mathbf{\chi}^{t_{w}}\}\) as inputs to predict the future traffic data \(\mathbf{\chi}^{t_{p}}\). An example of constructing the input and output data is shown in Figure 1.

## 3 Methodology

### Model framework

Our model consists of three independent Neural ODE blocks that capture the dynamics of the traffic system over different periods. We use a fully-connected network as our encoder to map the traffic feature data of each node at each time point to a higher-dimensional space, which strengthens the expressiveness of the model.

Figure 1: An example of the input and output traffic segments of our model is shown. The recent traffic segment is defined as the traffic conditions of the previous hour. The daily traffic segment is the traffic conditions of yesterday at the same time slot as the predicted traffic segment. Similarly, the weekly traffic segment is the traffic conditions of last week at the same time slot of the same day as the predicted traffic segment.

We then use each Neural ODE block to calculate the hidden state of future time segments by learning the traffic pattern of a different period. We perform the "Forward Pass" Chen et al. (2018) of each Neural ODE block three times to obtain \(\{H^{t+\Delta t},H^{t+2\Delta t},H^{t+3\Delta t}\}\), where \(t\) is the starting time point of the historical time segment and \(\Delta t\) is one third of the time length of the whole period. This means that \(t_{w}+3\Delta t_{w}=t_{d}+3\Delta t_{d}=t_{r}+3\Delta t_{r}=t_{p}\). We consider \(H^{t_{p,w}},H^{t_{p,d}},H^{t_{p,r}}\) to be the hidden features of the predicted time segment from the three independent ODE blocks. We use a fully connected neural network as a fusion layer to aggregate the hidden features \(H^{t_{p,w}},H^{t_{p,d}},H^{t_{p,r}}\).
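A minimal numeric sketch of the three forward passes and the fusion step; the dynamics function and the averaging fusion below are placeholders for the learned attention/graph-convolution layers and the fully-connected fusion network:

```python
# Minimal sketch of the three-pass Neural ODE forward and fusion. The
# dynamics f and the averaging fusion are stand-ins for learned networks.
import numpy as np

def ode_block_forward(H, f, dt=1.0 / 3, n_steps=3):
    """Three "Forward Pass" steps of one Neural ODE block: H <- H + f(H)*dt.

    Time is measured in units of the block's own period (week, day, or hour),
    so each block advances by dt = 1/3 of its period per step.
    """
    for _ in range(n_steps):
        H = H + f(H) * dt
    return H

N, d = 4, 8                              # nodes, hidden-feature dimension
f = lambda H: -0.1 * H                   # placeholder for the learned dynamics

H_w = ode_block_forward(np.ones((N, d)) * 1.0, f)   # weekly block
H_d = ode_block_forward(np.ones((N, d)) * 2.0, f)   # daily block
H_r = ode_block_forward(np.ones((N, d)) * 3.0, f)   # recent block

# Fusion layer: a simple average here; in the model, a fully-connected
# network mapping the three hidden states to one (N, d) state H^{t_p}.
H_p = (H_w + H_d + H_r) / 3.0
```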
The output of the fusion layer \(H^{t_{p}}\) has the same dimension as each of the hidden states \(\{H^{t_{p,w}},H^{t_{p,d}},H^{t_{p,r}}\}\). As shown in Figure 2, we output a set of intermediate hidden states \(\mathbb{H}=\{H^{t_{w}+\Delta t_{w}},H^{t_{w}+2\Delta t_{w}},H^{t_{d}+\Delta t_{d}},H^{t_{d}+2\Delta t_{d}},H^{t_{r}+\Delta t_{r}},H^{t_{r}+2\Delta t_{r}}\}\) and the hidden state of the predicted time segment \(H^{t_{p}}\). We use a decoder, which is also a fully-connected neural network, as a mapping from the hidden states to the original data space. Combined with the decoder, we then obtain the prediction of the predicted time segment \(X^{t_{p}}\). We perform supervised training for our model by minimizing the mean square error between predicted and ground-truth traffic data using stochastic gradient descent algorithms. The training loss \(L\) is defined as:

\[L=MSE(\mathbb{X},\mathbb{X}_{GT})\]

where \(\mathbb{X}\) denotes the decoded predictions of all supervised time segments (including \(X^{t_{p}}\)) and \(\mathbb{X}_{GT}\) the corresponding ground-truth data. The details of our model framework are shown in Figure 2.

### Neural ODE block

Each ODE block in Figure 2 consists of a spatial attention layer, a temporal attention layer, and a Chebyshev graph convolution layer. The traffic conditions at one location can affect the traffic conditions at other neighboring locations. For this reason, we adopt the Chebyshev graph convolution to aggregate the information of the neighbor vertices Kipf and Welling (2016). To allow for capturing multi-hop neighbors' traffic conditions, we use Chebyshev polynomials up to the third order. However, the influence of neighbor vertices is highly dynamic. We may need to pay different attention to the same neighbor vertices under different traffic conditions when we aggregate the information of the neighbor vertices. Also, in the temporal dimension, correlations between traffic conditions in different time slices vary in different situations. Hence, we use the attention mechanism Feng et al.
(2017) to capture spatial and temporal correlations by calculating the attention scores \(A_{S}\) and \(A_{T}\), using the spatial and temporal attention layers.

Figure 2: The framework of the proposed attention-based spatial temporal graph Neural ODE is shown in this figure. We apply different ODE blocks to different input traffic segments, which are used to learn the traffic dynamics of different time intervals. A fully-connected fusion layer is applied to the aggregated features of three blocks and outputs the final aggregated features.

We couple all this information in the Chebyshev graph convolution layer to predict the change of traffic conditions \(\frac{dH^{t}}{dt}\) over a specific time length \(\Delta t\). The output of the ODE block is the hidden state of the future time segment, \(H^{t+\Delta t}=H^{t}+\frac{dH^{t}}{dt}\Delta t\). The details of the ODE block architecture are shown in Figure 3.

## 4 Numerical Experiments

### Data set description

We use two real-world traffic data sets, PeMS-BAY Li et al. (2017) and PeMS04 Guo et al. (2019), to validate our model. These two data sets are collected by the Caltrans Performance Measurement System (PeMS) every 30 seconds Fang et al. (2021). Both data sets are aggregated into 5-minute intervals. In the PeMS-BAY data set, we only have traffic flow velocity data. There are 325 sensors in the Bay Area collecting six months of data from Jan \(1^{st}\) 2017 to May \(31^{st}\) 2017. As for the PeMS04 data set, we have 307 sensors on the highways of major metropolitan areas in California measuring three traffic features, including traffic flow, average speed, and average occupancy, for almost two months. We use our model to predict traffic speed using the PeMS-BAY data set and traffic flow using the PeMS04 data set. Both data sets are sorted by time from past to present. We split the data into three parts for training (70%), validation (10%), and testing (20%).
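A minimal sketch of this chronological split, with normalization statistics computed on the training portion only so that no information leaks from the future into training (the toy array stands in for the real \((T,N,F)\) data):

```python
# Sketch of the chronological 70/10/20 split with train-statistics
# normalization. The toy series stands in for the real (T, N, F) data.
import numpy as np

X = np.arange(100, dtype=float).reshape(100, 1)   # toy series, T = 100 steps

n_train = int(0.7 * len(X))
n_val = int(0.1 * len(X))
train = X[:n_train]
val = X[n_train:n_train + n_val]
test = X[n_train + n_val:]

# Normalize every part with the mean/std of the *training* data only.
mu, sigma = train.mean(), train.std()
train_n = (train - mu) / sigma
val_n = (val - mu) / sigma
test_n = (test - mu) / sigma
```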
Data normalization with the mean and standard deviation of the training data is applied to all three parts.

### Baseline models

We compare our model's performance with several baseline models:

* Historical average (HA) Ermagun and Levinson (2018): it estimates the seasonal traffic pattern and uses a weighted average of traffic flow data as the prediction.
* Auto-regressive integrated moving average model (ARIMA) Ermagun and Levinson (2018): it is a traditional parametric model for the analysis of time series data.
* Fully connected LSTM (FC-LSTM) Sutskever et al. (2014): it is an RNN-based sequence model whose encoder and decoder are both LSTM layers.
* Diffusion convolutional recurrent neural network (DCRNN) Li et al. (2017): it proposes a combination of a diffusion convolution operator and GRU Fu et al. (2016) to capture spatio-temporal correlations.
* Spatial-temporal graph ODE (STGODE) Fang et al. (2021): aggregating the spatial and temporal information of the traffic segments, it uses Neural ODE to learn the system's dynamics.
* Attention based spatial temporal graph convolutional networks (ASTGCN) Guo et al. (2019): it uses the attention mechanism to capture spatial and temporal data dependency for better future prediction.
* Graph multi-attention network (GMAN) Zheng et al. (2020): it uses a multi-head self-attention mechanism to capture the spatial and temporal correlation of traffic series data and transform attention to capture the dependency between historical time segments and future time segments.

### Comparison of traffic prediction performance

We use two different metrics to evaluate the model performance: the root mean squared error (RMSE) and the mean absolute error (MAE). We exclude missing data when evaluating the performance on both data sets.

Figure 3: The architecture of an individual neural ODE block is shown. We compute the spatial attention scores and temporal attention scores based on the input traffic features. Combined with exact graph connectivity, we perform diffusion graph convolution to derive the traffic features of the next time step.

Using the Adam optimizer, we use a learning rate of 0.0001 and train our model for 50 epochs. The tradeoff coefficient \(\alpha\) is set to 0.1, and the batch size is set to 32. We report the traffic prediction errors of 15-min ahead prediction, 30-min ahead prediction, and 60-min ahead prediction in Table 1, using the PeMS-BAY data set for validation. We observe that our method outperforms all the baseline models in 15-min ahead prediction and 30-min ahead prediction. For 60-min ahead prediction, our model achieves a significant improvement in decreasing the RMSE of the prediction and a slight improvement in decreasing the MAE. These results demonstrate the effectiveness of learning traffic dynamics to predict future traffic conditions. We also compare our model's performance on the average results of traffic flow prediction over the next one hour using the PeMS04 data set in Table 2. The results show that our model also outperforms the other baseline models in traffic flow prediction. The Neural ODE-related models (STGODE and our model) have higher performance than the other baseline models. Moreover, our model again has good performance in decreasing the RMSE of the prediction.

### Effect of the fusion layer

We consider that every component in our model has some specific physical meaning except for the fusion layer. The encoder and decoder are two opposite mappings between the observed traffic data and the hidden state of the dynamic traffic system. The Neural ODE imitates the evolution of the dynamic system. However, it is hard to explain the output of the fusion layer, so we investigate its effect on our model. We remove the fusion layer and use the predictions of the three independent ODE blocks for comparison. We consider that the dynamics of different periods will have different effects on our prediction.
As shown in Figure 4, recent traffic data has a significant effect on short-term traffic prediction (5 min-20 min), while weekly period data is more important for long-term traffic prediction (45 min-60 min). The ODE block of the daily period data does not have ideal short-term or long-term prediction performance. We consider that the daily traffic pattern is more difficult to capture due to abrupt changes in rush-hour traffic conditions. However, by coupling the features of different traffic patterns, we can leverage the effects of different historical data on the prediction at different time points, which helps us increase the model's accuracy for future traffic prediction.

### Effect of adjoint training method

The adjoint training method greatly reduces the memory cost of Neural ODE training Chen et al. (2018). However, some researchers have claimed that the adjoint training method of Neural ODE introduces errors into the gradient information, which negatively influences the final trained model Daulbaev et al. (2020). Here we implement two training methods, adjoint training and regular training, and compare the performance of the models trained by these two methods.
\begin{table} \begin{tabular}{c c c c c c c} \hline model & \multicolumn{2}{c}{15 min} & \multicolumn{2}{c}{30 min} & \multicolumn{2}{c}{60 min} \\ & RMSE & MAE & RMSE & MAE & RMSE & MAE \\ \hline HA & 5.60 & 2.88 & 5.60 & 2.88 & 5.60 & 2.88 \\ ARIMA & 3.30 & 1.61 & 4.76 & 2.33 & 6.50 & 3.39 \\ FC-LSTM & 4.19 & 2.05 & 4.55 & 2.2 & 4.96 & 2.36 \\ DCRNN & 2.95 & 1.38 & 3.97 & 1.74 & 4.74 & 2.05 \\ ASTGCN & 2.80 & 1.35 & 3.82 & 1.66 & 4.56 & 2.06 \\ GMAN & 2.82 & 1.34 & 3.72 & 1.62 & 4.32 & **1.86** \\ Our method & **2.68** & **1.30** & **3.36** & **1.62** & **4.10** & 2.00 \\ \hline \end{tabular} \end{table} Table 1: Traffic speed prediction performance of different models on the PeMS-BAY data set

\begin{table} \begin{tabular}{c c c} \hline model & RMSE & MAE \\ \hline HA & 54.11 & 36.76 \\ ARIMA & 68.13 & 32.15 \\ FC-LSTM & 45.76 & 29.50 \\ DCRNN & 37.61 & 24.63 \\ ASTGCN & 35.22 & 22.93 \\ STGODE & 32.82 & 20.84 \\ Our method & **30.65** & **20.23** \\ \hline \end{tabular} \end{table} Table 2: Traffic flow prediction performance of different models on the PeMS04 data set

From Table 3, we conclude that the adjoint training method decreases prediction accuracy, and GMAN has higher accuracy in MAE if we use the adjoint method to train our model. However, the model with adjoint training still outperforms GMAN in the RMSE metric. We also plot the average prediction error of our models in each training epoch in Figure 5. We can observe that the prediction error of the adjoint-training model has more significant vibration, which indicates that adjoint training is less stable than non-adjoint training. Considering the factors of model performance and training stability, we suggest not using adjoint training unless dealing with an extremely large neural ODE model.

## 5 Conclusion

In this paper, we propose a novel deep learning architecture to capture the dynamics of the traffic network system.
We help our Neural ODE better imitate the evolution of the traffic system by introducing the attention mechanism to capture the spatial and temporal correlations between traffic data. We also propose a fusion layer to aggregate the features of different periodic dynamics to perform more accurate predictions of future traffic data. The results show that our model can outperform most of the existing models in the root mean square error metric. However, the attention mechanism does not capture the exact dynamics of the traffic system. To make the model more explainable, we can introduce more physics-related components in the neural ODE block, such as diffusion processes. Also, adjoint training is a memory-efficient method of training our model. If we can increase the stability of adjoint training, we can construct a more complicated deep learning model to better capture the temporal dependency.

\begin{table} \begin{tabular}{c c c c c c c} \hline model & \multicolumn{2}{c}{15 min} & \multicolumn{2}{c}{30 min} & \multicolumn{2}{c}{60 min} \\ & RMSE & MAE & RMSE & MAE & RMSE & MAE \\ \hline GMAN & 2.82 & 1.34 & 3.72 & 1.62 & 4.32 & **1.86** \\ adjoint training & 2.72 & 1.33 & 3.52 & 1.72 & 4.22 & 2.05 \\ non-adjoint training & **2.68** & **1.30** & **3.36** & **1.62** & **4.10** & 2.00 \\ \hline \end{tabular} \end{table} Table 3: Effect of adjoint training on prediction error.

Figure 4: Prediction error comparison of different neural ODE blocks and our proposed model. We used RMSE to compare the performance of using different traffic features. We observed that the weekly period segment was useful for long-term future prediction and the recent traffic segment was beneficial for short-term future prediction. The performance of our model showed the effectiveness of the fusion layer in achieving higher accuracy.
2310.04955
Information-Theoretic Bounds on The Removal of Attribute-Specific Bias From Neural Networks
Ensuring a neural network is not relying on protected attributes (e.g., race, sex, age) for predictions is crucial in advancing fair and trustworthy AI. While several promising methods for removing attribute bias in neural networks have been proposed, their limitations remain under-explored. In this work, we mathematically and empirically reveal an important limitation of attribute bias removal methods in presence of strong bias. Specifically, we derive a general non-vacuous information-theoretical upper bound on the performance of any attribute bias removal method in terms of the bias strength. We provide extensive experiments on synthetic, image, and census datasets to verify the theoretical bound and its consequences in practice. Our findings show that existing attribute bias removal methods are effective only when the inherent bias in the dataset is relatively weak, thus cautioning against the use of these methods in smaller datasets where strong attribute bias can occur, and advocating the need for methods that can overcome this limitation.
Jiazhi Li, Mahyar Khayatkhoei, Jiageng Zhu, Hanchen Xie, Mohamed E. Hussein, Wael AbdAlmageed
2023-10-08T00:39:11Z
http://arxiv.org/abs/2310.04955v2
# Information-Theoretic Bounds on The Removal of Attribute-Specific Bias From Neural Networks

###### Abstract

Ensuring a neural network is not relying on protected attributes (_e.g._, race, sex, age) for predictions is crucial in advancing fair and trustworthy AI. While several promising methods for removing attribute bias in neural networks have been proposed, their limitations remain under-explored. In this work, we mathematically and empirically reveal an important limitation of attribute bias removal methods in presence of strong bias. Specifically, we derive a general non-vacuous information-theoretical upper bound on the performance of any attribute bias removal method in terms of the bias strength. We provide extensive experiments on synthetic, image, and census datasets to verify the theoretical bound and its consequences in practice. Our findings show that existing attribute bias removal methods are effective only when the inherent bias in the dataset is relatively weak, thus cautioning against the use of these methods in smaller datasets where strong attribute bias can occur, and advocating the need for methods that can overcome this limitation.

## 1 Introduction

_Protected attributes_ is a term originating from Sociology [30] referring to a finite set of attributes that must not be used in decision-making to prevent exacerbating societal biases against specific demographic groups [9]. For example, in deciding whether or not someone should be qualified for a bank loan, race (as one of the protected attributes) must not influence the decision. Given the widespread use of neural networks in real-world decision-making, developing methods capable of explicitly excluding protected attributes from the decision process - more generally referred to as removing attribute bias [34] - is of paramount importance.
While many promising methods for removing attribute bias in neural networks have been proposed in recent years [2; 20; 39; 29; 35; 41; 17], the limitations of these methods remain under-explored. In particular, existing studies explore the performance of these methods only in cases where the protected attribute (_e.g._, race) is _not strongly predictive_ of the prediction target (_e.g._, credit worthiness). However, this implicit assumption does not always hold in practice, especially in cases where training data is scarce. For example, in diagnosing Human Immunodeficiency Virus (HIV) from Magnetic Resonance Imaging (MRI), HIV subjects were found to be significantly older than control subjects, making age a strong attribute bias for this task [1]. Another example is the Pima Indians Diabetes Database, which contains only 768 samples and in which several spurious attributes become strongly associated with diabetes diagnosis [33; 27]. Even the widely-used CelebA dataset [28] contains strong attribute biases: for example, in predicting blond hair, sex is a strong predictor 1. Therefore, it is crucial to study the performance of bias removal methods beyond the moderate bias region to understand their limitations and the necessary conditions for their effectiveness. In Fig. 1, we illustrate with a specific example the limitation of bias removal methods that we will later investigate theoretically and empirically on several real-world datasets. In this example, we conduct an extended version of a popular controlled experiment for evaluating the performance of attribute bias removal [20; 31; 41]. The task is to predict digits from colored MNIST images [20] where color is considered a protected attribute. During training, each digit is assigned a unique RGB color with a variance (_i.e._, the smaller the color variance, the more predictive the color is of the digit, and the stronger the attribute bias). 
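The color-assignment protocol just described can be sketched as follows. This is a hypothetical minimal version: the per-class mean colors and the Gaussian perturbation are our own simplifying assumptions standing in for the exact protocol of [20].

```python
import random

# Hypothetical sketch of the Colored MNIST bias protocol: each digit
# class gets a fixed mean RGB color; per-sample colors are drawn around
# that mean with the given variance.  Smaller variance => color is more
# predictive of the digit => stronger attribute bias.
def assign_color(digit, variance, rng=None):
    rng = rng or random.Random(0)
    # assumed per-class mean colors, one per digit 0-9 (arbitrary but distinct)
    means = [((37 * d % 10) / 10.0, (53 * d % 10) / 10.0, (71 * d % 10) / 10.0)
             for d in range(10)]
    sigma = variance ** 0.5
    # clip each channel to the valid [0, 1] range
    return tuple(min(1.0, max(0.0, rng.gauss(m, sigma))) for m in means[digit])
```

With `variance=0` the color is a deterministic function of the digit, i.e. the attribute fully predicts the target, matching the leftmost point of Fig. 1.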
To measure how much the trained model relies on the protected attribute in its predictions, model accuracy is reported on a held-out subset of MNIST with uniformly random color assignments (_i.e._, where the color is not predictive of the digit). While state-of-the-art methods [20; 31; 35; 41] report results for the color variance only in the range \([0.02,0.05]\) (without providing any justification for this particular range), we explore the results for the missing range of \([0,0.02]\), which we denote as the _strong bias region_. As shown in Fig. 1, in the strong bias region, we observe that the effectiveness of all existing methods sharply declines and that there exists a _breaking point_ in their effectiveness. The breaking point of a method is defined as the weakest bias strength at which its performance becomes indistinguishable from the baseline under a two-sample one-way Kolmogorov-Smirnov test with a significance level of \(0.05\). The main goal of this paper is to study the cause and extent of this limitation empirically and theoretically.

## 2 Related Work

**Bias in Neural Networks.** Mitigating bias and improving fairness in neural networks has received considerable attention in recent years [14; 7; 22; 12; 8; 25]. The methods proposed for mitigating bias in neural networks can be broadly grouped into two categories: 1) methods that aim to mitigate the uneven performance of neural networks between majority and minority groups; and 2) methods that aim to reduce the dependence of neural network predictions on specific attributes. The most notable examples of the former group are methods for constructing balanced training sets [6; 19], synthesizing additional samples from the minority group [4; 26], importance weighting the under-represented samples [37], and domain adaptation techniques that adapt well-learnt representations from the majority group to the minority group [38; 13; 18]. 
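As a concrete companion to the breaking-point definition in Sec. 1, the sketch below implements the two-sample KS test with its standard asymptotic 0.05-level critical value and scans for the weakest bias strength at which a method is indistinguishable from the baseline. The dictionary-based interface and names are our own, not the paper's code.

```python
def ks_two_sample(xs, ys, crit_c=1.358):
    """Two-sample Kolmogorov-Smirnov test at significance level 0.05.

    Returns (D, distinguishable): D is the sup-distance between the two
    empirical CDFs; `distinguishable` is True when D exceeds the
    asymptotic critical value c(0.05) = 1.358 scaled by sample sizes.
    """
    n, m = len(xs), len(ys)
    d = max(abs(sum(x <= v for x in xs) / n - sum(y <= v for y in ys) / m)
            for v in set(xs) | set(ys))
    return d, d > crit_c * ((n + m) / (n * m)) ** 0.5

def breaking_point(method_acc, baseline_acc):
    """method_acc/baseline_acc map color variance (larger variance =
    weaker bias) to a list of per-run accuracies.  Returns the largest
    variance at which the method is statistically indistinguishable
    from the baseline, i.e. the weakest bias at which it breaks."""
    same = [v for v in method_acc
            if not ks_two_sample(method_acc[v], baseline_acc[v])[1]]
    return max(same) if same else None
```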
Figure 1: Accuracy of attribute bias removal methods under different levels of bias strength in Colored MNIST, showing results on the previously unexplored region of color variance \(<0.02\). The breaking point of each method, where its performance becomes statistically similar to the baseline classifier, is labeled with \(\blacktriangle\) on the x-axis. While all methods clearly outperform the baseline in the moderate bias region, their effectiveness sharply declines to baseline as bias strength increases. The plot shows average accuracy (lines) with one standard deviation error (shaded) over 15 randomized training runs.

In this work, our focus is on the second group of methods, which we will further divide into the two subgroups discussed below: methods that implicitly or explicitly minimize the mutual information between learnt latent features and the protected attribute.

**Explicit Mutual Information Minimization.** Several methods aim to directly minimize mutual information (MI) between a latent representation for the target classification and the protected attributes, in order to learn a representation that is predictive of the target but independent of the protected attributes, hence removing attribute bias. These methods mainly differ in the way they estimate MI. Most notable examples include LNL [20] which minimizes the classification loss together with an MI regularization loss estimated by an auxiliary distribution; BackMI [31] which minimizes classification loss and MI estimated by a neural estimator [5] through the statistics network; and, CSAD [41] which minimizes MI estimated by [16] between a latent representation to predict the target and another latent representation to predict the protected attributes.

**Implicit Mutual Information Minimization.** Another group of methods aims to remove attribute bias by constructing surrogate losses that implicitly reduce the mutual information between protected attributes and the target of classification. 
Most notably, LfF [29] proposes training two models simultaneously, where the first model prioritizes easy features for classification by amplifying the gradient of the cross-entropy loss with the predictive confidence (softmax score), and the second model down-weights the importance of samples that are confidently classified by the first model, therefore avoiding features that are learnt easily during training, which are likely to be spurious features leading to large MI with protected attributes; EnD [35] adds regularization terms to the typical cross-entropy loss that push apart the feature vectors of samples with the same protected attribute label to become orthogonal (thereby increasing their conditional entropy given the protected attribute); BlindEye [2] pushes the distribution obtained by the attribute classifier operating on latent features towards the uniform distribution by minimizing the entropy between them; and, domain independent training (DI) [39] learns a shared representation with an ensemble of separate classifiers per domain to ensure that the prediction from the unified model is not biased towards any domain.

**Trade-offs between Bias Removal and Model Utility.** The trade-offs between fairness and accuracy in machine learning models have garnered significant discussion. Most notably, Kleinberg _et al_. [21] prove that except in highly constrained cases, no method can simultaneously satisfy three fairness conditions: _calibration within groups_, which requires that the expected number of individuals predicted as positive be proportional to a group-specific fraction of individuals in each group; _balance for the negative class_, which requires that the average score of individuals in the negative class be equal across groups; and _balance for the positive class_, which requires the same for the positive class; and, Dutta _et al_. 
[11] theoretically demonstrate that, under certain conditions, it is possible to simultaneously achieve optimal accuracy and fairness in terms of _equal opportunity_ [14], which requires equal false negative rates (equivalently, equal true positive rates) across groups. Different from the above-mentioned fairness criteria, we focus on another well-known fairness criterion, _demographic parity_ [22; 12], which requires equal prediction probability across groups, _i.e_., independence between model prediction and protected attributes. Regarding this criterion, Zhao and Gordon [40] show that any method designed to learn fair representations, while ensuring model predictions are independent of protected attributes, faces an information-theoretic lower bound on the joint error across groups. In contrast, we derive a general information-theoretic upper bound on the best attainable performance, which is not limited to the case where model predictions are independent of protected attributes and considers different levels of the retained protected attribute information in the learnt features.

## 3 Bounding the Performance of Attribute Bias Removal Methods

The observations in Fig. 1 revealed that the existing methods are not effective when the attribute bias is too strong, _i.e_., they all have a breaking point, and that there is a continuous connection between their effectiveness and the strength of the attribute bias. However, so far, these observations are limited to the particular Colored MNIST dataset. In this section, we show that this situation is in fact much more general. By deriving an upper bound on the classification performance of any attribute bias removal method in terms of the bias strength, regardless of the dataset and domain, we will elucidate the cause and extent of the limitation we observed in Fig. 1. We first need to formalize the notions of performance, attribute bias strength, and attribute bias removal. 
Let \(X\) be a random variable representing the input (_e.g_., images) with support \(\mathcal{X}\), \(Y\) a random variable representing the prediction target (_e.g_., hair color) with support \(\mathcal{Y}\), and \(A\) a random variable representing the protected attribute (_e.g._, sex). We define the attribute bias removal method as a function \(f:\mathcal{X}\rightarrow\mathcal{Z}\) that maps input data to a latent bottleneck feature space \(\mathcal{Z}\) inducing the random variable \(Z\), and consider the prediction model as a function \(g:\mathcal{Z}\rightarrow\mathcal{Y}\) inducing the random variable \(\hat{Y}\). According to the information bottleneck theory [36, 32], the goal of classification can be stated as maximizing the mutual information between prediction and target, namely \(I(\hat{Y};Y)\), which is itself bounded by the mutual information between feature and target due to the data processing inequality [10], _i.e._, \(I(\hat{Y};Y)\leq I(Z;Y)\). Intuitively, \(I(Z;Y)\) measures how informative the features learnt by the model are of the target, with \(I(Z;Y)=0\) indicating completely uninformative learnt features: the best attainable prediction performance is no better than chance. Therefore, the optimization objective of attribute bias removal methods can be formalized as learning \(f\) parameterized by \(\theta\) that minimizes mutual information between feature and attribute \(I(Z_{\theta};A)\), while maximizing mutual information between feature and target \(I(Z_{\theta};Y)\). Given the above definitions, we can state our goal in this section concretely: to derive a connection between \(I(Z;Y)\) (the best attainable performance), \(H(Y|A)\) (the attribute bias strength measured by the conditional entropy of target given attribute), and \(I(Z;A)\) (the amount of remaining attribute bias in the learnt feature). Note that the stronger the attribute bias is, the better the attribute can predict the target, hence the lower \(H(Y|A)\). 
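For discrete variables, the three quantities involved can be computed exactly from a joint distribution. The following is a minimal sketch with our own helper names; in the paper \(Z\) is a continuous feature and mutual information is instead estimated with the neural estimator of [5]. Entropies are in nats, matching the values reported later (e.g. \(H(Y|A)=0.69\approx\ln 2\) for the balanced Adult training set).

```python
from math import log

def entropy(p):
    """Shannon entropy (nats) of a distribution given as {outcome: probability}."""
    return -sum(q * log(q) for q in p.values() if q > 0)

def marginal(joint, axis):
    """Marginalize a joint {(u, v): prob} onto one coordinate (0 or 1)."""
    m = {}
    for outcome, q in joint.items():
        m[outcome[axis]] = m.get(outcome[axis], 0.0) + q
    return m

def mutual_information(joint):
    """I(U;V) = H(U) + H(V) - H(U,V)."""
    return entropy(marginal(joint, 0)) + entropy(marginal(joint, 1)) - entropy(joint)

def conditional_entropy(joint):
    """H(U|V) = H(U,V) - H(V) for a joint {(u, v): prob}."""
    return entropy(joint) - entropy(marginal(joint, 1))
```

For example, a perfectly biased binary dataset (\(Y=A\), uniform) gives \(H(Y|A)=0\) and \(I(Y;A)=\ln 2\), while independent uniform \(Y,A\) give \(H(Y|A)=\ln 2\) and \(I(Y;A)=0\).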
So the extreme attribute bias happens when \(H(Y|A)=0\). In this particular extreme setting, the following proposition shows that no classifier can outperform random guess if the attribute is removed from the feature, _i.e._, \(I(Z;A)=0\).

Figure 2: Empirically verifying the bound in Theorem 1 on CelebA. The x-axis shows \(H(Y|A)\), which we vary directly by adjusting the fraction of bias-conflicting images while ensuring a constant number of biased images in the training set. We empirically compute \(H(Y|A)\) based on the distribution of \(Y\) and \(A\) in the modified training set, and estimate mutual information using [5]. The bound \(0\leq I(Z;Y)\leq I(Z;A)+H(Y|A)\) holds in all cases.

**Proposition 1**.: _Given random variables \(Z,Y,A\), in case of the extreme attribute bias \(H(Y|A)=0\), if the attribute is removed from the feature, \(I(Z;A)=0\), then \(I(Z;Y)=0\), i.e., no classifier can outperform random guess._ (Proof in Appendix.)

This proposition extends and explains the observation at the leftmost location of the x-axis in Fig. 1: when the color variance is zero, color is completely predictive of the digit, \(H(Y|A)=0\), and removing color from the latent feature, \(I(Z;A)=0\), makes the prediction uninformative, \(I(Z;Y)=0\). However, Proposition 1 does not explain the rest of the curve beyond just zero color variance. The following theorem closes this gap by deriving a bound on the performance of attribute bias removal methods in terms of the attribute bias strength, thus providing a more complete picture of the limitation of attribute bias removal, and elucidating the connection of performance and bias strength.

**Theorem 1**.: _Given random variables \(Z,Y,A\), the following inequality holds without exception:_ (Proof in Appendix.)

\[0\leq I(Z;Y)\leq I(Z;A)+H(Y|A) \tag{1}\]

**Remark 1**.: _In the extreme bias case \(H(Y|A)=0\), the bound in Eq. 
(1) shows that the model performance is bounded by the amount of protected attribute information that is retained in the feature, namely \(I(Z;Y)\leq I(Z;A)\). This puts the model in a trade-off: the more the attribute bias is removed, the lower the best attainable performance._ **Remark 2**.: _When the protected attribute is successfully removed from the feature, \(I(Z;A)=0\), the bound in Eq. (1) shows that the model's performance is bounded by the strength of the attribute bias, namely \(I(Z;Y)\leq H(Y|A)\). This explains the gradual decline observed in Fig. 1 as we moved from the moderate to the strong bias region (from right to left towards zero color variance)._ **Remark 3**.: _When \(H(Y|A)=0\) and \(I(Z;A)=0\), Eq. (1) reduces to the result of Proposition 1, \(I(Z;Y)=0\), hence no classifier can outperform random guess._ **Remark 4**.: _We emphasize that the provided bound is placed on the best attainable performance. So while decreasing the bound will decrease the best attainable performance, increasing the bound will not necessarily result in increased performance. For example, consider the baseline classifier: even though there is no attribute bias removal performed and therefore the bound can be arbitrarily large, \(I(Z;A)\gg 0\), the model still declines in the strong bias region since the non-convex optimization is likely to latch onto the highly predictive protected attribute._ To empirically test our theory in a real-world dataset, we compute the terms in Theorem 1 for several attribute bias removal methods in CelebA and plot the results in Fig. 2. In these experiments, blond hair is the target \(Y\), and sex is the protected attribute \(A\). We vary the bias strength \(H(Y|A)\) by increasing/decreasing the fraction of bias-conflicting images in the training set (images of females with non-blond hair and males with blond hair) while maintaining the number of biased images in the training set at 89754. 
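Independently of neural estimation, the inequality in Eq. (1) can be checked in closed form on finite distributions, since all three terms are exactly computable there. The sketch below (support sizes and sampling scheme are our arbitrary choices) verifies the bound on many random joint distributions of \((Z,Y,A)\); by the data processing inequality it can never fail.

```python
import random
from math import log

def H(p):
    """Shannon entropy (nats) of {outcome: probability}."""
    return -sum(q * log(q) for q in p.values() if q > 0)

def project(joint, idxs):
    """Marginal of {(z, y, a): prob} onto the given coordinate indices."""
    m = {}
    for k, q in joint.items():
        key = tuple(k[i] for i in idxs)
        m[key] = m.get(key, 0.0) + q
    return m

def I(joint, i, j):
    """Mutual information (nats) between coordinates i and j of the joint."""
    return H(project(joint, (i,))) + H(project(joint, (j,))) - H(project(joint, (i, j)))

rng = random.Random(0)
for _ in range(500):
    w = [rng.random() for _ in range(27)]  # random joint over a 3x3x3 support
    s = sum(w)
    joint = {(z, y, a): w[9 * z + 3 * y + a] / s
             for z in range(3) for y in range(3) for a in range(3)}
    hya = H(project(joint, (1, 2))) - H(project(joint, (2,)))   # H(Y|A)
    # Eq. (1): 0 <= I(Z;Y) <= I(Z;A) + H(Y|A), up to floating-point rounding
    assert -1e-9 <= I(joint, 0, 1) <= I(joint, 0, 2) + hya + 1e-9
```

At the extreme bias point (e.g. \(Z=Y=A\) uniform binary) the bound is tight: \(I(Z;Y)=I(Z;A)=\ln 2\) and \(H(Y|A)=0\), as in Remark 1.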
Then, we compute \(H(Y|A)\) directly and estimate the mutual information terms \(I(Z;A)\) and \(I(Z;Y)\) using the mutual information neural estimator [5]. We observe that the bound holds in accordance with Theorem 1 in all cases. Now that we have mathematically and empirically shown the existence of the bound, in the next section, we will investigate the extent of its consequences for attribute bias removal methods in image and census datasets, in addition to the consequence we have already observed in the synthetic Colored MNIST dataset in Fig. 1.

## 4 Experiments

In this section, we empirically study the performance of the existing attribute bias removal methods in the strong bias setting. We conduct experiments with an extensive list of existing state-of-the-art attribute bias removal methods [20; 39; 29; 35; 41; 17] on Colored MNIST as well as two real-world datasets: CelebA [28] and Adult [3]. For all results, we report average performance with one standard deviation over multiple trials (15 trials in Colored MNIST, 5 in CelebA, 25 in Adult). **Colored MNIST Dataset** is an image dataset of handwritten digits, where each digit is assigned a unique RGB color with a certain variance, studied in [20; 31; 35; 41]. The training set consists of 50000 images and the testing set of 10000 images with uniformly random color assignment. The color is considered the protected attribute \(A\) and the digit is the target \(Y\). The variance of color in the training set determines the strength of the bias \(H(Y|A)\). The results on this dataset are reported in Fig. 1 and explained in Sec. 1. **CelebA Dataset** [28] is an image dataset of human faces studied in [20, 39, 29, 35, 41, 17]. Facial attributes are considered the prediction target \(Y\) (_e.g._, blond hair), and sex is the protected attribute \(A\). 
For each target, there is a notion of _biased samples_ - images in which \(Y\) is positively correlated with \(A\), _e.g._, images of females with blond hair and males without blond hair - and a notion of _bias-conflicting_ samples - images in which \(Y\) is negatively correlated with \(A\), _e.g._, images of females without blond hair and males with blond hair. The fraction of bias-conflicting images in the training set determines the strength of the bias \(H(Y|A)\). For training, we consider the original training set of CelebA, denoted _TrainOri_, consisting of 162770 images with \(H(Y|A)=0.36\), and an extreme bias version in which the bias-conflicting samples are removed from the original training set, denoted _TrainEx_, consisting of 89754 images with \(H(Y|A)=0\). Additionally, we construct 16 training sets between TrainOri and TrainEx by maintaining the number of biased samples and varying the fraction of bias-conflicting samples. For testing, we consider two versions of the original testing set: (1) _Unbiased_ consists of 720 images in which all pairs of target and protected attribute labels have the same number of samples, and (2) _Bias-conflicting_ consists of 360 images in which biased samples are excluded from the _Unbiased_ dataset (only bias-conflicting samples remain). **Adult Dataset** [3] is a census dataset of income which is a well-known fairness benchmark. Income is considered the target \(Y\) and sex is the protected attribute \(A\). To construct training and testing sets, we follow the setup of CelebA explained above, but we further mitigate the effect of data imbalance and the variation in the total number of training samples. 
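Under a binary target/attribute setup like the above, the bias strength induced by a chosen mix of biased and bias-conflicting samples has a closed form. The sketch below additionally assumes each kind of sample is split evenly between the two attribute groups (a simplification; the paper computes \(H(Y|A)\) from the actual label counts):

```python
from math import log

def binary_entropy(p):
    """Binary entropy in nats."""
    return 0.0 if p in (0.0, 1.0) else -p * log(p) - (1 - p) * log(1 - p)

def bias_strength(n_biased, n_conflicting):
    """H(Y|A) in nats when biased / bias-conflicting samples are split
    evenly across the two attribute groups: within each group the target
    is bias-conflicting with probability rho, so H(Y|A) = h(rho)."""
    rho = n_conflicting / (n_biased + n_conflicting)
    return binary_entropy(rho)
```

Removing all bias-conflicting samples gives \(H(Y|A)=0\) (the TrainEx extreme), while a balanced mix gives \(\ln 2\approx 0.69\) nats, consistent with the value reported below for the balanced Adult TrainOri set.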
For training, we consider the balanced version of the original training set of Adult, denoted _TrainOri_, consisting of 7076 records with \(H(Y|A)=0.69\), and an extreme bias version in which the bias-conflicting samples are removed from TrainOri and the same number of biased samples are appended, denoted _TrainEx_, with \(H(Y|A)=0\) and the same total number (7076) of records as TrainOri. Additionally, we construct 11 training sets in between TrainOri and TrainEx by varying the fraction of biased samples in TrainEx while maintaining the total size of the training set. For testing, we consider two versions of the original testing set: (1) _Unbiased_ consists of 7076 records in which all pairs of target and protected attribute labels have the same number of samples, and (2) _Bias-conflicting_ consists of 3538 records in which biased samples are excluded from the _Unbiased_ dataset (only bias-conflicting samples remain).

**Training Details.** For digit classification on Colored MNIST, we use Lenet-5 [24] as the baseline classifier. For facial attribute classification on CelebA, following CSAD [41], we use ResNet-18 [15] as the baseline classifier. For income prediction on the Adult dataset, we use a three-layer MLP as the baseline classifier. For all experiments, we set the learning rate to 0.001, the batch size to 32, and use the Adam optimizer with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). All hyperparameters are set according to the respective papers.

### Analysis of the Extreme Bias Point \(H(Y|A)=0\)

In this section, we investigate the consequences of applying existing attribute bias removal methods at the extreme bias point \(H(Y|A)=0\). 
We study two aspects of each method: the classification performance (measured by accuracy on Unbiased and Bias-conflicting settings) and its ability to remove bias (measured by estimating \(I(Z;A)\) using [5] on the training set).

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Test Accuracy} & \multicolumn{2}{c}{Mutual Information} \\ \cline{2-5} & Unbiased \(\uparrow\) & Bias-conflicting \(\uparrow\) & \(I(Z;A)\downarrow\) & \(\Delta\) (\%) \(\uparrow\) \\ \hline Random guess & 50.00 & 50.00 & 0.57 & 0.00 \\ Baseline & 66.11\(\pm\)0.32 & 33.89\(\pm\)0.45 & 0.57\(\pm\)0.01 & 0.00 \\ \hline LNL [20] & 64.81\(\pm\)0.17 & 29.72\(\pm\)0.26 & 0.56\(\pm\)0.06 & 1.75 \\ DI [39] & 66.83\(\pm\)0.44 & 33.94\(\pm\)0.65 & 0.55\(\pm\)0.02 & 3.51 \\ LfF [29] & 64.43\(\pm\)0.43 & 30.45\(\pm\)1.63 & 0.57\(\pm\)0.03 & 0.00 \\ EnD [35] & 66.53\(\pm\)0.23 & 31.34\(\pm\)0.89 & 0.57\(\pm\)0.05 & 0.00 \\ CSAD [41] & 63.24\(\pm\)2.36 & 29.13\(\pm\)1.26 & 0.55\(\pm\)0.04 & 3.51 \\ BCL [17] & 65.30\(\pm\)0.51 & 33.44\(\pm\)1.31 & 0.56\(\pm\)0.07 & 1.75 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of attribute bias removal methods trained under extreme bias in CelebA (_TrainEx_ training set) to predict _blond hair_. \(\Delta\) indicates the difference from baseline. None of the methods can effectively remove the bias \(I(Z;A)\) compared to baseline.

Ideally, a method must achieve on-par or better accuracy than the baseline while learning a representation \(Z\) that does not reflect the attribute bias present in the training set (\(I(Z;A)=0\)), hence successfully removing the bias. However, in Tab. 1, we observe that none of the existing methods applied to CelebA can significantly reduce the bias \(I(Z;A)\) in the extreme bias setting. Similarly, in Tab. 2, we observe that none of the methods applied to the Adult dataset can reduce \(I(Z;A)\). 
These observations are explained by Proposition 1, which states that maintaining classification performance above random guess while achieving \(I(Z;A)=0\) at \(H(Y|A)=0\) is impossible.

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Test Accuracy} & \multicolumn{2}{c}{Mutual Information} \\ \cline{2-5} & Unbiased \(\uparrow\) & Bias-conflicting \(\uparrow\) & \(I(Z;A)\downarrow\) & \(\Delta\) (\%) \(\uparrow\) \\ \hline Random guess & 50.00 & 50.00 & 0.69 & 0.00 \\ Baseline & 50.59\(\pm\)0.54 & 1.19\(\pm\)0.83 & 0.69\(\pm\)0.00 & 0.00 \\ \hline LNL [20] & 50.10\(\pm\)0.18 & 0.43\(\pm\)0.46 & 0.69\(\pm\)0.01 & 0.00 \\ DI [39] & 50.61\(\pm\)0.28 & 0.65\(\pm\)0.64 & 0.69\(\pm\)0.01 & 0.00 \\ LfF [29] & 50.33\(\pm\)0.34 & 0.78\(\pm\)0.65 & 0.69\(\pm\)0.01 & 0.00 \\ EnD [35] & 50.59\(\pm\)0.75 & 1.18\(\pm\)0.96 & 0.69\(\pm\)0.00 & 0.00 \\ CSAD [41] & 50.76\(\pm\)2.22 & 1.43\(\pm\)2.46 & 0.69\(\pm\)0.01 & 0.00 \\ BCL [17] & 50.83\(\pm\)1.34 & 0.52\(\pm\)0.83 & 0.69\(\pm\)0.00 & 0.00 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of attribute bias removal methods trained under extreme bias in Adult (_TrainEx_ training set) to predict income. \(\Delta\) indicates the difference from baseline. None of the methods can effectively remove the bias \(I(Z;A)\) compared to baseline.

Figure 3: Accuracy and mutual information under different bias strengths in CelebA. As bias strength increases (moving from right to left), the performance of all methods degrades and sharply declines to baseline at the breaking point (labeled by \(\blacktriangle\)).

In Tab. 1, we also observe a trade-off between \(I(Z;A)\) and accuracy, where reducing the bias (when \(\Delta>0\)) results in a reduction in accuracy. 
This is explained by Theorem 1, which states that when \(H(Y|A)=0\), the amount of remaining bias in the learnt feature \(Z\) is an upper bound on the best performance, _i.e._, \(I(Z;Y)\leq I(Z;A)\), and therefore removing more bias can result in lower performance. The only exception to this trade-off seems to be DI [39]. We conjecture that this is due to its enhanced ability in achieving the best attainable classification performance (see Remark 4).

### Analysis of the Strong Bias Region \(H(Y|A)>0\)

In this section, we go beyond the extreme bias point, and more generally investigate the consequences of applying existing bias removal methods on the entire range of bias strength, _i.e._, connecting the extreme bias training setting (TrainEx) we studied in Sec. 4.1 to the moderate bias in the original training setting (TrainOri) commonly studied in existing methods. We again study two aspects of each method, its classification performance (measured by accuracy on Unbiased and Bias-conflicting settings) and its ability to remove bias (measured by estimating \(I(Z;A)\) using [5] on the training set). In Figs. 3 and 4, we observe a decline in the performance of all methods as the bias becomes stronger, in both CelebA and Adult datasets, similar to our observation in Colored MNIST in Fig. 1. This observation is consistent with Theorem 1, which states that the bias strength determines an upper bound on the best performance of bias removal methods, regardless of the dataset and method. In Figs. 2(c) and 3(c), we use breaking points to approximately divide the strong bias region into three phases and explain the observed changes in the performance of methods from the perspective of Theorem 1. 
Figure 4: Accuracy and mutual information under different bias strengths in Adult. As bias strength increases (moving from right to left), the performance of all methods degrades and sharply declines to baseline at the breaking point (labeled by \(\blacktriangle\)).

In phase 1, as \(H(Y|A)\) increases from zero to the breaking point (bias strength decreases), we observe that the attribute bias \(I(Z;A)\) is not minimized because of the trade-off between the best attainable performance \(I(Z;Y)\) and attribute bias removal when bias is very strong: the methods choose to increase accuracy towards the best attainable accuracy \(I(Z;Y)\) rather than removing attribute bias (this choice is most likely due to the larger weight on the accuracy term in their objectives). Then, in phase 2, as \(H(Y|A)\) increases through the breaking point (bias strength decreases further), the methods start to minimize attribute bias \(I(Z;A)\) because the upper bound on the best attainable performance \(I(Z;Y)\) is now large enough to avoid the trade-off between accuracy and attribute bias removal. Finally, in phase 3, as \(H(Y|A)\) further departs from the breaking point, accuracy gradually approaches its best attainable performance, while attribute bias \(I(Z;A)\) is minimized further below that of the baseline because the weaker bias strength now allows the model to distinguish \(Y\) from \(A\), so that minimizing attribute bias and maximizing accuracy do not compete.

## 5 Conclusion and Future Work

In this work, we mathematically and empirically showed the sensitivity of state-of-the-art attribute bias removal methods to the bias strength. This highlights a previously overlooked limitation of these methods. In particular, we empirically demonstrated that when a protected attribute is strongly predictive of a target, these methods become ineffective. 
To understand the cause and extent of these findings, we derived an information-theoretic upper bound on the performance of any attribute bias removal method, and verified it in experiments on synthetic, image, and census datasets. These findings not only caution against the use of existing attribute bias removal methods in datasets with potentially strong bias (_e.g._, small datasets), but also motivate the design of future methods that can work even in strong bias situations, for example by utilizing external unlabelled datasets to relax the upper bound. Additionally, investigating the role of bias strength in removing attribute bias from generative models is another interesting direction for future research.

## Acknowledgement

This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via [2022-21102100007]. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
2309.00488
Controlled Martingale Problems And Their Markov Mimics
In this article we prove under suitable assumptions that the marginals of any solution to a relaxed controlled martingale problem on a Polish space $E$ can be mimicked by a Markovian solution of a Markov-relaxed controlled martingale problem. We also show how such `Markov mimics' can be obtained by relative entropy minimisation. We provide many examples where the above results can be applied.
Siva Athreya, Vivek S. Borkar, Nitya Gadhiwala
2023-09-01T14:27:47Z
http://arxiv.org/abs/2309.00488v1
# Controlled martingale problems and their Markov Mimics

###### Abstract.

In this article we prove under suitable assumptions that the marginals of any solution to a relaxed controlled martingale problem on a Polish space \(E\) can be mimicked by a Markovian solution of a Markov-relaxed controlled martingale problem. We also show how such 'Markov mimics' can be obtained by relative entropy minimisation. We provide many examples where the above results can be applied.

The work of SA was supported in part by a Knowledge Exchange grant at ICTS-TIFR. The work of VB was supported in part by a S. S. Bhatnagar Fellowship from the Council of Scientific and Industrial Research, Government of India.

and, in a control theoretic framework, also by Borkar [9]. They both assume a uniform non-degeneracy condition for the diffusion matrix \(\delta_{t}\delta_{t}^{T}\) but the flavour of the results differs. In [22], the existence of a solution to a stochastic differential equation with state-dependent coefficients that mimics the laws of the Ito differential equation is established. The solution, however, need not be Markov unless the stochastic differential equation is well-posed. In [9], when the diffusion matrix is assumed to be a Lipschitz function of state alone, a stronger result, viz., that the mimic exists and is a Markov process, is shown. We shall refer to a process that replicates the one-dimensional marginals of a given controlled martingale problem as a _Markov control mimic_ if its controlled extended generator depends on the current state and time alone, without requiring that the process be Markov. We say that it is a _Markov mimic_ if, in addition, the process is Markov. Thus [22] produces a Markov control mimic whereas [9] produces a Markov mimic, under their respective sets of assumptions. Such results elicited renewed interest following their application in finance, notably due to the work of Dupire [17], [18]. 
An excellent account of this, along with some important extensions, can be found in [12] (see also [20]). Independently, motivated by stochastic control, there was work by Mikami [31], [32] along similar lines. In this paper we address the question of Markov mimics in the very general framework of relaxed controlled martingale problems (see, e.g., Chapter 5 of [2] for background and applications). Our aim is to unify and at the same time extend the existing results. We also point out connections with other results in Markov process theory, controlled or otherwise, by way of remarks. In Theorem 2.4, under broad assumptions we show the existence of Markov mimics for relaxed controlled martingale problems and point out its implications in stochastic control. We also show that discounted occupational measures can be mimicked by time homogeneous Markov processes (see Theorem 2.6). Our assumptions guarantee existence of Markov controls and a Markov solution (see Remark 2.3). It is trivial to note that if there is no Markov solution to the martingale problem then the problem of finding a Markov mimic is vacuous. Theorem 2.4 shows existence of Markov mimics and also has implications in stochastic controls where costs (that are to be optimised) depend on one-dimensional marginals (see Remark 2.5). Examples where our broad assumptions hold are discussed in Section 2.1. More recently, a renewed interest in this topic was generated by optimal transport, wherein minimization of entropy (to be precise, _relative entropy_, i.e. Kullback-Leibler divergence) as a route to Markov mimics was explored in [29], [5], [3]. Stochastic control problems are closely related to Schrodinger bridges and the Monge-Kantorovich optimal transport problems. See [13] for a survey for understanding connections between one-time marginal flows in control problems with McCann displacement in optimal transport. 
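For orientation, the entropy-minimisation problems referenced here share a common generic form: minimise, over path measures \(\mathbb{P}\) satisfying the prescribed marginal constraints, the relative entropy with respect to the base measure \(\mathbb{P}_{0}\). (The notation \(H(\cdot\,\|\,\cdot)\) for relative entropy and \(\mathcal{C}\) for the constraint set is ours.)

```latex
\[
\inf_{\mathbb{P}\in\mathcal{C}} H(\mathbb{P}\,\|\,\mathbb{P}_{0}),
\qquad
H(\mathbb{P}\,\|\,\mathbb{P}_{0})=
\begin{cases}
\displaystyle\int \log\frac{d\mathbb{P}}{d\mathbb{P}_{0}}\,d\mathbb{P}, & \mathbb{P}\ll\mathbb{P}_{0},\\
+\infty, & \text{otherwise}.
\end{cases}
\]
```

In the classical Schrodinger problem, \(\mathcal{C}\) prescribes the two endpoint marginals; the Brodinger problem of [5] allows more general marginal constraints.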
See also [29] for a survey of the Schrodinger problem and its connections to optimal transport. In [5], the authors consider a generalisation of the Schrodinger problem, namely the so-called Brodinger problem. The objective is to minimise relative entropy, with respect to a base measure \(\mathbb{P}_{0}\), over a set of measures \(\mathbb{P}\) with certain prescribed constraints on the marginals. Under a Markovian assumption on the base measure \(\mathbb{P}_{0}\), it is shown that if the optimisation problem has a unique solution then it is also Markov [5, Theorem 4.1]. In Section 3, we focus on lowering relative entropy. In Proposition 3.5 we show that if \(\mathbb{P}\) is non-Markov then the relative entropy can be lowered by a suitable markovianisation procedure of \(\mathbb{P}\) (see Definition 3.3) that preserves marginals. The proof is adapted from [10] and a similar technique is used in [5] as well. We also give a sufficient condition on the constraint set for existence of such Markov mimics that minimise relative entropy, in Theorem 3.6 and Corollary 3.7. See Remark 3.8 for a discussion on the uniform integrability assumption imposed in the hypothesis of the two results and also how Corollary 3.7 may be used for an alternative method of Markov selection. Some related literature is as follows. In [14], the authors consider a formulation of minimisation of relative entropy for diffusions with killing. The unbalanced optimal transport problem is handled via suitable augmentation, see [14, Problem 7], where our results are also applicable. See also [33] for results on stochastic control with fixed marginals and connections to Schrodinger bridges and optimal transport. The rest of the article is organised as follows. In the next section we introduce the controlled martingale problem, required assumptions and prove our main results. 
By adapting the argument of [9], we show existence of Markov mimics (Theorem 2.4) and that discounted occupational measures can be mimicked by time homogeneous Markov processes (see Theorem 2.6). In Section 2.1, we give representative examples that illustrate the applicability of our main result. Section 3 develops the alternative approach of entropy minimisation in significant generality, see Proposition 3.5 and Theorem 3.6. We conclude the paper with a couple of examples that illustrate the applications of our results on entropy minimisation to questions in optimal transport. ## 2. Markov mimics Let \(\mathbb{U}\) be a Polish space. For a generic Polish space \(E\), \(\mathcal{P}(E)\) will denote the space of probability measures on \(E\) endowed with the Prokhorov topology and \(\mathcal{B}_{b}(E)\) will denote the space of bounded measurable functions \(E\to\mathbb{R}\). Let \(D([0,\infty);E)\) be the Polish space of r.c.l.l. paths from \([0,\infty)\) to \(E\) with the Skorokhod topology and let \(\mathscr{U}\) be the space of all measurable maps from \([0,\infty)\to\mathcal{P}(\mathbb{U})\). _Topology on \(\mathscr{U}\):_ Let \(\{f_{i}\}\) be a countable dense set in the unit ball of \(C(\overline{\mathbb{U}})\), where \(\overline{\mathbb{U}}\) is the standard compactification of \(\mathbb{U}\), i.e., the closure of its usual homeomorphic embedding into \([0,1]^{\infty}\). Then \(\{f_{i}\}\) is a convergence determining class for \(\mathcal{P}(\mathbb{U})\). For \(U\in\mathscr{U}\), let \[\alpha_{i}(t):=\int_{\mathbb{U}}f_{i}(u)U_{t}(du),\qquad i=1,2,\ldots\] Then \(\alpha_{i}\) has measurable paths and \(|\alpha_{i}(t)|\leq 1\) for all \(t\geq 0\). For \(T>0\), let \(\mathcal{B}_{T}\) denote the space of measurable maps \([0,T]\to[-1,1]\) with the weak\({}^{\star}\)-topology of \(L^{2}[0,T]\) relativized to it. 
Let \(\mathcal{B}\) denote the space of measurable maps \([0,\infty)\to[-1,1]\) with the corresponding inductive topology, i.e., the coarsest topology that renders continuous the map \(\mathcal{B}\to\mathcal{B}_{T}\) that maps \(x\in\mathcal{B}\) to its restriction to \([0,T]\), for every \(T>0\). Let \(\mathcal{B}^{\infty}\) be the countable product of \(\mathcal{B}\) with the product topology. Next, note that the map \(\phi:\mathcal{P}(\mathbb{U})\to[-1,1]^{\infty}\) defined by \[\mu\in\mathcal{P}(\mathbb{U})\mapsto\left(\int f_{1}d\mu,\int f_{2}d\mu,\int f_{3}d\mu,\ldots\right)\in[-1,1]^{\infty}\] is continuous, one-to-one with a compact domain, and hence is a homeomorphism onto its range. By a slight abuse of notation, we also denote by \(\phi:\mathscr{U}\to\mathcal{B}^{\infty}\) the induced map \(U\mapsto(\alpha_{1},\alpha_{2},\ldots)\in\mathcal{B}^{\infty}\), where \(\alpha_{i}(t):=\int f_{i}(u)U_{t}(du)\). We relativize the topology of \(\mathcal{B}^{\infty}\) to \(\phi(\mathscr{U})\) and topologize \(\mathscr{U}\) with the coarsest topology that renders \(\phi:\mathscr{U}\to\phi(\mathscr{U})\) a homeomorphism. From [2, Theorem 2.3.2] it is immediate that \(\mathscr{U}\) is compact and metrizable, hence Polish. Furthermore, [2, Theorem 2.3.3] also implies that if \(U^{n}\to U\) in \(\mathscr{U}\) as \(n\to\infty\) and \(f\in C([0,T]\times\mathbb{U})\) for some \(T>0\), then \[\int_{0}^{T}\int_{\mathbb{U}}f(t,u)U^{n}_{t}(du)dt\longrightarrow\int_{0}^{T}\int_{\mathbb{U}}f(t,u)U_{t}(du)dt\] as \(n\to\infty\). **Definition 2.1**.: _Let \(\mathcal{A}\) be a linear operator with domain \(\mathcal{D}(\mathcal{A})\subset\mathcal{B}_{b}(E)\) and range \(\mathcal{R}(\mathcal{A})\subset\mathcal{B}_{b}(E)\). 
Let \(\nu\in\mathcal{P}(E).\) An \(E\)-valued process \(\{X_{t}:t\geq 0\}\) on a probability space \((\Omega,\mathcal{F},\mathbb{P})\) is said to be a solution to the martingale problem for \((\mathcal{A},\nu)\) with respect to a filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\) if_ * \(X\) _is progressively measurable with respect to_ \(\{\mathcal{F}_{t}\}_{t\geq 0}\)_,_ * _the law of_ \(X_{0}\) _is_ \(\nu\)_, and,_ * _for all_ \(f\in\mathcal{D}(\mathcal{A})\)__ (1) \[f(X_{t})-f(X_{0})-\int_{0}^{t}\mathcal{A}f(X_{s})ds\] _is an_ \(\{\mathcal{F}_{t}\}_{t\geq 0}\) _martingale under_ \(\mathbb{P}\)_._ **Definition 2.2**.: _Let \(\mathcal{A}\) be a linear operator with domain \(\mathcal{D}(\mathcal{A})\subset\mathcal{B}_{b}(E)\) and range \(\mathcal{R}(\mathcal{A})\subset\mathcal{B}_{b}(E\times\mathbb{U})\). Let \(\nu\in\mathcal{P}(E).\) An \(E\times\mathbb{U}\)-valued process \(\{(X_{t},U_{t}):t\geq 0\}\) on a probability space \((\Omega,\mathcal{F},\mathbb{P})\) is said to be a solution to the controlled martingale problem for \((\mathcal{A},\nu)\) with respect to a filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\) if:_ * \((X,U)\) _is progressively measurable with respect to_ \(\{\mathcal{F}_{t}\}_{t\geq 0}\)_,_ * _the law of_ \(X_{0}\) _is_ \(\nu\)_, and,_ * _for all_ \(f\in\mathcal{D}(\mathcal{A})\)__ (2) \[f(X_{t})-f(X_{0})-\int_{0}^{t}\mathcal{A}f(X_{s},U_{s})ds\] _is an_ \(\{\mathcal{F}_{t}\}_{t\geq 0}\) _martingale under_ \(\mathbb{P}\)_._ _Correspondingly, an \(E\times\mathcal{P}(\mathbb{U})\)-valued process \((X,U)\) defined on a probability space \((\Omega,\mathcal{F},\mathbb{P})\) is said to be a solution to the relaxed controlled martingale problem for \((\mathcal{A},\nu)\) with respect to a filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\) if_ * \((X,U)\) _is progressively measurable with respect to_ \(\{\mathcal{F}_{t}\}_{t\geq 0}\)_,_ * _the law of_ \(X_{0}\) _is_ \(\nu\)_, and,_ * _for all_ \(f\in\mathcal{D}(\mathcal{A})\)__ (3) \[f(X_{t})-f(X_{0})-\int_{0}^{t}\int_{\mathbb{U}}\mathcal{A}f(X_{s},u)U_{s}(du)ds\] _is an_ \(\{\mathcal{F}_{t}\}_{t\geq 0}\) _martingale under_ \(\mathbb{P}\)_._ We define the operator \(\bar{\mathcal{A}}:\mathcal{D}(\mathcal{A})\to\mathcal{B}_{b}(E\times\mathcal{P}(\mathbb{U}))\) by \[\bar{\mathcal{A}}f(x,\nu)=\int_{\mathbb{U}}\mathcal{A}f(x,u)\nu(du),\] where \(f\in\mathcal{D}(\mathcal{A}),x\in E\) and \(\nu\in\mathcal{P}(\mathbb{U}).\) Consequently, the expression in (3) can be written as \[f(X_{t})-f(X_{0})-\int_{0}^{t}\bar{\mathcal{A}}f(X_{s},U_{s})ds.\] For \(u\in\mathbb{U}\) and \(\nu\in\mathcal{P}(\mathbb{U})\), we shall define \(\mathcal{A}^{u}:\mathcal{D}(\mathcal{A})\to\mathcal{B}_{b}(E)\) and \(\bar{\mathcal{A}}^{\nu}:\mathcal{D}(\mathcal{A})\to 
\mathcal{B}_{b}(E)\) respectively, by \[\mathcal{A}^{u}f(x)=\mathcal{A}f(x,u)\text{ and }\bar{\mathcal{A}}^{\nu}f(x)=\bar{\mathcal{A}}f(x,\nu),\] where \(f\in\mathcal{D}(\mathcal{A}),x\in E,u\in\mathbb{U}\) and \(\nu\in\mathcal{P}(\mathbb{U})\). Finally, we say that a (relaxed) controlled martingale problem has a _Markov solution_ if it has a solution \((X_{t},U_{t})\) for each initial condition \(x\in E\) such that the collection \(x\mapsto\mathbb{P}_{x}:=\) the law of this solution for \(x\in E\), satisfies the Chapman-Kolmogorov equation. We make the following assumptions. **Assumptions:** * (A1) There exists a countable set \(\{g_{k}\}\subset\mathcal{D}(\mathcal{A})\) such that \[\{(g,\mathcal{A}g):g\in\mathcal{D}(\mathcal{A})\}\subset\text{bp-closure}\{(g_{k},\mathcal{A}g_{k}):k\geq 1\},\] where bp-closure is bounded pointwise closure (see [19, Section 3.4, page 111] for a definition). * (A2) \(\mathcal{D}(\mathcal{A})\) is an algebra that separates points in \(E\) and contains constant functions. Also, \(\mathcal{A}\mathbf{1}=0\) where \(\mathbf{1}\) is the constant function identically equal to \(1\). * (A3) Given a \(u\in\mathbb{U}\) and \(x\in E\), (i) there exists a r.c.l.l. solution to the martingale problem for \((\mathcal{A}^{u},\delta_{x})\), with \(\delta_{x}\) being the Dirac measure at \(x\in E\), (ii) \(\mathcal{A}^{u}\) is dissipative (see [19, Section 1.2, page 11] for a definition), (iii) \(\mathcal{D}(\mathcal{A}^{u})\) is dense in \(\mathcal{B}_{b}(E)\), and (iv) \(\mathcal{R}(I-\mathcal{A}^{u})=\mathcal{B}_{b}(E)\). * (A4) By a standard measurable selection theorem (see [6, Lemma 1]), given a solution \((X_{\cdot},U_{\cdot})\) to the relaxed controlled martingale problem for \((\mathcal{A},\nu)\), there exists a measurable map \(\bar{v}:E\times[0,T]\to\mathcal{P}(\mathbb{U})\) such that (4) \[\mathbb{E}[\bar{\mathcal{A}}f(X_{s},U_{s})|X_{s}]=\bar{\mathcal{A}}^{\bar{v}(X_{s},s)}f(X_{s})\] a.s. 
for \(s\in[0,T].\) Assume that the martingale problem for \((\bar{\mathcal{A}}^{\bar{v}(X_{s},s)},\nu)\) with \(\{\mathcal{F}_{t}\}\) replaced by \(\{\mathcal{F}_{t}^{X}\}:=\) the natural filtration of \(X\), has a solution for all \(\nu\). * (A5) Suppose we are given a solution \((X_{\cdot},U_{\cdot})\) to the relaxed controlled martingale problem for \((\mathcal{A},\nu)\) and \(\bar{v}(\cdot,\cdot)\) is defined as in (A4). Then the relaxed controlled martingale problem for \((\bar{\mathcal{A}}^{\bar{v}(X_{s},s)},\delta_{x})\), \(x\in E\), has a Markov solution. Before we proceed, we make a few observations concerning the above assumptions. **Remark 2.3**.: * _Markov and Stationary Markov Controls:_ A control of the form \(U_{t}=v(X_{t},t)\), with \(t\geq 0\) and measurable \(v:E\times\mathbb{R}_{+}\to\mathbb{U}\), is said to be a Markov control. It is said to be a stationary Markov control if it is of the form \(U_{t}=v(X_{t})\), for \(t\geq 0\) and a measurable \(v:E\to\mathbb{U}\). Analogous definitions apply for relaxed controls. If \(X\) is Markov, resp. time-homogeneous Markov, the control may be taken to be a Markov, resp. stationary Markov control. This can be proved along the lines of [2, Theorem 2.2.23, p. 46]. The converse, however, is not true, as, e.g., in the case of uncontrolled degenerate diffusions with bounded continuous coefficients. In fact, the entire set of solutions can be characterized in this case as in [34, Section 12.3]. * _Existence of a Markov solution:_ If the martingale problem for \((\bar{\mathcal{A}}^{\bar{v}(X_{s},s)},\nu)\) is well-posed for all \(\nu\), then the additional requirement of the existence of a Markov solution is automatically satisfied. In general, if the set of solution measures for each initial condition \(x\) is nonempty and compact in \(\mathcal{P}(D([0,\infty);E))\), then a procedure due to Krylov [24] (see also [34, Section 12.2]) yields a Markov selection for diffusions. 
For an alternative selection procedure for finite dimensional uncontrolled diffusions, see [8, 1]. See [19, Theorem 5.9] for sufficient criteria when Krylov's Markov selection can be done on solutions of martingale problems. In our case this is true, e.g., if the relaxed control \(U.\) is of the form (5) \[U_{t}=F(X([0,t]),t)\] for a continuous \(F(\cdot,t):D([0,t],E)\to\mathcal{P}(\mathbb{U})\) for each \(t\geq 0\). This would imply that \(\bar{v}(\cdot,t)\) is continuous for each \(t\) and facilitate the desired compactness of solution measures (usually proved by first establishing tightness and therefore relative compactness thereof using standard criteria and then showing that each subsequential limit is a legitimate solution measure, for which continuity of coefficients plays a role). In general one can only guarantee that \(\bar{v}(\cdot,t)\) are measurable. For finite dimensional uncontrolled diffusions, in this generality, one can possibly use a stochastic differential inclusion [23] that will facilitate a compact set of admissible laws for a given initial condition. We flag this as a direction for future research. * _Weak vs strong solutions:_ In general, the martingale problem for \((\mathcal{A}^{v(X_{s},s)},\nu)\) (alternatively, \((\bar{\mathcal{A}}^{\bar{v}(X_{s},s)},\nu)\) ) has to be interpreted in the weak sense, i.e., the underlying probability space is not specified a priori, but only the existence thereof is asserted, and uniqueness is interpreted in terms of uniqueness of the laws. We shall refer to this as the 'weak formulation' to distinguish it from the'strong formulation' of Definition 2.2. The following, however, holds: If \((X,U)\) is a solution to the martingale problem \((A^{u},\nu)\) with respect to \(\{\mathcal{F}_{t}\}\), then \((X,U^{\prime})\) with \(U^{\prime}_{t}=v(X_{t},t)\ \forall t\geq 0\), is a solution to the martingale problem \((\mathcal{A}^{v(X_{s},s)},\nu)\) with respect to \(\{\mathcal{F}_{t}\}\). 
Conversely, if the latter problem has a (weak) solution on some probability space, then one can construct a copy in law of \((X,U)\) on a possibly augmented version of this probability space. This follows as in [2, Theorem 2.3.4, p. 52], where this result is proved for controlled diffusions. We are now ready to state our main result. **Theorem 2.4**.: _Assume (A1)-(A5). Given any solution \((X,U)\) to a relaxed controlled martingale problem for \((\mathcal{A},\nu)\), there exists a Markov control \(\bar{v}(\cdot,\cdot)\) and a solution \(X^{\prime}\) to the relaxed controlled martingale problem for \((\mathcal{A},\nu)\) with this Markov control and \(X_{0}\stackrel{{ d}}{{=}}X^{\prime}_{0}\), such that \(X,X^{\prime}\) have identical one dimensional marginals. Furthermore, \(X^{\prime}\) can be taken to be a Markov solution._ Proof.: By assumptions (A1)-(A3), let \(\{(X_{t},U_{t})\}_{t\geq 0}\) be a solution to the relaxed controlled martingale problem. In view of (A4) and (A5), let \(\tilde{X}\) be a Markov solution to the martingale problem for \((\bar{\mathcal{A}}^{\bar{v}},\delta_{x})\), where \(\bar{v}\) is as in (4). Fix \(t>0\). Using (A1)-(A3), we will choose a version of \(\bar{v}(\cdot,\cdot)\) such that the (two parameter) transition semigroup \(T_{s,t}\) with \(0\leq s\leq t\) is a strong contraction semigroup with generator \(\bar{\mathcal{A}}^{\bar{v}}\) on the Banach space of bounded measurable functions with supremum norm (see [35, Phillips-Lumer Theorem, p. 250]). Then by [19, Proposition 1.1.5] we have for \(f\in\mathcal{D}(\mathcal{A})\), \(T_{s,t}f\in\mathcal{D}(\mathcal{A})\) and \[\frac{\partial}{\partial s}T_{s,t}f+\bar{\mathcal{A}}^{\bar{v}(\cdot,s)}T_{s,t}f=0, \tag{6}\] for all \(s\in[0,t]\). 
As \((X_{t},U_{t})\) satisfies the relaxed controlled martingale problem, applying (3) to the time-dependent function \(T_{s,t}f\) we have that \[T_{r,t}f(X_{r})-T_{0,t}f(X_{0})-\int_{0}^{r}\left(\frac{\partial}{\partial s}T_{s,t}f(X_{s})+\bar{\mathcal{A}}T_{s,t}f(X_{s},U_{s})\right)ds,\qquad r\in[0,t],\] is an \(\{\mathcal{F}_{r}\}_{0\leq r\leq t}\) martingale under \(\mathbb{P}\). From the above, we then have \[\mathbb{E}[f(X_{t})]-\mathbb{E}[f(\tilde{X}_{t})] = \mathbb{E}[T_{t,t}f(X_{t})]-\mathbb{E}[T_{0,t}f(X_{0})]\] \[= \mathbb{E}\left[\int_{0}^{t}\left[\frac{\partial}{\partial s}T_{s,t}f(X_{s})+\bar{\mathcal{A}}T_{s,t}f(X_{s},U_{s})\right]ds\right].\] Using (6) leads to \[\mathbb{E}[f(X_{t})]-\mathbb{E}[f(\tilde{X}_{t})] =\int_{0}^{t}\mathbb{E}[-\bar{\mathcal{A}}^{\bar{v}(X_{s},s)}T_{s,t}f(X_{s})+\bar{\mathcal{A}}T_{s,t}f(X_{s},U_{s})]ds\] \[=\int_{0}^{t}\mathbb{E}[\mathbb{E}[-\bar{\mathcal{A}}^{\bar{v}(X_{s},s)}T_{s,t}f(X_{s})+\bar{\mathcal{A}}T_{s,t}f(X_{s},U_{s})\mid X_{s}]]ds\] \[=\int_{0}^{t}\mathbb{E}[-\bar{\mathcal{A}}^{\bar{v}(X_{s},s)}T_{s,t}f(X_{s})+\mathbb{E}[\bar{\mathcal{A}}T_{s,t}f(X_{s},U_{s})\mid X_{s}]]ds\] \[=0\] by (4). As \(f\in\mathcal{D}(\bar{\mathcal{A}})\) was arbitrary, by (A1), (A2) we have that \(X\) and \(\tilde{X}\) have the same one dimensional marginals. **Remark 2.5**.: * _A weaker version of this result appears as [7, Theorem 2.4, p. 1552], where it is proved that the one dimensional marginals can be mimicked by a process controlled by a Markov control. But it is not asserted or claimed that the latter process itself is Markov. A similar observation applies to [22, Theorem 4.6, p. 516], where under a non-degeneracy condition on the diffusion matrix of an Ito differential equation, the one-dimensional marginals are mimicked by a stochastic differential equation with measurable drift and diffusion matrix. The latter conditions ensure only existence and not uniqueness ([26, Section 2.6]). Consequently it can have non-Markov solutions. 
This issue is avoided in [9] by means of a stronger additional condition on the diffusion matrix, viz., that it is a Lipschitz function of the current value of the process alone. Then the Markov controlled process is itself Markov._ * _One immediate implication for stochastic control problems wherein the cost or reward depends only on one dimensional marginals is that the existence of an optimal non-anticipative control implies the existence of an optimal Markov control._ * _Theorem 2.4 says that, given a solution \((X_{\cdot},U_{\cdot})\) to the relaxed controlled martingale problem, there exists a measurable map \(\bar{v}:E\times[0,T]\to\mathcal{P}(\mathbb{U})\) such that_ (7) \[\mathbb{E}[\bar{\mathcal{A}}f(X_{s},U_{s})|X_{s}]=\bar{\mathcal{A}}^{\bar{v}(X_{s},s)}f(X_{s})\] _a.s. for \(s\in[0,T]\), and the martingale problem for \((\bar{\mathcal{A}}^{\bar{v}(X_{s},s)},\nu)\) has a solution \(\widetilde{X}(\cdot)\) that is a Markov process with the same one dimensional marginals as \(X_{\cdot}\). If all solutions of the latter martingale problem have identical marginals for every choice of the initial distribution \(\nu\), then by Theorem 4.4.2, p. 184, of [19], the martingale problem in fact has a unique solution._ We conclude this section with a related result: the so-called \(\alpha\)_-discounted occupation measure_ \(\mu\) for \((X_{\cdot},U_{\cdot})\), defined by \[\int f(x,u)\mu(dx,du):=\alpha\mathbb{E}\left[\int_{0}^{\infty}e^{-\alpha t}f(X_{t},U_{t})dt\right],\ f\in C_{b}(E\times\mathbb{U}),\] can be replicated by a Markov mimic controlled by a _stationary_ Markov relaxed control \(\bar{v}(\cdot)\). **Theorem 2.6**.: _Assume (A1)-(A5). 
Given a discount factor \(\alpha>0\) and any solution \((X,U)\) of the relaxed controlled martingale problem for \((\mathcal{A},\nu)\), there exists a relaxed stationary Markov control \(\bar{v}(\cdot)\) and a solution \(X^{\prime}\) to the relaxed controlled martingale problem for \((\mathcal{A},\nu)\) with this relaxed stationary Markov control and with \(X_{0}\stackrel{{ d}}{{=}}X^{\prime}_{0}\), such that \(X,X^{\prime}\) have identical marginals and therefore identical \(\alpha\)-discounted occupation measures._ Proof.: Let \(E\) be a Polish space and \(\mathbb{U}\) a compact metric space. Let \((X,U)\) be a solution to the relaxed controlled martingale problem for \((\mathcal{A},\nu)\) satisfying (A1)-(A5). Let \(x\in E\) and \(\alpha>0\). Define a probability measure \(\mu\) on \(E\times\mathbb{U}\) by \[\int fd\mu=\alpha\mathbb{E}\left[\int_{0}^{\infty}e^{-\alpha s}f(X_{s},U_{s})ds\right], \tag{8}\] for any bounded continuous function \(f:E\times\mathbb{U}\to\mathbb{R}\). Let the marginal on \(E\) of \(\mu\) be denoted by \(\eta\) and let \(\bar{v}(du\mid y)\) denote the conditional distribution of \(u\) given \(y\) under \(\mu\). Let \(X^{\prime}\) be the solution to the relaxed Markov controlled martingale problem with \((\bar{\mathcal{A}}^{\bar{v}},\nu)\). Let \(k\in\mathcal{D}(\bar{\mathcal{A}})\) and \(\psi(x)=\mathbb{E}_{x}\left[\int_{0}^{\infty}e^{-\alpha s}k(X^{\prime}_{s},\bar{v}(X^{\prime}_{s}))ds\right]:=\mathbb{E}\left[\int_{0}^{\infty}e^{-\alpha s}k(X^{\prime}_{s},\bar{v}(X^{\prime}_{s}))ds\mid X^{\prime}_{0}=x\right]\). 
Then from the definition of \(\bar{v}(\cdot)\), it follows that \[\bar{\mathcal{A}}^{\bar{v}}\psi(x)-\alpha\psi(x)+k(x,\bar{v}(x))=0\] and \[\mathbb{E}\left[\int_{0}^{\infty}e^{-\alpha s}k(X_{s},\bar{v}(X_{s}))ds\right] =\mathbb{E}\left[\int_{0}^{\infty}e^{-\alpha s}\left(-\bar{\mathcal{A}}^{\bar{v}}\psi(X_{s})+\alpha\psi(X_{s})\right)ds\right]\] \[=\lim_{T\uparrow\infty}\mathbb{E}\left[\int_{0}^{T}e^{-\alpha s}\left(-\bar{\mathcal{A}}^{\bar{v}}\psi(X_{s})+\alpha\psi(X_{s})\right)ds\right]\] \[=\int\psi(x)\nu(dx)-\lim_{T\uparrow\infty}e^{-\alpha T}\mathbb{E}\left[\psi(X_{T})\right]=\int\psi(x)\nu(dx).\] This establishes the claim. **Remark 2.7**.: _As in [9], one can consider stationary Markov controls and try to mimic laws at exit times. Suppose \((X,U)\) is a solution to a relaxed controlled martingale problem on \(\mathbb{R}^{d}\). Suppose \(X^{\prime}\) is a time-homogeneous Markov solution to the relaxed controlled martingale problem with a stationary Markov control, \(\bar{r}(X^{\prime}_{t})\in\mathcal{P}(\mathbb{U})\) say. Let \(\tau=\inf\{t\geq 0:X^{\prime}_{t}\not\in D\},\) with \(D\subset\mathbb{R}^{d}\) and \(\mathbb{E}[\tau]<\infty.\) Then one could imitate the arguments in Theorem 2.6 and show that if \(X_{0}\stackrel{{ d}}{{=}}X^{\prime}_{0}\), then \(X_{\tau}\stackrel{{ d}}{{=}}X^{\prime}_{\tau}\). Define, for \(f\in C_{0}^{2}(\mathbb{R}^{d})\), \(h:\mathbb{R}^{d}\to\mathbb{R}\) by_ \[h(x)=\mathbb{E}[f(X^{\prime}_{\tau})\mid X^{\prime}_{0}=x].\] _Then we will require that \(h\) solves the Dirichlet problem given by_ \[\bar{\mathcal{A}}^{\bar{r}(x)}h=0,\qquad h=f\text{ on }\partial D. \tag{9}\] _Then as in [9], an application of Dynkin's formula yields the result. One would need to impose additional assumptions on \(\bar{\mathcal{A}}\) so that (9) holds._ ### Examples In this section we shall discuss several examples where Theorem 2.4 is applicable. 
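Before turning to these, the projection step (4), which underlies both Theorem 2.4 and Theorem 2.6, can be illustrated on a toy discrete model: the possibly path-dependent control \(U_{s}\) is replaced by its conditional expectation given the current state. The following minimal Python sketch (all names and the synthetic law are illustrative, not from the paper) estimates \(\bar{v}(x)=\mathbb{E}[U_{s}\mid X_{s}=x]\) at a fixed time \(s\) by conditional averaging over simulated samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of the projection step (4): given joint samples of
# (X_s, U_s) at a fixed time s from a (possibly path-dependent) controlled
# process, estimate the marginal control v_bar(x) = E[U_s | X_s = x] by
# conditional averaging.  The law below is synthetic.

n = 200_000
X = rng.integers(0, 3, size=n)        # current state in {0, 1, 2}
H = rng.normal(size=n)                # hidden "history" component
U = 0.5 * X + 0.3 * H + 0.1 * rng.normal(size=n)  # path-dependent control

def marginal_control(X, U):
    """Estimate v_bar(x) = E[U | X = x] for each discrete state x."""
    return {int(x): U[X == x].mean() for x in np.unique(X)}

v_bar = marginal_control(X, U)
# Since H and the noise are centred, E[U | X = x] = 0.5 * x.  Replacing U
# by v_bar(X) discards the history dependence but keeps the conditional
# mean of the control given the state -- the quantity entering (4).
```

In the continuous-state setting the same conditional expectation is produced by the measurable selection theorem cited in (A4), rather than by binning.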
We discuss controlled martingale problems that arise naturally in applications, satisfying the hypotheses of Theorem 2.4 and Theorem 2.6. First we note that, if the problem is well-posed, i.e. the respective martingale problem has a unique solution, then the solution is already Markov. We begin with an example from finite dimensional diffusions. **Example 1**.: _Let \(E=\mathbb{R}^{d}\), \(\mathbb{U}\) be any compact metric space, \(\nu\in\mathcal{P}(E)\) and \(S_{d}\) denote the set of all symmetric non-negative definite \(d\times d\) real matrices. For \(i,j\in\{1,2,\ldots,d\}\), define \(a_{ij}:E\to\mathbb{R}\) and \(b_{i}:E\times\mathbb{U}\to\mathbb{R}\) such that \(a_{ij}\) and \(b_{i}\) are bounded and measurable for all \(i,j\) and \(a=[a_{ij}]\in S_{d}\). Let \(\mathcal{A}\) be the linear operator with \(\mathcal{D}(\mathcal{A})=C_{0}^{2}(\mathbb{R}^{d})\) given by_ \[\mathcal{A}f(x,u)=\sum_{i=1}^{d}b_{i}(x,u)\frac{\partial f}{\partial x_{i}}(x)+\frac{1}{2}\sum_{i,j=1}^{d}a_{ij}(x)\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}}(x). \tag{10}\] _As \(\mathcal{D}(\mathcal{A})=C_{0}^{2}(\mathbb{R}^{d})\), it is easy to see that (A1), (A2), and (A3) (ii), (iii), (iv) are satisfied. Then, by [34, Theorem 6.1.7] there is a solution \((X,U)\) to the martingale problem associated to \((\mathcal{A}^{u},\delta_{x})\), so (A3) (i) holds. By [27, Theorem 4.1] or [7, Theorem 2.4], (A4) holds. Finally, from [34, Theorem 12.2.3] or [19, Theorem 5.19], (A5) holds when \(U\) is as in (5) and \(b,a\) are bounded continuous functions._ Next we consider the case of pure jump diffusion. **Example 2**.: _Let \(E\) be a locally compact Polish space. Let \(\mathbb{U}\) be a Polish space. Let \(\lambda:E\times\mathbb{U}\to\mathbb{R}_{+}\) be a non-negative, measurable function bounded on compact sets. 
Let \(\gamma(x,u,A)\) be a transition function on \(E\times\mathbb{U}\times\mathcal{B}(E).\) Let \((X_{t},U_{t})\) be a solution to the controlled martingale problem for \((\mathcal{B},\nu)\) where_ \[\mathcal{B}f(x,u)=\lambda(x,u)\int_{\mathbb{R}^{d}}[f(x+y)-f(x)]\gamma(x,u,dy)\] _for \(f\in C_{0}(E^{\Delta})\), where \(E^{\Delta}\) is a one point compactification of \(E.\) Further assume that for \(x\in E,u\in\mathbb{U}\) and for \(f\in C_{0}(E)\),_ \[\lambda(x,u)\int_{\mathbb{R}^{d}}\mid f(x+y)-f(x)\mid\gamma(x,u,dy)<\infty.\] _As \(\mathcal{D}(\mathcal{B})=C_{0}(E)\), it is easy to see that (A1), (A2), and (A3) (ii), (iii) and (iv) are satisfied. From [19, Exercise 15 in p. 263] or [27, Example 3.5] there is a solution \((X,U)\) to the martingale problem associated to \((\mathcal{B}^{u},\delta_{x})\), so (A3) (i) holds. By [27, Theorem 4.1] or [7, Theorem 2.4], (A4) holds. Finally, from [19, Theorem 5.19], (A5) holds when \(U\) is as in (5) and \(\lambda\) is a bounded continuous function._ By [27, Example 3.3], for \(E=\mathbb{R}^{d}\), a linear combination of \(\mathcal{A}\) as in Example 1 and \(\mathcal{B}\) as in Example 2 will also satisfy (A1)-(A4). (A5) will also follow if they both satisfy the respective hypotheses required in each of the examples. We now present an example in the infinite dimensional setting. **Example 3**.: _Let \(E=H\) be a real separable Hilbert space. Let \(\mathbb{U}\) be the closed unit ball \(K\) of another real separable Hilbert space, with the weak topology. Let \(F:H\to K\) be continuous, \(B:K\to H\) be bounded linear, \(W(\cdot)\) be an \(H\)-valued Wiener process with covariance given by a trace class operator \(Q\). Let \(L\) be an infinitesimal generator of a differentiable compact semigroup of contractions \(\{S(t)\}_{t\geq 0}\) on \(H\) such that \(L^{-1}\) is a bounded self-adjoint operator with discrete spectrum. 
Let \(\{e_{i}:i\geq 1\}\) be a CONS in \(H\) such that they are eigenfunctions of \(L^{-1}\) with corresponding eigenvalues \(\{\lambda_{i}^{-1}:i\geq 1\}\). Let \(P_{n}:H\to\mathbb{R}^{n}\) be the map defined by \(P_{n}(x)=[\langle x,e_{1}\rangle,\ldots,\langle x,e_{n}\rangle].\) Let \(\mathcal{D}(\mathcal{A})=\{f\circ P_{n}:f\in C_{0}^{2}(\mathbb{R}^{n}),n\geq 1\}\subset C_{b}(H)\). Define \(\mathcal{A}:\mathcal{D}(\mathcal{A})\to C_{b}(H\times\mathbb{U})\) by_ \[\mathcal{A}(f\circ P_{n})(h,u)=\sum_{i=1}^{n}\langle e_{i},(F(h)+Bu-\lambda_{i}h)\rangle\frac{\partial f}{\partial x_{i}}\circ P_{n}(h)+\frac{1}{2}\sum_{i,j=1}^{n}\langle e_{i},Qe_{j}\rangle\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}}.\] _By definition of \(\mathcal{D}(\mathcal{A})\), it is easy to see that (A1), (A2), and (A3) (ii) and (iii) are satisfied. From [15, Theorem 8.1] (A3) (i) holds. [7, Example 3 and Theorem 2.4] ensure that (A4) holds._ We conclude this section with an example from nonlinear filtering theory. This arises from control problems for diffusions with partial observations. **Example 4**.: _Let \(E=\mathcal{P}(\mathbb{R}^{d})\) and \(\mathbb{U}\) be any compact metric space. Let \(\mathcal{A}\) be as in Example 1. 
Let_ \[\mathcal{D}(B)=\left\{f\in C_{b}(\mathcal{P}(\mathbb{R}^{d})):\begin{array}{l} f(\mu)=g(\int f_{1}d\mu,\int f_{2}d\mu,\ldots,\int f_{n}d\mu),\mu\in\mathcal{P}(\mathbb{R}^{d}),\\ \mbox{for some }n\geq 1,g\in C_{0}^{2}(\mathbb{R}^{n}),f_{1},\ldots,f_{n}\in\mathcal{D}(\mathcal{A})\end{array}\right\}.\] _Let \(B:\mathcal{D}(B)\to C_{b}(\mathcal{P}(\mathbb{R}^{d})\times\mathbb{U})\) be the linear operator defined by_ \[Bf(\mu,u)=\sum_{i=1}^{n}\frac{\partial g}{\partial x_{i}}(\int f_{1}d\mu,\ldots,\int f_{n}d\mu)\int\mathcal{A}f_{i}(\cdot,u)d\mu\] \[+\frac{1}{2}\sum_{i,j=1}^{n}\frac{\partial^{2}g}{\partial x_{i}\partial x_{j}}(\int f_{1}d\mu,\ldots,\int f_{n}d\mu)\times\] \[\left\langle\int f_{i}hd\mu-\int f_{i}d\mu\int hd\mu,\int f_{j}hd\mu-\int f_{j}d\mu\int hd\mu\right\rangle\] _By definition of \(\mathcal{D}(B)\) and the hypotheses assumed on \(\mathcal{A}\) from Example 1, it is easy to see that (A1), (A2), and (A3) (ii), (iii) and (iv) are satisfied. From the discussion in [2, Sections 8.2, 8.3] there is a solution \((X,U)\) to the martingale problem associated to \((B^{u},\delta_{x})\), so (A3) (i) holds. Finally, [7, Example 4, Theorem 2.4] ensures that (A4) holds. Such a treatment is also possible for stochastic evolution equations (see [30])._ ## 3. Minimizing Relative Entropy We begin with the definition of relative entropy between two probability measures on Polish spaces. **Definition 3.1**.: _For a Polish space \(S\) endowed with its Borel \(\sigma\)-field \(\mathcal{B}(S)\), let \(\mathbb{P},\mathbb{P}_{0}\) be probability measures on \((S,\mathcal{B}(S))\) with \(\mathbb{P}<<\mathbb{P}_{0}\). Let \(\mathbb{E}[\ \cdot\ ],\mathbb{E}_{0}[\ \cdot\ ]\) denote the respective expectation operators and let \(\Lambda:=\frac{d\mathbb{P}}{d\mathbb{P}_{0}}\) denote the Radon-Nikodym derivative of \(\mathbb{P}\) w.r.t. \(\mathbb{P}_{0}\). 
We define the relative entropy (equivalently, the Kullback-Leibler divergence) of \(\mathbb{P}\) with respect to \(\mathbb{P}_{0}\) as_ \[D(\mathbb{P}\|\mathbb{P}_{0}):=\mathbb{E}_{0}[\Lambda\log\Lambda]=\mathbb{E}[\log\Lambda].\] Let \(T>0\), let \(E\) be a Polish space and let \(D([0,T];E)\) be the Polish space of r.c.l.l. paths in \(E\) with the Skorokhod topology. Let \(\mathbb{P}_{0}\) denote a reference probability measure on \(D([0,T];E)\) under which the coordinate process is Markov. Let \(\{X_{t}\}_{0\leq t\leq T}\in D([0,T];E)\) be a r.c.l.l. process whose law \(\mathbb{P}\) satisfies: \(\mathbb{P}_{t}:=\) the restriction of \(\mathbb{P}\) to \(D([0,t];E)\) is absolutely continuous w.r.t. \(\mathbb{P}_{0t}:=\) the restriction of \(\mathbb{P}_{0}\) to \(D([0,t];E)\), \(\forall\ t\in(0,T]\). For \(t>0\), let \(\Lambda_{t}:=\frac{d\mathbb{P}_{t}}{d\mathbb{P}_{0t}}\) with \(\Lambda_{0}:=1\), and for \(0<s<t\), let \[\tilde{\Lambda}_{s,t}:=\frac{\Lambda_{t}}{\Lambda_{s}}\] when \(\Lambda_{s}>0\) and \(:=0\) otherwise. **Definition 3.2**.: _For \(0\leq u<v\), let \(\mathcal{F}_{u,v}=\sigma(X_{r}:u\leq r\leq v)\). Let \(0<s<t\leq T\). We say that \(s\) is a 'Markov point' for \(X\) if \(\mathcal{F}_{s,t}\) and \(\mathcal{F}_{0,s}\) are conditionally independent given \(X_{s}\)._ _Markovianisation:_ Fix \(0<s<t\leq T\). Given a process \(X\in D([0,T];E)\), we shall use the notation \(X([s,t])\) for \(0\leq s<t\leq T\) to denote the restriction of \(X\) to \([s,t]\), viewed as an element of \(D([s,t];E)\). Construct \(\breve{X}\) on the path space \(D([0,t];E)\) as follows. The process \(\breve{X}([0,s])\) has the same law as that of \(X([0,s])\). Let the conditional law of \(\breve{X}([s,t])\), given \(\breve{X}([0,s])\), be the conditional law of \(X([s,t])\) given \(X_{s}\). Then the values of \(\breve{X}([0,s])\) and \(\breve{X}([s,t])\) are matched at \(s\) and the concatenation thereof can be viewed as an element of \(D([0,t];E)\).
More precisely, for any \(n\geq 1\), \(0\leq s_{1}\leq\cdots\leq s_{n}\leq s\leq t_{1}\leq\cdots\leq t_{n}\leq t\) and Borel sets \(A_{i},B_{i},B\), \[\mathbb{P}(\breve{X}_{s_{i}}\in A_{i},1\leq i\leq n):=\mathbb{P}(X_{s_{i}}\in A_{i},1\leq i\leq n),\] \[\mathbb{P}(\breve{X}_{t_{i}}\in B_{i},1\leq i\leq n):=\mathbb{P}(X_{t_{i}}\in B_{i},1\leq i\leq n),\] \[\mathbb{P}(\breve{X}_{s}\in B):=\mathbb{P}(X_{s}\in B),\] \[\mathbb{P}(\breve{X}_{t_{i}}\in A_{i},1\leq i\leq n\mid\breve{X}_{s}\in B):=\mathbb{P}(X_{t_{i}}\in A_{i},1\leq i\leq n\mid X_{s}\in B)\text{ and}\] \[\mathbb{P}(\breve{X}_{t_{i}}\in A_{i},1\leq i\leq n\mid\breve{X}_{s_{i}}\in B_{i},1\leq i\leq n\text{ and }\breve{X}_{s}\in B)\] \[:=\mathbb{P}(X_{t_{i}}\in A_{i},1\leq i\leq n\mid X_{s}\in B).\] Note that the process \(\breve{X}\) is well defined on the canonical path space \(D([0,t];E)\) by the above definition. In particular, the process \(\breve{X}\) has a law which is identical to that of \(X\) on the intervals \([0,s]\) and \([s,t].\) For \(0\leq u<v\), let \(\breve{\mathcal{F}}_{u,v}=\sigma(\breve{X}_{r}:u\leq r\leq v)\). Then for any \(s\leq r\leq t\) the conditional law of \(\breve{X}_{r}\) given \(\breve{\mathcal{F}}_{0,s}\) is the same as the conditional law of \(\breve{X}_{r}\) given \(\breve{X}_{s}\). This ensures that \(\breve{\mathcal{F}}_{0,s}\) and \(\breve{\mathcal{F}}_{s,t}\) are conditionally independent given \(\breve{X}_{s}\). Note that the laws of \(\breve{X}\) and \(X\) will not be the same on \([0,t]\) unless \(s\) is a _Markov point_ for \(X\). **Definition 3.3**.: _The process \(\breve{X}\) constructed above will be called the 'markovianization' of \(X\) at time \(s\).
A set \(\mathcal{G}\) of probability measures on \(D([0,T];E)\) is said to be closed under markovianization at (say) \(s\in[0,T]\) if the law of \(\breve{X}\) above is in \(\mathcal{G}\) whenever the law of \(X\) is._ **Lemma 3.4**.: _Let \(\{X_{t}\}_{0\leq t\leq T}\), \(\mathbb{P}\), \(\mathbb{P}_{t}\), \(\mathbb{P}_{0}\), \(\mathbb{P}_{0t}\) be as above. Fix \(0<s<t\leq T\), let \(\{\breve{X}_{u}\}_{0\leq u\leq t}\) be the markovianisation of \(X\) at \(s\), and let \(\breve{P}\) denote its law. Let \(\breve{P}_{t}\) be the restriction of \(\breve{P}\) to \(D([0,t];E)\). Then \(\breve{P}_{t}\ll\mathbb{P}_{0t}\) and the Radon-Nikodym derivative \(\breve{\Lambda}_{t}\) of \(\breve{P}_{t}\) w.r.t. \(\mathbb{P}_{0t}\) is given by_ \[\mathbb{E}_{0}\left[\Lambda_{t}\bigg{|}\mathcal{F}_{s,t}\right].\] Proof.: Let \(g\in C_{b}(D([0,s];E)),h\in C_{b}(D([s,t];E))\). Let \(\breve{E}[\ \cdot\ ]\) denote the expectation under \(\breve{P}\). Then by the construction of \(\breve{X}\) we have \[\breve{E}[g(\breve{X}([0,s]))h(\breve{X}([s,t]))] = \breve{E}[g(\breve{X}([0,s]))\breve{E}[h(\breve{X}([s,t]))|\breve{\mathcal{F}}_{0,s}]] \tag{11}\] \[= \breve{E}[g(\breve{X}([0,s]))\breve{E}[h(\breve{X}([s,t]))|\breve{X}_{s}]]\] Now, as the conditional law of \(X([s,t])\) given \(X_{s}\) is the same as the conditional law of \(\breve{X}([s,t])\) given \(\breve{X}_{s}\), and the law of \(X([0,s])\) is the same as the law of \(\breve{X}([0,s])\), we have that \[\breve{E}[g(\breve{X}([0,s]))\breve{E}[h(\breve{X}([s,t]))|\breve{X}_{s}]] = \mathbb{E}[g(X([0,s]))\mathbb{E}[h(X([s,t]))|X_{s}]]. \tag{12}\] Recall that \(\Lambda_{t}\) is the Radon-Nikodym derivative of \(\mathbb{P}_{t}\) w.r.t. \(\mathbb{P}_{0t}\).
So, \[\mathbb{E}[g(X([0,s]))\mathbb{E}[h(X([s,t]))|X_{s}]] = \mathbb{E}_{0}\left[\Lambda_{s}g(X([0,s]))\mathbb{E}[h(X([s,t]))|X_{s}]\right]\] \[= \mathbb{E}_{0}\left[g(X([0,s]))\Lambda_{s}\bigg{(}\frac{\mathbb{E}_{0}[h(X([s,t]))\Lambda_{t}|X_{s}]}{\mathbb{E}_{0}[\Lambda_{t}|X_{s}]}\bigg{)}\right]. \tag{13}\] Here the second equality follows from the change of measure formula for conditional expectations. As \(\sigma(X_{s})\subset\mathcal{F}_{s,t}\) and \(X([s,t])\) is \(\mathcal{F}_{s,t}\) measurable, we have \[\mathbb{E}_{0}[h(X([s,t]))\Lambda_{t}|X_{s}]=\mathbb{E}_{0}[\mathbb{E}_{0}[h(X([s,t]))\Lambda_{t}|\mathcal{F}_{s,t}]|X_{s}]=\mathbb{E}_{0}[h(X([s,t]))\mathbb{E}_{0}[\Lambda_{t}|\mathcal{F}_{s,t}]|X_{s}] \tag{14}\] Therefore, using this in (13) we have \[\mathbb{E}[g(X([0,s]))\mathbb{E}[h(X([s,t]))|X_{s}]] = \mathbb{E}_{0}\left[g(X([0,s]))\Lambda_{s}\left(\frac{\mathbb{E}_{0}[h(X([s,t]))\mathbb{E}_{0}[\Lambda_{t}|\mathcal{F}_{s,t}]|X_{s}]}{\mathbb{E}_{0}[\Lambda_{t}|X_{s}]}\right)\right] \tag{15}\] We know that the coordinate process is Markov under \(\mathbb{P}_{0}\), so the above equals \[\mathbb{E}_{0}\left[g(X([0,s]))\Lambda_{s}\left(\frac{\mathbb{E}_{0}[h(X([s,t]))\mathbb{E}_{0}[\Lambda_{t}|\mathcal{F}_{s,t}]|\mathcal{F}_{0,s}]}{\mathbb{E}_{0}[\Lambda_{t}|\mathcal{F}_{0,s}]}\right)\right] \tag{16}\] \[= \mathbb{E}_{0}\left[g(X([0,s]))\Lambda_{s}\left(\frac{\mathbb{E}_{0}[h(X([s,t]))\mathbb{E}_{0}[\Lambda_{t}|\mathcal{F}_{s,t}]|\mathcal{F}_{0,s}]}{\Lambda_{s}}\right)\right]\] \[= \mathbb{E}_{0}[g(X([0,s]))\mathbb{E}_{0}[h(X([s,t]))\mathbb{E}_{0}[\Lambda_{t}|\mathcal{F}_{s,t}]|\mathcal{F}_{0,s}]]\] where the second last line follows from the martingale property of \(\Lambda_{t}\) under \(\mathbb{P}_{0}\). Thus from (11), (12), (15), and (16) we have \[\breve{E}[g(\breve{X}([0,s]))h(\breve{X}([s,t]))]=\mathbb{E}_{0}\bigg{[}g(X([0,s]))h(X([s,t]))\ \mathbb{E}_{0}[\Lambda_{t}|X([s,t])]\bigg{]}.
\tag{17}\] It is easy to see that (17) holds for functions of the form \((x,y)\mapsto\sum_{i=1}^{n}\alpha_{i}g_{i}(x)h_{i}(y)\) with \(\alpha_{i}\in\mathbb{R}\), \(x\in D([0,s];E)\), \(y\in D([s,t];E)\), \(g_{i}\in C_{b}(D([0,s];E))\) and \(h_{i}\in C_{b}(D([s,t];E)).\) The claim follows via an application of the Stone-Weierstrass theorem. **Proposition 3.5**.: _Let \(\{X_{t}\}_{t\geq 0}\), \(\mathbb{P}\), \(\mathbb{P}_{T}\), \(\mathbb{P}_{0}\), \(\mathbb{P}_{0T}\) be as above. If there exists an \(s\in(0,T]\) that is not a Markov point for \(X\), then the process \(\{\breve{X}_{t}\}_{t\geq 0}\) obtained by markovianising \(X\) at \(s\) satisfies_ 1. _the marginals of_ \(\breve{X}\) _are the same as the marginals of_ \(X\)_, and_ 2. _the law of_ \(\breve{X}\)_, given by the probability measure_ \(\breve{P}\) _on_ \(D([0,\infty);E)\) _with_ \(\breve{P}_{T}:=\) _the restriction of_ \(\breve{P}\) _to_ \(\mathcal{D}([0,T];E)\)_, satisfies_ (18) \[D(\breve{P}_{T}||\mathbb{P}_{0T})<D(\mathbb{P}_{T}||\mathbb{P}_{0T}),\] _for all_ \(T>s.\)__ Proof.: Let \(\breve{\Lambda}_{t}\) be the Radon-Nikodym derivative of \(\breve{P}_{t}\) w.r.t. \(\mathbb{P}_{0t}\). Using Lemma 3.4, we have \[\mathbb{E}_{0}[\breve{\Lambda}_{t}\log\breve{\Lambda}_{t}] = \mathbb{E}_{0}[\mathbb{E}_{0}[\Lambda_{t}|\mathcal{F}_{s,t}]\log(\mathbb{E}_{0}[\Lambda_{t}|\mathcal{F}_{s,t}])]\] \[< \mathbb{E}_{0}[\Lambda_{t}\log(\Lambda_{t})],\] where the last line follows by the conditional Jensen's inequality and the strict convexity of the map \(x\in[0,\infty)\mapsto x\log x\in\mathbb{R}.\) The proposition readily follows from this, the construction of \(\breve{X}\), and the definition of relative entropy. From Proposition 3.5, it follows that among all \(E\)-valued r.c.l.l. processes that have the same marginals as \(X\) and have laws absolutely continuous with respect to \(\mathbb{P}_{0}\), the minimiser of relative entropy, if one exists, is Markov.
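Proposition 3.5 can be illustrated on a toy discrete example. The sketch below (a hypothetical three-step, two-state setting, not from the text; all names are ours) markovianizes a non-Markov law at its middle time point and checks that the one dimensional marginals are unchanged while the relative entropy w.r.t. a Markov reference law strictly drops:

```python
from itertools import product
from math import log

# State space {0,1} at times 0,1,2.  Reference law p0: three i.i.d. fair
# coins (trivially Markov).  p: X2 = X0 with X1 an independent fair coin,
# so time 1 is NOT a Markov point for X under p.
p0 = {w: 1 / 8 for w in product((0, 1), repeat=3)}
p = {w: (1 / 4 if w[2] == w[0] else 0.0) for w in product((0, 1), repeat=3)}

def relative_entropy(q, q0):
    """D(q || q0) = sum_w q(w) log(q(w)/q0(w))."""
    return sum(q[w] * log(q[w] / q0[w]) for w in q if q[w] > 0)

def markovianize(q, s):
    """Keep the law of (X_0,...,X_s) and glue on the conditional law of
    (X_s,...,X_2) given X_s alone, as in Definition 3.3."""
    past, fut, at_s = {}, {}, {}
    for w, mass in q.items():
        past[w[: s + 1]] = past.get(w[: s + 1], 0.0) + mass
        fut[w[s:]] = fut.get(w[s:], 0.0) + mass
        at_s[w[s]] = at_s.get(w[s], 0.0) + mass
    return {w: past[w[: s + 1]] * fut[w[s:]] / at_s[w[s]] for w in q}

def marginal(q, t):
    m = {0: 0.0, 1: 0.0}
    for w, mass in q.items():
        m[w[t]] += mass
    return m

pb = markovianize(p, s=1)

# one-dimensional marginals are preserved ...
assert all(marginal(p, t) == marginal(pb, t) for t in range(3))
# ... while the relative entropy strictly decreases, as in (18)
print(relative_entropy(p, p0), relative_entropy(pb, p0))  # log 2 vs 0
```

Here markovianization turns the singular joint law into the uniform (hence Markov) law with the same marginals, so the entropy gap in (18) is as large as possible.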
A more general claim holds: **Theorem 3.6**.: _Let \(\mathbb{P}_{0}\) be a reference probability measure on \(D([0,T];E)\) under which the coordinate process is Markov. Suppose \(\mathcal{Q}\subset\mathcal{P}(D([0,T];E))\) is a set of probability measures absolutely continuous w.r.t. \(\mathbb{P}_{0}\) that is closed under markovianization at any \(s\in[0,T]\), and let \(\mathcal{H}:=\{\Lambda_{T}:=\frac{d\mathbb{P}_{T}}{d\mathbb{P}_{0T}}|\ \mathbb{P}\in\mathcal{Q}\}\) be equipped with the \(\sigma(L_{1},L_{\infty})\) topology (\(:=\) the weak topology on \(L_{1}\))._ 1. _If_ \(\mathbb{E}_{0}[\Lambda_{T}\log\Lambda_{T}]\) _attains its minimum on_ \(\mathcal{H}\)_, then the minimizer is unique and is the law of a Markov process._ 2. _Suppose that there exists_ \(h:\mathbb{R}_{+}\to\mathbb{R}_{+}\) _such that_ (19) \[\lim_{t\to\infty}\frac{h(t)}{t\log(t)}=\infty\] _and_ (20) \[\sup_{\mathcal{H}}\mathbb{E}_{0}\left[h(\Lambda_{T})\right]<\infty,\] _then \(\mathbb{E}_{0}[\Lambda_{T}\log\Lambda_{T}]\) attains its minimum on \(\mathcal{H}\)._ Proof.: Define \(f:[0,\infty)\to\mathbb{R}\) by \[f(x)=x\log x.\] The map \(x\mapsto x\log x\) is convex and continuous on \(\mathbb{R}_{+}\). Hence \[f(x)=\sup_{g\in\mathcal{C}}g(x),\] where \[\mathcal{C} := \{g:\mathbb{R}_{+}\to\mathbb{R}:g(x)=ax+b\ \ \text{for some}\ a,b\in \mathbb{R}\ \ \text{and}\ \ g(x)\leq f(x)\ \forall\ x\geq 0\}.\] Therefore \(f(\cdot)\) is lower semi-continuous and convex, and hence so is the function \(F:\mathcal{H}\to\mathbb{R}\) given by \(F(\Lambda_{T}):=\mathbb{E}_{0}[\Lambda_{T}\log\Lambda_{T}]\). Consequently, if \(F\) attains its minimum on \(\mathcal{H}\), there is a unique minimizer due to the strict convexity of \(f\). Suppose that the minimiser is not a Markov process.
Then it has a non-Markov point \(s\), and \(\breve{\Lambda}\) (the corresponding Radon-Nikodym derivative, as in the proof of Proposition 3.5, of the measure \(\breve{P}_{t}\) w.r.t. \(\mathbb{P}_{0t}\)) will have a strictly lower value of \(F\), a contradiction. Hence the unique minimiser is a Markov process. Under (20), \(\mathcal{H}\) is uniformly integrable by the de la Vallée Poussin theorem ([16], p. 24II). Therefore it is relatively compact and relatively sequentially compact in the \(\sigma(L_{1},L_{\infty})\) topology by the Dunford-Pettis compactness criterion ([16], p. 27II). It is also easy to check that \(\mathcal{H}\) is closed. Therefore \(F\) attains its minimum on \(\mathcal{H}\) by the Weierstrass theorem. **Corollary 3.7**.: _Let \(\mathcal{Q}_{m}:=\) the closed subset of \(\mathcal{Q}\) whose elements have the same one dimensional marginals as some prescribed element of \(\mathcal{Q}\) at each \(t\in\mathcal{T}\) for some \(\mathcal{T}\subset[0,\infty)\). Define \(\mathcal{H}_{m}\) correspondingly, analogously to \(\mathcal{H}\) above. Then a unique minimiser of \(F\) on \(\mathcal{H}_{m}\) exists and is the law of a Markov process._ This is immediate on observing that \(\mathcal{Q}_{m}\) is closed under markovianization at any point. **Remark 3.8**.: * _Note that (19) is satisfied for, e.g., \(h(x)=x^{1+\epsilon}\) for any \(\epsilon>0.\) Furthermore, under (20), Theorem 3.6 then shows the existence of a minimizer. This result could be of independent interest._ * _A priori, the law of a Markov process whose marginals match those of a given random process need not be absolutely continuous with respect to the law of the latter.
For example let_ \[Y_{t}=\left\{\begin{array}{ll}B_{t}&t\in[0,\frac{1}{2}),\\ B_{t}^{\prime}&t\in[\frac{1}{2},1],\end{array}\right.\] _where \(\{B_{t}\}_{t\in[0,1]}\) is a Brownian motion and \(\{B^{\prime}_{t}\}_{\frac{1}{2}\leq t\leq 1}\) is an independent Brownian motion such that \(B^{\prime}_{\frac{1}{2}}\) is Normal with mean \(0\) and variance \(\frac{1}{2}.\) The above result then shows that under (20), there is at least one Markov mimic for which absolute continuity holds._ * _There is one case where the uniform integrability of_ \(\mathcal{H}_{m}\) _is easy to obtain without a condition such as (20). Note that_ \(\Lambda_{t},t\geq 0\)_, is a multiplicative functional of the sample path, which makes_ \(\log\Lambda_{t},t\geq 0\)_, an additive functional. There are cases (e.g., diffusion processes) where_ \(\mathbb{E}_{0}[\Lambda_{t}\log\Lambda_{t}]=\mathbb{E}[\log\Lambda_{t}]\) _in fact depends only on one dimensional marginals of the process. In this case, this quantity is a constant on_ \(\mathcal{H}_{m}\)_. Uniform integrability is often easy to check in these scenarios. Even in some situations where this additive functional does not depend only on one-dimensional marginals, this may give an easy route for verifying uniform integrability, e.g., for reflected diffusions where the additive functional involves local time at the boundary._ * _One interesting result about controlled martingale problems of the type studied in Theorem_ 2.4 _is as follows. Fix an initial distribution_ \(\nu\)_. Define an equivalence relation, denoted by '\(\approx\)', between two solution processes for the controlled martingale problem for a prescribed controlled extended generator_ \(\mathcal{A}^{u}\) _as follows: set_ \((X_{\cdot},u_{\cdot})\approx(X^{\prime}_{\cdot},u^{\prime}_{\cdot})\) _if their one dimensional marginals agree Lebesgue-a.e. The following is proved in_ _[_2_]__, see Theorem_ 6.4.16_, p.
241, extending an earlier result for controlled diffusions from [11]._ **Theorem 3.9**.: _The extreme points of the closed convex set (in quotient topology) of such equivalence classes are singletons containing a Markov process._ _Corollary_ 3.7 _now gives us, under the additional hypotheses of absolute continuity w.r.t. a common reference measure, an additional piece of information, viz. that every equivalence class contains a Markov solution as well._ To illustrate the application of Proposition 3.5 and Theorem 3.6, we begin with an example of a finite dimensional diffusion discussed in Section 2.1. **Example 5** (Example 1 contd.).: _In [28], trajectories of cellular development are modelled using_ \[dX_{t}=-\nabla\Psi(t,X_{t})dt+\sigma dB_{t}, \tag{21}\] _with \(X_{t}\) taking values in a compact smooth Riemannian manifold \(E\) without boundary, where \(\Psi:[0,1]\times E\to\mathbb{R}\) is a twice continuously differentiable function and \(\nabla\) denotes the gradient in the \(x\)-variable. The objective is to obtain the law of the trajectory from its marginals. Let \(\Omega=C([0,1],E)\) and let \(\mathcal{P}(\Omega)\) be the set of probability measures on \(\Omega.\) Let \(P\in\mathcal{P}(\Omega)\) be the law of \(X\) and \(\mathbb{W}^{\sigma}\) be the law of \(\{\sigma B_{t}\}_{t\in[0,1]}\), with \(B\) being a standard Brownian motion on \(E\). They show (see [28, Theorem 2.1]) that the law of \(X\) can be characterised from its marginals via the following entropy minimisation problem,_ \[\min\{H(R|\mathbb{W}^{\sigma}):R\in\mathcal{P}(\Omega),R_{t}=P_{t}\text{ for all }t\in[0,1]\}, \tag{22}\] _where for any \(R\in\mathcal{P}(\Omega)\), \(R_{t}\) is the marginal at time \(t\)._ _The above can be considered for a general diffusion with the generator given by (10) as discussed in Example 1. If the associated martingale problem is well-posed, then the one dimensional marginals characterise the law (see [19, Theorem 4.4.2]).
In addition, if hypotheses (19) and (20) of Theorem 3.6 are satisfied (e.g., when the drift is a bounded continuous function), then Corollary 3.7 implies that the minimiser \(R^{\star}\) of (22) is a Markov process. This has also been observed in [4, Theorem 4.5], assuming uniqueness of solutions._ _Further, in [28, Theorem 4.1] it is shown that \(P\) is the unique minimizer, using the fact that the Radon-Nikodym derivative of \(P\) w.r.t. \(\mathbb{W}^{\sigma}\) depends only on the marginals. Such an argument will follow in general as long as the quadratic variation process of \(\log\Lambda_{t}\) depends only on the marginals. This will imply that the minimizer obtained via Corollary 3.7 yields the true law of the process as the unique minimizer._ We conclude by considering an example of martingale problems associated with branching Markov processes. **Example 6**.: _Let \(E_{0}\) be a Polish space and \(\mathbb{U}\) be a compact metric space. Each particle shall move in \(E_{0}\) according to a Feller process with generator \(\mathcal{B}\), as in Examples 1 and 2. Each particle branches or dies with a location dependent intensity \(\alpha(x,u)\) for \(x\in E_{0}\) and \(u\in\mathbb{U}\). We shall assume that \(\alpha(\cdot,\cdot)\) is a continuous function on \(E_{0}\times\mathbb{U}\). Upon its death it gives rise to children with a location dependent offspring distribution whose probability generating function is_ \[\phi(z,x,u)=\sum_{l\geq 0}p_{l}(x,u)z^{l},\] _where \(\phi:[0,1]\times\mathbb{R}^{d}\times\mathbb{R}^{d}\mapsto\mathbb{R}\) such that \(\phi\in C_{0}([0,1]\times\mathbb{R}^{d}\times\mathbb{R}^{d})\). We assume that the offspring distribution has finite mean, i.e. \(\sum_{l\geq 0}lp_{l}(\cdot,\cdot)<\infty\), and \(\sum_{l\geq 0}lp_{l}(\cdot,\cdot)\in C_{0}(\mathbb{R}^{d}\times\mathbb{R}^{d})\). Let \(M_{F}(E_{0})\) denote the space of finite measures on \(E_{0}\) endowed with the topology of weak convergence.
For any bounded continuous \(f\) on \(E_{0}\), let_ \[\langle f,\mu\rangle:=\int fd\mu.\] _Let \(\mathcal{A}\) be a linear operator with \(\mathcal{D}(\mathcal{A})=\{g\in C_{0}^{2}(\mathbb{R}^{d}):\parallel g\parallel_{\infty}<1\}\) given by_ \[\mathcal{A}g(\mu)=\exp(\langle\log g,\mu\rangle)\left\langle\frac{\mathcal{B}g+\alpha(\phi(g)-g)}{g},\mu\right\rangle,\] _where \(\mu\in M_{F}(E_{0}).\) It is easy to see that (A1) and (A2) are satisfied. We can choose \(\mathcal{B}\) as in Examples 1 or 2; then by [19, Theorem 9.4.2], (A3) is satisfied. By [27, Theorem 4.1] or [7, Theorem 2.4], (A4) holds. Finally, from [19, Theorem 5.19], (A5) holds when \(U\) is as in (5) and all parameters are given by bounded continuous functions. This provides a generic setting where relaxed controlled martingale problems with branching diffusions can have Markov mimics._ _We now turn to an application of Theorem 3.6. In [4], an entropy minimization problem with respect to branching Brownian motion is shown to be equivalent to regularized unbalanced optimal transport. The branching Brownian motion starts with an initial distribution \(R_{0}\), and each particle moves according to a Brownian motion with diffusion constant \(\nu\) in \(E=\chi,\) which is a compact smooth Riemannian manifold without boundary. The branching mechanism is given by \(\mathbf{q}=\{q_{k}\}_{k\geq 1},\) where \(q_{k}\) is the rate at which a particle branches into \(k\) particles. We will denote the system of branching Brownian motions by \(R\equiv\) BBM(\(R_{0},\nu,\mathbf{q}\))._ _Using stochastic calculus for general semimartingales with jumps, they show that under exponential moment assumptions on \(R_{0}\) and \(\mathbf{q}\), one can construct modified branching Brownian motions whose laws are absolutely continuous with respect to BBM(\(R_{0},\nu,\mathbf{q}\)).
In the modified branching Brownian motion, particles move according to a stochastic differential equation with an additional drift \(\tilde{v}\), along with time dependent branching rates \(\tilde{\mathbf{q}}=\{\tilde{q}_{k}(t)\}_{k\geq 1,t\geq 0}\) (see [4, Theorem 4.23] for the assumptions on \(\tilde{v}\) and \(\tilde{\mathbf{q}}(t)\))._ _If one models the trajectory of cell development considered in [28] via a suitable branching diffusion, then an optimisation problem with marginal constraints as in (22), with \(\mathbb{W}^{\sigma}\) replaced by \(R\), can be considered. In [21, Theorem 3.2.1], the optimisation problem_ \[\min\{H(Q|R):Q\in\mathcal{C}\}, \tag{23}\] _is considered, where_ \[\mathcal{C}=\left\{Q\in\mathcal{P}(\Omega):\begin{array}{c}Q_{t}=P_{t}, \forall t\in[0,1]\text{ and }Q\text{ is any modified}\\ \text{branching Brownian motion with branching mechanism }\tilde{\mathbf{q}}\end{array}\right\}.\] _It is shown that \(P\) is the unique minimizer of (23), using the fact that the Radon-Nikodym derivative of \(P\) w.r.t. \(R\) depends only on the marginals._ _Lastly, if the associated martingale problem for the branching diffusion is well-posed, then the one dimensional marginals characterise the law (see [19, Theorem 4.2]). Thus, if the Radon-Nikodym derivative between the branching diffusion and the base branching Brownian motion satisfies hypotheses (19) and (20) of Theorem 3.6 (e.g., when \(\tilde{v}\) and \(\tilde{\mathbf{q}}(t)\) are bounded and continuous), then Corollary 3.7 implies that the minimizer \(R^{\star}\) of (23) is a Markov process. One would need additional assumptions as in [21, Theorem 3.2.1] to show that the unique minimiser is the true law of the process._
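The piecewise Brownian construction of Remark 3.8(b) admits a small finite-dimensional check. The sketch below (an illustrative grid and variable names of our own choosing) writes \(Y\) at the times \(1/4,1/2,3/4,1\) as a linear map applied to independent standard normals, and compares its covariance with that of Brownian motion:

```python
import numpy as np

# grid of times; the independent copy B' takes over at t = 1/2
t = np.array([0.25, 0.5, 0.75, 1.0])

# Each Y_t is a linear combination of four i.i.d. N(0,1) variables:
# Z drives B on [0,1/2), while W0, W1, W2 drive B' with
# B'_{1/2} = sqrt(1/2) * W0 and independent sqrt(1/4)-increments after.
A = np.array([
    [0.5,          0.0, 0.0, 0.0],  # Y_{1/4} = B_{1/4} = sqrt(1/4) Z
    [0.0, np.sqrt(0.5), 0.0, 0.0],  # Y_{1/2} = B'_{1/2}
    [0.0, np.sqrt(0.5), 0.5, 0.0],  # Y_{3/4} = B'_{3/4}
    [0.0, np.sqrt(0.5), 0.5, 0.5],  # Y_1     = B'_1
])
cov = A @ A.T

# one-dimensional marginals mimic Brownian motion: Var(Y_t) = t
assert np.allclose(np.diag(cov), t)
# ... but the joint law differs: Cov(Y_{1/4}, Y_{1/2}) = 0 instead of 1/4,
# and the path of Y jumps at 1/2 a.s., so its law is singular w.r.t.
# Wiener measure even though all marginals agree
assert cov[0, 1] == 0.0
```

This makes concrete the point of the remark: a mimic with Brownian marginals need not have a law absolutely continuous with respect to Wiener measure.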
2310.15680
Dissociation and thermodynamical properties of heavy quarkonia in an anisotropic strongly coupled hot QGP: using baryonic chemical potential
We extended the recent work Phys. Rev. D 97(9), 094033 (2018) to investigate quarkonium dissociation in presence of baryonic chemical potential (mu_b) and anisotropy ({\xi}) using quasi-particle approach in hot quantum chromodynamics (QCD) medium. We have determined binding energy and thermal width of S-states of charmonia and bottomonia for n=1 and n=2 (radial quantum number) with anisotropic parameter ({\xi}) and baryonic chemical potential. We have also determined the effects of baryonic chemical potential and anisotropy on mass spectra of 1S-states of quarkonia and the results obtained were consistent with theoretical and experimental works. But the key result obtained was dissociation temperature of the S-states with the effect of {mu_b} and {\xi}. At last, we have calculated the thermodynamical properties of QGP (i.e., pressure, energy density and speed of sound) using the parameter {\xi} and {mu_b}, which is the main key to study suppression of the quarkonium with latest determined value of energy density at sqrt(s_NN) after incorporating the effect of {\xi} and (mu_b).
Siddhartha Solanki, Manohar Lal, Rishabh Sharma, Vineet Kumar Agotiya
2023-10-24T09:44:45Z
http://arxiv.org/abs/2310.15680v2
Dissociation and thermodynamical properties of heavy quarkonia in an anisotropic strongly coupled hot QGP: using baryonic chemical potential ###### Abstract We extended the recent work Phys. Rev. D 97(9), 094033 (2018) to investigate quarkonium dissociation in the presence of the baryonic chemical potential (\(\mu_{b}\)) and anisotropy (\(\xi\)) using a quasi-particle approach in the hot quantum chromodynamics (QCD) medium. We have determined the binding energy and thermal width of the S-states of charmonia and bottomonia for \(n\)=1 and \(n\)=2 (radial quantum number) with the anisotropic parameter (\(\xi\)) and baryonic chemical potential. We have also determined the effects of the baryonic chemical potential and anisotropy on the mass spectra of the 1S-states of quarkonia, and the results obtained were consistent with theoretical and experimental works. The key result obtained was the dissociation temperature of the S-states under the effect of \(\mu_{b}\) and \(\xi\). At last, we have calculated the thermodynamical properties of the QGP (i.e., pressure, energy density and speed of sound) using the parameters \(\xi\) and \(\mu_{b}\), which is the main key to study the suppression of quarkonia with the latest determined value of the energy density at \(\sqrt{s_{NN}}\) after incorporating the effect of \(\xi\) and \(\mu_{b}\). **KEYWORDS**: Momentum anisotropy, Dissociation Temperature, pressure, energy density, speed of sound, Quasi Particle Model, Thermal width, Quark-gluon Plasma and Heavy Ion Collision. ## I Introduction Experiments at the world's largest accelerators, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL), USA, and the Large Hadron Collider (LHC) at CERN, Switzerland, inferred that the quark-gluon plasma (QGP) behaves like a perfect fluid instead of a non-interacting gas of quasi-quarks and quasi-gluons, owing to the collective nature of the QGP [1; 2; 3].
Several signatures of the QGP have been identified so far, but the suppression of quark-antiquark pairs is one of the most important and confirming signals of QGP formation during non-central collisions of heavy ions [4; 5]. Matsui and Satz [6] were the first to study the dissociation of quarkonia, particularly that of charmonium (J/\(\psi\)), by invoking color screening in the deconfined state. Both experimental and theoretical studies are ongoing to explore the properties of the QGP, and a few essential refinements in QGP studies have been made during the last few decades [7; 8; 9]. It is well known that quarkonium is bound together by static gluons and acts as an independent degree of freedom [10; 11; 12; 13]. Light hadrons are emitted during the transition of a quarkonium from one state to another while passing through the QGP medium [14]. Various authors [14; 15; 16] studied the features of QCD, the theory of the strong interaction, at high temperature scales. Studies like [17; 18; 19] were dedicated to the production of quarkonium in the color evaporation model or the color singlet model. The suppression of quarkonia through coalescence or the recombination of partons can be found in [20; 21]. Due to the small velocity and large mass of the heavy quark compared to the QCD scale parameter, a non-relativistic approach is preferred to study the QGP properties [22; 23; 24]. In the non-relativistic approach, we employ a non-relativistic potential which possesses both fundamental features of QCD, i.e., asymptotic freedom and color confinement. In [25; 26; 27; 28; 29; 30; 31], the properties (including the dissociation temperature), production and suppression of quarkonia (both theoretical and experimental) have already been discussed in detail. In [32], the dissociation temperature of quarkonium states has been investigated by using a quasi-particle approach in the presence of momentum anisotropy.
In the current work, we consider the effect of momentum-space anisotropy, due to the fact that during non-central heavy-ion collisions the QGP does not possess spatial isotropy. There are also several other studies, such as [29; 30; 31; 33; 34; 35], that include the anisotropic effect to explore the QGP. The key idea in the present work is to include the effect of the baryonic chemical potential along with the anisotropic one in the hot QGP medium using the effective fugacity quasi-particle model. The effect of momentum-space anisotropy has been incorporated through the distribution function; details can be found in [10; 36; 37]. Further, the gluon propagator, and hence the dielectric permittivity, is modified in the presence of anisotropy (\(\xi\)). The effect of the chemical potential has been introduced through the quasi-particle Debye mass [38; 39]. In this work, we modified the potential accordingly. From the real part of the potential so formed, we obtained the binding energies of charmonium and bottomonium at different values of the anisotropy [12; 40; 41; 42; 43; 44]. The thermal width of quarkonia has been derived from the imaginary part of the potential [12; 40; 41; 42; 43; 44]. In studies like [45; 46; 47; 48], the authors have calculated the dissociation temperature by using the criterion of thermal width. This motivated us to study the binding energy and thermal width of quarkonia, particularly at high baryon density (baryonic chemical potential). The effects of the baryonic chemical potential and anisotropy significantly revise the values of the dissociation temperature. The thermodynamical behavior of the QGP has also been studied in the presence of \(\mu_{b}\) and \(\xi\). Various thermodynamical quantities of the QGP, such as the pressure, energy density and speed of sound, have been studied. These quantities play a vital role in the study of the suppression of quarkonia, which is regarded as the most prominent signal for the existence of the QGP. The present manuscript is organized in the following manner.
A brief discussion of the screening between the quark-antiquark pair (Debye screening) in the presence of temperature and \(\mu_{b}\) is provided in Section II. In Section III, we explain the quark-antiquark potential and its modification through the Fourier transform. We briefly explain the inclusion of momentum-space anisotropy in the medium-modified form of the Cornell potential in Section IV. The binding energies of various quarkonium states are discussed in Section V. In Section VI, the dissociation criteria are briefly explained. The thermal width of quarkonia is obtained and discussed in Section VII. The mass spectra of charmonium and bottomonium are calculated in Section VIII. The effect of \(\mu_{b}\) and \(\xi\) on the nature of the thermodynamical properties (\(C_{s}^{2}\), \(\epsilon_{s}\) and P) is discussed in Section IX. Finally, we conclude our work in Section X. ## II Study of quasi-particle Debye mass with baryonic chemical potential and temperature Unlike in quantum electrodynamics, the Debye mass (\(m_{D}\)) in QCD is non-perturbative and gauge invariant. The leading-order Debye mass in the QCD coupling at high temperature has been known for a long time and is perturbative in nature. Rebhan [49] defined the Debye mass by identifying the pole of the static propagator, which is the relevant quantity rather than the time-time component of the gluon self-energy, and obtained a Debye mass which is gauge independent. This is due to the fact that the pole of the propagator does not depend on the choice of gauge. The Debye mass of the QGP at high temperature was calculated at next-to-leading order (NLO) in the QCD coupling from the correlation of two Polyakov loops by Braaten and Nieto [50]; this result agrees with the HTL result [49].
It was pointed out by Arnold and Yaffe [51] that the physics of confined magnetic charges has to be known in order to understand the \(O(g^{2}T)\) contribution to the Debye mass in QCD; they also pointed out that the definition of the Debye mass as a pole of the gluon propagator no longer holds true. Importantly, in lattice QCD the definition of the Debye mass itself encounters difficulty, due to the fact that, unlike in QED, the electric field correlators are not gauge invariant in QCD. Proposals to address this problem have been made, based on effective theories obtained by dimensional reduction [52], on spatial correlation functions of gauge-invariant meson energies, and on the behavior of color-singlet free energies [53]. Burnier and Rothkopf [54] have attempted to define a gauge-invariant mass from a complex static in-medium heavy-quark potential obtained from lattice QCD. Several attempts have been made to capture all the interaction effects present in the hot QCD equation of state (EoS) in terms of non-interacting quasi-partons (quasi-gluons and quasi-quarks). These quasi-partons are the excitations of the interacting quarks and gluons, and there are several models describing them, such as the effective mass model [55; 56], the effective mass model with Polyakov loop [57], models based on PNJL and NJL [58], and the effective fugacity model [59; 60]. In QCD, the quasi-particle model is a phenomenological model which is widely used to describe the nonlinear behavior of the QGP near the phase transition point. In this model, a system of interacting massless quarks and gluons is described as an ideal gas of massive non-interacting quasi-particles. The mass of a quasi-particle depends on the temperature and arises due to the interaction of gluons and quarks with the surrounding medium. The quasi-particles retain the quantum numbers of the quarks and gluons [61].
In our calculation, we used the Debye mass (\(m_{D}\)) for the full QCD case, which is given by: \[\frac{m_{D}^{2}\left(T\right)}{g^{2}(T)T^{2}}=\left[\left(\frac{N_{c}}{3}\times\frac{6PolyLog[2,z_{g}]}{\pi^{2}}\right)+\left(\frac{\dot{N_{f}}}{6}\times\frac{-12PolyLog[2,-z_{q}]}{\pi^{2}}\right)\right] \tag{1}\] and \[\dot{N_{f}}\ =\ \left(N_{f}+\frac{3}{\pi^{2}}\sum\frac{\mu_{b}^{2}}{9T^{2}}\right) \tag{2}\] Here, \(g(T)\) is the temperature-dependent two-loop running coupling constant, \(N_{c}\)=3 (for \(SU(3)\)) and \(N_{f}\) is the number of flavors; the function \(PolyLog[2,z]\) has the form \(PolyLog[2,z]=\sum_{k=1}^{\infty}\frac{z^{k}}{k^{2}}\), and \(z_{g}\) and \(z_{q}\) are the quasi-gluon and quasi-quark effective fugacities, respectively. The corresponding distribution functions are isotropic in nature, \[f_{g,q}=\frac{z_{g,q}exp(-\beta p)}{(1\pm z_{g,q}exp(-\beta p))} \tag{3}\] where \(g\) stands for quasi-gluons and \(q\) for quasi-quarks. These fugacities should not be confused with those associated with any conservation law (number conservation); they have merely been introduced to encode all the interaction effects of high-temperature QCD. Both \(z_{g}\) and \(z_{q}\) have a very complicated temperature dependence and asymptotically approach the ideal value of unity [60]. The temperature dependence of \(z_{g}\) and \(z_{q}\) is well fitted by the form \[z_{g,q}=a_{q,g}\exp{\bigg{(}-\frac{b_{g,q}}{x^{2}}-\frac{c_{g,q}}{x^{4}}-\frac{d_{g,q}}{x^{6}}\bigg{)}}. \tag{4}\] (Here, \(x=T/T_{c}\) and \(a\), \(b\), \(c\) and \(d\) are fitting parameters.) This holds for both EoS1 and EoS2, where EoS1 is the \(O(g^{5})\) hot QCD EoS [51] and EoS2 is the \(O(g^{6}\ln(1/g))\) hot QCD EoS [52] in the quasi-particle description [59; 60], respectively.
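The \(PolyLog[2,z]\) function in eq.(1) is just the dilogarithm series, so the bracket of eq.(1) is easy to check numerically. The sketch below is illustrative only: the fugacities are set to their asymptotic value \(z_{g}=z_{q}=1\) rather than the fitted forms of eq.(4), which should reproduce the ideal HTL limit \(m_{D}^{2}/(g^{2}T^{2})=N_{c}/3+N_{f}/6\):

```python
import math

def polylog2(z, terms=4000):
    """PolyLog[2, z] = sum_{k>=1} z^k / k^2, convergent for |z| <= 1."""
    return sum(z**k / k**2 for k in range(1, terms + 1))

def debye_mass_sq_over_g2T2(Nc, Nf_eff, z_g, z_q):
    """Bracket of eq.(1): m_D^2(T) / (g^2(T) T^2) in the quasi-particle model."""
    gluon = (Nc / 3.0) * 6.0 * polylog2(z_g) / math.pi**2
    quark = (Nf_eff / 6.0) * (-12.0) * polylog2(-z_q) / math.pi**2
    return gluon + quark
```

With the fitted fugacities, which stay below unity near \(T_{c}\), the same bracket gives a Debye mass smaller than the ideal HTL value.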
Now, the final expression of the full QCD or quasi-particle Debye mass in terms of the baryonic chemical potential and temperature can be written as: \[\frac{m_{D}^{2}\left(T,\mu_{b}\right)}{T^{2}}=\bigg{\{}\bigg{\{}\frac{N_{c}}{3}Q_{g}^{2}\bigg{\}}+\bigg{\{}\bigg{[}\frac{N_{f}}{6}+\frac{1}{2\pi^{2}}\bigg{(}\frac{\mu_{b}^{2}}{9T^{2}}\bigg{)}\bigg{]}Q_{q}^{2}\bigg{\}}\bigg{\}} \tag{5}\] where \(\mu_{b}\) is the baryonic chemical potential, and \(Q_{g}\) and \(Q_{q}\) are the effective charges given by the equations: \[Q_{g}^{2} = g^{2}(T)\frac{6PolyLog[2,z_{g}]}{\pi^{2}}\] \[Q_{q}^{2} = g^{2}(T)\frac{-12PolyLog[2,-z_{q}]}{\pi^{2}}. \tag{6}\] In our analysis, the temperature-dependent quasi-particle Debye mass \(m_{D}^{QP}\) for the full QCD case with \(N_{f}\)=3 has been employed to deduce the binding energies and dissociation temperatures of the quarkonia states.

## III Modification of Cornell potential using Fourier transform (FT)

The velocity of the heavy quark in the bound state is small because of the large quark mass (m=\(m_{c,b}\gg\Lambda_{QCD}\)), so the binding effects in quarkonia at zero temperature can be understood in terms of non-relativistic potential models [62]. At zero temperature, the vacuum potential (Cornell potential) is given as below: \[\mathrm{V}(r)=-\frac{\alpha}{r}+\sigma r \tag{7}\] where \(\sigma\) and \(\alpha\) denote the string tension and the two-loop coupling constant, respectively.

Figure 1: Variation of real potential with distance (r in Fermi) at different values of anisotropy (left panel) and at different values of baryonic chemical potential (right panel) in both parallel and perpendicular case.

Since the vacuum potential defined by eq.(7) is valid only at zero temperature, a modification of the Cornell potential is required to study the QGP at finite temperature, and this
is done by using the Fourier transform (FT); the medium modification enters this heavy-quark potential via the FT [63] as below: \[\tilde{V}(k)=\frac{\bar{\rm V}(k)}{\epsilon(k)} \tag{8}\] where k is the Fourier conjugate of the inter-quark distance (r), and the dielectric permittivity (\(\epsilon(k)\)) is obtained from the static limit of the longitudinal part of the gluon self-energy [64; 65]: \[\epsilon(k)\equiv\left(1+\frac{m_{D}^{2}\left(T,\mu_{b}\right)}{k^{2}}\right) \tag{9}\] where \(m_{D}^{2}\left(T,\mu_{b}\right)\) denotes the quasi-particle (full QCD) Debye mass, depending on the baryonic chemical potential and temperature, as defined by eq.(5) in the second section of the manuscript. In eq.(8), \(\bar{\rm V}(k)\) is the FT of the Cornell potential. Taking the FT of the Cornell potential, eq.(7), is not an easy job, so we treat r as a distribution. The FT of the Coulombic part is then straightforward to compute, while the FT of the regularized linear part \(\sigma rexp(-\gamma r)\) is: \[FT(\sigma rexp(-\gamma r))=-\frac{i\sigma}{k\sqrt{2\pi}}\left(\frac{2}{(\gamma-ik)^{3}}-\frac{2}{(\gamma+ik)^{3}}\right) \tag{10}\] In the limit \(\gamma\to 0\), we find that the FT of \(\sigma r\) is: \[FT(\sigma r)=-\frac{4\sigma}{k^{4}\sqrt{2\pi}} \tag{11}\] The medium correction to the potential, after applying the inverse FT, reads: \[V(r)=\int\frac{d^{3}{\bf k}}{(2\pi)^{3/2}}(e^{i{\bf k}\cdot{\bf r}}-1)\tilde{V}(k) \tag{12}\] The FT of the full Cornell potential is \[\bar{\rm V}(k)=-\sqrt{\frac{2}{\pi}}\bigg{(}\frac{\alpha}{k^{2}}+2\frac{\sigma}{k^{4}}\bigg{)} \tag{13}\]

Figure 2: Variation of imaginary potential with distance (r in Fermi) at different values of anisotropy (left panel) and at different values of baryonic chemical potential (right panel) in both parallel and perpendicular case.

Now, substituting eqs.(9) and (13) into eq.(8) and employing the inverse FT, we obtain the medium-modified form of the potential [59; 64; 66], depending upon the distance (r), as
below: \[V(r,T,\mu_{b})=\left(\frac{2\sigma}{m_{D}^{2}\left(T,\mu_{b}\right)}-\alpha\right)\frac{exp(-m_{D}\left(T,\mu_{b}\right)r)}{r}\\ -\frac{2\sigma}{m_{D}^{2}\left(T,\mu_{b}\right)r}+\frac{2\sigma}{m_{D}\left(T,\mu_{b}\right)}-\alpha m_{D}\left(T,\mu_{b}\right) \tag{14}\] It is also worth noting that, in a hot QCD medium, this expression for the potential is not the same as the lattice-parametrized heavy-quark free energy in the deconfined phase (which is basically a screened Coulomb potential; for the exact form we refer the reader to Reference [67] for more details). As emphasized by Dixit [68], a one-dimensional FT of the Cornell potential in the medium yields a form similar to that used in lattice QCD to study quarkonium properties, which assumes a one-dimensional color flux tube structure. Since the flux tube structure may expand in more dimensions [67], it is better to consider the three-dimensional form of the medium-modified Cornell potential, which is exactly what has been done in the present manuscript.

## IV Quark-Antiquark Potential in Anisotropic Medium Using Baryonic Chemical Potential

The spatial anisotropy (\(\xi\)) in non-central heavy-ion collisions is generated at the early stages of the QGP. As the system evolves with time, different pressure gradients arise in different directions, which map the spatial anisotropy onto a momentum anisotropy. The anisotropy in this manuscript has been introduced at the level of the particle phase-space distribution.
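As a consistency check, eq.(14) must reduce to the vacuum Cornell potential of eq.(7) in the limit \(m_{D}\to 0\). A minimal sketch (the values of \(\alpha\) and \(\sigma\) are illustrative stand-ins, not the paper's temperature-dependent inputs; GeV-based units):

```python
import math

ALPHA = 0.3    # illustrative coupling (assumption, not the paper's fitted value)
SIGMA = 0.192  # string tension in GeV^2 (a commonly used value; assumed here)

def cornell(r):
    """Vacuum Cornell potential, eq.(7)."""
    return -ALPHA / r + SIGMA * r

def medium_potential(r, mD):
    """Isotropic medium-modified Cornell potential, eq.(14)."""
    return ((2.0 * SIGMA / mD**2 - ALPHA) * math.exp(-mD * r) / r
            - 2.0 * SIGMA / (mD**2 * r)
            + 2.0 * SIGMA / mD
            - ALPHA * mD)
```

For finite \(m_{D}\) the linear rise is screened away at large \(r\), which is the mechanism behind the weakening of the binding discussed below.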
Applying the method used in references [69; 70; 71], the anisotropic distribution function is obtained from the isotropic one by stretching or squeezing it along one direction in momentum space as: \[f(\mathbf{p})\to f_{\xi}(\mathbf{p})=C_{\xi}\ f(\sqrt{\mathbf{p}^{2}+\xi(\mathbf{p}\cdot\mathbf{\hat{n}})^{2}}) \tag{15}\] where \(f(\mathbf{p})\) represents the isotropic distribution function as in [60; 72], and \(\mathbf{\hat{n}}\) is the unit vector along the direction of momentum anisotropy [squeezing (\(\xi>0\), the oblate case) or stretching (\(-1<\xi<0\), the prolate case) along \(\mathbf{\hat{n}}\)], while \(\xi\) denotes the anisotropy of the medium. The effects of the various EoSs enter through the Debye screening mass (\(m_{D}\)). To keep the Debye mass the same in both the isotropic (\(\xi=0\)) and anisotropic (\(\xi\neq 0\)) cases [73], we use the normalization constant \(C_{\xi}\) given below: \[C_{\xi}=\left\{\begin{array}{cc}\frac{\sqrt{|\xi|}}{tanh^{-1}\sqrt{|\xi|}}&if\quad-1\leq\xi<0\\ \frac{\sqrt{\xi}}{tan^{-1}\sqrt{\xi}}&if\quad\xi\geq 0\end{array}\right. \tag{16}\] In the limit of small \(\xi\) we have: \[C_{\xi}=\left\{\begin{array}{cc}1-\frac{|\xi|}{3}+O(\xi^{\frac{5}{2}})&if\quad-1\leq\xi<0\\ 1+\frac{\xi}{3}+O(\xi^{\frac{5}{2}})&if\quad\xi\geq 0\end{array}\right. \tag{17}\] In the presence of a dissipative anisotropic hot QCD medium, we modify the potential under the assumptions given in references [74; 75; 76]. How to obtain the in-medium modification of the heavy-quark potential from the dielectric permittivity \(\epsilon(k)\) has already been discussed in detail, and the FT of the Cornell potential has already been calculated (eq.(13)). To modify the potential, the foremost task is to calculate the dielectric permittivity \(\epsilon(\mathbf{k})\).
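Eq.(16) is straightforward to evaluate; for small positive \(\xi\) it reproduces the \(1+\xi/3\) expansion of eq.(17). A minimal sketch:

```python
import math

def C_xi(xi):
    """Normalization constant of eq.(16); C_xi -> 1 as xi -> 0."""
    if xi == 0.0:
        return 1.0
    if xi > 0.0:            # oblate (squeezed) case
        s = math.sqrt(xi)
        return s / math.atan(s)
    s = math.sqrt(abs(xi))  # prolate (stretched) case
    return s / math.atanh(s)
```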
To estimate the dielectric permittivity, we have the following two approaches: (I) with the help of the gluon self-energy in finite-temperature QCD [77; 78], and (II) by the application of semi-classical transport theory [79; 80; 81]. By exploiting either of these two methods, one can find the gluon self-energy tensor (\(\Pi^{\mu\nu}\)); the static gluon propagator then represents the inelastic scattering of an off-shell gluon off thermal gluons: \[\Delta^{\mu\nu}(\omega,\mathbf{k})=k^{2}g^{\mu\nu}-k^{\mu}k^{\nu}+\Pi^{\mu\nu}(\omega,\mathbf{k}) \tag{18}\] where \(\omega\) is the frequency. The gluon self-energy tensor is symmetric and transverse in nature, i.e., \(\Pi^{\mu\nu}(\omega,\mathbf{k})=\Pi^{\nu\mu}(\omega,\mathbf{k})\), and obeys the Ward identity. \[\Pi^{\mu\nu}(\omega,\mathbf{k})=g^{2}\int\frac{d^{3}p}{(2\pi)^{3}}u^{\mu}\frac{\partial f(p)}{\partial p^{\beta}}\left[g^{\nu\beta}-\frac{u^{\nu}k^{\beta}}{uk+i\epsilon}\right] \tag{19}\] Here, \(u^{\mu}=(1,\frac{\mathbf{k}}{|\mathbf{k}|})\) is a light-like vector defining the propagation of a plasma particle in space-time, and \(f(p)\) denotes an arbitrary particle distribution function. In Fourier space, the dielectric permittivity, from which the real and imaginary parts of the potential are obtained, is related to the temporal component of the gluon propagator as: \[\epsilon^{-1}(\mathbf{k})=-\lim_{\omega\to 0}k^{2}\Delta^{00}(\omega,\mathbf{k}) \tag{20}\] where \(\Delta^{00}\) represents the static limit of the 00 component of the gluon propagator in the Coulomb gauge. After performing the calculation (shown in the Appendix), we obtain the real and imaginary parts of the temporal component of the propagator in the static limit using the quasi-particle Debye mass.
The real part of the temporal component of the retarded propagator in Fourier space, which is required to obtain the real part of the potential in the static limit [46], is given below: \[Re[\Delta_{R(A)}^{00}](\omega=0,\mathbf{k})=-\frac{1}{k^{2}+m_{D}^{2}\left(T,\mu_{b}\right)}\\ -\xi\left\{\frac{1}{3(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))}-\frac{m_{D}^{2}\left(T,\mu_{b}\right)\left(3cos2\theta_{n}-1\right)}{6(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}\right\} \tag{21}\] Similarly, the imaginary part can be derived from the imaginary part of the temporal component of the symmetric propagator [46] in the static limit, which is given below: \[Im[\Delta_{S}^{00}](\omega=0,\mathbf{k})=-\frac{\pi Tm_{D}^{2}\left(T,\mu_{b}\right)}{k(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}+\pi Tm_{D}^{2}\left(T,\mu_{b}\right)\xi\left[\frac{-1}{3k(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}+\frac{3sin^{2}\theta_{n}}{4k(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}-\frac{2m_{D}^{2}\left(T,\mu_{b}\right)(3sin^{2}\theta_{n}-1)}{3k(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{3}}\right] \tag{22}\] where \[\cos(\theta_{n})=\cos(\theta_{r})\cos(\theta_{pr})+\sin(\theta_{r})\sin(\theta_{pr})\cos(\phi_{pr}) \tag{23}\] In the above expression, \(\theta_{n}\) represents the angle between the particle momentum (\(\mathbf{p}\)) and the direction of anisotropy, \(\theta_{r}\) denotes the angle between \(\mathbf{r}\) and \(\mathbf{n}\), and \(\phi_{pr}\) and \(\theta_{pr}\) are the azimuthal and polar angles, respectively.
Next, to modify the real part of the potential, \(\epsilon(\mathbf{k})\) can be obtained by using eq.(21) in eq.(20) as: \[\epsilon^{-1}(\mathbf{k})=\frac{k^{2}}{k^{2}+m_{D}^{2}\left(T,\mu_{b}\right)}+k^{2}\xi\\ \left\{\frac{1}{3(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))}-\frac{m_{D}^{2}\left(T,\mu_{b}\right)(3cos2\theta_{n}-1)}{6(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}\right\} \tag{24}\]

Figure 3: Shows the variation of binding energy of the \(J/\psi\) (left panel) and \(\Upsilon\) (right panel) with T/\(T_{c}\) at different values of baryonic chemical potential (\(\mu_{b}\)) when the value of \(\xi\) is fixed.

Similarly, for the imaginary part, \(\epsilon(\mathbf{k})\) can be obtained by employing eq.(22) in eq.(20) as: \[\frac{\epsilon^{-1}({\bf k})}{\pi\ T\ m_{D}^{2}\left(T,\mu_{b}\right)}=\left(\frac{k^{2}}{k(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}\right)\\ -\xi k^{2}\bigg{(}\frac{-1}{3k(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}\\ +\frac{3\sin^{2}\theta_{n}}{4k(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}-\frac{2m_{D}^{2}\left(T,\mu_{b}\right)\left(3\sin^{2}(\theta_{n})-1\right)}{3k(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{3}}\bigg{)} \tag{25}\] The real and imaginary parts of the inter-quark potential can then be obtained in the static limit using \(\epsilon^{-1}({\bf k})\) [72].
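Eq.(24) can be checked directly: at \(\xi=0\) it collapses to the isotropic Debye-screened form \(k^{2}/(k^{2}+m_{D}^{2})\). A minimal sketch (all inputs in arbitrary consistent units):

```python
import math

def eps_inv_real(k, mD, xi, theta_n):
    """Real-part inverse dielectric permittivity, eq.(24)."""
    iso = k**2 / (k**2 + mD**2)
    aniso = k**2 * (1.0 / (3.0 * (k**2 + mD**2))
                    - mD**2 * (3.0 * math.cos(2.0 * theta_n) - 1.0)
                      / (6.0 * (k**2 + mD**2)**2))
    return iso + xi * aniso
```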
Using eq.(24) in eq.(12), we can write the real part of the potential: \[Re[V(r,\theta_{r},\xi,T,\mu_{b})]=\int\frac{d^{3}{\bf k}}{(2\pi)^{3/2}}(e^{i{\bf k}\cdot{\bf r}}-1)\bigg{(}-\sqrt{\frac{2}{\pi}}\frac{\alpha}{k^{2}}-\frac{4\sigma}{\sqrt{2\pi}k^{4}}\bigg{)}\left(\frac{k^{2}}{k^{2}+m_{D}^{2}\left(T,\mu_{b}\right)}\right)\\ +k^{2}\xi\left(\frac{1}{3(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))}-\frac{m_{D}^{2}\left(T,\mu_{b}\right)(3\cos 2\theta_{n}-1)}{6(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}\right)\bigg{(}-\sqrt{\frac{2}{\pi}}\frac{\alpha}{k^{2}}-\frac{4\sigma}{\sqrt{2\pi}k^{4}}\bigg{)} \tag{26}\] where s=\(rm_{D}\left(T,\mu_{b}\right)\); after considering the limit \(s\ll 1\), the solution of the above integral yields:

Figure 4: Shows the variation of binding energy of the \(\psi^{\prime}\) (left panel) and \(\Upsilon^{\prime}\) (right panel) with T/\(T_{c}\) at different values of baryonic chemical potential (\(\mu_{b}\)) where the value of \(\xi\) is fixed.

\[Re[V(r,\theta_{r},\xi,T,\mu_{b})]=\frac{s\sigma}{m_{D}\left(T,\mu_{b}\right)}\left(1+\frac{\xi}{3}\right)-\frac{\alpha m_{D}\left(T,\mu_{b}\right)}{s}\left[1+\frac{s^{2}}{2}+\xi\left(\frac{1}{3}+\frac{s^{2}}{16}\left(\frac{1}{3}+cos(2\theta_{r})\right)\right)\right] \tag{27}\] The imaginary part of the potential, using eq.(25) in eq.(12), will be: \[Im[V(r,\theta_{r},\xi,T,\mu_{b})]=\pi Tm_{D}^{2}\left(T,\mu_{b}\right)\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3/2}}(e^{i\mathbf{k}\cdot\mathbf{r}}-1)\bigg{(}-\sqrt{\frac{2}{\pi}}\frac{\alpha}{k^{2}}-\frac{4\sigma}{\sqrt{2\pi}k^{4}}\bigg{)}\left(\frac{k}{(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}\right)\\ -\pi Tm_{D}^{2}\left(T,\mu_{b}\right)\xi\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3/2}}(e^{i\mathbf{k}\cdot\mathbf{r}}-1)\\ \bigg{(}-\sqrt{\frac{2}{\pi}}\frac{\alpha}{k^{2}}-\frac{4\sigma}{\sqrt{2\pi}k^{4}}\bigg{)}\left(\frac{-k}{3(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}+\frac{3ksin^{2}\theta_{n}}{4(k^{2}+m_{D}^{2}\left(T,\mu_{b}
\right))^{2}}-\frac{2m_{D}^{2}\left(T,\mu_{b}\right)k(3sin^{2}\theta_{n}-1)}{(k^{2}+m_{D}^{2}\left(T,\mu_{b}\right))^{2}}\bigg{)} \tag{28}\] \(\mu_{b}\)=300 MeV and \(\theta\)=0\({}^{o}\) (parallel case) and \(\theta\)=90\({}^{o}\) (perpendicular case). It was observed that the real potential increases as one goes from the prolate to the oblate case. The imaginary potential decreases from the prolate to the oblate case in the parallel case, but increases in the perpendicular case. The right panels of figs.1 and 2 represent the same variation of the real and imaginary potentials at constant \(\xi\)=0.3 for different \(\mu_{b}\)=300, 1000 and 2000 MeV. It was observed that the real potential increases with \(r\) for the different baryonic chemical potentials, while the imaginary potential shows a decreasing pattern. In short, the potential (real and imaginary) has higher values in the perpendicular case (right panels). This indicates that the anisotropy and the baryonic chemical potential have a significant effect on the complex-valued potential.

## V Binding energy (b.e.) of the different quarkonium S-states

Following references [82; 83; 84], the binding energies of heavy quarkonium states in an anisotropic medium can be obtained by solving the Schrödinger equation with a first-order perturbation in the anisotropy parameter (\(\xi\)). The expression for the real part of the binding energy is written below: \[\mathrm{Re}[B.E]=\frac{m_{Q}\sigma^{2}}{m_{D}^{4}\left(T,\mu_{b}\right)n^{2}}+\alpha m_{D}\left(T,\mu_{b}\right)\\ +\frac{\xi}{3}\Big{(}\frac{m_{Q}\sigma^{2}}{m_{D}^{4}\left(T,\mu_{b}\right)n^{2}}+\alpha m_{D}\left(T,\mu_{b}\right)+\frac{2m_{Q}\sigma^{2}}{m_{D}^{4}\left(T,\mu_{b}\right)n^{2}}\Big{)}. \tag{30}\] where n=1 and n=2 correspond to the ground and first excited states of the heavy quarkonia, respectively. It should be noted here that the above expression for the binding energy (eq. 30) is applicable only to \(J/\psi\), \(\Upsilon\), \(\psi^{\prime}\) and \(\Upsilon^{\prime}\).
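Eq.(30) can be sketched as a one-liner; the defaults for \(\sigma\) and \(\alpha\) below are illustrative constants (the paper uses a temperature-dependent coupling), with \(m_{Q}\) and \(m_{D}\) in GeV. The checks confirm the trends discussed next: oblate anisotropy (\(\xi>0\)) strengthens the binding, and the \(n=2\) state is more weakly bound:

```python
def binding_energy(mQ, mD, n=1, xi=0.0, sigma=0.192, alpha=0.3):
    """Real part of the binding energy, eq.(30).

    sigma and alpha are illustrative stand-in constants; mQ, mD in GeV.
    """
    iso = mQ * sigma**2 / (mD**4 * n**2) + alpha * mD
    return iso + (xi / 3.0) * (iso + 2.0 * mQ * sigma**2 / (mD**4 * n**2))
```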
Figs.3 and 4 show the variation of the binding energy of \(J/\psi\), \(\Upsilon\), \(\psi^{\prime}\) and \(\Upsilon^{\prime}\) with T/\(T_{c}\) at different values of the baryonic chemical potential (\(\mu_{b}\)=200, 1000 and 2000 MeV) at a constant value of the anisotropy (\(\xi\)=0.3). From figs.3 and 4 we deduce that the binding energies of \(J/\psi\), \(\Upsilon\), \(\psi^{\prime}\) and \(\Upsilon^{\prime}\) decrease as the value of \(\mu_{b}\) increases. Figs.5 and 6 show the variation of the binding energy of \(J/\psi\), \(\Upsilon\), \(\psi^{\prime}\) and \(\Upsilon^{\prime}\) with T/\(T_{c}\) at different values of the anisotropy (\(\xi\)=0.3, 0 and -0.3) at a constant value of \(\mu_{b}\) (\(\mu_{b}\)=1000 MeV).

Figure 6: Shows the variation of binding energy of the \(\psi^{\prime}\) (left panel) and \(\Upsilon^{\prime}\) (right panel) with T/\(T_{c}\) at different values of anisotropy (\(\xi\)) where the value of \(\mu_{b}\) is fixed.

From figs.5 and 6, we deduce that the binding energies of \(J/\psi\), \(\Upsilon\), \(\psi^{\prime}\) and \(\Upsilon^{\prime}\) increase as the value of \(\xi\) increases. We notice that the binding energy has higher values as one moves from the prolate (\(\xi<0\)) to the oblate (\(\xi>0\)) case. In an anisotropic medium, the binding of the \(Q\bar{Q}\) pair becomes stronger with an increase in anisotropy; since the binding energy increases as one goes from the prolate to the oblate case, the quarkonium states become more strongly bound with anisotropy.

## VI Dissociation of quarkonium states in presence of \(\xi\) and baryonic chemical potential

The dissociation temperature for real binding energies can be obtained by using the thermal-energy effect. According to references [85; 86], it is not necessary for the binding energy to be zero for the dissolution of the quarkonium states.
A quarkonium state with a small binding energy (\(B.E.\leq T\)) is weakly bound and dissociates by means of thermal fluctuations. The quarkonium state is also said to be dissociated when \(2B.E.<\Gamma(T)\), where \(\Gamma(T)\) is the thermal width of the respective quarkonium state. When the binding energy of a charmonium or bottomonium state at a particular value of the temperature becomes smaller than or equal to the value of the mean thermal energy, the state is said to be dissociated; this can be estimated by using the conditions B.E.=\(T_{D}\) (for the upper bound of quarkonium dissociation) and B.E.=\(3T_{D}\) (for the lower bound of quarkonium dissociation), as can be found in [87] and references therein, written below: \[B.E_{\left(J/\psi,\Upsilon,\psi^{{}^{\prime}},\Upsilon^{{}^{\prime}}\right)}=\frac{m_{Q}\sigma^{2}}{m_{D}^{4}\left(T,\mu_{b}\right)n^{2}}+\alpha m_{D}\left(T,\mu_{b}\right)\\ +\frac{\xi}{3}\Big{(}\frac{m_{Q}\sigma^{2}}{m_{D}^{4}\left(T,\mu_{b}\right)n^{2}}+\alpha m_{D}\left(T,\mu_{b}\right)+\frac{2m_{Q}\sigma^{2}}{m_{D}^{4}\left(T,\mu_{b}\right)n^{2}}\Big{)}\\ =\left\{\begin{array}{ll}T_{D}&:For\;Upper\;bound\\ 3T_{D}&:For\;Lower\;bound\end{array}\right. \tag{31}\] Further, we have calculated the dissociation temperature by using two criteria: first by using the mean thermal energy, and second by using the thermal width. The dissociation temperatures of the quarkonium states obtained using the mean-thermal-energy criterion are listed in the tables for both the lower and upper bounds. In general, the dissociation temperature decreases with an increase in the value of \(\mu_{b}\) (i.e., \(\mu_{b}\)=200, 1000 and 2000 MeV) and

Figure 7: Shows the variation of mass spectra of the \(J/\psi\) (left panel) and \(\Upsilon\) (right panel) with T/\(T_{c}\) at different values of anisotropy (\(\xi\)) when the value of \(\mu_{b}\) is fixed.

increases with an increase in the value of \(\xi\) (i.e., \(\xi\)=-0.3, 0 and 0.3).
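The upper-bound criterion B.E.\((T_{D})=T_{D}\) of eq.(31) defines \(T_{D}\) only implicitly, so it has to be solved numerically. The sketch below uses bisection with a stand-in leading-order Debye mass \(m_{D}=gT\sqrt{N_{c}/3+N_{f}/6}\) at fixed coupling (an assumption; the paper uses the quasi-particle \(m_{D}\) of eq.(5)) and illustrative \(\sigma\), \(\alpha\):

```python
import math

def dissociation_temperature(mQ, n=1, xi=0.0, g=2.0, Nc=3, Nf=3,
                             sigma=0.192, alpha=0.3):
    """Solve B.E.(T_D) = T_D (upper bound of eq.(31)) by bisection.

    Assumes m_D = g T sqrt(Nc/3 + Nf/6) with a fixed coupling g, a
    stand-in for the quasi-particle Debye mass.  All scales in GeV.
    """
    c = g * math.sqrt(Nc / 3.0 + Nf / 6.0)

    def be(T):
        mD = c * T
        iso = mQ * sigma**2 / (mD**4 * n**2) + alpha * mD
        return iso + (xi / 3.0) * (iso + 2.0 * mQ * sigma**2 / (mD**4 * n**2))

    lo, hi = 0.05, 5.0  # B.E. > T at lo, B.E. < T at hi for these inputs
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if be(mid) > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The qualitative trends of the tables are reproduced: \(T_{D}\) is larger for bottomonium than for charmonium, and grows with \(\xi\).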
## VII Thermal width of S-states of quarkonium

As already mentioned in section-III, the quarkonium potential has both real and imaginary parts. The real part gives rise to the binding energy discussed earlier, whereas the thermal width comes from the imaginary part of the potential. The thermal width is now employed for calculating the dissociation point, by comparing twice the real binding energy with the thermal width of the quarkonium states. The thermal width can be obtained as: \[\Gamma(T)=-\int d^{3}{\bf r}\ \left|\Psi(r)\right|^{2}{\rm Im}\ V({\bf r}) \tag{32}\] where \(\Psi(r)\) is the Coulombic wave function. The Coulombic wave functions for J/\(\psi\), \(\Upsilon\), \(\psi^{\prime}\) and \(\Upsilon^{\prime}\) are given as: \[\Psi_{1S}(r) = \frac{1}{\sqrt{\pi a_{0}^{3}}}e^{\frac{-r}{a_{0}}}\] \[\Psi_{2S}(r) = \frac{1}{4\sqrt{2\pi a_{o}^{3}}}\left(2-\frac{r}{a_{o}}\right)e^{\frac{-r}{a_{0}}} \tag{33}\] \begin{table} \begin{tabular}{|l||l|l|l|} \hline \multicolumn{4}{|c|}{Temperatures are in the unit of \(T_{c}\)} \\ \hline \multicolumn{4}{|c|}{Dissociation by thermal width criteria} \\ \hline States & \(\xi\)=-0.3 & \(\xi\)=0 & \(\xi\)=0.3 \\ \(\Downarrow\) & & & \\ \hline \(J/\psi\) & 1.7766 & 1.9162 & 2.0304 \\ \(\Upsilon\) & 2.2081 & 2.3730 & 2.5253 \\ \(\Upsilon^{\prime}\) & 1.7893 & 1.8821 & 1.9162 \\ \hline \end{tabular} \end{table} Table 5: Dissociation for \(\mu_{b}\)=1000 MeV at \(T_{c}\)=197 MeV \begin{table} \begin{tabular}{|l||l|l|l|} \hline \multicolumn{4}{|c|}{Temperatures are in the unit of \(T_{c}\)} \\ \hline \multicolumn{4}{|c|}{Dissociation by thermal energy effect criteria} \\ \hline States & \(\xi\)=-0.3 & \(\xi\)=0 & \(\xi\)=0.3 \\ \(\Downarrow\) & & & \\ \hline \(J/\psi\) & 1.4593 & 1.5482 & 1.6243 \\ \(\Upsilon\) & 1.7766 & 1.9035 & 2.0050 \\ \(\Upsilon^{\prime}\) & 1.4086 & 1.5355 & 1.5931 \\ \hline \end{tabular} \end{table} Table 4: Upper bound of dissociation
for \(\mu_{b}\)=1000 MeV at \(T_{c}\)=197 MeV

Figure 9: Shows the variation of 2B.E., \(\Gamma\) of \(J/\psi\) with \(T/T_{c}\) at different values of \(\mu_{b}\) (left panel) and at different values of \(\xi\) (right panel).

\begin{table} \begin{tabular}{|l||l|l|l|} \hline \multicolumn{4}{|c|}{Temperatures are in the unit of \(T_{c}\)} \\ \hline \multicolumn{4}{|c|}{Dissociation by thermal width criteria} \\ \hline States & \(\mu_{b}\)=200 MeV & \(\mu_{b}\)=1000 MeV & \(\mu_{b}\)=2000 MeV \\ \(\Downarrow\) & & & \\ \hline \(J/\psi\) & 1.4618 & 1.4467 & 1.4082 \\ \(\Upsilon\) & 3.0775 & 2.9385 & 2.6794 \\ \(\Upsilon^{\prime}\) & 1.6127 & 1.5913 & 1.5379 \\ \hline \end{tabular} \end{table} Table 6: Dissociation for \(\xi\)=0.3 at \(T_{c}\)=197 MeV

Figure 10: Shows the variation of 2B.E., \(\Gamma\) of \(\Upsilon\) with \(T/T_{c}\) at different values of \(\mu_{b}\) (left panel) and at different values of \(\xi\) (right panel).

Thus, the dissociation width for the \(1S\)-state, up to leading logarithmic order of the imaginary potential, following reference [84], would be of the form: \[\frac{\Gamma_{1S}(T)}{m_{D}^{2}\log\left(\frac{m_{D}}{\alpha m_{Q}}\right)}=T\biggl{(}\frac{4}{\alpha m_{Q}^{2}}+\frac{12\sigma}{\alpha^{4}m_{Q}^{4}}\biggr{)}\biggl{(}1-\frac{\xi}{6}\biggr{)} \tag{36}\] Similarly, using the wave function for the 2S-state, we have: \[\Gamma_{2S}(T)=\frac{T(\xi-6)}{45\alpha^{2}m_{Q}^{2}}m_{D}^{2}\left(35(12\gamma-31)\alpha+\frac{72(160\gamma-447)\sigma}{\alpha^{2}m_{Q}^{2}}\right)+\frac{T(\xi-6)}{45\alpha^{2}m_{Q}^{2}}m_{D}^{2}\left\{60\left(7\alpha+\frac{192\sigma}{\alpha^{2}m_{Q}^{2}}\right)log\frac{\alpha m_{Q}}{2m_{D}}\right\} \tag{37}\] And the leading logarithmic order for the 2S-state is given as: \[\frac{\Gamma_{2S}(T)}{log\left(\frac{2m_{D}}{\alpha m_{Q}}\right)}=\frac{8m_{D}^{2}T}{\alpha^{4}m_{Q}^{4}}\left(1-\frac{\xi}{6}\right)(7\alpha^{3}m_{Q}^{2}+192\sigma) \tag{38}\] The dissociation temperature of the different quarkonium states by
exploiting the thermal width and twice the real binding energy is shown in fig.9 (for J/\(\psi\)), fig.10 (for \(\Upsilon\)) and fig.11 (for \(\Upsilon^{\prime}\)). The dissociation temperatures obtained from the intersection point of twice the real binding energy and the thermal width for the different states, at different values of the anisotropy (\(\xi\)) and the baryonic chemical potential, are listed in tables 5 and 6. No dissociation temperature was found for the \(\psi^{\prime}\), which is weakly bound and hence dissociates earlier than the ground state.

Figure 11: Shows the variation of 2B.E., \(\Gamma\) of \(\Upsilon^{\prime}\) with \(T/T_{c}\) at different values of \(\mu_{b}\) (left panel) and at different values of \(\xi\) (right panel).

## VIII Mass spectra of quarkonium states in the presence of anisotropy and baryonic chemical potential

The mass spectra of the \(1S\) and \(2S\) states of charmonium and bottomonium in an anisotropic medium can be calculated by using the following condition: \[M=2m_{Q}+B.E \tag{39}\] Hence, we have: \[M=2m_{Q}+\left(\frac{m_{Q}\sigma^{2}}{m_{D}^{4}n^{2}}+\alpha m_{D}+\frac{\xi}{3}\Big{(}\frac{m_{Q}\sigma^{2}}{m_{D}^{4}n^{2}}+\alpha m_{D}+\frac{2m_{Q}\sigma^{2}}{m_{D}^{4}n^{2}}\Big{)}\right) \tag{40}\] where \(m_{Q}\) is the mass of the heavy quark. Figs.7 and 8 show the variation of the mass spectra of \(J/\psi\) (left panel) and \(\Upsilon\) (right panel) with T/\(T_{c}\) at different values of the baryonic chemical potential (\(\mu_{b}\)=200, 1000 and 2000 MeV) (in fig.7) and at different values of \(\xi\) (\(\xi\)=-0.3, 0 and 0.3) (in fig.8). From figs.7 and 8 it is deduced that the mass spectra of \(J/\psi\) and

Figure 12: Variation of P/\(T^{4}\) with T/\(T_{c}\) for EoS1 at \(N_{f}\)=3 quark-gluon plasma and potential is in parallel condition (\(\theta\)=0 degree) (left panel) and right panel figure shows the inner view of the minimum separation of left panel figure.
In this figure, the black line with circles represents the results obtained from the Nilima EoS [88] and the red line with diamonds represents the results obtained from the Solanki EoS [39].

\begin{table} \begin{tabular}{|l||l|l|l|l|l|} \hline \multicolumn{6}{|c|}{Mass spectra are in the unit of GeV} \\ \hline \multicolumn{6}{|c|}{For \(m_{J/\psi}\)=1.5 GeV and \(m_{\Upsilon}\)=4.5 GeV} \\ \hline States & \(\mu_{b}\)=200 MeV & \(\mu_{b}\)=1000 MeV & \(\mu_{b}\)=2000 MeV & Theoretical & Experimental \\ \(\Downarrow\) & & & & Result [39] & Result [97] \\ \hline \(J/\psi\) & 3.520 & 3.480 & 3.391 & 3.060 & 3.096 \\ \(\Upsilon\) & 10.32 & 10.18 & 9.909 & 9.200 & 9.460 \\ \hline \end{tabular} \end{table} Table 7: Mass spectra of ground state of quarkonium at \(\xi\)=0

\(\Upsilon\) decrease as the value of \(\mu_{b}\) increases and increase as the value of \(\xi\) increases. In tables 7 and 8, we have calculated the values of the mass spectra. In table 7, we notice that as the value of \(\mu_{b}\) increases, the values of the mass spectra decrease; in table 8, we notice that as the value of \(\xi\) increases, the values of the mass spectra also increase. We have also compared the results for the mass spectra at different values of \(\mu_{b}\) (in table 7) and at different values of \(\xi\) (in table 8) with previously published theoretical [39] and experimental [97] results. From the tables we see that our mass-spectra values are approximately close to the experimental and theoretical values.

## IX Thermodynamical properties of quark-matter with anisotropic parameter (\(\xi\)) using EOS's of QGP

The EoS plays an invaluable role in understanding the behavior of the QGP produced in relativistic nucleus-nucleus collisions. The EoS is very sensitive to the matter content and is important for investigating quarkonium suppression [90; 91].
The expansion of the QGP is highly sensitive to the EoS via the speed of sound, which makes quarkonium suppression sensitive to the EoS [90; 91]. Bannur [89] created an EoS for a strongly coupled QGP by appropriately modifying the strongly coupled QED plasma, incorporating the running coupling constant and making suitable adjustments to account for the color and flavor degrees of freedom, and found a fairly good fit to the lattice findings. We now briefly go through this EoS, which is stated as a function of the plasma parameter [92]: \[\epsilon_{QED}-nT\mu_{ex}(\Gamma)=\frac{3}{2}nT \tag{41}\] where \(\frac{3}{2}nT\) is the ideal contribution and \(nT\mu_{ex}(\Gamma)\) gives the deviation from the ideal EoS, with \[\mu_{ex}(\Gamma)(1+3\times 10^{3}\Gamma^{5.7})\\ =\mu_{ex}^{Abe}(\Gamma)+3\times 10^{3}\Gamma^{5.7}\mu_{ex}^{OCP}(\Gamma) \tag{42}\] where \(\mu_{ex}^{Abe}\) is: \[\mu_{ex}^{Abe}+3\Gamma^{3}\left[\frac{3}{8}ln(3\Gamma)+\frac{\gamma}{2}-\frac{1}{3}\right]=-\frac{\sqrt{3}}{2}\Gamma^{\frac{3}{2}} \tag{43}\]

Figure 13: Variation of \(\epsilon_{s}/T^{4}\) with T/\(T_{c}\) for EoS1 at \(N_{f}\)=3 quark-gluon plasma and potential is in parallel condition (\(\theta\)=0 degree) (left panel) and right panel figure shows the inner view of the minimum separation of left panel figure. In this figure, the black line with circles represents the results obtained from the Nilima EoS [88] and the red line with diamonds represents the results obtained from the Solanki EoS [39].

The term \(\mu_{ex}^{OCP}\), determined for the one-component plasma and valid for all \(\Gamma<180\) [93], is given by: \[\mu_{ex}^{OCP}-(0.220703\Gamma^{-\frac{1}{2}}-0.86097)\\ =-0.898004\Gamma+0.96786\Gamma^{\frac{1}{4}} \tag{44}\] For the strongly coupled plasma in QCD, it is assumed that hadrons exist for \(T<T_{c}\) and the system goes over to the QGP for \(T>T_{c}\).
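Eqs.(42)-(44), together with the scaled energy density \(e(\Gamma)=1+\mu_{ex}(\Gamma)/3\) of eq.(47) below, can be checked with a short numerical sketch. The coefficients are taken exactly as printed; the expected limits are \(e(\Gamma)\to 1\) (the Stefan-Boltzmann limit) as \(\Gamma\to 0\) and \(e(\Gamma)<1\) at moderate coupling:

```python
import math

GAMMA_E = 0.5772156649  # Euler-Mascheroni constant (gamma in eq.(43))

def mu_abe(G):
    """Eq.(43): Abe's excess energy term for the weakly coupled regime."""
    return (-math.sqrt(3.0) / 2.0 * G**1.5
            - 3.0 * G**3 * (3.0/8.0 * math.log(3.0 * G) + GAMMA_E/2.0 - 1.0/3.0))

def mu_ocp(G):
    """Eq.(44): one-component-plasma fit (coefficients as printed)."""
    return (-0.898004 * G + 0.96786 * G**0.25
            + 0.220703 * G**-0.5 - 0.86097)

def mu_ex(G):
    """Eq.(42): interpolation between the Abe and OCP regimes."""
    w = 3.0e3 * G**5.7
    return (mu_abe(G) + w * mu_ocp(G)) / (1.0 + w)

def e_scaled(G):
    """Eq.(47): energy density scaled by the Stefan-Boltzmann value."""
    return 1.0 + mu_ex(G) / 3.0
```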
At \(T>T_{c}\), the system is a strongly interacting plasma of quarks and gluons with no hadrons, because the confining interaction due to the QCD vacuum is assumed to have melted [89] at \(T=T_{c}\). Hence, only the Coulomb interaction is present in the deconfined plasma phase. So the plasma parameter, which is the ratio of the particles' average potential energy to their average kinetic energy, is assumed to be weak, \(\Gamma\ll 1\), and is given by: \[\Gamma\equiv\frac{<PE>}{<KE>}=\frac{Re[V(r,T)]}{T} \tag{45}\] Finally, the EoS is obtained by using the potential of eq.(8) in the plasma parameter, after inclusion of quantum and relativistic effects, as: \[\frac{\epsilon_{s}}{nT}=(3+\mu_{ex}(\Gamma)) \tag{46}\] where \(\mu_{ex}\) remains the same as in eq.(42). The scaled energy density is now expressed in terms of the ideal contribution: \[e(\Gamma)\equiv\frac{\epsilon_{s}}{\epsilon_{SB}}=1+\frac{1}{3}\mu_{ex}(\Gamma) \tag{47}\] \[\epsilon_{SB}\equiv(16+21N_{f}/2)\pi^{2}T^{4}/30 \tag{48}\] Here, \(N_{f}\) denotes the number of quark flavors. For the \(\overline{MS}\) scheme, we have used the two-loop QCD running coupling constant [94]: \[g^{-2}(T)\approx 2b_{0}ln\frac{\bar{\mu}}{\Lambda_{\overline{MS}}}\left(1+\frac{b_{1}}{2b_{0}^{2}}\frac{ln\left(2ln\frac{\bar{\mu}}{\Lambda_{\overline{MS}}}\right)}{ln\frac{\bar{\mu}}{\Lambda_{\overline{MS}}}}\right) \tag{49}\] where \(b_{0}=\frac{33-2N_{f}}{48\pi^{2}}\) and \(b_{1}=\frac{153-19N_{f}}{384\pi^{4}}\). In the \(\overline{MS}\) scheme, \(\bar{\mu}\) and \(\Lambda_{\overline{MS}}\) are considered as the renormalization

Figure 14: Variation of \(C_{s}^{2}\) with T/\(T_{c}\) for EoS1 at \(N_{f}\)=3 quark-gluon plasma and potential is in parallel condition (\(\theta\)=0 degree) (left panel) and right panel figure shows the inner view of the minimum separation of left panel figure.
In this figure the black line with circles represents the results obtained from Nilima EoS’s [88] and the red line with bars represents the results obtained from Solanki EoS’s [39]. scale and the scale parameter, respectively: \[\bar{\mu}\exp(\gamma_{E}+c)=\Lambda_{\overline{MS}}(T),\qquad\Lambda_{\overline{MS}}(T)\exp(\gamma_{E}+c)=4\pi\Lambda_{T}. \tag{50}\] Here, \(\gamma_{E}\)=0.5772156 and \(c=\frac{N_{f}-4N_{f}\ln 4}{22N_{c}-N_{f}}\) is a constant depending on the colors and flavors. There are various uncertainties in the formula for the running coupling constant, connected with the scale parameter and the renormalization scale \(\bar{\mu}\). This problem has been circumvented by using the Brodsky–Lepage–Mackenzie criterion [95]: \(\bar{\mu}\) is permitted to vary between \(\pi T\) and 4\(\pi T\) [96]. For our purpose, we chose \(\bar{\mu}\) close to the central value 2\(\pi T_{c}\)[86] for \(N_{f}\)=0, and to \(T_{c}\) for both \(N_{f}\)=2 and \(N_{f}\)=3 flavors. When the factor \(\frac{b_{1}}{2b_{0}^{2}}\frac{\ln\left(2\ln\frac{\bar{\mu}}{\Lambda_{\overline{MS}}}\right)}{\ln\frac{\bar{\mu}}{\Lambda_{\overline{MS}}}}\) is \(\ll 1\), the expression reduces to the one used in [89] by ignoring the higher-order terms of this factor. However, this approximation does not hold for the temperature ranges employed in the computation, resulting in an error in the coupling that ultimately causes the difference in findings between our model and the Bannur model [89]. First, we computed the energy density \(\epsilon_{s}\)(T) using eq.(47) and the thermodynamic relation: \[\epsilon_{s}+P=T\frac{dP}{dT} \tag{51}\] The pressure was then calculated as: \[\frac{P}{T^{4}}=\left(\frac{P_{0}}{T_{0}}+3a_{f}\int_{T_{0}}^{T}d\tau\tau^{2}e(\Gamma(\tau))\right)/T^{3} \tag{52}\] Here, \(P_{0}\) denotes the pressure at some temperature \(T_{0}\) and \(a_{f}=(16+\frac{21}{2}N_{f})\frac{\pi^{2}}{90}\). 
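The pressure integral of eq. (52) is straightforward to evaluate numerically. The following is a minimal sketch (not from the paper): `e_of_T` is a user-supplied scaled energy density \(e(\Gamma(T))\), and an ideal-gas boundary value \(P_{0}=a_{f}T_{0}^{4}\) is assumed by default:

```python
import math

def pressure_over_T4(T, e_of_T, n_f=3, T0=1.0, P0_over_T0=None):
    """Eq. (52): P/T^4 = (P0/T0 + 3 a_f Int_{T0}^{T} tau^2 e(tau) dtau) / T^3.

    e_of_T, T0 and the boundary value are placeholders chosen for illustration.
    """
    a_f = (16.0 + 21.0 * n_f / 2.0) * math.pi ** 2 / 90.0
    if P0_over_T0 is None:
        P0_over_T0 = a_f * T0 ** 3  # assume ideal-gas pressure at T0
    # trapezoidal rule for the integral of 3 a_f tau^2 e(tau)
    n = 400
    h = (T - T0) / n
    f = lambda tau: 3.0 * a_f * tau ** 2 * e_of_T(tau)
    integral = 0.5 * h * (f(T0) + f(T)) + h * sum(f(T0 + i * h) for i in range(1, n))
    return (P0_over_T0 + integral) / T ** 3
```

With \(e\equiv 1\) (ideal gas) the result reduces to the Stefan–Boltzmann coefficient \(a_{f}\) at every temperature, which is a useful sanity check.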
Thus, the speed of sound can be evaluated once we have the pressure (P) and the energy density (\(\epsilon_{s}\)) in hand, and is given by: \[c_{s}^{2}=\frac{dP}{d\epsilon_{s}} \tag{53}\] All of the above thermodynamical properties are potential dependent, and the potential is Debye-mass dependent. In that case, we evade the problem by trading off the dependence on the baryonic chemical potential (\(\mu_{b}\)), anisotropy (\(\xi\)) and temperature for a dependence on these thermodynamic properties of matter. The thermodynamical properties of quark matter (i.e., pressure, energy density and speed of sound) play a crucial role in the study of the QGP and also provide useful information about strange quark matter. The thermodynamic behavior of QCD matter at high temperature, i.e., above the critical temperature, is currently studied by lattice QCD [98; 99]. In fig.(12) we have plotted the variation of the pressure (\(\frac{P}{T^{4}}\)) with temperature (\(T/T_{c}\)) using EoS1 for \(N_{f}\)=3 QGP along with Nilima EoS's [88] and Solanki EoS's [39]. The energy density \(\epsilon_{s}\), the speed of sound (\(C_{s}^{2}\)), and so forth can then be derived once the pressure has been obtained. In fig.(13), we have plotted the energy density (\(\frac{\epsilon_{s}}{T^{4}}\)) with temperature (\(T/T_{c}\)) using EoS1 for \(N_{f}\)=3 QGP along with Nilima EoS's [88] and Solanki EoS's [39]. In figure (14), we have plotted the speed of sound (\(C_{s}^{2}\)) with temperature (\(T/T_{c}\)) using EoS1 for \(N_{f}\)=3 QGP along with Nilima EoS's [88] and Solanki EoS's [39]. Our results for these thermodynamical properties of quark matter approximately match the results of Nilima EoS's [88] and Solanki EoS's [39] with the anisotropy parameter. The effect of anisotropy is also observed in these thermodynamical properties of quark matter, as shown in figures (12), (13) and (14). 
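Eq. (53) is just the derivative \(dP/d\epsilon_{s}\). As an illustration (our own sketch, not the paper's code), it can be taken by central differences once \(P(T)\) and \(\epsilon_{s}(T)\) are available as functions of temperature:

```python
def speed_of_sound_sq(P, eps, T, dT=1e-4):
    """c_s^2 = dP/d(eps_s), eq. (53), via central differences in T."""
    return (P(T + dT) - P(T - dT)) / (eps(T + dT) - eps(T - dT))
```

For an ideal relativistic gas \(\epsilon_{s}=3P\), so this returns \(c_{s}^{2}=1/3\) at any temperature, the ideal value that the curves in figure (15) approach from below.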
If we increase the value of the anisotropy (\(\xi\)=0 to 0.3), then P/\(T^{4}\), \(\epsilon_{s}/T^{4}\) and \(C_{s}^{2}\) also increase slightly (the right panels of figures 12, 13 and 14 show the minimum-separation region of the corresponding left panels). In figure (15), we show the variation of (\(\frac{P}{T^{4}}\)) with temperature (\(T/T_{c}\)) (left panel) and of \(\epsilon_{s}/T^{4}\) with temperature (\(T/T_{c}\)) (middle panel) using EoS1 at \(N_{f}\)=3, and of (\(C_{s}^{2}\)) with temperature (\(T/T_{c}\)) (right panel) using EoS1 at \(N_{f}\)=0, compared with the lattice QCD results [88; 89]. Since lattice QCD (LQCD) results are available only for the pure gauge theory, the comparison in figure (15) has been made for the above-mentioned value of the flavor \(N_{f}\) only. Our flavored results match the LQCD results approximately well at \(\xi\)=0 and \(\mu_{b}\)=0. The main features are the sharp rise of the curves of (\(\frac{P}{T^{4}}\)), \(\epsilon_{s}/T^{4}\) and (\(C_{s}^{2}\)) around the critical temperature, followed by a slow approach toward the ideal value. We calculate these thermodynamical properties (i.e., (\(\frac{P}{T^{4}}\)), \(\epsilon_{s}/T^{4}\) and (\(C_{s}^{2}\))) in order to calculate the hydrodynamical expansion of the quark-gluon plasma, and in future work we will extend this to the suppression of quarkonia in nuclear collisions with the effect of anisotropy and baryonic chemical potential. In summary, we have studied the dissociation pattern of quarkonia. The real part of the potential has been used in solving the Schrödinger equation to obtain the binding energies of quarkonia, and the imaginary part gives rise to the thermal width of heavy quarkonia. We observed that the binding energy decreases and the thermal width increases with increasing values of \(\mu_{b}\). However, the binding energy tends to get higher with increasing values of \(\xi\). 
In conclusion, the dissociation temperature of heavy quarkonia decreases with the baryonic chemical potential and increases with the anisotropy, as shown in tables 1 to 6. We have also calculated the values of the mass spectra, and noticed that if we increase the value of \(\mu_{b}\) then the mass spectra decrease, whereas if we increase the value of \(\xi\) then the mass spectra increase. We also extended this work to calculate the thermodynamical properties of the QGP with \(\xi\) and \(\mu_{b}\). These EoS's are important for studying the suppression phenomena in the presence of \(\xi\) and \(\mu_{b}\). Having calculated the thermodynamical properties of the QGP (i.e., pressure, energy density and speed of sound) with \(\xi\) and \(\mu_{b}\), we use them mainly for the calculation of nucleus-nucleus suppression with the effect of anisotropy and baryonic chemical potential. We found that if we increase the value of \(\xi\) from 0 to 0.3, the variation of pressure, energy density and speed of sound with T/\(T_{c}\) increases slightly. In future work, we will extend this study to calculate the survival probability of different quarkonium states in the presence of \(\xi\) and \(\mu_{b}\) at different center-of-mass energies (\(\sqrt{s_{NN}}\)); this survival probability will be calculated with respect to anisotropy, baryonic chemical potential, transverse momentum, centrality and rapidity, which are the key quantities for characterizing the medium produced during heavy-ion collisions (HICs) at the LHC and RHIC. The results of this work might be helpful for extending studies of highly dense objects like Figure 15: Variation of P/\(T^{4}\) with T/\(T_{c}\) (left panel) and \(\epsilon_{s}/T^{4}\) with T/\(T_{c}\) (middle panel) for EoS1 at \(N_{f}\)=3, and \(C_{s}^{2}\) with T/\(T_{c}\) (right panel) for EoS1 at \(N_{f}\)=0 quark-gluon plasma, with the potential in the parallel condition (\(\theta\)=0 degree). 
In this figure the black line with circles represents the lattice QCD results (for pure gauge) obtained from [88] and the blue line with diamonds represents our EoS at \(\xi\)=0 and \(\mu_{b}\)=0. \begin{table} \begin{tabular}{|l||l|l|l|l|l|} \hline \multicolumn{6}{|c|}{Mass spectra are in the unit of GeV} \\ \hline \multicolumn{6}{|c|}{For \(m_{J/\psi}\)=1.5 GeV and \(m_{\Upsilon}\)=4.5 GeV} \\ \hline States & \(\xi\)=\(-\)0.3 & \(\xi\)=0 & \(\xi\)=0.3 & Theoretical & Experimental \\ \(\Downarrow\) & & & & Result [39] & Result [97] \\ \hline \(J/\psi\) & 3.361 & 3.480 & 3.597 & 3.060 & 3.096 \\ \(\Upsilon\) & 9.864 & 10.18 & 10.53 & 9.200 & 9.460 \\ \hline \end{tabular} \end{table} Table 8: Mass spectra of the ground state of quarkonium at \(\mu_{b}\)=1000 MeV neutron stars. Since the Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) is exploring the QGP at higher baryon densities, such theoretical studies may contribute to the physics of highly dense bodies with high baryon densities. ### Acknowledgement VKA acknowledges the Science and Engineering Research Board (SERB), Project No. **EEQ/2018/000181**, New Delhi, for providing financial support. We record our sincere gratitude to the people of India for their generous support for research in the basic sciences.
2301.12876
Guiding Online Reinforcement Learning with Action-Free Offline Pretraining
Offline RL methods have been shown to reduce the need for environment interaction by training agents using offline collected episodes. However, these methods typically require action information to be logged during data collection, which can be difficult or even impossible in some practical cases. In this paper, we investigate the potential of using action-free offline datasets to improve online reinforcement learning, name this problem Reinforcement Learning with Action-Free Offline Pretraining (AFP-RL). We introduce Action-Free Guide (AF-Guide), a method that guides online training by extracting knowledge from action-free offline datasets. AF-Guide consists of an Action-Free Decision Transformer (AFDT) implementing a variant of Upside-Down Reinforcement Learning. It learns to plan the next states from the offline dataset, and a Guided Soft Actor-Critic (Guided SAC) that learns online with guidance from AFDT. Experimental results show that AF-Guide can improve sample efficiency and performance in online training thanks to the knowledge from the action-free offline dataset. Code is available at https://github.com/Vision-CAIR/AF-Guide.
Deyao Zhu, Yuhui Wang, Jürgen Schmidhuber, Mohamed Elhoseiny
2023-01-30T13:30:56Z
http://arxiv.org/abs/2301.12876v2
# Guiding Online Reinforcement Learning with Action-Free Offline Pretraining ###### Abstract Offline RL methods have been shown to reduce the need for environment interaction by training agents using offline collected episodes. However, these methods typically require action information to be logged during data collection, which can be difficult or even impossible in some practical cases. In this paper, we investigate the potential of using action-free offline datasets to improve online reinforcement learning, naming this problem Reinforcement Learning with Action-Free Offline Pretraining (AFP-RL). We introduce Action-Free Guide (AF-Guide), a method that guides online training by extracting knowledge from action-free offline datasets. AF-Guide consists of an Action-Free Decision Transformer (AFDT) implementing a variant of Upside-Down Reinforcement Learning. It learns to plan the next states from the offline dataset, and a Guided Soft Actor-Critic (Guided SAC) that learns online with guidance from AFDT. Experimental results show that AF-Guide can improve sample efficiency and performance in online training thanks to the knowledge from the action-free offline dataset. Code is available at [https://github.com/Vision-CAIR/AF-Guide](https://github.com/Vision-CAIR/AF-Guide). Machine Learning, ICML ## 1 Introduction Training a reinforcement learning agent directly in the environment from scratch can be a challenging task, as it usually requires a large number of time-consuming and costly interaction steps to explore the environment. As a result, improving sample efficiency in RL has become one of the most important directions in the reinforcement learning community. Offline reinforcement learning methods use only offline-collected episodes to train RL agents. After the offline RL training, RL agents can be further finetuned online in the environment with much fewer interaction steps. 
To apply offline RL methods, actions need to be logged when collecting the offline episodes. However, recording actions (e.g., motor torques) can be difficult or even impossible in certain practical cases, like learning from large-scale internet videos. Despite the lack of actions, these data (e.g., a video of a robot making a pizza) still hold valuable information about agents' movements and environments' transitions, which can be utilized to instruct the RL agent on the distinctions between advantageous and detrimental movements. In this paper, we explore the potential of utilizing action-free offline reinforcement learning datasets to guide online Reinforcement Learning. We name this setting Reinforcement Learning with Action-Free Offline Pretraining (AFP-RL). We propose Action-Free Guide (AF-Guide), a method that improves online training by learning to plan good target states from the action-free offline dataset. AF-Guide comprises two main components that we introduce: an Action-Free Decision Transformer (AFDT), and a Guided Soft Actor-Critic (Guided SAC). AFDT, a variant of the Upside Down Reinforcement Learning (Schmidhuber, 2019) model Decision Transformer (Chen et al., 2021), is trained on an offline dataset without actions to plan the next states based on the past states and the desired future returns. Guided SAC, a variation of SAC (Haarnoja et al., 2018; 20), follows the planning of AFDT by maintaining an additional Q function that fits an intrinsic reward built from the discrepancy between the planned state and the achieved state with zero discount factor. An overview of our method is summarized in Fig.1. Our experimental results demonstrate that AF-Guide can significantly improve sample efficiency during online training by utilizing action-free offline datasets. 
Our contribution can be summarized as follows: * We propose Reinforcement Learning with Action-Free Offline Pretraining (AFP-RL), a novel setting to study how to guide online Reinforcement Learning with action-free offline datasets. * We present Action-Free Guide (AF-Guide), a method that pretrains a model which can extract knowledge from the action-free offline dataset and conduct state-space planning to guide online policy learning. * Experimental results show that AF-Guide can benefit from the action-free offline dataset to improve sample efficiency and performance during online training. ## 2 Related Work **Offline Reinforcement Learning.** Offline reinforcement learning methods learn policies using pre-collected episodes from unknown behavior policies. Many offline RL methods, such as CQL (Kumar et al., 2020), IQL (Kostrikov et al., 2021), AWAC (Nair et al., 2020), BCQ (Fujimoto et al., 2019), and COMBO (Yu et al., 2021), have been developed from off-policy algorithms, with additional constraints to avoid out-of-distribution actions that are not covered by the dataset. Recently, Decision Transformer (Chen et al., 2021) and Trajectory Transformer (Janner et al., 2021) cast the offline RL problem as a context-conditioned sequential modeling problem and generate good actions by either conditioning on desired future return following the Upside Down Reinforcement Learning framework (Schmidhuber, 2019) or searching for a good rollout with a high future return. In our AFP-RL setting, datasets do not contain explicitly labeled actions (although the dataset may contain videos of acting agents). In this case, learning an offline policy directly is infeasible. Our method AF-Guide instead leverages action-free data to plan good target states and guide online training for improved performance. **Imitation Learning from Observation.** The target of imitation learning from observation is to learn a policy through state-only action-free demonstrations from experts. 
Imitation learning from observation methods can be broadly classified into different categories. Methods like GSP (Pathak et al., 2018) and BCO (Torabi et al., 2018) train an inverse dynamics model to infer the expert actions given state transitions. Intrinsic-reward-based methods like DeepMimic (Peng et al., 2018), Context-Aware Translation (Liu et al., 2018), and Lee et al. (2021) create surrogate reward functions to guide the online training. Other methods like GAIfO (Torabi et al., 2018), IDDM (Yang et al., 2019), and MobILE (Kidambi et al., 2021) employ adversarial learning. The difference between imitation learning from observation and our setting AFP-RL is similar to the difference between imitation learning and offline RL. In imitation learning from observation, the dataset is collected by an expert policy, and agents are trained to directly imitate the collected episodes. In contrast, episodes in AFP-RL are collected by behavior policies that may be suboptimal. As a result, directly imitating these episodes would lead to suboptimal performance. **Motion Forecasting.** Motion forecasting is the task of predicting the future motion of agents given past and context information. It helps autonomous systems like autonomous driving and robotics to foresee and avoid potential risks like collisions in advance. Recent methods for motion forecasting have explored various architectural designs. For example, Social-LSTM (Alahi et al., 2016) and Trajectron++ (Salzmann et al., 2020) are based on RNNs. Social-GAN (Gupta et al., 2018) and HalentNet (Zhu et al., 2021) benefit from generative adversarial training. Social-STGCNN (Mohamed et al., 2020) and Social-Implicit (Mohamed et al., 2022) predict the future via spatial-temporal convolution. AgentFormer (Yuan et al., 2021), mmTransformer (Liu et al., 2021), and ST-Transformer (Aksan et al., 2021) are models Figure 1: An overview of AF-Guide. 
Action-Free Decision Transformer (AFDT) is trained on the action-free offline dataset to plan the next state \(\tilde{s}_{t+1}\) given previous states and the desired return-to-go \(\hat{R}_{t}\). The guiding reward \(r_{g}\) is formed based on the negative L2 distance between the planned state \(\tilde{s}_{t+1}\) and the real state \(s_{t+1}\). In addition to SAC’s original Q function, denoted \(Q_{e}\), which fits the environment reward \(r_{e}\), Guided SAC has an additional Q function \(Q_{g}\) that fits the guiding reward \(r_{g}\) with zero discount factor to discard the future return. The policy is trained by the weighted sum over the two Q functions. based on the Transformer architecture (Vaswani et al., 2017) designed for pedestrian or vehicle trajectory prediction. Our state planner AFDT is a Transformer model. Instead of simply predicting the future states conditioned on the past, AFDT plans the future states by additionally conditioning on the desired future return in the UDRL framework (Schmidhuber, 2019). ## 3 Background **Soft Actor-Critic (SAC).** SAC is an actor-critic RL approach based on the maximum entropy framework (Haarnoja et al., 2018; 2018), which involves optimizing a Q network \(Q_{e}\)1 and the policy network \(\pi\). The Q function \(Q_{e}\) is learned with the following objective Footnote 1: We use the subscript \(e\) to denote notations related to the environment reward, and will use \(g\) to differentiate the notations related to the guiding reward (see Section 4.2). \[\min_{Q_{e}}\mathbb{E}_{\mathcal{D}_{\mathrm{online}}}\|Q_{e}(s_{t},a_{t})-Q_{e,t}^{\mathrm{target}}\|_{2}^{2} \tag{1}\] where \(\mathcal{D}_{\mathrm{online}}\triangleq\{(s_{t},a_{t},r_{e,t},s_{t+1})\}\), with state \(s_{t}\), action \(a_{t}\), environment reward \(r_{e,t}\), and next state \(s_{t+1}\), is the online replay buffer. 
\(Q_{e,t}^{\mathrm{target}}\) is the target Q value computed as follows \[Q_{e,t}^{\mathrm{target}}=r_{e,t}+\gamma\mathbb{E}_{\pi}\left[Q_{e}(s_{t+1},a_{t+1})-\alpha\log\pi(a_{t+1}|s_{t+1})\right] \tag{2}\] Here, \(\gamma\) is the discount factor and \(\alpha\) is the temperature parameter to weight the entropy. The policy network is learned by the following objective \[\min_{\pi}\mathbb{E}_{s_{t}\sim\mathcal{D}_{\mathrm{online}},a_{t}\sim\pi}\left[\alpha\log(\pi(a_{t}|s_{t}))-Q_{e}(s_{t},a_{t})\right] \tag{3}\] **Upside Down Reinforcement Learning and Decision Transformer.** Traditional Reinforcement Learning methods are trained to predict future rewards (e.g., a Q function) first and convert the prediction into rewarding actions. In contrast, the Upside Down Reinforcement Learning (UDRL) (Schmidhuber, 2019) framework takes desired future rewards as inputs to generate actions. As an instance of UDRL in the offline RL setting, Decision Transformer (DT) (Chen et al., 2021) is trained on the offline dataset to regress the current action \(a_{t}\) conditioned on the past \(K\) states \(s_{t-k:t}\), actions \(a_{t-k:t-1}\), and the future returns (named Return-To-Go, RTG) \(\hat{R}_{t-k:t}\), with \(\hat{R}_{t}=\sum_{t^{\prime}=t}^{T}r_{t^{\prime}}\). The architecture of DT is based on the language model GPT (Radford et al., 2018). When evaluated in an environment, the model is provided with an initial state \(s_{0}\) and a desired initial RTG \(\hat{R}_{0}\) to generate the first action. After executing the action \(a_{t}\) in the environment and observing the reward \(r_{t}\) and the next state \(s_{t+1}\), the RTG is updated by \(\hat{R}_{t+1}=\hat{R}_{t}-r_{t}\). The executed action \(a_{t}\), the current return-to-go \(\hat{R}_{t+1}\), and the current state \(s_{t+1}\) are then fed back into DT to infer the next action. Given a high initial RTG \(\hat{R}_{0}\), DT is able to generate good actions that lead to high future returns. 
Due to the dependence of standard DT on action labels, it can not be directly applied for Action-Free Pretraining. ## 4 Action-Free Guide **Action-Free Offline Pretraining.** In the setting of Reinforcement Learning with Action-Free Offline Pretraining (AFP-RL), an action-free offline dataset, \(\mathcal{D}=\{\tau_{1},\tau_{2},...,\tau_{N}\}\), is provided to boost the online training in the environment. The trajectories in the dataset have been pre-collected in the environment by behavior policies that are unknown to the agent. Each trajectory, \(\tau\), contains states and rewards in the format \(\tau=(s_{0},r_{0},s_{1},r_{1},...,s_{T},r_{T})\), with \(T\) time steps. Unlike traditional offline RL, where the policy is learned directly from the offline dataset, it is infeasible to learn a policy from an action-free offline dataset as it lacks the necessary action information. However, such a dataset still contains valuable information about the agent's movements and the environment's dynamics. Our proposed setting, AFP-RL, aims to leverage this information to improve online training. **Methodology Overview.** Our method, Action-Free Guide (AF-Guide), utilizes knowledge from action-free offline datasets by training an Action-Free Decision Transformer (AFDT) on these datasets to plan the next states that lead to high future returns. Then, the online agents, trained by Guided Soft Actor-Critic (Guided SAC), follow the planning with an additional Q function optimized for an intrinsic reward based on the planned states. AF-Guide is similar to the Learning-to-Think framework (Schmidhuber, 2015, 2018): Guided SAC sends queries (in this case desired future returns) into AFDT and learns to use the answers (in this case good next states) to improve its own performance. The overall methodology is illustrated in Fig.1. 
### Action-Free Decision Transformer The Action-Free Decision Transformer (AFDT) can be considered a variant of the UDRL model Decision Transformer (DT) (Chen et al., 2021) that we designed to operate on action-free offline datasets. Unlike DT, which predicts actions based on past RTGs, states, and actions, AFDT plans the next state based on previous states and RTGs only. The overall architecture of AFDT is illustrated in Fig.2. AFDT takes \(K\) steps of input, consisting of \(2K\) tokens, where each step contains a state and an RTG. Similar to DT, states and RTGs are first mapped to token embeddings via separate single-layer state and return-to-go encoders Embed\({}_{s}\) and Embed\({}_{R}\). The positional embedding mapped from time steps \(t\) by a single-layer temporal encoder Embed\({}_{t}\) is then added to the token embeddings to include temporal information, followed by layer normalization. These token embeddings are then processed by a GPT model (Radford et al., 2018). The next states are generated from the processed RTG tokens through a single-layer decoder Pred\({}_{s}\). Note that we don't predict the next state \(s_{t+1}\) directly, but rather predict the state change \(\Delta s_{t+1}=s_{t+1}-s_{t}\) first and add it back to \(s_{t}\) to obtain \(s_{t+1}\). This is a common practice in motion forecasting (e.g., Mohamed et al. (2020); Salzmann et al. (2020)) to improve the prediction accuracy and has been observed to improve the performance of our model in experiments. The algorithm of AFDT is listed in Algo.1. **Training.** At each training step, a batch of trajectories truncated to length \(K\) is randomly sampled from the dataset. Each trajectory contains states and precomputed RTGs, represented as \(\tau=(s_{t-K+1},\hat{R}_{t-K+1},...,s_{t},\hat{R}_{t})\). The model is trained autoregressively with L1 loss to predict the next state from the processed RTG token at each time step, using a causal mask to mask out future information. 
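The delta-state training target can be made concrete. The following is a minimal plain-Python sketch of the L1 objective on predicted state changes (the function and its inputs are our own illustration; in practice this would run on batched tensors inside the transformer training loop):

```python
def l1_delta_loss(pred_deltas, states):
    """Mean L1 error of predicted state changes ds_t = s_{t+1} - s_t.

    pred_deltas[t] is the model's predicted change from states[t] to states[t+1];
    states is a list of state vectors from one trajectory.
    """
    loss, count = 0.0, 0
    for t in range(len(states) - 1):
        true_delta = [b - a for a, b in zip(states[t], states[t + 1])]
        loss += sum(abs(p - d) for p, d in zip(pred_deltas[t], true_delta))
        count += len(true_delta)
    return loss / count
```

Regressing the change \(\Delta s_{t+1}\) rather than the absolute next state keeps the regression target small and centered, which is why the practice is common in motion forecasting.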
```
Input: states \(s\), returns-to-go \(\hat{R}\), time steps \(t\)
repeat
    # get positional embedding for each time step
    \(f_{t}\) = Embed\({}_{t}(t)\)
    # compute the state and return-to-go embeddings
    \(f_{s},\ f_{\hat{R}}\) = Embed\({}_{s}(s)+f_{t},\ \text{Embed}_{R}(\hat{R})+f_{t}\)
    # send to transformer in the order \((s_{0},\hat{R}_{0},s_{1},\hat{R}_{1},...)\) to plan \(\widetilde{s}_{t+1}\)
    # compute current guiding reward using Eq.4
    \(r_{g}\) = \(-\|\frac{1}{\sigma_{\mathcal{D}}}\odot(\widetilde{s}_{t+1}-s_{t+1})\|_{2}\)
    # update return-to-go (same as DT) and time step
    \(\hat{R}_{t+1}\) = \(\hat{R}_{t}\) - \(r_{e}\)
    \(t=t+1\)
until Episode is finished
```
**Algorithm 1** Action-Free Guide ### Guided Soft Actor-Critic Now we illustrate how to use the AFDT model to benefit the learning of Soft Actor-Critic (SAC) (Haarnoja et al., 2018; 20). As AFDT can conduct planning in the state space and infer the subsequent states that lead to a high future return, our idea is to guide the agent to follow AFDT's planning. Our method, named _Guided SAC_, contains the following three main procedures. **Guiding Reward.** We first design an intrinsic reward \(r_{g,t}\), named _guiding reward_, which is the discrepancy between the planned state \(\widetilde{s}_{t+1}\) inferred by AFDT and the actual state \(s_{t+1}\sim\mathrm{P}(\cdot|s_{t},a_{t})\) achieved by the agent: \[r_{g,t}=-\|\frac{1}{\sigma_{\mathcal{D}}}\odot(\widetilde{s}_{t+1}-s_{t+1})\|_{2} \tag{4}\] where \(\sigma_{\mathcal{D}}\) is the standard deviation of the states over the entire offline dataset \(\mathcal{D}\) and is used to normalize the different-scale state values on different dimensions. The process to compute the guiding reward with the AFDT model is summarized in Algo.2. **Guiding Q Function.** We then use the guiding reward \(r_{g}\) to learn the Q function. 
A common practice is to combine the intrinsic reward and the environment reward by \(r_{t}=r_{e,t}+\beta r_{g,t}\) with a coefficient \(\beta\), and use a single Q network \(Q\) to approximate the long-term future return (Schmidhuber, 1990; 1991; Houthooft et al., 2016; Pathak et al., 2017; Tao et al., 2020). However, this is not the case for the guiding reward, where _the current action should only be responsible for the next immediate result rather than all the future results_. Assume a robot gets stuck at step \(t+1\) due to a bad action \(a_{t}\) at step \(t\). A good AFDT will give the robot a low guidance reward at step \(t\) and predict a static future, resulting in high future guidance rewards for Figure 2: Action-Free Decision Transformer. The next state is planned given previous states and a desired future return, named return-to-go. getting the robot stuck in the same state. More generally, as AFDT replans the target states at every timestep, an agent missing the planned state \(\tilde{s}_{t}\) due to a bad action \(a_{t-1}\) can still reach the replanned state \(\tilde{s}_{t+1}\) at the next step and receive a high guiding reward \(r_{g,t}\), which is not desirable. Hence, the action \(a_{t-1}\) should not be credited with \(r_{g,t}\), as it did not reach the original plan \(\tilde{s}_{t}\). Therefore, to prevent the guiding reward from misleading the agent, it is more reasonable to discard the future return for the Q value calculation of the current action. Due to the reason above, we set up an additional independent Guiding Q function \(Q_{g}\) which is optimized in the same way as the original Q function \(Q_{e}\) (see Eq.1), but the target Q value only involves the immediate reward \(r_{g,t}\) without future information, which is computed as follows: \[Q_{g,t}^{\rm target}=r_{g,t} \tag{5}\] Compared to Eq.(2), here the Q target of the current action is removed from the future information by setting the discount factor \(\gamma\) to zero. 
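The pieces of Guided SAC can be contrasted as scalar helper functions. This is a simplified illustration of eqs. (4), (2), and (5), together with the combined critic \(Q=Q_{e}+\beta Q_{g}\) used for the policy update; the default values of \(\gamma\), \(\alpha\), and \(\beta\) are placeholders, and the expectation over actions and the network updates are omitted:

```python
import math

def guiding_reward(planned_next, actual_next, sigma_dataset):
    """Eq. (4): r_g = -|| (planned - actual) / sigma ||_2, elementwise division."""
    return -math.sqrt(sum(((p - a) / s) ** 2
                          for p, a, s in zip(planned_next, actual_next, sigma_dataset)))

def q_e_target(r_e, q_e_next, log_pi_next, gamma=0.99, alpha=0.2):
    """Environment-reward target, eq. (2): bootstrapped with discount gamma."""
    return r_e + gamma * (q_e_next - alpha * log_pi_next)

def q_g_target(r_g):
    """Guiding-reward target, eq. (5): discount factor zero, no bootstrapping."""
    return r_g

def combined_q(q_e, q_g, beta=3.0):
    """Critic for the policy update, Q_e + beta * Q_g; beta = 0 recovers plain SAC."""
    return q_e + beta * q_g
```

The contrast between `q_e_target` and `q_g_target` is the whole design point: only the environment critic looks past the immediate step.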
Our ablation study in Sec.5.2 demonstrates that the Guiding Q function is crucial to effective guidance. **Combined Q Function.** We finally replace the Q function \(Q_{e}\) in Eq.(3) with the following combined Q function to guide the policy learning: \[Q(s_{t},a_{t})=Q_{e}(s_{t},a_{t})+\beta Q_{g}(s_{t},a_{t}) \tag{6}\] where \(\beta\) is the coefficient. Note that when \(\beta=0\), Guided SAC degenerates to a standard SAC trained using environment rewards \(r_{e}\) and the corresponding Q function \(Q_{e}\) only. ## 5 Experiments In this section, we demonstrate the effectiveness of our approach AF-Guide for utilizing action-free offline reinforcement learning datasets in online reinforcement learning through experimental evaluation. Furthermore, we provide evidence for the validity of our design choices for the two components of AF-Guide, Action-Free Decision Transformer, and Guided SAC, through three ablation studies. Figure 3: Experimental results of our methods. Utilizing the knowledge learned from the action-free offline dataset, AF-Guide outperforms SAC in all evaluated locomotion and ball maze environments in terms of learning speed. Furthermore, while SAC struggles to complete the task of Antmaze-Umaze due to the challenging exploration, AF-Guide successfully solves it, owing to the guidance signals provided by AFDT. **Action-Free D4RL Benchmark.** To evaluate methods on AFP-RL, we build on top of the widely-used offline reinforcement learning benchmark, D4RL (Fu et al., 2020), and adapt it to the action-free reinforcement learning setting. We denote the introduced benchmark as Action-Free D4RL. The original D4RL benchmark provides offline datasets collected using various strategies across different environments. These episodes in the original D4RL datasets include state, action, and reward sequences. To create our action-free offline RL datasets, we remove the action labels from the original datasets. 
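Constructing such an action-free variant amounts to dropping the action field from each episode. A minimal sketch, assuming episodes are stored as dicts of sequences (the key names are illustrative, not necessarily D4RL's exact schema):

```python
def strip_actions(episodes, action_key="actions"):
    """Return copies of the episodes with the action field removed,
    keeping states, rewards, and any other fields intact."""
    return [{k: v for k, v in ep.items() if k != action_key}
            for ep in episodes]
```

Building new dicts (rather than mutating in place) leaves the original labeled dataset untouched, so the same data can still serve ordinary offline RL baselines.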
We evaluate six environments, including three locomotion tasks (Hopper, Halfcheetah, Walker2d), two ball maze environments (Maze2d-Medium, Maze2d-Large), and one robot ant maze environment (Antmaze-Umaze). For each locomotion task, we test our method on three different datasets: Medium, Medium-Replay, and Medium-Expert. For the environment Antmaze-Umaze, we test on two datasets: Antmaze-Umaze and Antmaze-Umaze-Diverse. There is only one dataset for each ball maze environment, where the ball navigates to random goal locations. Details of the datasets can be found in Appx.A. **Implementation Details.** The training of AF-Guide contains two stages: an offline stage training AFDT using the offline dataset and an online stage training Guided SAC in the environment. For the architecture and the training of AFDT, we follow the default hyperparameters used in the DT paper (Chen et al., 2021). The context length \(K\) is set to 20. The batch size for AFDT training is 64 and the learning rate is 1e-4 with the AdamW optimizer. In the online training stage, we set the return-to-go \(\hat{R}\) to 6000, 3600, and 5000 for Halfcheetah, Hopper, and Walker2d, respectively, the same as the values used in the original DT paper. The robot ant maze environment and the ball maze environments are not used in the original DT paper. We set \(\hat{R}\) to 1 and 5000, respectively. For the hyperparameters of Guided SAC, we follow the default hyperparameters of SAC in the widely used Stable Baselines 3 (Raffin et al., 2021) implementation. The training batch size is 256 and the learning rate is 3e-4 with the Adam optimizer. The discount factor for the environment reward is 0.99. The coefficient of the Guiding Q function \(\beta\) in Eq.6 is set to 3. More details of the hyperparameters can be found in Appx.B. ### Main Experiments **Results Analysis.** Experimental results are presented in Figure 3. We run each experiment four times, using different random seeds, and report the average and the standard deviation band. 
Our method AF-Guide, using knowledge learned from the action-free offline dataset, outperforms SAC in all the evaluated environments. In the tasks of Halfcheetah and Walker2d, AF-Guide shows a significant advantage in learning speed compared to SAC across all the three datasets. In Halfcheetah, AF-Guide demonstrates a significant improvement of 50% at 500k steps, with an achieved performance of 6000 compared to the 4000 achieved by SAC alone. Similarly, in Walker2d, AF-Guide improves the performance by 50% at 1M steps, from 2000 to 3000. Additionally, we observe that different offline datasets do not result in significant performance differences. In the tasks of Hopper, Maze2d-Medium, and Maze2d-Large, while both AF-Guide and SAC reach similar performance at 500k steps, AF-Guide converges faster. In the task of Antmaze-Umaze, SAC is unable to complete it in 1M steps, whereas AF-Guide has an 80% success rate when pretrained in the dataset Antmaze-Umaze and a 60% success rate in Antmaze-Umaze-Diverse. This is likely due to the exploration challenge faced by SAC. The robot ant in Antmaze-Umaze has 4 legs with a total of 8 joints and only receives rewards when reaching the target location, resulting in large state/action spaces and sparse reward signals. As a result, the agent trained by SAC does not receive any rewards during exploration and does not know how to move. In contrast, our guiding reward learned from the action-free offline dataset provides dense learning signals that guide the agent's motion towards the target.

Figure 4: Ablation study on the usage of guiding reward \(r_{g}\). ‘AF-Guide [SAC]’ denotes the variant adding guiding reward to the environment reward and training with SAC. The results show that AF-Guide [SAC] performs similarly to SAC in Maze2d-Medium, but does not work in Halfcheetah and Walker2d, which indicates that simply adding the guiding reward is detrimental to the policy training and verifies the effectiveness of our Guided SAC design.
Therefore, agents trained by AF-Guide can successfully solve the maze in this task.

### Ablation Study

**Do we really need Guided SAC?** In this ablation study, we investigate whether our Guided SAC with an additional Q function is necessary to process the guiding reward \(r_{g}\), or if it can be simply added to the environment reward and processed by SAC, referred to as 'AF-Guide [SAC]'. This study is conducted in the locomotion environments of Halfcheetah and Walker2d using the Medium dataset and the maze environment of Maze2d-Medium. The results, shown in Fig.4, reveal that AF-Guide [SAC] has similar performance to SAC in Maze2d-Medium and does not work at all in Halfcheetah and Walker2d, indicating that the guiding reward \(r_{g}\) does not help or even hinders the training of SAC. In contrast, the original AF-Guide with Guided SAC benefits from the guiding reward \(r_{g}\) by ignoring guiding rewards in future steps and setting the corresponding discount factor to zero. This is in line with our explanation in Sec.4.2, where we stated that high future guiding rewards are unrelated to the action quality at the current step and therefore should be ignored in the Q function. Experimental results verify the effectiveness of our Guided SAC design.

**Does AFDT plan better states than those from the behavior policy?** In this ablation study, we investigate whether the states planned by Action-Free Decision Transformer (AFDT) are superior to the original states collected by the behavior policy. To do this, we train an AFDT in an 'imitation' style by directly regressing future states based solely on past states without any return-to-go information in the offline dataset. We denote this AFDT variant as AF-Imitation. In the online training stage, AF-Imitation plans the next state based on past states only, without returns-to-go.
We refer to the method trained with the guidance of AF-Imitation as 'AF-Guide [Imi]' and evaluate it in the locomotion environments of Halfcheetah and Walker2d using the Medium-Replay dataset and the maze environment of Maze2d-Large. The results, shown in Fig.5, demonstrate that AF-Guide [Imi] performs worse than the original version in Walker2d and Maze2d-Large and is close to the original version in Halfcheetah. This verifies that AFDT can plan the next states that lead to higher future returns than the behavior policy when conditioned on a proper return-to-go. Additionally, AF-Guide [Imi] performs better than SAC in Halfcheetah and slightly better in Walker2d. While the predicted state from AF-Guide [Imi] may not be optimal, it can still benefit policy training in some cases.

Figure 5: Ablation study on the effectiveness of Action-Free Decision Transformer (AFDT). We train a variant of AFDT by regressing the behavior policy trajectories, and use this variant to guide the online training, referred to as AF-Guide [Imi]. Compared to AF-Guide, AF-Guide [Imi] performs worse in Walker2d and Maze2d-Large. This suggests that AFDT can infer the next states better than those collected by the behavior policy.

**Can Action-Free Trajectory Transformer replace Action-Free Decision Transformer?** Guided SAC is built on the predictions of AFDT, our action-free variant of the Decision Transformer (DT). In theory, AFDT can be replaced by any other sequential-modeling-based offline RL method after removing the action information. In this experiment, we replace AFDT with an action-free variant of Trajectory Transformer (TT) (Janner et al., 2021), and evaluate its performance on locomotion tasks using the Medium dataset. We denote this variant as AF-Guide [TT]. Details of AF-Guide [TT] can be found in Appx.C.
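A rough, illustrative operation count (our simplification, with assumed dimensions, not a measurement from the paper) contrasts the planning cost of the two sequence models: TT discretizes each state dimension and searches with a beam, so one planning call costs roughly one transformer forward pass per dimension, per beam candidate, per lookahead step, whereas AFDT predicts the next state in a single pass:

```python
def tt_forward_passes(state_dim, beam_width, horizon):
    """Rough count of transformer forward passes for one TT planning call
    (a simplification of the actual beam-search schedule)."""
    return state_dim * beam_width * horizon

AFDT_FORWARD_PASSES = 1  # AFDT predicts the next state in one pass

# e.g. a 17-dimensional state, beam width 4, lookahead horizon 5
print(tt_forward_passes(17, 4, 5))  # 340 passes vs. 1 for AFDT
```

This back-of-the-envelope gap is consistent with the large wall-clock difference reported below.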
Compared to DT, which generates the future in a UDRL style, TT directly rolls out future predictions via beam search and selects the predictions with the highest predicted future returns. Additionally, TT discretizes the state and action spaces dimension-by-dimension for improved prediction accuracy. Experimental results are shown in Fig.6. AF-Guide [TT] performs worse than AF-Guide in Halfcheetah, but shows a clear advantage over AF-Guide in Hopper and Walker2d. This advantage of AF-Guide [TT] in Hopper and Walker2d may be due to the better prediction quality from the dimension-wise discretization. However, the training time of AF-Guide [TT] is much longer than that of AF-Guide. As TT discretizes the state dimension-by-dimension, predicting one step of the state requires multiple forward passes of the transformer model, and to pick the best prediction, TT needs to generate multiple rollouts via beam search. This dramatically increases the computational cost of future reasoning and slows down the training. A brief comparison of the training time between AF-Guide and AF-Guide [TT] is shown in Tab.1. AF-Guide [TT] is at least 10 times slower than AF-Guide in our experiments. The more dimensions the state space has, the slower AF-Guide [TT] is. Therefore, we select Decision Transformer in our method. The experiments show that our pipeline is compatible with different sequential-modeling-based offline RL methods.

### Limitations

As an attempt to utilize action-free offline datasets for improved online learning, our method has some limitations in its current form. Firstly, our current design of AF-Guide is based on Decision Transformer following the UDRL framework (Schmidhuber, 2019). Therefore, our planning ability is also limited by Decision Transformer's design and UDRL's drawback: Strupl et al. (2022) show that UDRL may diverge from the optimal policy in an episodic setting with stochastic environments.
Note that the framework of our method AF-Guide is agnostic to the sequential planning model, as we show in the ablation study with Trajectory Transformer based on beam search instead of UDRL in Sec.5.2. Secondly, our current guiding reward is based on L2 distance, which may not be optimal in some state spaces where L2 distance doesn't represent the state similarities well, such as images. We believe that combining AF-Guide with more semantically meaningful similarity metrics can extend its applications in the future for vision, language, and other multimodal problems. We leave this for future research to explore.

\begin{table} \begin{tabular}{l l l} \hline \hline Env. & AF-Guide [TT] & AF-Guide \\ \hline Halfcheetah & \(\sim\)14 hours & \(\sim\)1 hour \\ Hopper & \(\sim\)10 hours & \(\sim\)1 hour \\ Walker2d & \(\sim\)20 hours & \(\sim\)1 hour \\ \hline \hline \end{tabular} \end{table} Table 1: Training time comparison of AF-Guide [TT] and AF-Guide for 500k environment steps in locomotion tasks. AF-Guide [TT] increases the training time dramatically, due to the huge planning cost in the original Trajectory Transformer design. Experiments are done using a single A100 GPU.

Figure 6: Ablation study on using Action-Free Trajectory Transformer to guide the training (AF-Guide [TT]). The results showed that AF-Guide [TT] had better performance in the Hopper and Walker2d tasks, but performed worse in the Halfcheetah task. These results suggest that our pipeline is compatible with different sequential-modeling-based offline RL methods, but the choice of method may impact performance depending on the specific task.

## 6 Conclusion

In this paper, we explore the potential of utilizing action-free offline datasets to guide online reinforcement learning, and denote this setting by Reinforcement Learning with Action-Free Offline Pretraining (AFP-RL).
We propose Action-Free Guide (AF-Guide), a method that learns to plan the target state from offline datasets to guide the online learning of an SAC agent. Our experimental results demonstrate that AF-Guide has better sample efficiency than SAC in various locomotion and maze environments, highlighting the benefits of incorporating action-free offline datasets. We hope our work may encourage further research in other areas where action-free offline pretraining can be an effective learning approach. These applications may include combining video prediction models with semantically meaningful similarity metrics to build guidance rewards learned from large-scale Internet data.
2306.07562
Statistical Beamformer Exploiting Non-stationarity and Sparsity with Spatially Constrained ICA for Robust Speech Recognition
In this paper, we present a statistical beamforming algorithm as a pre-processing step for robust automatic speech recognition (ASR). By modeling the target speech as a non-stationary Laplacian distribution, a mask-based statistical beamforming algorithm is proposed to exploit both its output and masked input variance for robust estimation of the beamformer. In addition, we also present a method for steering vector estimation (SVE) based on a noise power ratio obtained from the target and noise outputs in independent component analysis (ICA). To update the beamformer in the same ICA framework, we derive ICA with distortionless and null constraints on target speech, which yields beamformed speech at the target output and noises at the other outputs, respectively. The demixing weights for the target output result in a statistical beamformer with the weighted spatial covariance matrix (wSCM) using a weighting function characterized by a source model. To enhance the SVE, the strict null constraints imposed by the Lagrange multiplier methods are relaxed by generalized penalties with weight parameters, while the strict distortionless constraints are maintained. Furthermore, we derive an online algorithm based on an optimization technique of recursive least squares (RLS) for practical applications. Experimental results on various environments using CHiME-4 and LibriCSS datasets demonstrate the effectiveness of the presented algorithm compared to conventional beamforming and blind source extraction (BSE) based on ICA on both batch and online processing.
Ui-Hyeop Shin, Hyung-Min Park
2023-06-13T06:21:59Z
http://arxiv.org/abs/2306.07562v3
# Statistical Beamformer Exploiting Non-stationarity and Sparsity with Spatially Constrained ICA ###### Abstract In this paper, we present a statistical beamforming algorithm as a pre-processing step for robust automatic speech recognition (ASR). By modeling the target speech as a non-stationary Laplacian distribution, a mask-based statistical beamforming algorithm is proposed to exploit both its output and masked input variance for robust estimation of the beamformer. In addition, we also present a method for steering vector estimation (SVE) based on a noise power ratio obtained from the target and noise outputs in independent component analysis (ICA). To update the beamformer in the same ICA framework, we derive ICA with distortionless and null constraints on target speech, which yields beamformed speech at the target output and noises at the other outputs, respectively. The demixing weights for the target output result in a statistical beamformer with the weighted spatial covariance matrix (wSCM) using a weighting function characterized by a source model. To enhance the SVE, the strict null constraints imposed by the Lagrange multiplier methods are relaxed by generalized penalties with weight parameters, while the strict distortionless constraints are maintained. Furthermore, we derive an online algorithm based on an optimization technique of recursive least squares (RLS) for practical applications. Experimental results on various environments using CHiME-4 and LibriCSS datasets demonstrate the effectiveness of the presented algorithm compared to conventional beamforming and blind source extraction (BSE) based on ICA on both batch and online processing. 
Independent component analysis, beamforming, steering vector estimation, mask, robust speech recognition

## I Introduction

In order to achieve noise-robustness in automatic speech recognition (ASR), various multi-channel pre-processing methods have been adopted such as beamforming based on a steering vector [1, 2] or blind source extraction/separation (BSE/BSS) that directly extracts the target speech based on independent vector analysis (IVA) (e.g. [3, 4, 5, 6, 7]). However, it is known that distortions and artifacts accompanied by pre-processing can cause recognition performance degradation due to mismatch in an ASR model [8]. To address the degradation, the ASR model can be adapted to processed data or jointly optimized with pre-processing methods (e.g. [9, 10, 11, 12]). Nevertheless, in actual applications, we often encounter cases where a pre-processing algorithm should be fit to an elaborately trained large-scale ASR model for general purposes because of a reluctance to make additional tuning for a specific scenario. In addition, beamforming methods with distortionless constraint have successfully achieved great recognition performance by enhancing target speech with minimum distortion (e.g. [1, 2, 13]). Also, there is evidence that beamforming methods that show better performance on a fixed ASR model generally perform better even on the model adapted to enhanced data [14]. Therefore, beamforming has attracted much interest as a pre-processing method. In particular, a minimum-variance distortionless response (MVDR) beamformer is frequently adopted because it can effectively suppress the noises without distortion in the steered source signal by the Lagrange multiplier method for the distortionless constraint [15]. For the MVDR beamformer, the spatial covariance matrix (SCM) of the noise should be estimated to effectively suppress the noises.
However, because it is hard to estimate the noise SCM accurately, the minimum-power distortionless response (MPDR) beamformer can be used, which minimizes the whole input power instead [2]. As a result, the MPDR beamformer can replace the noise SCM with an SCM of observations, which is easily obtained from input data. On the other hand, combined with time-frequency (t-f) masking methods [16, 17, 18], the noise SCM of the MVDR can be effectively estimated by using noise masks representing how much the noise is included in each t-f bin. Especially, the mask-based MVDR has shown greatly improved performance with the noise masks generated from a neural network (NN) model trained for a specific dataset [16]. As an alternative, a class of statistical beamforming methods, such as maximum-likelihood distortionless response (MLDR) [13, 19], has recently been presented, showing superior performance to the MPDR. By statistical modeling of a target signal, the input SCM for the MPDR is replaced with a weighted SCM (wSCM) characterized by the weighting function from the source model. Such statistical beamforming methods integrate t-f segments "considered to be more important" with greater weights depending on how the target signal is modeled. In particular, the MLDR beamformer models the target speech as a complex Gaussian distribution with time-varying variances (TVVs). As a result, the weighting function of the wSCMs in the MLDR beamformer is calculated by the reciprocal value of the TVVs. By estimating the TVVs directly from beamforming outputs, the wSCMs are successfully obtained without prior knowledge of the target signal. The TVVs can be estimated from masked inputs if target masks are available in advance [20]. This is similar to the noise SCMs of the MVDR with the noise masks in the sense that their estimates are mainly determined by the accuracy of the masks.
Because the performance of the beamformer is unavoidably limited by the accuracy of either the beamforming outputs or the masks, this problem was recently addressed by maximum a posteriori (MAP) estimation where TVVs are estimated using a prior variance obtained from an NN mask [21]. In this paper, we present a generalized statistical beamforming algorithm for robust ASR. Specifically, we model the target source signal as a complex Laplacian distribution with TVVs, and propose a novel statistical beamforming algorithm to exploit both its output signals and target masks. Rather than introducing a prior distribution for the TVVs to use MAP [21], we directly assume the sparsity of the target source to overcome the limits caused by relying on either poor initial outputs or inaccurate masks. By assuming both the sparsity and non-stationarity of source signals, the Laplacian distribution with TVVs provides a weighting function for the wSCMs that reflects both the beamforming outputs and the TVVs estimated from the masked inputs. On the other hand, an accurate steering vector should be estimated for the beamformers. Otherwise, the output may be seriously degraded due to undesirable distortion of the target speech. Such steering vector estimation (SVE) based on covariance subtraction has been successfully achieved with a complex Gaussian mixture model (CGMM) of multi-channel input data, assuming that t-f segments of multi-channel observations can be categorized into noise components or noisy speech components [22, 23]. Recently, an efficient SVE method was proposed by directly utilizing the wSCM of the MLDR beamformer to estimate the normalized noise SCM [13]. Similarly, we introduce a method for SVE jointly updated with the beamformer in the ICA-based framework by using an output power ratio to estimate the normalized noise SCM.
This is based on an auxiliary function in ICA [24] with the distortionless constraint on the target output, in addition to null constraints for the noise outputs [25]. The resulting beamformer is derived in the same form as MLDR, as seen in [26]. Compared to direct BSE without any constraints [6, 7], such SVE followed by beamforming methods can ensure stability as pre-processing for ASR. The target SCM is obtained by subtracting a normalized noise SCM from the corresponding SCM of observations. Therefore, the performance of the proposed SVE depends on how accurately the noises are extracted, as well as the target speech. Inspired by a geometrically constrained IVA method that imposes a power penalty for steered interference [27, 28], we extend the strict constraints of the Lagrange multiplier method to hybrid constraints, using the Lagrange multiplier and the power penalty to enhance the SVE by weakening the null constraints for the noise outputs. Furthermore, an online beamforming and SVE algorithm based on an optimization technique of recursive least squares (RLS) is derived for practical applications. Evaluation of beamforming and SVE methods in terms of the word error rate (WER) on the CHiME-4 dataset [29] demonstrates the effectiveness of the proposed methods. Furthermore, to assess the versatility of the proposed methods in different environments and ASR models, we conducted an utterance-wise evaluation on the LibriCSS [30] dataset. We also simulated a non-stationary situation where the position of a speech source was changed to observe the effectiveness of the online algorithms. The additional experiment also demonstrates the superior results of the proposed methods as a pre-processing technique for ASR. The remainder of this paper is organized as follows. Section II describes the conventional and proposed beamforming methods. In Section III, we introduce an SVE method that uses a power ratio of ICA outputs. 
We then derive an online beamforming algorithm based on RLS in Section IV. The proposed methods are evaluated through experiments in Section V. Finally, Section VI provides a summary of our concluding remarks. ## II Statistical Beamforming Algorithm In real-world environments with ambient background noise, \(M\) noisy speech observations at frequency bin \(k\) and frame \(\tau\) in the short-time Fourier transform (STFT) domain, \(\mathbf{x}_{k}(\tau)\), can be expressed as \[\mathbf{x}_{k}(\tau)\!=\![X_{1k}(\tau),...,X_{Mk}(\tau)]^{T}\!\!=\!\mathbf{h} _{k}S_{k}(\tau)+\tilde{\mathbf{n}}_{k}(\tau), \tag{1}\] where \(S_{k}(\tau)\) and \(\mathbf{h}_{k}\) denote the corresponding t-f segment of target speech and its steering vector. \(\tilde{\mathbf{n}}_{k}(\tau)\) represents noise components in \(\mathbf{x}_{k}(\tau)\). If \(\mathbf{h}_{k}\) is available, conventional beamforming methods compute a beamforming output by \(Y_{k}(\tau)=\mathbf{w}_{k}^{H}\mathbf{x}_{k}(\tau)\), where \(\mathbf{w}_{k}\) is a beamforming filter optimized under the distortionless constraint of \(\mathbf{w}_{k}^{H}\mathbf{h}_{k}\!=\!1\). ### _MVDR Beamformer Based on Masks (Mask-MVDR)_ Because the MVDR beamformer minimizes the power of filtered noises \(\sum_{\tau=1}^{T}|\mathbf{w}_{k}^{H}\tilde{\mathbf{n}}_{k}(\tau)|^{2}\) where \(T\) denotes the number of frames, the filter is given by \[\mathbf{w}_{k}=\frac{\mathbf{V}_{N,k}^{-1}\mathbf{h}_{k}}{\mathbf{h}_{k}^{H} \mathbf{V}_{N,k}^{-1}\mathbf{h}_{k}}, \tag{2}\] where \(\mathbf{V}_{N,k}\) is the noise SCM defined by \(\mathbf{V}_{N,k}=\frac{1}{T}\sum_{\tau=1}^{T}\tilde{\mathbf{n}}_{k}(\tau) \tilde{\mathbf{n}}_{k}^{H}(\tau)\). If the target mask \(\mathcal{M}_{k}(\tau)\) is available, \(\mathbf{V}_{N,k}\) can be effectively estimated by \[\mathbf{V}_{N,k}=\frac{1}{T}\sum_{\tau=1}^{T}\left(1-\mathcal{M}_{k}(\tau) \right)\mathbf{x}_{k}(\tau)\mathbf{x}_{k}^{H}(\tau). 
\tag{3}\]

### _MLDR Beamformer Based on Statistical Modeling_

Instead of the MVDR beamformer that requires the masks, a class of statistical beamforming methods including MLDR was presented based on the probabilistic modeling of the target signal. The MLDR assumes that target speech follows a complex Gaussian distribution with TVVs \(\lambda_{k}(\tau)\)[19]: \[q\left(Y_{k}(\tau)\right)\propto\frac{1}{\lambda_{k}(\tau)}\exp{\left(-\frac{|Y_{k}(\tau)|^{2}}{\lambda_{k}(\tau)}\right)}. \tag{4}\] Therefore, the Lagrangian function based on the negative log-likelihood function of (4) with the distortionless constraint is given by \[Q_{k}^{(Y)}=\mathbf{w}_{k}^{H}\mathbf{V}_{k}\mathbf{w}_{k}+a_{k}^{(l)}(\mathbf{w}_{k}^{H}\mathbf{h}_{k}-1), \tag{5}\] where \(a_{k}^{(l)}\) is a Lagrange multiplier and \(\mathbf{V}_{k}\) is a wSCM of \[\mathbf{V}_{k}=\frac{1}{T}\sum_{\tau=1}^{T}\frac{\mathbf{x}_{k}(\tau)\mathbf{x}_{k}^{H}(\tau)}{\lambda_{k}(\tau)}. \tag{6}\] By minimizing (5), the MLDR beamforming filter is given by [13, 19] \[\mathbf{w}_{k}=\frac{\mathbf{V}_{k}^{-1}\mathbf{h}_{k}}{\mathbf{h}_{k}^{H}\mathbf{V}_{k}^{-1}\mathbf{h}_{k}}. \tag{7}\] When the TVVs \(\lambda_{k}(\tau)\) are set to a constant, the distribution becomes stationary Gaussian. Therefore, the wSCM becomes the input SCM, which corresponds to the MPDR beamforming filter [2]. The TVV can be directly estimated using the beamforming output. A moving average of the output powers at adjacent frames can also improve the robust estimation of the TVVs by improving the temporal continuity of source signals [19]: \[\lambda_{k}(\tau)=\frac{1}{2\tau_{0}+1}\sum_{t=\tau-\tau_{0}}^{\tau+\tau_{0}}|Y_{k}(t)|^{2}, \tag{8}\] where \(2\tau_{0}+1\) is the number of adjacent frames to be averaged. Therefore, the MLDR beamformer requires initialization of the beamforming weights \(\mathbf{w}_{k}\) or TVVs \(\lambda_{k}(\tau)\) before alternately updating \(\mathbf{w}_{k}\) and \(\lambda_{k}(\tau)\).
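The alternating updates of Eqs.(6)–(8) can be sketched for a single frequency bin in NumPy; the toy data, the initialization, the iteration count, and the variance floor are our own choices (the derivation itself does not specify them), with \(\tau_0=0\) in Eq.(8):

```python
import numpy as np

rng = np.random.default_rng(0)
M, T = 3, 200
# Toy complex observations x_k(t) and a unit-norm steering vector h_k.
X = rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))
h = np.ones(M, dtype=complex) / np.sqrt(M)

w = h.copy()                                  # initialize the beamformer
for _ in range(5):
    Y = w.conj() @ X                          # outputs Y_k(t) = w^H x(t)
    lam = np.maximum(np.abs(Y) ** 2, 1e-6)    # TVVs from outputs, Eq.(8), tau_0 = 0
    V = (X / lam) @ X.conj().T / T            # weighted SCM, Eq.(6)
    Vinv_h = np.linalg.solve(V, h)
    w = Vinv_h / (h.conj() @ Vinv_h)          # MLDR filter, Eq.(7)

print(abs(w.conj() @ h))                      # ≈ 1.0: distortionless, w^H h = 1
```

The final check confirms that the distortionless constraint \(\mathbf{w}_{k}^{H}\mathbf{h}_{k}=1\) holds by construction at every iteration.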
If the target mask \(\mathcal{M}_{k}(\tau)\) is available, the TVVs can be estimated using the masks (Mask-MLDR) [20]: \[\lambda_{k}(\tau)=\frac{1}{2\tau_{0}+1}\underset{t=\tau-\tau_{0}}{\overset{\tau+\tau_{0}}{\sum}}\mathcal{M}_{k}(t)\overline{|X_{k}(t)|}^{2}, \tag{9}\] where \(\overline{|X_{k}(\tau)|}\) is the median value of \(\{|X_{mk}(\tau)|,\ m=1,\cdots,M\}\). This Mask-MLDR beamformer does not require iterative updates because the TVVs are uniquely estimated by (9). To incorporate the mask-based TVVs of (9) into the source model of target speech, a conjugate prior can be assumed as the inverse-gamma distribution \(IG(\lambda_{k}(\tau);\alpha_{\lambda},\beta_{\lambda})=\beta_{\lambda}^{\alpha_{\lambda}}\Gamma(\alpha_{\lambda})^{-1}\lambda_{k}(\tau)^{-(\alpha_{\lambda}+1)}\mathrm{exp}(-\beta_{\lambda}/\lambda_{k}(\tau))\) with \(\beta_{\lambda}\) set to the TVVs from (9). The TVV can be obtained by MAP using the masked inputs as a prior (Mask-P-MLDR) [21]: \[\lambda_{k}(\tau)=\frac{1}{2\tau_{0}+1}\underset{t=\tau-\tau_{0}}{\overset{\tau+\tau_{0}}{\sum}}\frac{|Y_{k}(t)|^{2}+\mathcal{M}_{k}(t)\overline{|X_{k}(t)|}^{2}}{\alpha_{\lambda}+2}, \tag{10}\] which reflects both masked inputs and beamforming outputs. \(\alpha_{\lambda}\) is simply set to 1.

### _Generalized Statistical Beamforming_

In the MLDR beamforming, the target source model of (4) can be generalized to a super-Gaussian distribution by \[q\left(Y_{k}\left(\tau\right)\right)\propto\frac{1}{\lambda_{k}\left(\tau\right)}\mathrm{exp}\left(-\frac{|Y_{k}\left(\tau\right)|^{2\beta}}{\lambda_{k}^{\beta}\left(\tau\right)}\right), \tag{11}\] where \(\beta\) is a shape parameter, and \(0<\beta<1\) is required for super-Gaussianity.
Then, the wSCM in (6) is extended to \[\mathbf{V}_{k}=\frac{1}{T}\sum_{\tau=1}^{T}\phi_{k}(\tau)\mathbf{x}_{k}\left(\tau\right)\mathbf{x}_{k}^{H}\left(\tau\right), \tag{12}\] where \(\phi_{k}(\tau)\) is the weighting function1 given by \[\phi_{k}(\tau)=\beta|Y_{k}(\tau)|^{2\beta-2}/\lambda_{k}^{\beta}(\tau). \tag{13}\] For the Gaussian model of (4) with \(\beta=1\), it reduces to \[\phi_{k}(\tau)=1/\lambda_{k}(\tau), \tag{14}\] which makes (12) equal to (6) of conventional MLDR beamforming.

Footnote 1: The weighting function in the auxiliary function for ICA/IVA is determined by \(\phi_{k}(\tau)=G_{k}^{\prime}(r_{k}(\tau))/r_{k}(\tau)\), where \(G_{k}(r_{k}(\tau))\) is a function of a real-valued scalar \(r_{k}(\tau)\) satisfying \(G_{k}(r_{k}(\tau))=G(Y_{k}(\tau))\), and \(G(Y_{k}(\tau))\) is given by \(-\log q(Y_{k}(\tau))\). Refer to [5] and [24] for details.

Strictly speaking, unlike the MPDR and MLDR, the Mask-MVDR beamformer does not have a weighting function \(\phi_{k}(\tau)\) because its output filter is not obtained from the wSCM \(\mathbf{V}_{k}\) based on statistical modeling but from an estimate of the noise SCM in (3). Nevertheless, one may consider the noise SCM as a virtual wSCM by regarding the weighting function for the Mask-MVDR as \(1-\mathcal{M}_{k}(\tau)\).

### _Proposed Sparse MLDR Beamformer Using a Complex Laplacian Distribution with TVVs_

As a target source model, we can consider both the non-stationarity and sparsity of speech by a complex Laplacian distribution as \[q\left(Y_{k}(\tau)\right)\propto\frac{1}{\lambda_{k}\left(\tau\right)}\exp\left(-\frac{|Y_{k}\left(\tau\right)|}{\sqrt{\lambda_{k}\left(\tau\right)}}\right), \tag{15}\] where the source variances are assumed to be time-varying and \(\beta=1/2\). Then, the weighting function is given as \[\phi_{k}(\tau)=\frac{1}{2\sqrt{\lambda_{k}(\tau)}\left|Y_{k}(\tau)\right|}. \tag{16}\] It is noted that \(\phi_{k}(\tau)\) in (16) contains both the TVV and the beamforming output.
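Equation (16) can be evaluated directly once the outputs and TVVs are in hand. The following NumPy sketch (toy values, with a small \(\epsilon\) added for numerical safety, which the derivation does not include) computes the sparse weighting and the corresponding wSCM of (12):

```python
import numpy as np

def sparse_weights(Y, lam, eps=1e-12):
    """Eq.(16): phi = 1 / (2 * sqrt(lambda) * |Y|), elementwise over frames."""
    return 1.0 / (2.0 * np.sqrt(lam) * np.abs(Y) + eps)

def weighted_scm(X, phi):
    """Eq.(12): V = (1/T) * sum_t phi(t) x(t) x(t)^H for one frequency bin."""
    T = X.shape[1]
    return (X * phi) @ X.conj().T / T

Y = np.array([2.0, 1.0])      # beamforming outputs at two frames (toy)
lam = np.array([1.0, 4.0])    # TVVs, e.g. from masked inputs
phi = sparse_weights(Y, lam)
print(phi)                    # [1/(2*1*2), 1/(2*2*1)] = [0.25, 0.25]
```

Note how a frame is down-weighted either by a large output magnitude or by a large masked-input variance, which is exactly the two-sided dependence the text points out.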
Although the TVV can be estimated by \(\lambda_{k}(\tau)=|Y_{k}(\tau)|^{2}/4\), the TVV estimate directly obtained from \(Y_{k}(\tau)\) makes no meaningful difference between (16) and (14) with TVVs estimated by (8). Instead, if the target mask is available, we can estimate the TVV with the moving average by \[\lambda_{k}(\tau)=\frac{1}{4(2\tau_{0}+1)}\underset{t=\tau-\tau_{0}}{\overset{ \tau+\tau_{0}}{\sum}}\mathcal{M}_{k}(t)\overline{|X_{k}(t)|}^{2}. \tag{17}\] Then, \(\phi_{k}(\tau)\) in (16) utilizes target speech estimates from both \(Y_{k}(\tau)\) as the beamforming output and observations masked by the target masks \(\mathcal{M}_{k}(\tau)\). We refer to this beamformer as mask-based sparse MLDR (Mask-S-MLDR). With \(\beta=1/2\), the weighting function considers the beamforming outputs and masked inputs with the same exponent. One may obtain a better beamformer when \(\beta\) is a different value between 0 and 1. However, for a distribution with TVVs, the performance difference according to the value of \(\beta\) was not critical in our experience. Therefore, the Laplacian distribution in (15) is considered to obtain a mathematically simple beamformer. ## III Steering Vector Estimation Based on ICA with the Spatial Constraints ### _BSE of Target Speech without SVE Based on IVA_ The formulation for the beamforming problem can be extended to model the spatial mixing process of noises as well as target speech in BSE formulation. The \(M\) noisy speech observations, \(\mathbf{x}_{k}(\tau)\), can be re-modeled as \[\mathbf{x}_{k}(\tau)=\mathbf{A}_{k}\begin{bmatrix}S_{k}(\tau)\\ \mathbf{n}_{k}(\tau)\end{bmatrix}, \tag{18}\] where \(\mathbf{n}_{k}(\tau)\) denotes a source vector of \(\tilde{\mathbf{n}}_{k}(\tau)\). The mixing environment is assumed to be determined for simplicity. 
Then, let us consider the demixing model: \[\begin{bmatrix}Y_{k}(\tau)\\ \mathbf{z}_{k}(\tau)\end{bmatrix}=\mathbf{W}_{k}\mathbf{x}_{k}(\tau), \tag{19}\] where \(\mathbf{W}_{k}=[\mathbf{w}_{1k},...,\mathbf{w}_{Mk}]^{H}\) denotes a demixing matrix. Without loss of generality, let us assume that \(Y_{k}(\tau)\!=\!S_{k}(\tau)\) whereas \(\mathbf{z}_{k}(\tau)\) represents noise outputs. Then, assuming different source models for target speech and noises, the likelihood function for the target sources can be extended to include the one for noise sources as \[Q_{k}\!=\!\frac{1}{2}\mathbf{w}_{1k}^{H}\mathbf{V}_{k}\mathbf{w}_{1k}\!+\!\frac{1}{2}\!\!\sum_{m=2}^{M}\!\mathbf{w}_{mk}^{H}\mathbf{V}_{\mathbf{z},k}\mathbf{w}_{mk}\!-\!\log|\det\!\mathbf{W}_{k}|, \tag{20}\] which leads to the auxiliary function of ICA [24]. \(\mathbf{V}_{\mathbf{z},k}\) is the wSCM for noises obtained by replacing \(\phi_{k}(\tau)\) with \(\phi_{\mathbf{z},k}(\tau)\) in (12). In the conventional BSE, target speech is directly extracted by optimizing (20) with a source modeled as a multivariate Gaussian distribution in the IVA [5] as \[q(Y_{1}(\tau),...,Y_{K}(\tau))\propto\frac{1}{\tilde{\lambda}^{K}(\tau)}\exp\!\left(\!-\frac{\sum_{k=1}^{K}|Y_{k}(\tau)|^{2}}{\tilde{\lambda}(\tau)}\right) \tag{21}\] to address the random permutation problem without the explicit spatial constraints. \(\tilde{\lambda}(\tau)\) is a shared TVV along frequency bins which is estimated by \(\tilde{\lambda}(\tau)=\frac{1}{K}\sum_{k=1}^{K}|Y_{k}(\tau)|^{2}\). Then, the weighting function is given as \(\phi_{k}(\tau)=1/\tilde{\lambda}(\tau)\) [5]. Noises are often assumed to follow a stationary Gaussian distribution (e.g. [6, 7, 21, 26]). Then, the weighting function for the noises is given by \(\phi_{\mathbf{z},k}(\tau)=1\). Such a distinct modeling enforces the target speech to be extracted at a desired channel.
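The frequency-shared weighting induced by the IVA source model (21) can be sketched as follows; the toy output matrix is our own illustration:

```python
import numpy as np

def iva_weights(Y):
    """Frequency-shared weighting of the IVA model (21): the TVV
    tilde-lambda(tau) is the output power averaged over the K frequency
    bins, and phi(tau) = 1/tilde-lambda(tau) is shared by all bins,
    which ties the bins together and avoids the permutation problem."""
    lam = np.mean(np.abs(Y) ** 2, axis=0)   # tilde-lambda(tau), shape (T,)
    return 1.0 / np.maximum(lam, 1e-12)     # phi(tau), identical for every k

Y = np.array([[1.0, 2.0],
              [1.0, 0.0]])                  # K = 2 bins, T = 2 frames (toy)
print(iva_weights(Y))                       # [1/1, 1/2] = [1.0, 0.5]
```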
### _Conventional SVE Based on Covariance Subtraction_

Although the conventional BSE approaches can directly extract the target speech as well as the mixing matrix (the steering vector) based on the frequency-shared source models [5], their lack of stability, especially on real-recorded data, may make them unsuitable as pre-processing for ASR. Rather, a distinct SVE followed by linearly constrained beamforming can provide more stable performance for robust ASR. After modeling noisy speech observations \(\mathbf{x}_{k}(\tau)\) by a CGMM, the steering vector \(\mathbf{h}_{k}\) has been effectively estimated by the principal eigenvector of the target SCM2. Since target speech is uncorrelated with noises, the target SCM can be estimated by the subtraction given by Footnote 2: An estimate of \(\mathbf{h}_{k}\) is normalized by the L2-norm \(\|\mathbf{h}_{k}\|_{2}\) for stable estimation. Also, the phase of \(\mathbf{h}_{k}\) is aligned across the frequency bins as \(\mathbf{h}_{k}/e^{j\theta_{1k}}\), where \(\theta_{1k}\) is the phase of the reference (first) channel. \[\mathbf{R}_{\mathbf{s},k}=\mathbf{R}_{\mathbf{x},k}-\mathbf{R}_{\mathbf{n},k}, \tag{22}\] where \(\mathbf{R}_{\mathbf{s},k}\), \(\mathbf{R}_{\mathbf{n},k}\), and \(\mathbf{R}_{\mathbf{x},k}\) denote the normalized SCMs of the target, noises, and observations at frequency bin \(k\), respectively. \(\mathbf{R}_{\mathbf{x},k}\) and \(\mathbf{R}_{\mathbf{n},k}\) can be estimated by \[\mathbf{R}_{\mathbf{x},k} =\frac{1}{T}\sum_{\tau=1}^{T}\mathbf{x}_{k}(\tau)\mathbf{x}_{k}^{H}(\tau), \tag{23}\] \[\mathbf{R}_{\mathbf{n},k} =\frac{1}{\sum_{\tau=1}^{T}\!r_{\mathbf{n},k}(\tau)}\!\!\!\sum_{\tau=1}^{T}\!r_{\mathbf{n},k}(\tau)\mathbf{x}_{k}(\tau)\mathbf{x}_{k}^{H}(\tau), \tag{24}\] where \(r_{\mathbf{n},k}(\tau)\) denotes a ratio representing the noise contribution in the observations, which can be estimated based on the CGMM [22, 23].
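The covariance-subtraction SVE of (22)-(24) can be sketched as below; this is a minimal NumPy sketch under the paper's assumptions (a single frequency bin, a known noise ratio per frame), and the function and variable names are our own:

```python
import numpy as np

def steering_vector_cov_subtraction(X, r_n):
    """Covariance-subtraction SVE, eqs. (22)-(24): subtract the noise SCM from
    the observation SCM and take the principal eigenvector as h_k."""
    M, T = X.shape                                 # X[:, tau] = x_k(tau)
    R_x = (X @ X.conj().T) / T                     # eq. (23)
    R_n = (X * r_n) @ X.conj().T / r_n.sum()       # eq. (24), r_n weights frames
    R_s = R_x - R_n                                # eq. (22)
    _, vecs = np.linalg.eigh(R_s)                  # Hermitian eigendecomposition
    h = vecs[:, -1]                                # principal eigenvector
    return h / np.linalg.norm(h)                   # L2 normalization (footnote 2)
```

With noise-only frames weighted by `r_n`, the subtraction leaves an approximately rank-one target SCM whose dominant eigenvector points in the steering-vector direction.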
Replacing \(r_{\mathbf{n},k}(\tau)\) with the weighting function \(\phi_{k}(\tau)\) may efficiently lead to \(\mathbf{R}_{\mathbf{n},k}\)[13]: \[\mathbf{R}_{\mathbf{n},k}=\frac{1}{\sum_{\tau=1}^{T}\!\phi_{k}(\tau)}\!\!\sum_{\tau=1}^{T}\phi_{k}(\tau)\mathbf{x}_{k}(\tau)\mathbf{x}_{k}^{H}(\tau). \tag{25}\] Because the weighting function \(\phi_{k}(\tau)\) is, for example, calculated as the reciprocal value of the TVV in the Gaussian source model, it emphasizes t-f components with small TVVs \(\lambda_{k}(\tau)\), which correspond to noise-dominant segments, and vice versa [13, 19]. Therefore, the wSCM \(\mathbf{V}_{k}\) effectively estimates the noise SCM by accumulating the contributions of noise. On the other hand, the ratio \(r_{\mathbf{n},k}(\tau)\) can be estimated by \(r_{\mathbf{n},k}(\tau)=1-\mathcal{M}_{k}(\tau)\) if the target mask is available. However, to improve the SVE further, the target mask can be effectively incorporated with SVE methods based on covariance subtraction (CGMM, wSCM, and the proposed methods explained later) by using masked observations to suppress corrupted t-f segments in (23), (24), and (25), as in [13, 19].

### _Proposed SVE Using Output Powers of ICA with the Spatial Constraints_

However, estimating \(\mathbf{R}_{\mathbf{n},k}\) based on \(\phi_{k}(\tau)\) might be unstable because its value range is so wide that \(\mathbf{R}_{\mathbf{n},k}\) can excessively depend on some large values of the weighting function \(\phi_{k}(\tau)\). Although the conventional SVE methods are effective, the ratio \(r_{\mathbf{n},k}(\tau)\) can be more elaborately estimated by utilizing the noise outputs as well. In other words, the ratio \(r_{\mathbf{n},k}(\tau)\) can be calculated using the ICA outputs from (19).
After \(\mathbf{W}_{k}\) is updated, \(Y_{k}(\tau)\) and \(\mathbf{z}_{k}(\tau)\) obtained by (19) should be normalized to estimate an accurate power ratio by the minimal distortion principle [31]: \[\text{diag}\left(\mathbf{W}_{k}^{-1}\right)\begin{bmatrix}Y_{k}(\tau)\\ \mathbf{z}_{k}(\tau)\end{bmatrix}=\begin{bmatrix}\hat{S}_{k}(\tau)\\ \hat{\mathbf{n}}_{k}(\tau)\end{bmatrix}, \tag{26}\] where \(\text{diag}(\cdot)\) returns the diagonal matrix with the off-diagonal components set to zero. Using the normalized ICA outputs, the ratio can be estimated by \[r_{\mathbf{n},k}(\tau)=\frac{\|\hat{\mathbf{n}}_{k}(\tau)\|_{2}^{2}}{|\hat{S}_{k}(\tau)|^{2}+\|\hat{\mathbf{n}}_{k}(\tau)\|_{2}^{2}}, \tag{27}\] where \(\|\cdot\|_{2}\) denotes the L2-norm of a vector. In particular, the auxiliary function of (20) can be augmented by the distortionless and null constraints to extract and cancel the target speech at the target and noise outputs regarding the steering vector \(\mathbf{h}_{k}\). In (18), when the mixing matrix \(\mathbf{A}_{k}=[\mathbf{h}_{k}|\mathbf{D}_{k}]\), where \(\mathbf{D}_{k}\) is an \(M\!\times\!(M\!-\!1)\) matrix such that \(\hat{\mathbf{n}}_{k}(\tau)=\mathbf{D}_{k}\mathbf{n}_{k}(\tau)\), \(\mathbf{x}_{k}(\tau)\) is expressed as \(\mathbf{x}_{k}(\tau)=\mathbf{h}_{k}S_{k}(\tau)+\mathbf{D}_{k}\mathbf{n}_{k}(\tau)\). Because \(\mathbf{A}_{k}\mathbf{e}_{1}\!=\!\mathbf{h}_{k}\), where \(\mathbf{e}_{m}\) denotes an \(M\!\times\!1\) unit vector whose \(m\)-th element is unity, we can obtain \(Y_{k}(\tau)\!=\!S_{k}(\tau)\) similar to the conventional beamforming by introducing the distortionless constraint \(\mathbf{w}_{1k}^{H}\mathbf{h}_{k}=1\) in the BSE formulation of (19). In addition, the null constraints for the noises are given as \(\mathbf{w}_{mk}^{H}\mathbf{h}_{k}=0,\ 2\leq m\leq M\)[25].
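The per-frame normalization of (26) and the power ratio of (27) can be sketched as follows; this is a minimal NumPy sketch for a single frame and frequency bin, and the function name `noise_power_ratio` is our own:

```python
import numpy as np

def noise_power_ratio(W, x):
    """Normalize the ICA outputs of one frame by the minimal distortion
    principle (26) and compute the noise power ratio r_n of eq. (27)."""
    y = W @ x                              # [Y; z] as in eq. (19)
    scales = np.diag(np.linalg.inv(W))     # diagonal entries of W^{-1}
    outputs = scales * y                   # [S_hat; n_hat], eq. (26)
    s_pow = np.abs(outputs[0]) ** 2        # target power |S_hat|^2
    n_pow = np.sum(np.abs(outputs[1:]) ** 2)  # noise power ||n_hat||_2^2
    return n_pow / (s_pow + n_pow)
```

The returned ratio lies in [0, 1] and approaches 1 in noise-dominant frames, which is the behavior (25) exploits when it accumulates noise contributions.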
Accordingly, instead of initializing the demixing matrix \(\mathbf{W}_{k}\) to the identity matrix as in conventional ICA and IVA for BSS, we can initialize it by \[\mathbf{W}_{k}=[\mathbf{h}_{k}|\mathbf{e}_{2},...,\mathbf{e}_{M}]^{-1}\,, \tag{28}\] which is the inverse of the identity matrix with \(\mathbf{e}_{1}\) replaced by the steering vector \(\mathbf{h}_{k}\). Therefore, similarly to (5), the auxiliary function of (20) can be extended to a Lagrangian function with the Lagrange multipliers \(a_{mk}^{(l)}\) for the constraints (ICA-LC): \[Q_{k}^{(l)}\!=Q_{k}+a_{1k}^{(l)}(\mathbf{w}_{1k}^{H}\mathbf{h}_{k}\!-\!1)+\sum_{m=2}^{M}a_{mk}^{(l)}\mathbf{w}_{mk}^{H}\mathbf{h}_{k}. \tag{29}\] Then, optimizing the Lagrangian function with respect to \(\mathbf{w}_{1k}\) using the Lagrange multiplier \(a_{1k}^{(l)}\) gives (See Appendix A.) \[\mathbf{w}_{1k}=\frac{\mathbf{V}_{k}^{-1}\mathbf{h}_{k}}{\mathbf{h}_{k}^{H}\mathbf{V}_{k}^{-1}\mathbf{h}_{k}}, \tag{30}\] which has the same form as (7). The statistical beamforming filter of (30) based on the ICA with distortionless and null constraints reduces to the same form as MLDR beamforming, which is similarly derived in [26]. The noise outputs can be obtained by updating \(\mathbf{w}_{mk},2\!\leq\!m\!\leq\!M\) from (29), and its derivation can be found in Appendix A. Consequently, we can calculate the ratio \(r_{\mathbf{n},k}(\tau)\) from the beamforming output as well as the noise outputs canceling the target speech, which enables the joint updates of the beamforming weights and the steering vector in the same ICA framework. On the other hand, in multi-dimensional ICA [32], noises were modeled by a multidimensional subspace expressed as \(q\left(\mathbf{z}_{k}\left(\tau\right)\right)\propto\exp\left(-\left\|\mathbf{z}_{k}(\tau)\right\|_{2}\right)\), which is a spherical multivariate Laplacian distribution. Its weighting function is derived as \[\phi_{\mathbf{z},k}(\tau)=\frac{1}{2\|\mathbf{z}_{k}(\tau)\|_{2}}.
\tag{31}\] The weighting function modeled by multi-dimensional ICA may deal with various diffuse noises by reflecting the sparsity of the noises.

### _A Proposed Hybrid Approach to Impose the Spatial Constraints_

With incorrect steering vector estimates before convergence, the strict spatial constraints by Lagrange multipliers may prevent finding the correct demixing matrix \(\mathbf{W}_{k}\). Instead, one can consider milder constraints. In [27, 28], spatial constraints (corresponding to the distortionless constraint for target speech and null constraints for noises) by the steering vector \(\mathbf{h}_{k}\) were imposed for better separation, which can be formulated by the auxiliary function augmented with the power penalty terms of the spatial constraints (ICA-PC): \[Q_{k}^{(p)}\!=Q_{k}+a_{1k}^{(p)}|\mathbf{w}_{1k}^{H}\mathbf{h}_{k}-1|^{2}+a_{\mathbf{z},k}^{(p)}\!\sum_{m=2}^{M}\!|\mathbf{w}_{mk}^{H}\mathbf{h}_{k}|^{2}, \tag{32}\] where \(a_{1k}^{(p)}\) and \(a_{\mathbf{z},k}^{(p)}\) are parameters determining the weights for the distortionless and null constraints for target speech and noises, respectively. However, a slight distortion in the estimated speech may result in considerable recognition performance degradation in robust ASR, as mentioned above. Consequently, we propose a hybrid approach for beamforming by imposing the strict distortionless constraint based on a Lagrange multiplier and more flexible null constraints based on power penalties in the auxiliary function (ICA-HC): \[Q_{k}^{(h)}\!=Q_{k}+a_{1k}^{(l)}(\mathbf{w}_{1k}^{H}\mathbf{h}_{k}-1)+a_{\mathbf{z},k}^{(p)}\!\sum_{m=2}^{M}\!\!\left|\mathbf{w}_{mk}^{H}\mathbf{h}_{k}\right|^{2}\!. \tag{33}\] Through a derivation and some approximation, the target weights \(\mathbf{w}_{1k}\) can still be updated by using (30) as in ICA-LC.
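The distortionless target-weight update of (30) can be sketched as below; this is a minimal NumPy sketch, the function name `distortionless_weights` is our own, and \(\mathbf{V}^{-1}\mathbf{h}\) is obtained with a linear solve rather than an explicit inverse:

```python
import numpy as np

def distortionless_weights(V, h):
    """Target filter of eq. (30): w_1 = V^{-1} h / (h^H V^{-1} h), the
    MLDR-form solution of the Lagrangian-constrained auxiliary function."""
    Vinv_h = np.linalg.solve(V, h)        # V^{-1} h without forming V^{-1}
    return Vinv_h / (h.conj() @ Vinv_h)   # enforce w^H h = 1
```

By construction the result satisfies the distortionless constraint \(\mathbf{w}_{1k}^{H}\mathbf{h}_{k}=1\) exactly, whatever Hermitian positive-definite wSCM is supplied.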
On the other hand, updates of the noise weights \(\mathbf{w}_{mk},\ 2\leq m\leq M,\) are exactly the same as in ICA-PC: \[\mathbf{H}_{\mathbf{z},k} = \mathbf{V}_{\mathbf{z},k}+a_{\mathbf{z},k}^{(p)}\mathbf{h}_{k}\mathbf{h}_{k}^{H}, \tag{34}\] \[\bar{\mathbf{w}}_{mk} = (\mathbf{W}_{k}\mathbf{H}_{\mathbf{z},k})^{-1}\,\mathbf{e}_{m}, \tag{35}\] \[\mathbf{w}_{mk} = \bar{\mathbf{w}}_{mk}/\sqrt{\bar{\mathbf{w}}_{mk}^{H}\mathbf{H}_{\mathbf{z},k}\bar{\mathbf{w}}_{mk}},\ 2\leq m\leq M. \tag{36}\]

## IV Derivation of an RLS-Based Online Algorithm for Beamforming and SVE

We also present a frame-by-frame online RLS algorithm for ICA-based beamforming and SVE using the formulation \(Y_{k}(t;t-1)=\mathbf{w}_{1k}^{H}(t-1)\mathbf{x}_{k}(t)\) with the target weights updated at the \((t-1)\)-th frame.

### _An RLS-Based Online MLDR Beamformer_

From the cost for the batch method of (5), the RLS-based MLDR cost at the \(t\)-th frame is expressed with forgetting factor \(\alpha\) as \[\tilde{Q}_{k}^{(l)}(t)\!=\!\mathbf{w}_{k}^{H}(t)\mathbf{V}_{k}(t)\mathbf{w}_{k}(t)+a_{k}^{(l)}(t)(\mathbf{w}_{k}^{H}(t)\mathbf{h}_{k}(t)-1), \tag{37}\] where the wSCM at the \(t\)-th frame is calculated by \[\mathbf{V}_{k}(t)=\frac{1}{\sum_{\tau=1}^{t}\alpha^{t-\tau}}\sum_{\tau=1}^{t}\alpha^{t-\tau}\phi_{k}(\tau)\mathbf{x}_{k}(\tau)\mathbf{x}_{k}^{H}(\tau). \tag{38}\] Therefore, \(\mathbf{V}_{k}(t)\) can be recursively obtained as \[\mathbf{V}_{k}(t)=\rho(t)\mathbf{V}_{k}(t-1)+(1-\rho(t))\phi_{k}(t)\mathbf{x}_{k}(t)\mathbf{x}_{k}^{H}(t), \tag{39}\] where \(\rho(t)=1-1/\sum_{\tau=1}^{t}\alpha^{t-\tau}\). The inverse matrix \(\mathbf{U}_{k}(t)=\mathbf{V}_{k}^{-1}(t)\) can be directly updated by using the matrix inversion lemma [33]: \[\mathbf{U}_{k}(t) = \frac{1}{\rho(t)}\mathbf{U}_{k}(t\!-\!1) \tag{40}\] \[-\frac{\mathbf{U}_{k}(t\!-\!1)\mathbf{x}_{k}(t)\mathbf{x}_{k}^{H}(t)\mathbf{U}_{k}^{H}(t\!-\!1)}{\rho^{2}(t)/((1\!-\!\rho(t))\phi_{k}(t))
+\!\rho(t)\mathbf{x}_{k}^{H}(t)\mathbf{U}_{k}(t\!-\!1)\mathbf{x}_{k}(t)}.\] Then, the update of the beamforming filter at the \(t\)-th frame can be obtained by \[\mathbf{w}_{k}(t)=\frac{\mathbf{U}_{k}(t)\mathbf{h}_{k}(t)}{\mathbf{h}_{k}^{H}(t)\mathbf{U}_{k}(t)\mathbf{h}_{k}(t)}. \tag{41}\] Also, if the weighting function is \(\phi_{k}(t)=1/\lambda_{k}(t)\) with the Gaussian distribution of (4), \(\lambda_{k}(t)\) can be recursively estimated by \[\lambda_{k}(t)=\gamma\lambda_{k}(t-1)+(1-\gamma)|Y_{k}(t;t-1)|^{2}, \tag{42}\] where \(\gamma\) is a smoothing factor. Note that we use \(Y_{k}(t;t-1)\) obtained by the previous beamforming filter \(\mathbf{w}_{k}(t-1)\) because \(Y_{k}(t;t)=\mathbf{w}_{k}^{H}(t)\mathbf{x}_{k}(t)\) is not yet available. If the target mask is available, it can be estimated by replacing \(|Y_{k}(t;t-1)|^{2}\) with \(\mathcal{M}_{k}(t)|\overline{X_{k}(t)}|^{2}\) in (42), similar to (9) in batch processing. Or, based on MAP, we can update \(\lambda_{k}(t)\) using both values as \[\lambda_{k}(t)\!=\!\gamma\lambda_{k}(t\!-\!1)+(1\!-\!\gamma)\frac{\mathcal{M}_{k}(t)|\overline{X_{k}(t)}|^{2}\!+\!|Y_{k}(t;t\!-\!1)|^{2}}{\alpha_{\lambda}+2}, \tag{43}\] which corresponds to (10) in Mask-P-MLDR. For the Mask-S-MLDR beamformer using the Laplacian distribution of (15), the weighting function at the \(t\)-th frame is calculated as \[\phi_{k}(t)=\frac{1}{2\sqrt{\lambda_{k}(t)}\left|Y_{k}(t;t-1)\right|}. \tag{44}\] From (17), the TVVs of the Mask-S-MLDR beamformer at the \(t\)-th frame can be easily updated by \[\lambda_{k}(t)=\gamma\lambda_{k}(t-1)+\frac{1\!-\!\gamma}{4}\mathcal{M}_{k}(t)|\overline{X_{k}(t)}|^{2}. \tag{45}\]

### _RLS-Based Online Updates for SVE with ICA-HC_

Unlike the conventional online IVA algorithm [34], which uses auto-regressive estimation of the wSCM, we derive an RLS-based online update that yields a more general online beamformer.
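The rank-one inverse update of (40), which is reused for both \(\mathbf{U}_{k}(t)\) and \(\mathbf{U}_{\mathbf{z},k}(t)\), can be sketched as follows; this is a minimal NumPy sketch and the function name `rls_inverse_update` is our own:

```python
import numpy as np

def rls_inverse_update(U_prev, x, phi, rho):
    """One step of eq. (40): update U = V^{-1} via the matrix inversion lemma
    for the recursion V(t) = rho*V(t-1) + (1-rho)*phi*x*x^H of eq. (39)."""
    Ux = U_prev @ x
    denom = rho ** 2 / ((1.0 - rho) * phi) + rho * (x.conj() @ Ux)
    return U_prev / rho - np.outer(Ux, Ux.conj()) / denom
```

Compared with inverting (39) directly at every frame, this rank-one update avoids the cubic cost of a full matrix inversion.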
The auxiliary function at the \(t\)-th frame is defined as \[\tilde{Q}_{k}(t)\!=\!\mathbf{w}_{1k}^{H}(t)\mathbf{V}_{k}(t)\mathbf{w}_{1k}(t)\!+\!\!\sum_{m=2}^{M}\!\mathbf{w}_{mk}^{H}(t)\mathbf{V}_{\mathbf{z},k}(t)\mathbf{w}_{mk}(t)\\ -\log|\det\mathbf{W}_{k}(t)|. \tag{46}\] Also, the update equations of \(\mathbf{V}_{\mathbf{z},k}(t)\) and \(\mathbf{U}_{\mathbf{z},k}(t)=\mathbf{V}_{\mathbf{z},k}^{-1}(t)\) can be obtained by replacing \(\phi_{k}(\tau)\) with \(\phi_{\mathbf{z},k}(\tau)\) in (39) and (40), respectively. For the online updates of the wSCMs, the weighting function of (31) can be computed by \[\phi_{\mathbf{z},k}(t)=1/(2\|\mathbf{z}_{k}(t;t-1)\|_{2}). \tag{47}\] For the online updates of the weights in ICA-HC, the augmented auxiliary function is given as \[\tilde{Q}_{k}^{(h)}(t)\!=\!\tilde{Q}_{k}(t)\!+\!a_{1k}^{(l)}(t)(\mathbf{w}_{1k}^{H}(t)\mathbf{h}_{k}(t)\!-\!1)\!+\!a_{\mathbf{z},k}^{(p)}\!\!\sum_{m=2}^{M}\!|\mathbf{w}_{mk}^{H}(t)\mathbf{h}_{k}(t)|^{2}. \tag{48}\] Similar to Subsection III-D, the update rules of the noise weights can be easily induced as follows: \[\mathbf{H}_{\mathbf{z},k}(t) = \mathbf{V}_{\mathbf{z},k}(t)+a_{\mathbf{z},k}^{(p)}\mathbf{h}_{k}(t)\mathbf{h}_{k}^{H}(t), \tag{49}\] \[\mathbf{G}_{\mathbf{z},k}^{(p)}(t) = \mathbf{U}_{\mathbf{z},k}(t)\!-\!\frac{\mathbf{U}_{\mathbf{z},k}(t)\mathbf{h}_{k}(t)\mathbf{h}_{k}^{H}(t)\mathbf{U}_{\mathbf{z},k}(t)}{1/a_{\mathbf{z},k}^{(p)}+\mathbf{h}_{k}^{H}(t)\mathbf{U}_{\mathbf{z},k}(t)\mathbf{h}_{k}(t)}, \tag{50}\] \[\tilde{\mathbf{w}}_{mk}(t) = \mathbf{G}_{\mathbf{z},k}^{(p)}(t)\mathbf{A}_{k}(t)\mathbf{e}_{m}, \tag{51}\] \[\mathbf{w}_{mk}(t) = \tilde{\mathbf{w}}_{mk}(t)\!/\!\sqrt{\tilde{\mathbf{w}}_{mk}^{H}(t)\mathbf{H}_{\mathbf{z},k}(t)\tilde{\mathbf{w}}_{mk}(t)},\,2\leq m\leq M, \tag{52}\] where \(\mathbf{G}_{\mathbf{z},k}^{(p)}(t)\!=\!\mathbf{H}_{\mathbf{z},k}^{-1}(t)\).
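The noise-weight update of (49)-(52) can be sketched as below; this is a minimal NumPy sketch for one frequency bin and one frame, the function and argument names are our own, and for clarity \(\mathbf{U}_{\mathbf{z}}\) is recomputed from \(\mathbf{V}_{\mathbf{z}}\) instead of being tracked recursively:

```python
import numpy as np

def noise_weight_update(V_z, A, h, a_null, m):
    """Noise weight update, eqs. (49)-(52): form the penalized matrix H,
    invert it via a rank-one (Sherman-Morrison) correction, then scale
    w_m so that w_m^H H w_m = 1."""
    U_z = np.linalg.inv(V_z)
    H = V_z + a_null * np.outer(h, h.conj())                            # eq. (49)
    Uh = U_z @ h
    G = U_z - np.outer(Uh, Uh.conj()) / (1.0 / a_null + h.conj() @ Uh)  # eq. (50)
    w_bar = G @ A[:, m]                                                 # eq. (51), A e_m
    return w_bar / np.sqrt((w_bar.conj() @ H @ w_bar).real)             # eq. (52)
```

The rank-one correction in (50) keeps the penalized inverse cheap to maintain, since only the steering-vector term changes on top of \(\mathbf{U}_{\mathbf{z},k}(t)\).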
Because the mixing matrix \(\mathbf{A}_{k}(t)\!=\!\mathbf{W}_{k}^{-1}(t)\) is required at every update of \(\mathbf{w}_{mk}(t)\), it should be updated by [34] \[\mathbf{A}_{k}(t)=\mathbf{A}_{k}(t)-\frac{\mathbf{A}_{k}(t)\mathbf{e}_{m}\Delta\mathbf{w}_{mk}^{H}(t)\mathbf{A}_{k}(t)}{1+\Delta\mathbf{w}_{mk}^{H}(t)\mathbf{A}_{k}(t)\mathbf{e}_{m}}, \tag{53}\] where \(\Delta\mathbf{w}_{mk}(t)=\mathbf{w}_{mk}(t)-\mathbf{w}_{mk}(t-1)\). The SVE can be performed at every frame using online updates of the normalized SCM of observations and the normalized noise SCM, given by \[\mathbf{R}_{\mathbf{x},k}(t)=\frac{1}{\sum_{\tau=1}^{t}\alpha^{t-\tau}}\sum_{\tau=1}^{t}\alpha^{t-\tau}\mathbf{x}_{k}(\tau)\mathbf{x}_{k}^{H}(\tau), \tag{54}\] \[\mathbf{R}_{\mathbf{n},k}(t)\!=\!\frac{1}{\sum_{\tau=1}^{t}\alpha^{t-\tau}r_{\mathbf{n},k}(\tau)}\!\sum_{\tau=1}^{t}\!\alpha^{t-\tau}r_{\mathbf{n},k}(\tau)\mathbf{x}_{k}(\tau)\mathbf{x}_{k}^{H}(\tau). \tag{55}\] Then, \(\mathbf{R}_{\mathbf{x},k}(t)\) and \(\mathbf{R}_{\mathbf{n},k}(t)\) are recursively updated by \[\mathbf{R}_{\mathbf{x},k}(t) = \rho(t)\mathbf{R}_{\mathbf{x},k}(t-1)+(1\!-\!\rho(t))\mathbf{x}_{k}(t)\mathbf{x}_{k}^{H}(t), \tag{56}\] \[\mathbf{R}_{\mathbf{n},k}(t) = \tilde{\rho}_{k}(t)\mathbf{R}_{\mathbf{n},k}(t\!-\!1)\!+\!(1\!-\!\tilde{\rho}_{k}(t))\mathbf{x}_{k}(t)\mathbf{x}_{k}^{H}(t), \tag{57}\] where \[\tilde{\rho}_{k}(t)=1-\frac{r_{\mathbf{n},k}(t)}{\sum_{\tau=1}^{t}\alpha^{t-\tau}r_{\mathbf{n},k}(\tau)}. \tag{58}\] The output power ratio is calculated by \[r_{\mathbf{n},k}(t)=\frac{\|\hat{\mathbf{n}}_{k}(t)\|_{2}^{2}}{|\hat{S}_{k}(t)|^{2}+\|\hat{\mathbf{n}}_{k}(t)\|_{2}^{2}}, \tag{59}\] where the outputs normalized by the minimal distortion principle [31] are obtained by \[\text{diag}\left(\mathbf{A}_{k}(t-1)\right)\begin{bmatrix}Y_{k}(t;t-1)\\ \mathbf{z}_{k}(t;t-1)\end{bmatrix}=\begin{bmatrix}\hat{S}_{k}(t)\\ \hat{\mathbf{n}}_{k}(t)\end{bmatrix}. \tag{60}\] To stabilize the ratio, the instantaneous noise power in (59) can be replaced by the smoothed noise power \(P_{\mathbf{n},k}(t)\) as \[r_{\mathbf{n},k}(t)=\frac{P_{\mathbf{n},k}(t)}{|\hat{S}_{k}(t)|^{2}+P_{\mathbf{n},k}(t)}, \tag{61}\] where \(P_{\mathbf{n},k}(t)=\gamma_{\mathbf{n}}P_{\mathbf{n},k}(t-1)+(1-\gamma_{\mathbf{n}})\|\hat{\mathbf{n}}_{k}(t)\|_{2}^{2}\) with a smoothing factor \(\gamma_{\mathbf{n}}\). Finally, the target SCM at the \(t\)-th frame is obtained by the scaled covariance subtraction \[\mathbf{R}_{\mathbf{s},k}(t)=\mathbf{R}_{\mathbf{x},k}(t)-\nu\mathbf{R}_{\mathbf{n},k}(t), \tag{62}\] where \(\nu\) is a scale factor for stable online processing [13].
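One frame of the recursive SCM tracking in (56)-(57) and the subsequent eigenvector-based steering vector update can be sketched as follows; this is a minimal NumPy sketch under our own naming, where `rho_n` stands for \(\tilde{\rho}_{k}(t)\) of (58) and the target SCM is formed by subtracting the noise SCM scaled by \(\nu\), as in the batch subtraction (22):

```python
import numpy as np

def online_sve_step(R_x, R_n, x, rho, rho_n, nu):
    """Recursive SCM updates of eqs. (56)-(57), scaled covariance subtraction,
    and the principal eigenvector as the steering vector estimate."""
    xxH = np.outer(x, x.conj())
    R_x = rho * R_x + (1.0 - rho) * xxH          # eq. (56)
    R_n = rho_n * R_n + (1.0 - rho_n) * xxH      # eq. (57)
    R_s = R_x - nu * R_n                         # scaled covariance subtraction
    _, vecs = np.linalg.eigh(R_s)                # Hermitian eigendecomposition
    h = vecs[:, -1]                              # principal eigenvector
    return R_x, R_n, h / np.linalg.norm(h)       # updated SCMs and h_k(t)
```

In a speech-dominant frame, \(\tilde{\rho}_{k}(t)\) stays close to 1, so the noise SCM is left nearly untouched while the observation SCM absorbs the new frame.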
The scale factor \(\nu\) is useful for the online SVE because subtracting an inaccurate estimate of the noise SCM may result in a disastrous SVE. The value of \(\nu\) can be set even to zero to enable stable SVE in the initial frames, where there is little accumulated data and the estimate of the noise SCM is likely to be inaccurate. Finally, the steering vector is obtained by finding the principal eigenvector of \(\mathbf{R}_{\mathbf{s},k}(t)\), followed by the normalization. The overall RLS-based online beamforming and SVE algorithm based on ICA with the constraints is summarized in Algorithm 1.

```
 1: Initialize \(\mathbf{W}_{k}\), \(\mathbf{A}_{k}\), \(\alpha\), \(\gamma\), \(\gamma_{n}\), \(\nu\), and \(a_{\mathbf{z},k}^{(p)}\);
 2: for \(t=1,...,T\) do
 3:   for \(k=1,...,K\) do
 4:     Compute outputs \(\begin{bmatrix}Y_{k}(t;t-1)\\ \mathbf{z}_{k}(t;t-1)\end{bmatrix}=\mathbf{W}_{k}(t-1)\mathbf{x}_{k}(t)\).
 5:     /* Estimate Steering Vector */
 6:     Normalize the outputs by (60).
 7:     Update \(P_{\mathbf{n},k}(t)=\gamma_{n}P_{\mathbf{n},k}(t-1)+(1-\gamma_{n})\|\hat{\mathbf{n}}_{k}(t)\|_{2}^{2}\).
 8:     Calculate \(r_{\mathbf{n},k}(t)\) by (61).
 9:     Update \(\mathbf{R}_{\mathbf{x},k}(t)\) and \(\mathbf{R}_{\mathbf{n},k}(t)\) using (56) and (57).
10:     Calculate \(\mathbf{R}_{\mathbf{s},k}(t)\) by (62) and update \(\mathbf{h}_{k}(t)\).
11:     /* Update the wSCMs */
12:     Update \(\lambda_{k}(t)\) by (45).
13:     Calculate \(\phi_{k}(t)\) and \(\phi_{\mathbf{z},k}(t)\) using (44) and (47).
14:     Update \(\mathbf{U}_{k}(t)\) using (40).
15:     Update \(\mathbf{V}_{\mathbf{z},k}(t)\) and \(\mathbf{U}_{\mathbf{z},k}(t)\) with (39) and (40) after replacing \(\phi_{k}(t)\) with \(\phi_{\mathbf{z},k}(t)\).
16:     /* Update Beamforming Output Weights */
17:     Set \(\mathbf{A}_{k}(t)=\mathbf{A}_{k}(t-1)\).
18:     Update \(\mathbf{w}_{1k}(t)\) by (41).
19:     Update \(\mathbf{A}_{k}(t)\) by (53).
20:     /* Update Noise Output Weights */
21:     Update \(\mathbf{H}_{\mathbf{z},k}(t)\) and \(\mathbf{G}_{\mathbf{z},k}^{(p)}(t)\) by (49) and (50).
22:     for \(m=2,...,M\) do
23:       Update \(\mathbf{w}_{mk}(t)\) by (51) and (52).
24:       Update \(\mathbf{A}_{k}(t)\) by (53).
25:     end for
26:     /* Calculate Beamforming Output */
27:     Compute \(Y_{k}(t;t)=\mathbf{w}_{1k}^{H}(t)\mathbf{x}_{k}(t)\).
28:   end for
29: end for
```
**Algorithm 1** Proposed online Mask-S-MLDR beamforming and SVE algorithm based on ICA-HC

## V Experimental Evaluation

The presented algorithms were evaluated through ASR experiments on the CHiME-4 challenge dataset [29]. The dataset was recorded using six microphones in four noisy environments, and the evaluation was based on the WERs. The ASR system was constructed with the Kaldi toolkit [35] and was the same as in [19]. 13th-order Mel-frequency cepstral coefficient features, extracted from the input noisy training data, the corresponding close-talk microphone data for real-recorded data, and clean data simulated at the six microphones, were used for training the acoustic model. We also utilized the Kaldi recurrent-neural-network-based language model. For a more detailed description of the ASR system construction, please refer to [19]. For STFT analysis, a Hanning window with a length of 1024 samples and a shift of 256 samples was commonly applied to the data with a sampling rate of 16 kHz. For target masks, we used the ones estimated by the same NN model as in [17]. Furthermore, when obtaining the median value \(\overline{|X_{k}(\tau)|}\) of the microphone observations \(\mathbf{x}_{k}(\tau)\), the observations at the second microphone were excluded to obtain a better estimation of TVVs. For robust processing, it is necessary to clip the weighting functions \(\phi_{k}(\tau)\) by a value \(\phi_{0}\), as \(\phi_{k}(\tau)\leftarrow\min{(\phi_{k}(\tau),\phi_{0})}\)[5, 19], when calculating the wSCMs in all the statistical beamformers. The clipping value \(\phi_{0}\) was experimentally set to \(10^{6}\).
In ICA, the weighting function \(\phi_{\mathbf{z},k}(\tau)\) was also clipped by the same value for the proposed noise source model.

### _Comparison of the Proposed Beamforming and SVE with Conventional Enhancement Methods_

In Table I, various conventional methods for speech enhancement were compared with the proposed beamforming and SVE methods based on batch processing. As a BSE method without explicit spatial constraints, over-determined IVA (OIVA) [7] was also compared. When the masks were not used, the Gaussian source model with the shared TVVs \(\tilde{\lambda}(\tau)\) was used for target speech by (21) to solve the permutation problem. Additionally, the output was normalized using the minimal distortion principle of (26). If target masks were used, two enhancement methods without constraints were compared: mask-based generalized eigenvalue beamforming (Mask-GEV) [17] and OIVA with masked inputs \(\mathcal{M}_{k}(\tau)\overline{|X_{k}(\tau)|}^{2}\) as the prior (Mask-P-OIVA). TVVs were estimated by (10) using MAP. Note that the output \(Y_{k}(\tau)\) of Mask-P-OIVA has scale ambiguity without spatial constraints, so \(Y_{k}(\tau)\) was replaced with \(\hat{S}_{k}(\tau)\) by (26) when estimating the TVVs in (10). On the other hand, two conventional beamforming and SVE methods were compared. One method was the Mask-MVDR beamformer with the CGMM-based SVE [22], whose overall SVE procedure was conducted before the beamforming. Without the masks, the Mask-MVDR beamforming was replaced with MPDR beamforming. The other method was the MLDR beamformer with wSCM-based SVE [13] based on (25), where the SVE and MLDR beamforming were alternately updated. In the MLDR beamformer with masks, the Mask-P-MLDR beamformer was used, where TVVs were estimated by (10) using the masked input as the prior. As a proposed beamforming and SVE method, the Mask-S-MLDR with SVE based on the ICA-HC using TVVs of (17) and the stationary Laplacian noise model of (31) was evaluated.
Additionally, to evaluate the performance of the Mask-S-MLDR beamformer alone without SVE based on the ICA-HC, the Mask-S-MLDR with the SVE methods of CGMM and wSCM were also compared. For the family of MLDR beamformers (MLDR, Mask-P-MLDR, and Mask-S-MLDR) that require iterations, \(\mathbf{w}_{k}\) is initialized to \(\mathbf{e}_{1}\). This means that the initial \(Y_{k}(\tau)\) is equal to \(X_{1k}(\tau)\) in batch processing. For the ICA-HC-based SVE, \(\mathbf{W}_{k}\) is initialized using (28) with \(\mathbf{h}_{k}=\mathbf{1}\). Then, the initial \(\mathbf{w}_{1k}\) for the beamformer is still equal to \(\mathbf{e}_{1}\), while the weights for the noises are initialized to \(\mathbf{w}_{mk}=\mathbf{e}_{m}-\mathbf{e}_{1}\). For all iterative methods, including CGMM and MLDR beamformers, the number of iterations in batch processing was set to 10. For all BSE and beamforming methods with the TVVs, \(\tau_{0}\) was set to 1. For the proposed SVE based on ICA-HC, the penalty weight \(a_{\mathbf{z},k}^{(p)}\) for the null constraint was set to 1. In OIVA and Mask-P-OIVA, the demixing matrices were empirically initialized in the same way using (28). For all SVE methods, the input \(\sqrt{\mathcal{M}_{k}(\tau)}\mathbf{x}_{k}(\tau)\) was used to estimate SCMs in (23), (24), and (25) to further suppress noise components if masks were available. In particular, the posterior probability of the CGMM was fixed at the noise mask \(1-\mathcal{M}_{k}(\tau)\) for the first five iterations to sufficiently exploit the information of NN masks and further enhance performance. In addition, to see the effectiveness of the SVE methods (CGMM, wSCM, and ICA-HC) with the NN masks, we also tested SVE based on the NN mask alone by setting \(r_{\mathbf{n},k}(\tau)=1-\mathcal{M}_{k}(\tau)\) estimated from the input \(\mathbf{x}_{k}(\tau)\) instead of the masked input in (23), (24), and (25). In Table I, SVE-based methods improved performance without masks.
While OIVA and GEV with masks improved recognition performance, especially for simulated data, linearly constrained beamforming methods (MPDR, MVDR, MLDR, and Mask-S-MLDR) with SVE based on (22) achieved robustness even for real-recorded data, demonstrating the stability of SVE by (22). Among the two SVE methods with masks based on the Mask-MVDR beamformer, combining CGMM with NN masks showed more improved performance than simply using NN masks alone. Additionally, wSCM-based methods with the Mask-P-MLDR beamformer outperformed the Mask-MVDR beamformer with CGMM-based SVE. When substituting Mask-MVDR with Mask-S-MLDR in CGMM-based SVE, WERs became lower, demonstrating the effectiveness of the proposed beamformer. On the other hand, for wSCM-based SVE, Mask-S-MLDR provided performance comparable with Mask-P-MLDR because the proposed Mask-S-MLDR effectively utilized beamforming outputs and masked inputs in the weighting function. Furthermore, among SVE methods for the Mask-S-MLDR beamformer, the proposed SVE based on ICA-HC generally showed the best results with NN masks. Without NN masks, the MLDR beamformer with the proposed SVE based on ICA-HC still yielded the best results on average. Moreover, the proposed SVE based on ICA-HC with the Mask-S-MLDR beamformer achieved better recognition performance on average than all compared enhancement methods for NN masks. These results confirm the effectiveness of both the proposed Mask-S-MLDR beamformer and the SVE based on ICA-HC.

### _Comparison of Proposed Online Beamforming and SVE with Conventional Online Enhancement Methods_

Next, we compared the proposed online algorithm of joint beamforming and SVE with conventional online enhancement methods in Table II. For the online RLS methods based on frame-by-frame processing, the forgetting factor \(\alpha\) was initially set to \(0.96\) and switched to \(0.99\) at the 100th frame to enhance the initial convergence.
For the recursive estimation of the TVVs, the smoothing factor \(\gamma\) was fixed to 0.1. In the online processing, the floor value \(\epsilon\) for the target mask \(\mathcal{M}_{k}(t)\) was required to replace \(\mathcal{M}_{k}(t)\) with \(\max\left\{\mathcal{M}_{k}(t),\epsilon\right\}\) for robust estimation of the masked inputs \(\mathcal{M}_{k}(t)|\overline{X_{k}(t)}|\). Here, \(\epsilon\) was set to \(10^{-2}\). The experiment was conducted by using NN masks for comparison3. Footnote 3: Although the NN masks [17] are not attained by online processing, the experiments using these NN masks were also conducted to compare the online beamformers with more accurate steering vectors and masks. We used the Mask-S-MLDR beamformer with SVE based on the ICA-HC as the proposed online method. As a conventional online beamformer with SVE, we considered the Mask-P-MLDR with the wSCM-based SVE in [13]. We also evaluated the proposed Mask-S-MLDR beamformers with the wSCM-based SVE. Without the masks, the Mask-P-MLDR and Mask-S-MLDR were replaced with MLDR beamformers. As online methods without SVE, OIVA and Mask-P-OIVA were evaluated. We also evaluated the Mask-MVDR with SVE using the NN mask alone when the target masks were provided. Similar to the derivation in Subsection IV-B, the algorithm for the online OIVA was derived based on RLS. For the OIVA, Mask-P-OIVA, and the proposed SVE based on ICA-HC, the scale factor \(\nu\) in (62) was initially set to zero and switched to \(0.8\) without the masks and \(0.99\) with the masks at the 100th frame to enhance the stability for both the conventional wSCM-based and proposed ICA-HC-based SVE methods. For the proposed SVE based on ICA-HC, the smoothing factor \(\gamma_{\mathbf{n}}\) for the recursive estimation of the smoothed noise power \(P_{\mathbf{n},k}(t)\) in (61) was set to 0.9. The values of the hyper-parameters for the proposed method are summarized in Table III.
In Table II, all online methods except OIVA improved performance compared to the baseline without any processing. Conventional OIVA's performance was unstable but significantly improved by using masked input as a prior, providing even better performance than Mask-MVDR with SVE based on NN alone. However, Mask-P-OIVA still showed higher WERs than the statistical beamformers (MLDR and Mask-S-MLDR) with SVE combined with NN, due to inferior results in real-recorded data. With wSCM-based SVE, the WERs of Mask-P-MLDR were similar to those of the proposed Mask-S-MLDR, like the results in Table I. For beamforming with SVE methods, MLDR and the proposed Mask-S-MLDR with ICA-HC-based SVE still showed improved performance. ### _Comparison of Various Beamformers with Fixed Steering Vectors Estimated by Batch CGMM_ Table IV summarizes the WERs of six different beamformers based on both batch and online processing, using fixed steering vectors pre-estimated by the batch CGMM-based SVE. In addition to the five beamformers (MPDR, MLDR, Mask-MVDR, Mask-P-MLDR, and Mask-S-MLDR) in Table I, Mask-MLDR, which uses TVVs directly estimated by masked inputs of (9), was also compared. Assuming that the NN masks for target speech were available, the CGMM-based SVE was also performed for observations masked by the target masks. Regardless of which steering vectors were given in Table IV, the MLDR showed better performance than the MPDR by considering the non-stationarity of the target speech without masks, especially for online processing. In general, all the beamformers using steering vector estimates for observations masked by NN masks outperformed those using steering vector estimates for unmasked observations. This demonstrates that the accuracy of steering vector estimates was improved by using the masks. 
Compared to Mask-MVDR, MLDR and Mask-MLDR showed improved performance, which means proper statistical modeling with TVV estimation on target speech sources can be effective enough to outperform Mask-MVDR. Furthermore, regardless of the masks used, MLDR and Mask-MLDR showed comparable performance. This means that the TVV estimation of MLDR is sufficiently accurate, resulting in robust performance as long as steering vectors are estimated accurately with the masks. In particular, Mask-P-MLDR and Mask-S-MLDR utilized both masked observations and beamformer outputs, achieving robustness unlike the other beamformers that showed higher WERs. Because the weighting function \(\phi_{k}(\tau)\) is critical for beamformer performance, analyzing \(\phi_{k}(\tau)\) is helpful for understanding the beamformers. Table V summarizes the calculations of \(\phi_{k}(\tau)\) for all the compared beamformers. According to (3), the weighting function for the Mask-MVDR can be considered as \(\phi_{k}(\tau)=1-\mathcal{M}_{k}(\tau)\), which is completely determined by the mask value. For the MLDR, including Mask-MLDR and Mask-P-MLDR, \(\phi_{k}(\tau)\) is close to zero at speech-dominant t-f segments because it is given as the reciprocal value of a TVV. In particular, \(\phi_{k}(\tau)\) of the Mask-MLDR uses observation powers in addition to the mask value. As a result, the Mask-MLDR was able to secure robustness in estimating wSCMs derived from proper speech modeling with TVVs, unlike the Mask-MVDR, which was confirmed by Table IV, where the Mask-MLDR obtained lower WERs than the Mask-MVDR. Nevertheless, the weighting function of the Mask-MLDR is calculated from masked observations without considering beamforming outputs, while that of the MLDR entirely depends on the beamforming outputs. Note that beamforming methods are iterative if their beamformed output \(Y_{k}(\tau)\) is utilized in the calculation of the weighting function \(\phi_{k}(\tau)\) or the estimation of the TVVs \(\lambda_{k}(\tau)\).
Otherwise, the beamforming weights are uniquely determined, without iterations, from the mask values. On the other hand, the Mask-P-MLDR and Mask-S-MLDR can achieve more robust beamforming by using wSCMs estimated from both masked observations and beamforming outputs, as shown in Table IV. Using the prior distribution for TVVs, the Mask-P-MLDR combines the two quantities additively in TVV estimation, whereas the Mask-S-MLDR combines them multiplicatively in the calculation of the weighting function, based on the assumption of source sparsity. ### _Evaluation of Joint Beamforming and SVE Based on Proposed Frameworks_ In Table VI, we evaluated joint beamforming and SVE according to the types of spatial constraints for both Gaussian and Laplacian source models when steering vector estimates were not available in advance. The experiment was conducted for both batch and online processing. When the Gaussian source models using (4) and \(q\left(\mathbf{z}_{k}\left(\tau\right)\right)\propto\exp(-\left\|\mathbf{z}_{ k}(\tau)\right\|_{2}^{2})\) were used as the conventional ones, beamforming filters were obtained by the Mask-P-MLDR beamformer. On the other hand, the Laplacian source models using (15) and \(q\left(\mathbf{z}_{k}\left(\tau\right)\right)\propto\exp\left(-\left\|\mathbf{ z}_{k}(\tau)\right\|_{2}\right)\) were used for the proposed Mask-S-MLDR beamformer. These beamformers were spatially constrained by three types of methods (ICA-PC, ICA-LC, and ICA-HC). While the penalty weights for null constraints \(a_{\mathbf{z},k}^{(p)}\) were still experimentally set to 1, the penalty weights for distortionless constraints \(a_{\mathbf{1}k}^{(p)}\) were varied from 10 to infinity for ICA-PC or ICA-HC. Specifically, the pairs of penalty weights for the distortionless and null penalties are indicated in each row of Table VI.
Because the constraints with Lagrange multipliers correspond to special cases of ICA-PC with infinite penalty weights, they are marked by infinite values of penalty weights (\(\infty\), \(\infty\)), while the pair (\(\infty\), 1) means the constraints based on ICA-HC. Unlike the former experiment, the recursive estimation using \(P_{\mathbf{n},k}(t)\) was omitted to assess the pure estimation performance of the ICA, which was equivalent to setting \(\gamma_{\mathbf{n}}\) to 0. The WERs were evaluated using NN masks as the target masks for estimating TVVs and steering vectors. Regardless of the types of masks and source models, overall performance improved as the distortionless constraint became stricter with a weak null constraint (\(a_{\mathbf{z},k}^{(p)}=1\)). This is likely because strong constraints enhanced the stability of the beamformers by preventing undesirable distortions in the beamforming outputs. For both source models in batch processing, the ICA-PC with \(a_{1k}^{(p)}=50\) outperformed the ICA-LC. Overly strict null constraints, in addition to strict distortionless constraints, resulted in degraded recognition performance with the ICA-LC, because they prevent accurate SVE from the demixing-matrix outputs. The WERs for online processing rose because the frame-by-frame updates of beamforming weights and steering vectors are unstable and not fully converged, compared to batch processing, which uses sufficiently converged outputs after many iterations. Unlike in batch processing, the ICA-LC achieved superior results to the ICA-PC in online processing, where stability is much more important, owing to its strict constraints. For online processing, it is clear that the strict distortionless constraint played a far more critical role in avoiding undesirable distortions in the beamforming outputs.
In terms of the source model, the methods with the proposed Laplacian source model, where the Mask-S-MLDR beamformer and MICA models are used, mostly showed improved performance. This can be attributed to the effectiveness of the Laplacian source models, which consider the sparsity of target speech and noises. Regardless of the types of masks and source models, the ICA-HC improved the recognition performance by avoiding overly low degrees of freedom in extracting the noises with strict distortionless constraints on target outputs. In particular, the proposed Laplacian source model with the ICA-HC showed the best performance. ### _Evaluation on Another Dataset with a Different ASR Model_ To assess the versatility of the proposed methods as a pre-processing step for robust ASR, we also evaluated the WER scores on LibriCSS [30] as an additional real-recorded dataset. The LibriCSS dataset was recorded using a 7-channel circular microphone array with a random speaker position. We used a pre-trained ASR model provided in [30] and evaluated the performance in an utterance-wise fashion with batch processing. Although the dataset was originally presented for continuous speech separation, it also provides segmented utterances for the utterance-wise evaluation. Each segmented main utterance was set as the target speech, with the others assumed to be interference, which enabled us to apply our beamforming methods to the main target speech. We utilized target masks estimated by the pre-trained NN model based on the conformer in [37]. We used a 512-sample length and 256-sample shift for the STFT to adjust to the NN masks obtained by the pre-trained conformer. Similar to Table I, we compared Mask-GEV and Mask-P-OIVA as conventional BSE methods and Mask-P-MLDR with wSCM-based SVE and Mask-MVDR with CGMM-based SVE as conventional beamforming methods. As a proposed beamforming method, Mask-S-MLDR with SVE based on ICA-HC in addition to CGMM and wSCMs was evaluated.
Unlike the former experiments based on the CHiME-4 dataset, \(\mathbf{W}_{k}\) was initialized to the identity matrix in the Mask-P-OIVA and the proposed method because the direction of the target speaker was random. All the other parameters were set identically. In Table VII, the results are evaluated depending on the overlap ratio of interference speakers in the utterance-wise evaluation. From the results, we can observe that the WERs of the input data increased with the overlap ratio. The degree of improvement achieved by the enhancement methods also increased with the overlap ratio, as the interference was suppressed. For the original data, all the enhancement methods improved the recognition performance even for no-overlap (0S, 0L) situations because the original data include inherent distortions due to distant recordings. Among the beamforming methods, the proposed Mask-S-MLDR with SVE based on CGMM and wSCMs generally showed better performance than the conventional beamformers. Moreover, the proposed method (Mask-S-MLDR with ICA-HC) generally showed the best results among the compared methods. The additional evaluation on another dataset and ASR model confirmed the versatility and effectiveness of the proposed methods. ### _Evaluation on Dynamic Target Positions_ To assess the effectiveness of online algorithms, we tested them in a simulated dynamic environment where the position of the target speaker was changed. We mixed speech sources extracted from 0S of LibriCSS and noise sources from CHiME-4 using the image method with a reverberation time of 0.2 s and a randomly chosen signal-to-noise ratio (SNR) between -5 and 0 dB. The detailed configuration is shown in Fig. 1. We instantaneously moved the speech position from one location to another, as in [38]. In Table VIII, we compared the batch and online enhancement methods for two scenarios: one with the position changed and another with the position fixed in an utterance.
To obtain the NN mask, we trained the conformer [30], which was originally used for continuous separation in LibriCSS. Different from Subsection V-E, we modified the conformer to output a single target mask \(\mathcal{M}_{k}(\tau)\) as a network for speech enhancement. The conformer was trained based on a single-channel input, as in [17]. All other parameters were the same as in the original conformer in [30]. The network was trained using speech sources from the WSJ0 dataset [39] and noise sources from the CHiME-4 dataset. All parameters were set to the same values as in Subsection V-E, except that \(\nu\) in (62) was switched from 0 to 0.8 (without masks) or 0.9 (with masks) for stable SVE in moving situations when online processing was performed. Except for batch processing of OIVA in the moving situation, all methods improved recognition results compared to unprocessed input data. When the speech sources were stationary, batch processing led to better results than online processing, except for OIVA. However, when the speech sources were moving, enhancement through online processing showed more stable performance. In this scenario, online SVE based on wSCM showed more stable results than ICA-HC due to better convergence when target masks were not provided. When the target masks were used, Mask-P-OIVA stably enhanced the speech, achieving even better performance than the Mask-MVDR with NN-based SVE. This result was consistent with the result from Table I, which showed that simulation data were well enhanced by Mask-P-OIVA. Fig. 1: Simulated room and microphone configuration: a target speech source moved instantaneously within an utterance between two random blue points (A, B, C, and D). A linear microphone array was simulated, with an inter-distance of 5 cm between adjacent microphones. The center microphone was fixed at (3.5 m, 2.5 m). There were 28 noise sources located at intervals of 1 m along the sides of the room. The heights of all the sources were 2 m.
The proposed ICA-HC showed comparable WERs with wSCM-based SVE when the target masks were provided. Furthermore, when the speech sources were not moving, the proposed SVE based on ICA-HC still achieved the best performance for both batch and online processing, with and without target masks. ## VI Conclusion In this paper, we presented joint beamforming and SVE methods based on ICA with distortionless and null constraints, as a pre-processing step for robust ASR. In particular, by modeling the target signal as a Laplacian distribution with TVVs, a mask-based sparse MLDR beamformer was proposed to exploit both its outputs and target masks in the weighting function of wSCMs for robust estimation. In addition, an SVE method was also derived by using the ratio of target and noise outputs of ICA optimized with the constraints. To enhance the accuracy of steering vector estimates, the strict constraints based on the Lagrange multiplier method were extended to hybrid constraints, or ICA-HC, using the power penalty as well as the Lagrange multiplier. Moreover, an RLS-based online beamforming and SVE algorithm estimated by frame-by-frame updates was derived for practical applications. Experimental results on the various environments using CHiME-4 and LibriCSS datasets confirmed the effectiveness of the proposed methods for both batch and online processing. ## Appendix A Update Rules of Weights in ICA-LC \(\mathbf{w}_{mk}\) to minimize (29) can be obtained by a solution of \(\nabla_{\mathbf{w}_{mk}^{*}}Q_{k}+a_{mk}^{(l)}\mathbf{h}_{k}=0\). In particular, \(\mathbf{w}_{1k}\) for the target output is given by \[\mathbf{V}_{k}\mathbf{w}_{1k}-\mathbf{W}_{k}^{-1}\mathbf{e}_{1}+a_{1k}^{(l)} \mathbf{h}_{k}=0, \tag{63}\] which can be rearranged to \(\mathbf{W}_{k}\mathbf{V}_{k}\mathbf{w}_{1k}{=}(1-a_{1k}^{(l)})\mathbf{e}_{1}\). 
Therefore, finding \(a_{1k}^{(l)}\) gives \(a_{1k}^{(l)}=1-(\mathbf{h}_{k}^{H}(\mathbf{W}_{k}\mathbf{V}_{k})^{-1}\mathbf{ e}_{1})^{-1}\), and the update equation of \(\mathbf{w}_{1k}\) is induced as (30). Note that the constraint of \(\mathbf{W}_{k}^{-1}\mathbf{e}_{1}=\mathbf{h}_{k}\) is utilized to derive (30). In ICA-LC of (29), the noise outputs \(\mathbf{z}_{k}(\tau)\) constrained to cancel the steered direction can be obtained by updating \(\mathbf{w}_{mk},\;2\leq m\leq M\): \[\mathbf{V}_{\mathbf{z},k}\mathbf{w}_{mk}-\mathbf{W}_{k}^{-1}\mathbf{e}_{m}+a_ {mk}^{(l)}\mathbf{h}_{k}=0, \tag{64}\] which can be rearranged to \(\mathbf{W}_{k}\mathbf{V}_{\mathbf{z},k}\mathbf{w}_{mk}=\mathbf{e}_{m}-a_{mk} ^{(l)}\mathbf{e}_{1}\). Similar to the derivation of \(\mathbf{w}_{1k}\), the Lagrange multipliers \(a_{mk}^{(l)},2\leq m\leq M,\) can be found as \[a_{mk}^{(l)}=\frac{\mathbf{h}_{k}^{H}\left(\mathbf{W}_{k}\mathbf{V}_{\mathbf{z },k}\right)^{-1}\mathbf{e}_{m}}{\mathbf{h}_{k}^{H}\left(\mathbf{W}_{k} \mathbf{V}_{\mathbf{z},k}\right)^{-1}\mathbf{e}_{1}}, \tag{65}\] which leads to the following update rules: \[\mathbf{G}_{\mathbf{z},k}^{(l)} = \mathbf{V}_{\mathbf{z},k}^{-1}-\frac{\mathbf{V}_{\mathbf{z},k}^{-1}\mathbf{h}_{k}\mathbf{h}_{k}^{H}\mathbf{V}_{\mathbf{z},k}^{-1}}{\mathbf{h}_{k}^{H}\mathbf{V}_{\mathbf{z},k}^{-1}\mathbf{h}_{k}}, \tag{66}\] \[\tilde{\mathbf{w}}_{mk} = \mathbf{G}_{\mathbf{z},k}^{(l)}\mathbf{W}_{k}^{-1}\mathbf{e}_{m}, \tag{67}\] \[\mathbf{w}_{mk} = \tilde{\mathbf{w}}_{mk}/\sqrt{\tilde{\mathbf{w}}_{mk}^{H}\mathbf{ V}_{\mathbf{z},k}\tilde{\mathbf{w}}_{mk}},\;2\leq m\leq M. \tag{68}\] ## Appendix B Update Rules of Weights in ICA-PC and ICA-HC From (32) in ICA-PC, the optimization equation for \(\mathbf{w}_{1k}\) is given as \(\mathbf{H}_{k}\mathbf{w}_{1k}-\mathbf{W}_{k}^{-1}\mathbf{e}_{1}-a_{1k}^{(p)} \mathbf{h}_{k}=0\), where \(\mathbf{H}_{k}=\mathbf{V}_{k}+a_{1k}^{(p)}\mathbf{h}_{k}\mathbf{h}_{k}^{H}\).
Then, the optimization method of \(\mathbf{w}_{1k}\) can be presented as the vector coordinate descent algorithm in [27], but the update rules are omitted due to the page limitation. On the other hand, the optimization equation for \(\mathbf{w}_{mk},m=2,...,M\) is expressed as \[\mathbf{H}_{\mathbf{z},k}\mathbf{w}_{mk}-\mathbf{W}_{k}^{-1}\mathbf{e}_{m}=0, \tag{69}\] where \(\mathbf{H}_{\mathbf{z},k}\!=\!\mathbf{V}_{\mathbf{z},k}\!+\!a_{\mathbf{z},k}^ {(p)}\mathbf{h}_{k}\mathbf{h}_{k}^{H}\). By the conventional iterative projection algorithm in [4], \(\mathbf{w}_{mk}\) can be updated by \[\tilde{\mathbf{w}}_{mk} = \left(\mathbf{W}_{k}\mathbf{H}_{\mathbf{z},k}\right)^{-1}\mathbf{ e}_{m}, \tag{70}\] \[\mathbf{w}_{mk} = \tilde{\mathbf{w}}_{mk}/\sqrt{\tilde{\mathbf{w}}_{mk}^{H}\mathbf{ H}_{\mathbf{z},k}\tilde{\mathbf{w}}_{mk}},\;2\leq m\leq M. \tag{71}\] Through a derivation from the extended auxiliary function for ICA-HC in (33) similar to that in Appendix A, the target weights \(\mathbf{w}_{1k}\) can be updated by \[\mathbf{w}_{1k}=\frac{\mathbf{V}_{k}^{-1}\mathbf{h}_{k}}{\mathbf{h}_{k}^{H} \mathbf{V}_{k}^{-1}\mathbf{h}_{k}}+\hat{\mathbf{w}}_{1k}-\mathbf{h}_{k}^{H} \hat{\mathbf{w}}_{1k}\frac{\mathbf{V}_{k}^{-1}\mathbf{h}_{k}}{\mathbf{h}_{k}^{H} \mathbf{V}_{k}^{-1}\mathbf{h}_{k}}, \tag{72}\] where \(\hat{\mathbf{w}}_{1k}=\left(\mathbf{W}_{k}\mathbf{V}_{k}\right)^{-1}\mathbf{e}_{1}\). If \(\mathbf{W}_{k}\mathbf{h}_{k}\approx\mathbf{e}_{1}\) by sufficiently reflecting the null constraints imposed by the power penalty, we have \(\hat{\mathbf{w}}_{1k}\approx\mathbf{V}_{k}^{-1}\mathbf{h}_{k}\). Then, we can approximate (72) into \[\mathbf{w}_{1k}\approx\mathbf{w}_{1k}^{(l)}=\frac{\mathbf{V}_{k}^{-1}\mathbf{h} _{k}}{\mathbf{h}_{k}^{H}\mathbf{V}_{k}^{-1}\mathbf{h}_{k}}, \tag{73}\] which results in the same update equation as (30). On the other hand, updates of the noise weights \(\mathbf{w}_{mk},m=2,\cdots,M,\) are exactly the same as in ICA-PC.
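The constraint-satisfying structure of these closed-form updates can be checked numerically. The sketch below implements the distortionless target update of (30)/(73) and the null-constrained noise update of (66)-(68), using random Hermitian positive-definite surrogates for the weighted SCMs (the matrix sizes and contents are illustrative, not taken from the paper), and verifies \(\mathbf{h}_{k}^{H}\mathbf{w}_{1k}=1\) and \(\mathbf{h}_{k}^{H}\mathbf{w}_{mk}=0\).

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4  # illustrative number of channels

def rand_hpd(n):
    # random Hermitian positive-definite matrix (surrogate weighted SCM)
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T + n * np.eye(n)

V = rand_hpd(M)    # weighted SCM V_k for the target output
Vz = rand_hpd(M)   # weighted SCM V_z,k for the noise outputs
W = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))  # demixing matrix W_k
h = rng.standard_normal(M) + 1j * rng.standard_normal(M)            # steering vector h_k

# Eq. (30)/(73): distortionless target weights w_1k = V^{-1} h / (h^H V^{-1} h)
Vinv_h = np.linalg.solve(V, h)
w1 = Vinv_h / (h.conj() @ Vinv_h)

# Eqs. (66)-(68): null-constrained noise weights (second output, m = 2)
Vz_inv = np.linalg.inv(Vz)
G = Vz_inv - np.outer(Vz_inv @ h, h.conj() @ Vz_inv) / (h.conj() @ Vz_inv @ h)
e2 = np.eye(M)[:, 1]
wt = G @ np.linalg.solve(W, e2)
w2 = wt / np.sqrt((wt.conj() @ Vz @ wt).real)

print(abs(h.conj() @ w1))   # ~1: distortionless constraint holds
print(abs(h.conj() @ w2))   # ~0: null constraint holds
```

Because \(\mathbf{h}_{k}^{H}\mathbf{G}_{\mathbf{z},k}^{(l)}=\mathbf{0}\) by construction, the null constraint is satisfied exactly by the projection matrix, independently of the demixing matrix.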
2306.11314
Analysis of a Skyrme energy density functional with deep learning
Over the past decade, machine learning has been successfully applied in various fields of science. In this study, we employ a deep learning method to analyze a Skyrme energy density functional (Skyrme-EDF), that is a Kohn-Sham type functional commonly used in nuclear physics. Our goal is to construct an orbital-free functional that reproduces the results of the Skyrme-EDF. To this end, we first compute energies and densities of a nucleus with the Skyrme Kohn-Sham + Bardeen-Cooper-Schrieffer method by introducing a set of external fields. Those are then used as training data for deep learning to construct a functional which depends only on the density distribution. Applying this scheme to the $^{24}$Mg nucleus with two distinct random external fields, we successfully obtain a new functional which reproduces the binding energy of the original Skyrme-EDF with an accuracy of about 0.04 MeV. The rate at which the neural network outputs the energy for a given density is about $10^5$--$10^6$ times faster than the Kohn-Sham scheme, demonstrating a promising potential for applications to heavy and superheavy nuclei, including the dynamics of fission.
N. Hizawa, K. Hagino, K. Yoshida
2023-06-20T06:24:24Z
http://arxiv.org/abs/2306.11314v1
# Analysis of a Skyrme energy density functional with deep learning ###### Abstract Over the past decade, machine learning has been successfully applied in various fields of science. In this study, we employ a deep learning method to analyze a Skyrme energy density functional (Skyrme-EDF), which is a Kohn-Sham type functional commonly used in nuclear physics. Our goal is to construct an orbital-free functional that reproduces the results of the Skyrme-EDF. To this end, we first compute energies and densities of a nucleus with the Skyrme Kohn-Sham + Bardeen-Cooper-Schrieffer method by introducing a set of external fields. Those are then used as training data for deep learning to construct a functional which depends only on the density distribution. Applying this scheme to the \({}^{24}\)Mg nucleus with two distinct random external fields, we successfully obtain a new functional which reproduces the binding energy of the original Skyrme-EDF with an accuracy of about 0.04 MeV. The rate at which the neural network outputs the energy for a given density is about \(10^{5}\)-\(10^{6}\) times faster than the Kohn-Sham scheme, demonstrating a promising potential for applications to heavy and superheavy nuclei, including the dynamics of fission. ## I Introduction Recent progress of deep learning is quite remarkable. It has actually gained popularity in various fields of science and technology, such as natural language processing, computer vision, and speech recognition [1; 2; 3; 4; 5]. In several fields of physics, such as condensed matter physics, a multitude of ideas to utilize machine learning are arising. For instance, in Ref. [6], an energy density functional (EDF) for electron systems that depends solely on an electron number density was constructed using the method developed in Ref. [7], in which an attempt was made with a neural network to predict the solution of a two-dimensional Schrödinger equation in a random potential.
Other applications already exist for a variety of problems, including those in spin systems [8] and superconducting systems [9]. In contrast, an application of deep learning to nuclear physics is still in its early stage [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. We mention that nuclear physics continues to face numerous unresolved challenges that call for innovative solutions, including a description of large amplitude collective motions. Such problems may be solved efficiently by applying machine learning techniques developed in other fields of physics. In particular, the recent application of deep learning to the Kohn-Sham type DFT [6] mentioned above could be readily applied also to nuclear physics. In nuclear physics, phenomenological models for a functional have often been employed [26]. The resultant Kohn-Sham type energy density functional (KS-EDF) is not an explicit functional of the particle number density only, but is parameterized together with other local densities, such as the kinetic energy density, the spin-orbit density, and the pair density when considering explicitly the nucleonic superfluidity. To calculate observables using the KS-EDF, such as the binding energy of a nucleus, one needs to solve a self-consistent differential equation of the same form as that in the mean-field theory many times, which is computationally expensive especially for heavy systems. Therefore, it is desirable to develop an orbital-free EDF (OF-EDF) theory that does not depend on Kohn-Sham orbitals. Deep learning can be a powerful tool for that purpose [6]. Such a theory will be based on a functional that depends solely on the particle number density. Notice that this is totally consistent with the original philosophy of the density functional theory (DFT). The aim of this paper is to apply the method developed in Ref.
[6] to a nuclear system and construct a deep-learning-based nuclear OF-EDF that reproduces results of the Skyrme-EDF. In applying the method of Ref. [6] to a nuclear system, one has to take into account several aspects that make nuclear systems different from electron systems. One obvious difference is that a nucleus is a self-bound attractive system. In an electron system without phonons, the only interaction between electrons is the repulsive Coulomb force, which causes two electrons to distribute as far apart as possible. In marked contrast, nucleons tend to get closer to each other due to a short-ranged attractive nuclear force, and thus the mechanism which determines the density distribution is quite different between electron and nucleon systems [27]. In addition, for electron systems, the KS-EDFs, which are inspired by the Hartree-Fock method, work well in general. On the other hand, in nuclear systems, superfluidity plays a crucial role in open-shell nuclei, and observables are better explained using a KS-EDF that is inspired by the Hartree-Fock-Bardeen-Cooper-Schrieffer (BCS) or Hartree-Fock-Bogoliubov method rather than by the Hartree-Fock method. This leads to a technical difference in that a nuclear KS-EDF depends also on the pair density. It will be intriguing to investigate how well the deep learning method works in such attraction-dominated nuclear systems. The paper is organized as follows. In Sec. II, we introduce the KS-EDF which we employ, and define a protocol for deep learning. We also discuss how to generate data sets to train neural networks. In Sec. III, we carry out the deep learning for the \({}^{24}\)Mg nucleus and discuss how well the data sets can be learned. We then summarize the paper in Sec. IV and discuss future perspectives. ## II Formulation ### Skyrme EDF We first introduce a Kohn-Sham type energy density functional (KS-EDF) for training on a neural network.
Throughout this study, we consistently employ the following Skyrme-type EDF [28]: \[E_{\rm tot}=E_{\rm kin}+E_{\rm int}+E_{\rm pair}+E_{\rm CoM}, \tag{1}\] with \[E_{\rm kin}[\tau] = \frac{\hbar^{2}}{2m}\left(1-\frac{1}{A}\right)\sum_{q}\int d^{3}r \,\tau_{q}(\mathbf{r}), \tag{2}\] \[E_{\rm int}[\rho,\tau,\mathbf{J}] = \int d^{3}r\biggl{\{}\frac{b_{0}}{2}\rho^{2}-\frac{b_{0}^{\prime} }{2}\sum_{q}\rho_{q}^{2}\] (3) \[+ \frac{b_{3}}{3}\rho^{\alpha+2}-\frac{b_{3}^{\prime}}{3}\rho^{ \alpha}\sum_{q}\rho_{q}^{2}+b_{1}\rho\tau-b_{1}^{\prime}\sum_{q}\rho_{q}\tau_ {q}\] \[\qquad\qquad-\frac{b_{2}}{2}\rho\nabla^{2}\rho+\frac{b_{2}^{ \prime}}{2}\sum_{q}\rho_{q}\nabla^{2}\rho_{q}\] \[- b_{4}\rho\nabla\cdot\mathbf{J}-b_{4}^{\prime}\sum_{q}\rho_{q}\nabla \cdot\mathbf{J}_{q}\biggr{\}},\] \[E_{\rm pair}[\rho,\tilde{\rho}] = \sum_{q}\frac{V_{0}^{(q)}}{4}\int d^{3}r\,\biggl{\{}1-\left(\frac {\rho}{\rho_{0}}\right)^{\gamma}\biggr{\}}\,\tilde{\rho}_{q}^{2}, \tag{4}\] and \[E_{\rm CoM}[\rho]=\frac{C}{2}\left(\int d^{3}r\,z\rho(\mathbf{r})\right)^{2}, \tag{5}\] where \(m\) is the nucleon mass and \(A\) is the mass number of a nucleus. \(E_{\rm kin},\ E_{\rm int},\ E_{\rm pair}\), and \(E_{\rm CoM}\) are the kinetic energy, the interaction energy, the pairing energy, and a cost function for the center-of-mass, respectively. \(\rho,\tau,\mathbf{J}\), and \(\tilde{\rho}\) are the particle number density, the kinetic density, the spin density, and the pair density, respectively, in which the subscript \(q\) refers to neutron or proton. 
Those are defined as \[\rho(\mathbf{r}) = 2\sum_{q}\sum_{k>0}v_{q,k}^{2}|\varphi_{q,k}(\mathbf{r})|^{2}, \tag{6}\] \[\tau(\mathbf{r}) = 2\sum_{q}\sum_{k>0}v_{q,k}^{2}|\nabla\varphi_{q,k}(\mathbf{r})|^{2},\] (7) \[\mathbf{J(\mathbf{r})} = 2\sum_{q}\sum_{k>0}v_{q,k}^{2}\varphi_{q,k}^{*}(\mathbf{r})\left(-i \nabla\times\mathbf{\sigma}\right)\varphi_{q,k}(\mathbf{r}),\] (8) \[\tilde{\rho}(\mathbf{r}) = -2\sum_{q}\sum_{k>0}u_{q,k}v_{q,k}|\varphi_{q,k}(\mathbf{r})|^{2}, \tag{9}\] where \(\varphi_{q,k}(\mathbf{r})\) is the \(k\)-th Kohn-Sham orbital in a spinor form with isospin \(q\), and \(v_{q,k}^{2}=1-u_{q,k}^{2}\) is the occupation probability for the \(k\)-th orbital. Notice that we take the BCS approximation for the treatment of the pairing correlation. In the interaction part of the functional, \(b_{i}\) and \(b_{i}^{\prime}\) (\(i=1\)-\(4\)) as well as \(\alpha\) are the Skyrme parameters. In this paper, we use the SLy4 parameter set [29] for these parameters. For simplicity, we ignore the Coulomb interaction, even though the entire Coulomb interaction term can be explicitly described as a functional of the proton number density if the Slater approximation is introduced to the exchange term. For the pairing part, we employ a surface-type functional of the Density-Dependent Delta-Interaction (DDDI) [30], which contains the parameters \(V_{0}^{(q)},\rho_{0}\), and \(\gamma\). In this study, we take \(\gamma=1\) and \(\rho_{0}=0.16\,\text{fm}^{-3}\), and determine \(V_{0}^{(q)}\) so that the average pairing gap coincides with the empirical pairing gap, \(\Delta_{q}=12/\sqrt{A}\) MeV [28; 31]. The zero-range pairing interaction has to be supplemented with an energy cut-off. In this paper, the sharp cut-off energy of 60 MeV is introduced to the single particle energy of the Kohn-Sham orbitals. The resultant strengths for the pairing are \(V_{0}^{(n)}=V_{0}^{(p)}=-683.344\) MeV fm\({}^{3}\). 
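A minimal sketch of how the BCS-type local densities of Eqs. (6) and (9) are assembled from Kohn-Sham orbitals may clarify the structure; it treats a single isospin channel with mock orbitals and occupations (the gradient-based densities \(\tau\) and \(\mathbf{J}\) are omitted, and the grid and orbital numbers are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
nr, nz, K = 10, 20, 6          # (r, z) grid and number of orbitals (illustrative)

# mock Kohn-Sham orbitals phi_k(r, z) and BCS occupations v_k^2 (u_k^2 = 1 - v_k^2)
phi = rng.standard_normal((K, nr, nz))
v2 = rng.uniform(0, 1, K)
u = np.sqrt(1.0 - v2)
v = np.sqrt(v2)

def number_density(phi, v2):
    # Eq. (6), one isospin channel: rho = 2 * sum_k v_k^2 |phi_k|^2
    return 2.0 * np.einsum("k,kij->ij", v2, np.abs(phi) ** 2)

def pair_density(phi, u, v):
    # Eq. (9), one isospin channel: rho~ = -2 * sum_k u_k v_k |phi_k|^2
    return -2.0 * np.einsum("k,kij->ij", u * v, np.abs(phi) ** 2)

rho = number_density(phi, v2)
rho_t = pair_density(phi, u, v)
```

With nonnegative BCS factors, the number density is nonnegative and the pair density nonpositive everywhere, as the sign conventions of Eqs. (6) and (9) dictate.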
In addition to the ordinary Skyrme EDF, we introduce a functional \(E_{\rm CoM}[\rho]\) to fix the center-of-mass position in the \(z\) direction. This is necessary as we introduce external fields (see Sec. II C below) to generate various density distributions. By fixing the center-of-mass position, one can prevent a nucleus from localizing around the edges of the box, which is useful to generate various deformed states in a small box. In this study, we take \(0.625\,\text{MeV}/\text{fm}^{2}\) for the value of \(C\). In this paper, we consider only the \({}^{24}\)Mg nucleus. This choice of a nucleus is convenient, as this nucleus has equal numbers of protons and neutrons, and thus the proton and the neutron densities coincide with each other when the Coulomb interaction is ignored. Furthermore, we impose the axial symmetry and the time-reversal symmetry on the system, enabling the local densities to be expressed in the cylindrical coordinates \((r,z)\) [32]. Notice that Ref. [6] also used a two-dimensional EDF for electron systems. With these simplifications, in principle, the EDF of the system should be able to be expressed solely with the nucleon number density \(\rho(r,z)\), which can be considered as a monochromatic image. We solve the Kohn-Sham equations for this EDF by introducing various external fields to obtain a set of ground state energies and nucleon number densities. The explicit forms of the external fields are specified in Sec. II C. We solve the Kohn-Sham equations by discretizing the real space, with the mesh size of 0.8 fm in both the \(r\) and \(z\) directions. We take \(10\) grid points in the \(r\) direction and \(20\) points in the \(z\)-direction, with which the density \(\rho(r,z)\) can be considered as a \(10\times 20\)-dimensional vector in our calculations. We choose the box boundary condition and include the \(z\)-component of angular momentum up to \(9/2\).
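As an illustration of the discretized functionals, the following sketch evaluates the pairing term of Eq. (4) (one isospin channel) and the center-of-mass cost of Eq. (5) on an axially symmetric \((r,z)\) grid, using the parameters quoted in the text (\(\gamma=1\), \(\rho_{0}=0.16\,\text{fm}^{-3}\), \(V_{0}=-683.344\) MeV fm\({}^{3}\), \(C=0.625\) MeV/fm\({}^{2}\), 0.8 fm mesh). The grid offsets and the toy density are assumptions made only for this sketch.

```python
import numpy as np

dr = dz = 0.8                          # mesh size in fm
nr, nz = 10, 20
r = (np.arange(nr) + 0.5) * dr         # radial grid (offset choice is illustrative)
z = (np.arange(nz) - 9.5) * dz         # z grid, symmetric about 0

def integrate(f, r, dr, dz):
    # axially symmetric volume integral: 2*pi * sum_ij r_i f_ij * dr * dz
    return 2.0 * np.pi * dr * dz * np.einsum("i,ij->", r, f)

def e_pair(rho, rho_t, V0=-683.344, rho0=0.16, gamma=1.0):
    # Eq. (4), one isospin channel: (V0/4) * int {1 - (rho/rho0)^gamma} rho_t^2
    return 0.25 * V0 * integrate((1.0 - (rho / rho0) ** gamma) * rho_t**2, r, dr, dz)

def e_com(rho, C=0.625):
    # Eq. (5): (C/2) * (int z * rho)^2
    zr = integrate(rho * z[np.newaxis, :], r, dr, dz)
    return 0.5 * C * zr**2

rho = 0.16 * np.exp(-(r[:, None] ** 2 + z[None, :] ** 2) / 9.0)  # toy density
rho_t = -0.1 * rho                                               # toy pair density
```

For a density symmetric in \(z\), the center-of-mass cost vanishes, and the surface-type DDDI term is negative wherever \(\rho<\rho_{0}\), consistent with an attractive pairing functional.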
### Neural network In this paper, we carry out a regression analysis of \(E=E[\rho]\) using a set of the particle number density and the energy \(D=\{E^{(i)},\rho^{(i)}\}_{i}\) generated by the KS-DFT. To this end, we utilize a neural network with fully-connected layers for the fitting function. The fundamental structure of a neural network involves a repetition of linear and nonlinear transformations on the input vector; fully-connected layers signify that all the neurons in the previous layer are connected to all the neurons in the next layer. We mention that neural networks composed solely of fully-connected layers may encounter an issue of an excessive number of parameters when the dimension of an input vector is large. To avoid this problem, a convolutional neural network (CNN) is often employed, which has demonstrated a remarkable success in the field of computer vision [33, 34]. In fact, in the previous application of deep learning to KS-DFT [6], the input vector was as large as \(256\times 256\) dimensional, and thus the CNN was employed. However, the input dimension in this paper is much smaller, \(10\times 20\). Therefore, we do not need to introduce the CNN, and a simpler neural network consisting of fully-connected layers, as depicted in Fig. 1, is employed in this study (see the caption for the details). We use the Adam optimizer [35], which has three tunable parameters. Among the three parameters, we set the learning rate to \(10^{-4}\) and the others to the default values of the Keras API [36]. The batch size is 128, namely we divide the training data into subsets, each of which contains 128 components. Each update of the fitting parameters is carried out within a single subset so as to minimize a loss function, for which we take the mean squared error. To avoid the problem of overfitting, we adopt the early stopping strategy and stop the learning at the 500th epoch.
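The paper trains the network of Fig. 1 with the Keras API; the framework-free sketch below only illustrates the forward structure (a flattened \(10\times 20\) density passed through ten fully-connected ReLU layers to a sigmoid output). The hidden-layer widths here are hypothetical, since the actual numbers appear only in the figure.

```python
import numpy as np

rng = np.random.default_rng(3)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# hypothetical layer widths: input 10*20 = 200, ten hidden layers, scalar output
widths = [200] + [64] * 10 + [1]
params = [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
          for m, n in zip(widths[:-1], widths[1:])]

def forward(rho):
    # rho: (10, 20) density on the (r, z) grid, flattened to a 200-dim vector
    x = rho.reshape(-1)
    for W, b in params[:-1]:
        x = relu(x @ W + b)          # fully-connected ReLU hidden layer
    W, b = params[-1]
    return sigmoid(x @ W + b)[0]     # sigmoid output: scaled energy in (0, 1)

E = forward(rng.uniform(0, 0.16, (10, 20)))
```

The sigmoid output layer implies that the target energies are rescaled to the unit interval before training, which is a common convention for this choice of activation.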
We decrease the learning rate sequentially to \(10^{-5}\) (at epoch = 101), \(10^{-6}\) (at epoch = 201), \(5.0\times 10^{-7}\) (at epoch = 301), and \(10^{-7}\) (at epoch = 401). ### External fields For a given EDF, one can make a correspondence between the particle number density and the energy of the ground state for a specific external field. This property will be used to construct a data set to be trained for an OF-EDF. For this purpose, a diverse range of external fields is required. In this subsection, we introduce two methods to generate the external potentials used in this study. The basic idea of these methods is adapted from the previous studies [6, 7] on two-dimensional systems, but we modify them for the axial-symmetric systems. #### ii.2.1 Simple Harmonic Oscillators (SHO) The first method is to use external fields based on a Simple Harmonic Oscillator (SHO). As the name implies, this is a deformed harmonic oscillator potential shifted in the \(z\)-direction: \[v_{\rm{SHO}}^{(i)}(r,z)=\frac{1}{2}k_{r}^{(i)}r^{2}+\frac{1}{2}k_{z}^{(i)}(z- z_{0}^{(i)})^{2}. \tag{10}\] The parameters in the range of \(0\leq k_{r},k_{z}\leq 1.1\,\)MeV/fm\({}^{2}\), and \(-1.6\,\)fm \(\leq z_{0}\leq 1.6\,\)fm in the potential are generated from uniform random parameters 1. Footnote 1: Unless otherwise noticed, all the random numbers used in this paper are uniform random numbers. The SHO potentials would be able to encompass only a small portion of a domain of the external fields to be used in the Skyrme-EDF. However, for practical calculations, only a limited variety of external fields, such as a quadrupole moment, has frequently been utilized, if a constrained field is regarded as an external field in a broad sense. It is therefore still useful to examine the effectiveness of the learning process with the SHO potentials. #### ii.2.2 Random Potentials (RND) As the second method, we introduce a Random Potential (RND). 
This is a highly random potential with many random numbers: \[v_{\rm{RND}}^{(i)}(r,z)=m(r,z)\times{\rm{sr}}^{(i)}(r,z), \tag{11}\] where \(m(r,z)\) and \({\rm{sr}}^{(i)}(r,z)\) are defined as \[m(r,z)=e^{-4.0\max\{0,\,\sqrt{r^{2}+z^{2}}-r_{0}\}^{2}/r_{0}^{2}}, \tag{12}\] and \[\text{sr}^{(i)}(r,z)=\sum_{r^{\prime},z^{\prime}}s^{(i)}(r,z;r^{\prime},z^{\prime })\,\text{rnd}^{(i)}(r^{\prime},z^{\prime}), \tag{13}\] respectively, with \[s^{(i)}(r,z;r^{\prime},z^{\prime})=e^{-\{(r-r^{\prime})^{2}+(z-z^{\prime})^{2} \}/\mu_{2}^{(i)}(r^{\prime},z^{\prime})}. \tag{14}\] Figure 1: A neural network employed in this work to learn the Skyrme-EDF, \(E[\rho]\). It consists of 10 hidden layers, all of which are fully connected. Their activation functions are the ReLU, and the sigmoid activation function is employed for the output layer. The number of neurons in each layer is listed below the layers. The meaning of \(\text{rnd}^{(i)}(r,z)\) in Eq. (13) and \(\mu_{2}^{(i)}(r,z)\) in Eq. (14) is as follows. First, for each grid point \((r,z)\), a random number within the range of \([v_{\text{min}},v_{\text{max}}]\) is generated and labeled as \(\text{rnd}^{(i)}(r,z)\). Since the potential with those random numbers is too irregular to be used as a potential, it is smoothed with a Gaussian filter, denoted as \(s^{(i)}\), as in Eq. (13). At this stage, the square of the Gaussian width \(\mu_{2}^{(i)}(r,z)\) in Eq. (14) is randomly generated within the range of \([\mu_{2\text{min}},\mu_{2\text{max}}]\) to prevent the external field from acquiring scale information due to the standard deviation of the Gaussian. Finally, a mask defined by Eq. (12) is applied in Eq. (11) to circumvent a numerical instability caused by a reduction of the external field near the boundary. In this study, we take \(r_{0}=1.4\times 1.2A^{1/3}\,\text{fm}\), \(v_{\text{min}}=-1.1\) MeV, \(v_{\text{max}}=1.1\) MeV, \(\mu_{2\text{min}}=0.8\) fm\({}^{2}\), and \(\mu_{2\text{max}}=1.2\) fm\({}^{2}\). In Refs.
[6; 7], random \(\{0,1\}\) binary data were utilized for \(\text{rnd}^{(i)}(r,z)\). For electronic systems, such a choice would be plausible because the potential primarily arises from the Coulomb potential due to a nucleus. On the other hand, in nuclear systems, it is a highly non-trivial question which potential is useful to describe static and dynamical properties of atomic nuclei. While many calculations employ a phenomenological deformed mean-field potential with, e.g., a quadrupole deformation to study deformed nuclei, it is not obvious whether such a choice is optimal. Therefore, in this study, we use random real numbers for \(\text{rnd}^{(i)}(r,z)\) to generate more diverse external fields than in the previous studies. Additionally, since the constraint on the center-of-mass position is included in the definition of the KS-EDF (1), a mask function \(m\) different from that in the previous studies is also introduced. Figure 2: Histograms for the results of the Skyrme EDF calculations with 250,000 different external fields based on the SHO (the top panels) and the RND (the bottom panels) fields. The left and right panels show the binding energy and the pairing energy in units of MeV, respectively. It can be observed that the structure in the shape of the histograms is washed out to a large extent for the RND external fields, which are more random than the SHO cases. ## III Results ### Generation of a dataset Let us now apply the deep learning protocol discussed in the previous section to the Skyrme EDF. We first prepare 250,000 data sets for each of the SHO and the RND external fields. For each calculation, the outputs are i) the nucleon number density \(\rho\), ii) the kinetic energy \(E_{\rm kin}\), iii) the interaction energy \(E_{\rm int}\), iv) the pairing energy \(E_{\rm pair}\), and v) the energy for the external field \(E_{\rm ex}\).
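As a concrete sketch, the external fields entering these calculations, the SHO of Eq. (10) and the RND of Eqs. (11)-(14), can be generated with NumPy as follows. The 0.8 fm mesh and the parameter ranges follow the text; the grid extent, function names, and random seed are our own illustration:

```python
import numpy as np

DR = DZ = 0.8  # fm, mesh width (Sec. III D)

def sho_potential(rr, zz, rng):
    """Deformed, z-shifted harmonic oscillator field, Eq. (10)."""
    k_r, k_z = rng.uniform(0.0, 1.1, size=2)   # MeV/fm^2
    z0 = rng.uniform(-1.6, 1.6)                # fm
    return 0.5 * k_r * rr**2 + 0.5 * k_z * (zz - z0)**2

def rnd_potential(rr, zz, rng, A=24):
    """Masked, Gaussian-smoothed random field, Eqs. (11)-(14)."""
    rnd = rng.uniform(-1.1, 1.1, size=rr.shape)   # rnd^(i)(r', z'), MeV
    mu2 = rng.uniform(0.8, 1.2, size=rr.shape)    # squared Gaussian widths, fm^2
    # sr(r,z) = sum_{r',z'} exp(-((r-r')^2+(z-z')^2)/mu2(r',z')) * rnd(r',z')
    pts = np.stack([rr.ravel(), zz.ravel()], axis=1)
    d2 = ((pts[:, None, :] - pts[None, :, :])**2).sum(-1)
    sr = (np.exp(-d2 / mu2.ravel()[None, :]) * rnd.ravel()[None, :]).sum(1)
    # Boundary mask, Eq. (12), with r0 = 1.4 * 1.2 * A^(1/3) fm
    r0 = 1.4 * 1.2 * A**(1.0 / 3.0)
    mask = np.exp(-4.0 * np.maximum(0.0, np.hypot(rr, zz) - r0)**2 / r0**2)
    return mask * sr.reshape(rr.shape)

rng = np.random.default_rng(0)
r = np.arange(0.0, 8.0, DR)
z = np.arange(-8.0, 8.0, DZ)
rr, zz = np.meshgrid(r, z, indexing="ij")
v_sho = sho_potential(rr, zz, rng)
v_rnd = rnd_potential(rr, zz, rng)
```

The mask suppresses the RND field near the edge of the box, so values at the grid corners are strongly damped relative to the interior.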
The binding energy \(E_{\rm bin}\) is also computed as a sum of the other components, \(E_{\rm bin}=E_{\rm kin}+E_{\rm int}+E_{\rm pair}\). Figure 2 displays the distribution of \(E_{\rm bin}\) and \(E_{\rm pair}\) for each of the SHO and the RND external fields. The distributions of the other components of the energy are summarized in Appendix A (see Fig. 8). In order to use these data for deep learning, we reject those outside the regions given in Tab. 1. From the remaining data, we select 200,000 data for training. Out of those 200,000 data, we adopt 90% of them as training data, while the rest serve as test data, which are not used for training. ### \(\rho\to E[\rho]\) We first discuss the results for each energy as an objective variable with the nucleon number density as an explanatory variable. In other words, we construct the OF-EDF, which yields the energies from a density distribution as an input. In DFT, apart from an external field, there is an ambiguity in dividing the functional into components: \(E_{\rm kin}\), \(E_{\rm int}\), and \(E_{\rm pair}\) themselves may not have strict physical meanings. Nevertheless, these components can be employed as indicators, at least for qualitative discussions, and we thus follow Ref. [6] to examine the subparts of the EDF. In particular, it is interesting to investigate the pairing energy \(E_{\rm pair}\), as it qualitatively verifies whether we can learn the effect of superfluidity, or the pair density, with deep learning. Notice that this was not addressed in the previous study in Ref. [6]. The top panels in Fig. 3 compare the results of the Kohn-Sham method (the horizontal axes) with the neural network predictions (the vertical axes) for the test data with the RND external fields not used in learning. The results with the SHO external fields (see the third top panels) are found to be more accurate 2.
Figure 3 shows only \(E_{\rm bin}\), \(E_{\rm pair}\), and \(E_{\rm ex}\), while the other components of the energy are displayed in Appendix A (see Fig. 9). If the learning were perfect, the distribution would be diagonal; this is indeed almost the case for all the energies except for the energy of the external fields plotted in the rightmost figure. Footnote 2: This is a natural outcome of the simplicity of the SHO external fields. We have found that the large error in \(E_{\rm ex}\) was not improved by changing the learning method, such as by using a CNN model (see Appendix). This may be due to the fact that the particle number densities with different external fields tend to have a similar shape because of the saturation property, which results in a loss of information on the external fields when they are compressed into the density distributions. Of course, according to the principle of DFT, ideally there should be no loss of information because there is a bijection between an appropriately defined density and an external field. However, in actual calculations, information on the detailed structure of external fields may be lost due to several numerical errors, such as rounding errors, finite-difference errors, and errors associated with a convergence criterion in self-consistent calculations. It is then natural that the prediction error becomes large when one attempts to recover the external field information from such a density distribution. The inaccuracy in predicting the energy of external fields was reported also in the previous study [6], but it seems more pronounced in atomic nuclei, which are systems with an attractive interaction. As we will show in the next subsection, this problem can be improved by using the external fields as explanatory variables. To quantitatively evaluate the errors, we calculate the mean absolute error (MAE) for each learning; the results are summarized in Tab. 2.
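The MAE used here and below is simply the mean absolute deviation between the Kohn-Sham energies and the network predictions; as a minimal sketch (the numbers are made up for illustration):

```python
import numpy as np

def mae(e_kohn_sham, e_predicted):
    """Mean absolute error between reference and predicted energies, in MeV."""
    return float(np.mean(np.abs(np.asarray(e_kohn_sham) - np.asarray(e_predicted))))

# Toy example: three fictitious binding energies and their predictions
err = mae([-217.5, -220.0, -218.3], [-217.4, -220.2, -218.3])
```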
It is remarkable that the MAE for the binding energy is as small as 0.0051 MeV for the SHO external fields and 0.0433 MeV for the RND external fields, which is much better than the accuracy required, e.g., for a fission barrier of heavy nuclei as well as for nuclear masses. For instance, for the latter, an accuracy of 100 keV is required for the r-process studies [37]. The MAE for the pairing energy is 0.0233 MeV for the SHO and 0.1567 MeV for the RND. These values indicate that the particle number density predicts well the contribution of the pairing correlation, even though the error is slightly larger than that for the binding energy. Finally, let us discuss the computational time. For the \({}^{24}\)Mg nucleus, it typically takes about a minute to solve \begin{table} \begin{tabular}{c|r r|r r} \hline \hline & \multicolumn{2}{c|}{SHO} & \multicolumn{2}{c}{RND} \\ \hline type & lower & upper & lower & upper \\ \hline \(E_{\rm bin}\) & \(-\infty\) & \(-217.5\) & \(-\infty\) & \(-217.5\) \\ \(E_{\rm kin}\) & 395.0 & 450.0 & 360.0 & 420.0 \\ \(E_{\rm int}\) & \(-650.0\) & \(-600.0\) & \(-630.0\) & \(-550.0\) \\ \(E_{\rm pair}\) & \(-22.0\) & \(+\infty\) & \(-35.0\) & \(+\infty\) \\ \(E_{\rm ex}\) & \(-\infty\) & 120.0 & \(-70.0\) & 50.0 \\ \hline \hline \end{tabular} \end{table} Table 1: The lower and the upper cut-off energies, in units of MeV, for the two different types of the external fields, SHO and RND. For each learning, only the data within the intervals are employed. The values of the cutoffs are determined so that approximately all the data shown in Figs. 2 and 8 can be included. Figure 3: Comparisons of the Kohn-Sham method (the horizontal axes) and the predicted results from the neural network (the vertical axes) for \(E[\rho]\) and \(E[v]\) for the RND (the top panels) and SHO (the bottom panels) external fields, all given in units of MeV.
From the left to the right panels, the training results are shown for the binding energy, the pairing energy, and the energy of the external fields. The results for 20,000 test data points are plotted in each figure, in which densely populated (underpopulated) points are displayed in red (blue). the Skyrme-EDF with the Kohn-Sham method and obtain a single training data point. In marked contrast, the time to predict the energy from a given density with the neural network used in this paper is much shorter, about 0.1 ms. The difference in the computational speed will become even larger for heavy nuclei. This is a great advantage of the deep learning method, e.g., in plotting a multi-dimensional potential energy surface for nuclear fission studies of heavy nuclei. ### \(v\to E[v]\) While it is somewhat tangential to the topic of DFT, there is a certain demand in electronic systems for a functional that directly predicts the energy from a given external field. Because of this, in the previous study [6], an energy functional \(E[v]\) was constructed following the same procedure as that used to construct the functional \(E[\rho]\). Even though it is unclear whether such a functional is useful in nuclear physics, it may be worth investigating whether a functional \(E[v]\) can be constructed, in connection to the discussion in Ref. [6]. We therefore carry out similar calculations using the same neural network and dataset as those in the previous subsection, but with the external fields as the explanatory variables. The MAEs for \(E[v]\) are summarized in Tab. 2, which shows that the MAE for \(E[v]\) tends to be smaller than that for \(E[\rho]\). This is because the external field contains more information than the density distribution; this is particularly true for learning the energy of the external fields. On the other hand, the accuracy gets lowered for the binding energy with the SHO external fields. To investigate the origin of this, the lower panels in Fig.
3 show comparisons between \(E[v]\) from the Skyrme KS calculations and the result of the deep learning. We find that the points with large errors correspond to external fields that have small amplitudes, that is, almost flat potentials. Since many SHO potentials used in the dataset have a large curvature, it is difficult to learn information about external fields with a small curvature. Such a problem is less likely to occur in fermionic systems when density distributions are used as the explanatory variables, leading to a somewhat better accuracy. ### \(v\to\rho[v]\) Observables are in general calculated in DFT with a particle number density, which is obtained with a given functional. That is, a functional has to be known in advance to obtain a particle number density. As demonstrated in Ref. [6], if a neural network can directly predict the density for a given external field, the calculation speed will be significantly improved. We therefore carry out deep learning for the nuclear system with the external fields as the explanatory variables and density distributions as the objective variables. To this end, we have to take into account the fact that the densities are normalized to the particle number, that is, \(\int d^{3}r\,\rho=A\). The softmax function, which is commonly used in classification problems, enables one to impose this normalization condition, and we employ it for the output layer in this study. For the axially symmetric system, the following relationship holds on a discretized spatial mesh: \[\frac{2\pi}{A}\iint rdrdz\,\rho(r,z)\simeq\sum_{i,j}\rho(r_{i},z_{j})\frac{2 \pi r_{i}\Delta r\Delta z}{A}=1, \tag{15}\] where \(\Delta r=\Delta z=0.8\,\text{fm}\) is the mesh width. Thus, by selecting \(2\pi r_{i}\Delta r\Delta z\rho(r_{i},z_{j})/A\) as the objective variable, the normalization is automatically imposed. In this study, we use a neural network with an encoder-decoder structure for training, as is shown in Fig. 4.
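The normalization of Eq. (15) amounts to letting the softmax output predict the weights \(w_{ij}=2\pi r_{i}\Delta r\Delta z\,\rho(r_{i},z_{j})/A\) and then inverting the relation to recover the density; a sketch (the grid, offset from \(r=0\) to keep the volume element finite, and the random logits are our own illustration):

```python
import numpy as np

DR = DZ = 0.8  # fm, mesh width

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def logits_to_density(logits, r, n_z, A=24):
    """Invert Eq. (15): softmax weights w_ij -> density rho(r_i, z_j) in fm^-3."""
    w = softmax(logits).reshape(len(r), n_z)   # w_ij sums to 1 over the grid
    vol = 2.0 * np.pi * r[:, None] * DR * DZ   # cylindrical volume element
    return A * w / vol

# Toy output layer: random logits on an (r, z) grid offset from r = 0
r = np.arange(0.4, 8.0, DR)
n_z = 20
rho = logits_to_density(np.random.default_rng(1).normal(size=len(r) * n_z), r, n_z)
total = 2.0 * np.pi * (r[:, None] * rho).sum() * DR * DZ   # = A by construction
```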
The MAE for the learning of \(\rho[v]\) is defined as \[\text{MAE}=\overline{2\pi\iint rdrdz\,|\rho_{\text{pred}}(r,z)-\rho_{\text{ ans}}(r,z)|}, \tag{16}\] \begin{table} \begin{tabular}{c|c c|c c} \hline \hline & \multicolumn{2}{c|}{SHO} & \multicolumn{2}{c}{RND} \\ \hline type & \(E[\rho]\) & \(E[v]\) & \(E[\rho]\) & \(E[v]\) \\ \hline \hline \(E_{\text{bin}}\) & 0.0051 & 0.0054 & 0.0433 & 0.0237 \\ \(E_{\text{kin}}\) & 0.0165 & 0.0071 & 0.1131 & 0.0900 \\ \(E_{\text{int}}\) & 0.0105 & 0.0182 & 0.0431 & 0.1499 \\ \(E_{\text{pair}}\) & 0.0233 & 0.0261 & 0.1567 & 0.1411 \\ \(E_{\text{ex}}\) & 0.0318 & 0.0105 & 6.6973 & 0.1338 \\ \hline & \multicolumn{2}{c|}{\(\rho[v]\)} & \multicolumn{2}{c}{\(\rho[v]\)} \\ & \multicolumn{2}{c|}{0.1107} & \multicolumn{2}{c}{0.4101} \\ \hline \hline \end{tabular} \end{table} Table 2: The mean absolute errors (MAEs) for each learning with the SHO and the RND external fields. The units are MeV for \(E[\rho]\) and \(E[v]\), while the MAE for \(\rho[v]\) is dimensionless (see Eq. (16)). Figure 4: A neural network with the encoder-decoder structure employed in this work for the mapping from an external field \(v\) to a particle number density \(\rho\). It consists of 10 hidden layers, all of which are fully connected. Their activation functions are the ReLU, and the softmax activation function is employed for the output layer. Figure 5: The absolute error of the density distribution directly generated by deep learning from a given external field \(v\). It is plotted as a function of the binding energy from the corresponding Kohn-Sham calculation. The left and the right panels show the results with the SHO and the RND external fields, respectively. The densely populated points are displayed in red, while the underpopulated points are shown in blue. Figure 6: Examples of the predicted densities (the bottom panels) generated directly from the RND external potentials shown in the top panels.
For a comparison, the corresponding Kohn-Sham densities are also plotted in the middle panels. The units of the color coordinate are MeV for the external potentials and fm\({}^{-3}\) for the densities. In each panel, the horizontal axis denotes the \(r\) coordinate while the vertical axis denotes the \(z\) coordinate, whose scales are shown in the left bottom panel. where \(\rho_{\rm pred}(r,z)\) and \(\rho_{\rm ans}(r,z)\) denote a predicted density and a Kohn-Sham result, respectively. Here, the bar symbol represents the average over the test data. We apply the same cut-off energies to the training data as those for the binding energy (see Tab. 1). Figure 5 shows the error for each test data point plotted as a function of the corresponding binding energy from the Kohn-Sham calculation. Their average corresponds to the MAE (16), which is 0.1107 for the SHO external fields and 0.4101 for the RND external fields. Figure 6 presents the images of the predicted densities for a few randomly selected data points for the RND external fields, in comparison to the corresponding Kohn-Sham densities. These examples clearly show that our neural networks successfully reproduce the Kohn-Sham densities. ### Generalization performance We have so far introduced the two types of external fields and constructed the two independent datasets. For each dataset, we have successfully provided predictions for the training data with sufficient accuracy; however, this does not guarantee the performance for unknown data. For instance, a neural network trained with the RND data does not necessarily yield accurate predictions for the SHO data, because the RND and the SHO external fields yield density profiles in different ways. In general, such generalization performance is a critical concern in applying a trained neural network to another dataset.
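Spelled out on the discretized grid, the per-sample error entering Eq. (16), whose average over the test data gives the quoted MAE, can be evaluated as follows (the grid and densities below are our own toy choices):

```python
import numpy as np

DR = DZ = 0.8  # fm, mesh width

def density_error(rho_pred, rho_ans, r):
    """One-sample term of Eq. (16): 2*pi * int r dr dz |rho_pred - rho_ans|."""
    diff = np.abs(rho_pred - rho_ans)                         # fm^-3
    return 2.0 * np.pi * (r[:, None] * diff).sum() * DR * DZ  # dimensionless

# Toy check on a 2 x 3 grid: identical densities give zero error
r = np.array([1.0, 1.8])                         # fm
rho = 0.1 * np.ones((2, 3))                      # fm^-3
err_zero = density_error(rho, rho, r)
err_shift = density_error(rho + 0.05, rho, r)    # constant 0.05 fm^-3 offset
```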
To investigate this issue in the context of nuclear physics, let us consider \(E_{\rm SHO}[\rho_{\rm RND}]\) and \(E_{\rm RND}[\rho_{\rm SHO}]\), where \(\rho_{\rm SHO}\) and \(\rho_{\rm RND}\) are the Kohn-Sham densities obtained with the SHO and the RND external fields, respectively, and \(E_{\rm SHO}\) and \(E_{\rm RND}\) are the functionals trained with \(\rho_{\rm SHO}\) and \(\rho_{\rm RND}\), respectively. In Sec. III B, we have investigated \(E_{\rm SHO}[\rho_{\rm SHO}]\) and \(E_{\rm RND}[\rho_{\rm RND}]\), but here we are interested in the performance of the functionals when the densities obtained with the other type of external fields are used as inputs. The left panel of Fig. 7 compares the binding energies obtained from the Kohn-Sham calculations with the RND external fields to \(E_{\rm SHO}[\rho_{\rm RND}]\). The right panel shows similar quantities, but with RND and SHO interchanged, that is, a comparison between the Kohn-Sham calculations with the SHO external potentials and \(E_{\rm RND}[\rho_{\rm SHO}]\). One can see that the performance of the neural network trained with the SHO external fields, \(E_{\rm SHO}\), is quite poor in reproducing the RND test data with large randomness. On the other hand, the neural network trained with the RND external fields, \(E_{\rm RND}\), successfully predicts the SHO test data, although the errors are larger than those for \(E_{\rm RND}[\rho_{\rm RND}]\) shown in Fig. 3. The MAEs between the Kohn-Sham results and the predictions are 1.1523 MeV for \(E_{\rm SHO}[\rho_{\rm RND}]\) and 0.122 MeV for \(E_{\rm RND}[\rho_{\rm SHO}]\). A similar conclusion has been obtained also in Ref. [6]. We can therefore conclude that the RND potentials which we adopted are random enough for deep learning. Figure 7: A verification of the generalization performance of the present deep learning. The left and right panels show the results with the RND and the SHO external fields, respectively.
The horizontal axes denote the energies obtained with the Kohn-Sham calculations. On the other hand, the vertical axes denote \(E_{\rm SHO}[\rho_{\rm RND}]\) (the left panel) and \(E_{\rm RND}[\rho_{\rm SHO}]\) (the right panel), that is, the predictions of deep learning trained with the SHO (the left panel) and the RND (the right panel) external fields. Both the training and test data (200,000 data in total) are plotted in each panel because the RND (SHO) dataset is not used in training \(E_{\rm SHO}[\rho]\) (\(E_{\rm RND}[\rho]\)). ## IV Summary and future perspectives Starting from a Skyrme functional, we have successfully constructed an energy density functional (EDF) which depends only on a particle number density. This functional does not require Kohn-Sham orbitals, and thus can be regarded as an orbital-free EDF (OF-EDF). To this end, we have applied deep learning, in which the density distributions obtained with two types of random external fields (SHO and RND) were mapped onto the energy with a neural network. The resultant EDF was found to predict the various energies of the original Skyrme EDF with reasonable accuracy, except for the energy of the RND external fields, whose accuracy could however be considerably improved when the energies were predicted with deep learning in which the external fields themselves were directly learned. The latter feature is more pronounced in systems with an attractive interaction than in electron systems. We have also found that deep learning with the less random SHO external potentials has smaller errors than that with the RND external fields. In this paper, we have employed simple supervised learning. However, there are various methods of machine learning besides this.
For example, generative models such as a generative adversarial network (GAN) [38, 39] and a diffusion model [3, 40] may provide efficient ways to generate the particle number densities, that is, the input for the deep learning used in this work to construct an OF-EDF. These methods may be useful alternatives for future applications of the deep learning method discussed in this paper. In nuclear physics, a triaxial deformation often plays an important role, particularly in nuclear fission. In that case, one needs to deal with 3-dimensional densities, accounting also for spin and isospin indices. We mention that traditional neural networks, comprising fully-connected layers, tend not to perform efficiently with such 3-dimensional data, primarily because the data size tends to become huge when the data are converted to 1-dimensional data. On the other hand, CNNs have shown adaptability to data of general dimensions. With the Keras API [36], 3D CNNs can be conveniently implemented, making an extension of the present work to 3D cases straightforward. Furthermore, the Vision Transformer (ViT) [4], which has recently demonstrated success in image recognition tasks, can also be extended to 3-dimensional data. With those schemes, the dimensionality of the density itself is not a crucial issue in learning EDFs, without incurring additional costs for preparing training data. One of the big advantages of using deep learning methods is that energies can be rapidly computed once training data are prepared and the network is trained. With such low-cost calculations, numerical experiments will become much easier than before. We mention that, as the objectives of research become more and more sophisticated, the number of DFT calculations required to publish a single research paper has in general increased these days. A typical example is a calculation of fission barriers in a multi-dimensional space.
Even though computer performance continues to improve, the computational cost of research has in general increased, and it has become more complicated than before to test an idea with a numerical experiment. Fast computational methods like the one developed in this work, particularly when they are provided in a convenient format such as a Python library, can significantly shorten the time required to test and validate ideas. If numerical accuracy is an issue, one may verify ideas obtained with deep learning by using the traditional Kohn-Sham scheme. This could be interpreted as an application of the idea of materials informatics (MI) [41] to theoretical research. A potential problem in performing supervised learning is that one has to collect a large set of training data. In this work, we have chosen a relatively light nucleus, \({}^{24}\)Mg, and imposed axial symmetry, and thus we have treated a relatively low-cost system. However, heavy and superheavy nuclei, such as uranium isotopes, will be very costly in terms of data collection, especially when no symmetry is imposed, even though those nuclei have attracted much attention in nuclear physics, as, e.g., finding the optimal pathway in fission still remains a big theoretical challenge. In this regard, we would like to point out that data collection need not be performed individually; it could actually be done collaboratively by many researchers. A lot of good quality data, ready to be used in deep learning, may already exist for some selected nuclei. Therefore, we believe that it is desirable to establish a framework in the nuclear theory community to collect numerical data and/or to carry out numerical calculations with unified hyperparameters such as a mesh size. Such a collaborative approach will help advance research more efficiently and effectively, benefiting the whole nuclear physics community. ###### Acknowledgements. We thank G. Colo for useful discussions.
This work was supported by JSPS KAKENHI (Grant Nos. 21J22348, JP19K03824, JP19K03861, JP19K03872, and JP23K03414).
2307.06541
On the Effective Horizon of Inverse Reinforcement Learning
Inverse reinforcement learning (IRL) algorithms often rely on (forward) reinforcement learning or planning over a given time horizon to compute an approximately optimal policy for a hypothesized reward function and then match this policy with expert demonstrations. The time horizon plays a critical role in determining both the accuracy of reward estimate and the computational efficiency of IRL algorithms. Interestingly, an effective time horizon shorter than the ground-truth value often produces better results faster. This work formally analyzes this phenomenon and provides an explanation: the time horizon controls the complexity of an induced policy class and mitigates overfitting with limited data. This analysis leads to a principled choice of the effective horizon for IRL. It also prompts us to reexamine the classic IRL formulation: it is more natural to learn jointly the reward and the effective horizon together rather than the reward alone with a given horizon. Our experimental results confirm the theoretical analysis.
Yiqing Xu, Finale Doshi-Velez, David Hsu
2023-07-13T03:06:36Z
http://arxiv.org/abs/2307.06541v1
# On the Effective Horizon of Inverse Reinforcement Learning ###### Abstract Inverse reinforcement learning (IRL) algorithms often rely on (forward) reinforcement learning or planning over a given time horizon to compute an approximately optimal policy for a hypothesized reward function and then match this policy with expert demonstrations. The time horizon plays a critical role in determining both the accuracy of reward estimate and the computational efficiency of IRL algorithms. Interestingly, an _effective time horizon_ shorter than the ground-truth value often produces better results faster. This work formally analyzes this phenomenon and provides an explanation: the time horizon controls the complexity of an induced policy class and mitigates overfitting with limited data. This analysis leads to a principled choice of the effective horizon for IRL. It also prompts us to reexamine the classic IRL formulation: it is more natural to learn jointly the reward and the effective horizon together rather than the reward alone with a given horizon. Our experimental results confirm the theoretical analysis. ## 1 Introduction Inverse reinforcement learning (IRL) (Ng and Russell, 2000) aims to infer the underlying task objective from expert demonstrations. One common approach is to estimate a reward function that induces a policy matching closely the demonstrated expert trajectories. This model-based approach holds the promise of generalizing the learned reward function and the associated policy over unseen states (Osa et al., 2018). 
Existing algorithms generally follow the classic IRL formulation and assume a known ground-truth discount factor, or equivalently, time horizon for the expert demonstrations (Ng and Russell, 2000; Abbeel and Ng, 2004; Ramachandran and Amir, 2007; Ziebart et al., 2008; Boularias et al., 2011; Levine et al., 2011; Wulfmeier et al., 2016; Pirotta and Restelli, 2016; Finn et al., 2016, 2016; Ho and Ermon, 2016; Fu et al., 2018; Ni et al., 2020; Ke et al., 2020; Ramponi et al., 2020; Metelli et al., 2021; Hoshino et al., 2022). They then estimate a reward function, given the time horizon. Surprisingly, an _effective time horizon_ shorter than the ground-truth value often produces better results faster. Why? Intuitively, the time horizon controls the complexity of an induced policy class. With limited data, a shorter time horizon is preferred, as the induced policy class is simpler and mitigates overfitting. Further, the reward function and the time horizon capture two distinct aspects of the expert's internal decision-making process. The reward function represents the expert's task objective. The time horizon represents the expert's strategic preference, _i.e._, the relative importance of long-term and short-term reward. From this perspective, it is natural to learn jointly the reward and the time horizon together, rather than the reward alone with a given horizon, as the expert's decision horizon is generally unknown in practice. In this work, we present a formal analysis showing that with limited expert demonstrations, a reduced discount factor or time horizon improves the generalization performance of the learned reward function over unseen states. Based on this result, we propose to learn the reward function and the discount factor jointly. We describe a simple extension of the linear programming IRL algorithm (LP-IRL) (Ng and Russell, 2000) and the maximum entropy IRL algorithm (MaxEnt-IRL) (Ziebart et al., 2008) to do so through cross-validation.
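The cross-validation scheme just described can be organized as a plain grid search over candidate discount factors. In the sketch below, `learn_reward` and `validation_score` are hypothetical placeholder callables, not part of LP-IRL or MaxEnt-IRL themselves:

```python
def select_reward_and_gamma(demos_train, demos_val, gammas,
                            learn_reward, validation_score):
    """Grid-search sketch: fit a reward for each candidate discount factor on
    training demonstrations, keep the pair scoring best on held-out ones."""
    best = None
    for gamma in gammas:
        reward = learn_reward(demos_train, gamma)            # e.g. LP-IRL or MaxEnt-IRL
        score = validation_score(reward, gamma, demos_val)   # e.g. likelihood of demos
        if best is None or score > best[0]:
            best = (score, reward, gamma)
    return best[1], best[2]

# Toy usage with dummy callables whose score peaks at gamma = 0.9
reward, gamma = select_reward_and_gamma(
    demos_train=None, demos_val=None, gammas=[0.5, 0.9, 0.99],
    learn_reward=lambda d, g: f"R(gamma={g})",
    validation_score=lambda r, g, d: -(g - 0.9) ** 2,
)
```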
Our experimental evaluation of both LP-IRL and MaxEnt-IRL on four different tasks supports our theoretical analysis. We are not aware of prior work that formally analyzes the relationship between the time horizon and the performance of the learned reward function in IRL. Some IRL algorithms employ a smaller effective time horizon for computational efficiency, albeit at the cost of myopic sub-optimal policies (MacGlashan and Littman, 2015; Lee et al., 2022; Xu et al., 2022). The work of Jiang et al. (2015) shows that for planning in a Markov decision process, a shorter horizon improves the policy performance when the dynamics model is inaccurate. It is, however, unclear whether a similar phenomenon about the horizon occurs in IRL, where the reward function is unknown and estimated from expert data. ## 2 Related works Effective Horizon of Imitation Learning. Imitation learning learns desired behaviors by imitating expert demonstrations and comprises two classes of methods: model-free behavior cloning (BC) and model-based inverse reinforcement learning (IRL) (Osa et al., 2018). The primary distinction between BC and IRL lies in the horizons used to align the learned behaviors with expert data. BC matches step-wise expert actions, resulting in poor generalization to unseen states. In contrast, IRL addresses this issue by either matching multi-step trajectory distributions (Ziebart et al., 2008; Boularias et al., 2011; Levine et al., 2011; Wulfmeier et al., 2016; Finn et al., 2016, 2016; Pirotta and Restelli, 2016; Ramponi et al., 2020), or their marginalized approximations (Ho and Ermon, 2016; Fu et al., 2018; Ni et al., 2020; Ke et al., 2020; Ghasemipour et al., 2019; Hoshino et al., 2022). The former employs a double-loop structure to interleave the policy optimization and reward function update, while the latter learns a discriminator to distinguish expert-like behaviors.
Both approaches utilize the ground-truth horizon/discount factor for optimization, ensuring global temporal consistency between the learned policy and the expert. Notably, a few IRL methods adopt receding horizons to reduce the computational cost (MacGlashan and Littman, 2015; Lee et al., 2022; Xu et al., 2022), claiming that shorter optimization horizons yield sub-optimal policies. However, a theoretical analysis of the impact of the horizon choice in IRL is lacking, making it an important yet overlooked consideration in the field. Effective Horizon and Model Accuracy. The study in Jiang et al. (2015) investigates the impact of the horizon on planning under an inaccurate transition model, showing that a shorter horizon reduces the planning loss when the transition function is prone to estimation errors. However, applying its theoretical result to the case of reward function estimation is complicated due to the inherent complexity of IRL. There are two main challenges in examining the horizon's role in IRL. First, the transition model estimation in Jiang et al. (2015) is conducted locally using state-action-state tuples for each state, allowing for a straightforward expression of the estimation error by counting local samples. In contrast, the reward function estimation error relies heavily on the planning horizon, as IRL learns the reward function by matching the temporal behaviors with the expert. Second, planning with an estimated transition function in Jiang et al. (2015) is a single forward process, while IRL requires interdependent and iterative policy optimization and reward function estimation until convergence. This iterative process adds complexity to measuring the final policy performance. Despite these challenges, we aim to explore the effective horizon's role in IRL, as it is vital for a more realistic IRL formulation and has the potential to further enhance the policy performance.
## 3 Problem formulation In this work, we use _horizon_ and _discount factor_ interchangeably, as the discount factor implicitly incorporates the planning horizon by discounting future rewards. We consider an MDP \((S,A,P,R_{0},\gamma_{0})\), where \(S\) and \(A\) represent the state and action spaces, respectively. The transition function is denoted by \(P:S\times A\times S\rightarrow[0,1]\), and the ground-truth reward function is \(R_{0}:S\times A\rightarrow[0,R_{max}]\). The discount factor, \(\gamma_{0}\), implicitly determines the value of future rewards at the current time step. The optimal policy, \(\pi^{*}_{R_{0},\gamma_{0}}\), maximizes the total discounted reward based on \(R_{0}\) and \(\gamma_{0}\). The MDP is assumed to be ergodic, such that any state is reachable from any other state by following a suitable policy. In our setting, we are given the MDP without the reward function \(R_{0}\) or the discount factor \(\gamma_{0}\). Instead, we have a set of expert demonstrations \(D=\{\tau_{0},\tau_{1},...\}\), with each trajectory \(\tau=(s_{0},a_{0},s_{1},...,s_{T})\) sampled from \(\pi^{*}_{R_{0},\gamma_{0}}\). We assume the expert policy is deterministic, hence observing a single \((s,a)\) pair eliminates the policy estimation error for that state. We propose to jointly learn the reward function and discount factor \((\widehat{R},\widehat{\gamma})\) from limited expert demonstrations. The scarcity of data suggests that \((\widehat{R},\widehat{\gamma})\) is susceptible to approximation errors, which consequently affects the induced optimal policy \(\pi^{*}_{\widehat{R},\widehat{\gamma}}\). We measure the quality of the \((\widehat{R},\widehat{\gamma})\) pair by comparing the performance of its induced policy \(\pi^{*}_{\widehat{R},\widehat{\gamma}}\) with that of the ground-truth optimal policy \(\pi^{*}_{R_{0},\gamma_{0}}\), both evaluated under the ground-truth \((R_{0},\gamma_{0})\) for fair comparison. 
Formally, we define the loss as the performance difference between the induced and ground-truth optimal policies: \(\left\|V^{\pi^{*}_{R_{0},\gamma_{0}}}_{R_{0},\gamma_{0}}-V^{\pi^{*}_{\widehat {R},\widehat{\gamma}}}_{R_{0},\gamma_{0}}\right\|_{\infty}\), where \(V^{\pi}_{R,\gamma}\) represents the value function of policy \(\pi\) evaluated under \((R,\gamma)\). The "best" policy \(\pi^{*}_{\widehat{R},\widehat{\gamma}}\) is the one that minimizes this loss. We aim to use this loss to guide our selection of the \((\widehat{R},\widehat{\gamma})\) pair. Existing IRL works either use the ground-truth \(\gamma_{0}\) or a smaller one to reduce the computation burden. In this work, we investigate how to choose an effective horizon \(\widehat{\gamma}\leq\gamma_{0}\) that minimizes the loss defined above. We define the optimal horizon as:

\[\widehat{\gamma}^{*}=\operatorname*{arg\,min}_{0\leq\widehat{\gamma}\leq \gamma_{0}}\left\|V^{\pi^{*}_{R_{0},\gamma_{0}}}_{R_{0},\gamma_{0}}-V^{\pi^{*} _{\widehat{R},\widehat{\gamma}}}_{R_{0},\gamma_{0}}\right\|_{\infty}. \tag{1}\]

## 4 Analysis

### Overview

How does the effective horizon affect reward learning from expert demonstrations? We formally analyze the dependency between the effective horizon and the quality of the learned reward function under different coverage of the expert data. Our main Theorem 4.1 shows that, given limited expert data coverage, employing a discount factor smaller than the ground-truth value allows IRL methods to learn "better" reward functions that induce policies more closely aligned with the expert.
**Theorem 4.1**.: _Assume two MDPs with a shared controlled Markov process: \((S,A,P,R_{0},\gamma_{0})\) and \((S,A,P,\widehat{R},\widehat{\gamma})\), where \(R_{0}\) and \(\gamma_{0}\) are the non-negative ground-truth reward function and discount factor, while the reward function \(\widehat{R}:S\times A\rightarrow\mathbb{R}_{\geq 0}\) and the effective horizon \(\widehat{\gamma}\) are estimated from expert demonstrations visiting \(N\) states. Let \(|\Pi_{\widehat{\gamma}}|\) measure the complexity of the policy class induced by the estimated effective horizon \(\widehat{\gamma}\). Then, for the optimal policies \(\pi^{*}_{R_{0},\gamma_{0}}\) induced by the ground-truth \((R_{0},\gamma_{0})\) pair and \(\pi^{*}_{\widehat{R},\widehat{\gamma}}\) induced by the estimated \((\widehat{R},\widehat{\gamma})\) pair, the difference in value function is bounded by:_

\[\left\|V^{\pi^{*}_{R_{0},\gamma_{0}}}_{R_{0},\gamma_{0}}-V^{\pi^{*}_{\widehat{R},\widehat{\gamma}}}_{R_{0},\gamma_{0}}\right\|_{\infty}\leq\frac{\gamma_{0}-\widehat {\gamma}}{(1-\gamma_{0})(1-\widehat{\gamma})}R_{max}+\frac{2R_{\max}}{(1- \widehat{\gamma})^{2}}\sqrt{\frac{1}{2N}\log\frac{|S||\Pi_{\widehat{\gamma}}|} {2\delta}} \tag{2}\]

_with probability at least \(1-\delta\)._

Intuitively, Theorem 4.1 bounds the performance disparity between the policy induced by the learned \((\widehat{R},\widehat{\gamma})\) and the expert policy as a sum of two terms: as \(\widehat{\gamma}\) increases, the first error term diminishes, encouraging fidelity to the ground-truth \(\gamma_{0}\), and approaches \(0\) when \(\widehat{\gamma}\rightarrow\gamma_{0}\); meanwhile, the second error term grows due to overfitting arising from estimating a policy from an increasingly complex class \(\Pi_{\widehat{\gamma}}\) using the limited expert data covering \(N\) states. Consequently, these opposing error terms imply that an intermediate value of \(\widehat{\gamma}\) yields a better reward function that induces the most expert-like policy.
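The trade-off between the two terms can be visualized numerically. Since \(|\Pi_{\widehat{\gamma}}|\) has no closed form, the sketch below plugs in a purely hypothetical monotone growth model \(\log|\Pi_{\widehat{\gamma}}|=c\,\widehat{\gamma}/(1-\widehat{\gamma})\) (consistent in spirit with Theorem 4.3); all constants are illustrative, not taken from the paper:

```python
import numpy as np

# Numeric sketch of the RHS of Equation 2 under an *assumed* policy-class
# growth model; R_max, gamma0, N, |S|, delta, and c are illustrative.
R_max, gamma0, N, S, delta, c = 1.0, 0.9, 200, 50, 0.05, 2.0

def bound(g):
    # First term of Eq. 2: loss from using a shorter horizon g < gamma0.
    term1 = (gamma0 - g) / ((1 - gamma0) * (1 - g)) * R_max
    # Hypothetical monotone model for log |Pi_g| (not in the paper).
    log_pi = c * g / (1 - g)
    # Second term of Eq. 2: overfitting with N covered states.
    term2 = 2 * R_max / (1 - g) ** 2 * np.sqrt(
        (np.log(S / (2 * delta)) + log_pi) / (2 * N))
    return term1 + term2

grid = np.linspace(0.0, 0.89, 90)
values = bound(grid)
g_star = grid[np.argmin(values)]     # minimizer of the bound over the grid
```

With these assumed constants, the first term dominates near \(\widehat{\gamma}=0\) and the second near \(\gamma_{0}\), so the minimizer \(\widehat{\gamma}^{*}\) lands strictly inside \((0,\gamma_{0})\), mirroring the discussion above.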
We outline the proof idea below. We begin by establishing a bound on the estimation error of the reward function in terms of the _effective horizon_ \(\widehat{\gamma}\) and the _expert state coverage_ \(N\), which corresponds to the second error term on the RHS of the overall bound in Theorem 4.1. To start with, the expert policy estimation error depends on both the expert state coverage \(N\) and the complexity of the policy class \(\Pi_{\widehat{\gamma}}\) used for estimation, while this complexity \(|\Pi_{\widehat{\gamma}}|\) in turn increases monotonically with the effective horizon \(\widehat{\gamma}\) (Theorem 4.3). As \(\widehat{\gamma}\) increases, the complexity of the policy class \(\Pi_{\widehat{\gamma}}\) rises; hence the fitted policy is more likely to overfit given limited expert state coverage \(N\). Moreover, as IRL learns the reward function by matching the induced policy with the expert, the expert policy estimation error further propagates to the reward function estimation (Theorem 4.6). Therefore, the estimation error of the reward function in the second error term of Theorem 4.1 is bounded by i) the _effective horizon_ \(\widehat{\gamma}\) that controls the complexity \(|\Pi_{\widehat{\gamma}}|\) of the induced policy class, and ii) the _expert state coverage_ \(N\).

Next, we derive the overall bound in Theorem 4.1. This bound measures the performance disparity between the policy induced by the learned reward-horizon pair and the optimal policy. Intuitively, this performance gap is caused by i) the difference in the horizons they optimize over, and ii) the difference in the reward functions. We therefore split this difference into two simpler error terms: i) the first term measures the performance drop due to evaluating the ground-truth optimal policy using different _horizons_, and ii) the second term accounts for the difference between the learned and ground-truth _reward functions_. This intermediate result is formalized in Theorem 4.7.
Its first term can easily be simplified to the final form, while the estimation error of the _reward function_ in the second term can be further bounded using the _effective horizon_ and the _state coverage of the expert samples_, as described above. This completes the proof.

The remainder of this section is organized as follows. In Section 4.2, we derive how the horizon controls the complexity measure of the policy class and prove the monotonicity (Theorem 4.3). In Section 4.3, we describe the feasible reward function set as a function of the expert policy (Lemma 4.5). In Section 4.4, we derive the error bound on reward function estimation from limited expert data coverage (Theorem 4.6). In Section 4.5, we prove how the error in reward function estimation and the difference in horizons propagate to the error in the value function estimation (Theorem 4.7). Finally, in Section 4.6, we combine all the results above and prove the final bound in Theorem 4.1.

### Complexity of the Policy Class

We propose to use the number of potentially optimal policies under the fixed state space, action space, and transition function as the complexity measure of the policy class for different values of \(\gamma\) when the reward function is unknown, under mild conditions.

**Definition 4.2** (Complexity Measure).: The complexity measure of the policy class under a specific \(\gamma\) is defined as the number of optimal policies under the fixed state space \(S\), action space \(A\), and transition function \(P\), but with an arbitrary reward function \(R^{\prime}\in F_{R}\) that satisfies the assumption described below. Formally, we define the class of optimal policies corresponding to the given \(\gamma\) as: \[\Pi_{\gamma}=\{\pi:\exists R\in F_{R}\text{ s.t. 
}\pi\text{ is optimal in }(S,A,P,R,\gamma)\},\]

where \(F_{R}\) is the set of reward functions that satisfy the following assumption: in each state \(s\in S\), there is a fixed state-action pair \((s,a^{*})\) whose reward \(R(s,a^{*})\) is strictly higher than that of any other action, and this pair is shared by all \(R\in F_{R}\). To ensure a meaningful discussion, we assume this specific form for reward functions, as any policy can be optimal when considering arbitrary reward functions. The complexity of the policy class w.r.t. \(\gamma\) is the number of optimal policies in the corresponding policy class, which is \(|\Pi_{\gamma}|\).

Next, we prove that the complexity of the policy class defined in Definition 4.2 increases monotonically as the discount factor \(\widehat{\gamma}\) increases. We refer the readers to Appendix A for the full proof.

**Theorem 4.3**.: _Under a specific MDP \(M=(S,A,P,\cdot,\cdot)\) with fixed state space \(S\), action space \(A\), and transition function \(P\), we define the optimal policy class according to Definition 4.2. Then we have the following claims:_

1. \(\forall\gamma,\gamma^{\prime}\in[0,1)\)_, if_ \(\gamma<\gamma^{\prime}\)_, then_ \(\Pi_{\gamma}\subseteq\Pi_{\gamma^{\prime}}\)_._
2. _When_ \(\gamma=0\)_,_ \(|\Pi_{0}|=1\)_._
3. _If_ \(\gamma\to 1\)_,_ \(|\Pi_{\gamma}|\geq(|A|-1)^{|S|-1}\left|S\right|\) _under mild conditions._

Intuitively, claim 1 asserts that as the discount factor \(\gamma\) grows, the number of potentially optimal policies increases monotonically. Claims 2 and 3 collectively demonstrate that, under mild conditions, policy complexity rises drastically with \(\gamma\): when the discount factor is at its lowest (\(\gamma=0\)), there is only one optimal policy, as every reward function in \(F_{R}\) has the same unique maximizing state-action pair in each state; as \(\gamma\) increases, the optimal policy class can encompass nearly all possible policies, with \(|\Pi_{\gamma}|\geq(|A|-1)^{|S|-1}|S|\).
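The monotone growth in these claims can be probed by brute force on a tiny random MDP. In the sketch below (sizes, sampling scheme, and the fixed best-action map are all invented for illustration), \(F_{R}\) is approximated by random sampling, so the resulting counts are lower bounds on \(|\Pi_{\gamma}|\):

```python
import numpy as np

# Empirically probe Theorem 4.3: count distinct optimal policies induced by
# random rewards from (a sampled approximation of) F_R at several gammas.
rng = np.random.default_rng(0)
nS, nA = 3, 2
P = rng.dirichlet(np.ones(nS), size=(nS, nA))      # P[s, a] is a distribution
a_star = rng.integers(nA, size=nS)                 # fixed best action per state

def sample_reward():
    """Random R in F_R: a_star[s] is the strict, shared argmax in every state."""
    R = rng.uniform(0.0, 1.0, size=(nS, nA))
    R[np.arange(nS), a_star] = R.max(axis=1) + rng.uniform(0.01, 0.1, size=nS)
    return R

def optimal_policy(R, gamma, iters=400):
    V = np.zeros(nS)
    for _ in range(iters):
        Q = R + gamma * (P @ V)
        V = Q.max(axis=1)
    return tuple(Q.argmax(axis=1))

# Estimated |Pi_gamma| for increasing gamma (500 sampled rewards each).
sizes = [len({optimal_policy(sample_reward(), g) for _ in range(500)})
         for g in (0.0, 0.5, 0.95)]
```

At \(\gamma=0\) the count is exactly one (the shared argmax policy), consistent with claim 2, while larger \(\gamma\) admits more optimal policies.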
In essence, \(\gamma\) effectively controls the complexity of the policy class.

### Feasible Reward Function Set

In this section, we explicitly define all rewards that are consistent with the expert demonstration. To establish an algorithm-agnostic mapping from the fixed set of expert data and the effective horizon to the learned reward functions, we draw inspiration from Metelli et al. (2021) and define a reward function feasible set based on the fundamental formulation of IRL (Ng & Russell, 2000). In particular, the reward function feasible set includes all the reward functions whose induced policies match the expert data. Under this definition, we derive an explicit characterization of the feasible reward set as a function of the expert policy.

First, we implicitly define the feasible reward set based on the IRL definition of Ng & Russell (2000), adapting it for the flexible discount factor.

**Definition 4.4** (IRL Problem).: Let \(\mathcal{M}=(S,A,P)\) be the MDP without the reward function or discount factor. An IRL problem, denoted as \(\Re=(\mathcal{M},\pi^{E})\), consists of the MDP and an expert's policy \(\pi^{E}\). A reward \(\widehat{R}\in\mathbb{R}^{S\times A}\) is feasible for \(\Re\) if there exists a \(\widehat{\gamma}\) such that \(\pi^{E}\) is optimal for the MDP \(\mathcal{M}\cup(\widehat{R},\widehat{\gamma})\), i.e., \(\pi^{E}\in\Pi_{\widehat{R},\widehat{\gamma}}^{*}\). We use \(\mathcal{R}_{\Re}\) to denote the set of feasible rewards for \(\Re\).

To ensure that \(\pi^{E}\) is optimal under \((\widehat{R},\widehat{\gamma})\), two conditions derived from the advantage function \(A_{\widehat{R},\widehat{\gamma}}^{\pi^{E}}(s,a)=Q_{\widehat{R},\widehat{ \gamma}}^{\pi^{E}}(s,a)-V_{\widehat{R},\widehat{\gamma}}^{\pi^{E}}(s)\) must be met: (1) if \(\pi^{E}(a|s)>0\), then \(A_{\widehat{R},\widehat{\gamma}}^{\pi^{E}}(s,a)=0\); and (2) if \(\pi^{E}(a|s)=0\), then \(A_{\widehat{R},\widehat{\gamma}}^{\pi^{E}}(s,a)\leq 0\).
The first condition ensures the expert's chosen actions have zero advantage, eliminating any motivation to choose alternatives, while the second guarantees unchosen actions have non-positive advantages. To explicitly express the feasible reward set, we introduce two operators: \((B^{\pi}g)(s,a)=g(s,a)\mathbbm{1}\{\pi(a|s)>0\}\) and \((\bar{B}^{\pi}g)(s,a)=g(s,a)\mathbbm{1}\{\pi(a|s)=0\}\) for any given policy \(\pi\). The _expert-filter_ \((B^{\pi}g)(s,a)\) retains \(g(s,a)\) values for actions taken by the expert policy \(\pi^{E}(a|s)\). Conversely, the _expert-filter-complement_ \((\bar{B}^{\pi}g)(s,a)\) preserves values for actions not taken by the expert. The feasible reward set is derived as follows; the detailed derivation is provided in Appendix B.1.

**Lemma 4.5** (Feasible Reward Set, adapted from Metelli et al. (2021)).: _Let \(\Re=(\mathcal{M},\pi^{E})\) be an IRL problem. Let \(\widehat{R}\in\mathbb{R}^{S\times A}\) and \(0<\widehat{\gamma}<1\). Then \(\widehat{R}\) is a feasible reward, \(i.e.\), \(\widehat{R}\in\mathcal{R}_{\Re}\), if and only if there exist \(\zeta\in\mathbb{R}^{S\times A}_{\geq 0}\) and \(V\in\mathbb{R}^{S}\) such that:_

\[\widehat{R}=-\bar{B}^{\pi^{E}}\zeta+(E-\widehat{\gamma}P)V, \tag{3}\]

_where \(E:\mathbb{R}^{|S|}\rightarrow\mathbb{R}^{|S|\times|A|}\) is such that \((Ef)(s,a)=f(s)\)._

The reward function in Lemma 4.5 comprises two terms based on the expert policy \(\pi^{E}\) and the MDP. Specifically, the first term \(-\bar{B}^{\pi^{E}}\zeta\) depends solely on \(\pi^{E}\): applying the _expert-filter-complement_ to the non-negative function \(\zeta\), actions played by the expert (i.e., \(\pi^{E}(a|s)>0\)) become zero, while unplayed actions (i.e., \(\pi^{E}(a|s)=0\)) have non-positive values. The second term represents the policy's temporal effect, relying on the MDP's transition function. It can be viewed as reward shaping via the value function, which preserves the expert policy's optimality.
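Lemma 4.5 can be checked directly: construct \(\widehat{R}\) from Equation 3 for an arbitrary \(\zeta\geq 0\) and \(V\), then verify the two advantage conditions stated before the lemma. The toy MDP, expert policy, and the choices of \(\zeta\) and \(V\) below are all illustrative:

```python
import numpy as np

# Construct a feasible reward via Lemma 4.5 and verify the advantage conditions.
rng = np.random.default_rng(1)
nS, nA, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))      # P[s, a] is a distribution
pi_E = np.array([0, 1, 0])                         # deterministic expert action per state

zeta = rng.uniform(0.5, 1.0, size=(nS, nA))        # any zeta >= 0
V = rng.uniform(0.0, 1.0, size=nS)                 # any shaping value function

unplayed = np.ones((nS, nA), dtype=bool)
unplayed[np.arange(nS), pi_E] = False              # Bbar^{pi_E} keeps these entries

# Equation 3:  R = -Bbar^{pi_E} zeta + (E - gamma * P) V,  with (E V)(s,a) = V(s)
R = -zeta * unplayed + (V[:, None] - gamma * (P @ V))

# Verify: the advantage of pi_E is 0 on expert actions and <= 0 elsewhere.
idx = np.arange(nS)
V_piE = np.linalg.solve(np.eye(nS) - gamma * P[idx, pi_E], R[idx, pi_E])
A = R + gamma * (P @ V_piE) - V_piE[:, None]       # A^{pi_E}(s, a)
```

By construction \(V^{\pi^{E}}=V\), so the advantage reduces to \(-\bar{B}^{\pi^{E}}\zeta\): zero on expert actions and non-positive elsewhere, which is exactly the feasibility condition.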
### Reward Inaccuracy due to the Estimation Error of the Expert Policy

This section examines how the expert policy estimation error propagates to the feasible reward set estimation. We consider two IRL problems, \(\Re=(\mathcal{M},\pi^{E})\) and \(\hat{\Re}=(\mathcal{M},\hat{\pi}^{E})\), which differ only in their expert policies: \(\Re\) utilizes the ground-truth expert policy, while \(\hat{\Re}\) employs a policy estimated from samples. Since an IRL algorithm aligns its induced policy with the estimated expert policy, its feasible set will be equivalent to that of the estimated expert policy. Intuitively, inaccuracies in estimating the expert policy \(\pi^{E}\) lead to errors in estimating the feasible set \(\mathcal{R}_{\Re}\). Our goal is to obtain a reward function \(\widehat{R}\) with a feasible set "close" to \(R_{0}\)'s feasible set. Specifically, "closeness" is determined by the distance between the nearest reward functions in each set. The estimated \(\mathcal{R}_{\hat{\Re}}\) is considered close to the exact \(\mathcal{R}_{\Re}\) if, for every reward \(R_{0}\in\mathcal{R}_{\Re}\), there exists an estimated reward \(\widehat{R}\in\mathcal{R}_{\hat{\Re}}\) with a small \(|R_{0}-\widehat{R}|\) value. The following result outlines how errors in the expert policy \(\pi^{E}\) propagate to the reward functions.

**Theorem 4.6** (Adapted from Theorem 3.1 in Metelli et al. (2021)).: _Let \(\Re=(\mathcal{M},\pi^{E})\) and \(\hat{\Re}=(\mathcal{M},\hat{\pi}^{E})\) be two IRL problems. Then for any \(R_{0}\in\mathcal{R}_{\Re}\) such that \(R_{0}=-\bar{B}^{\pi^{E}}\zeta+(E-\gamma_{0}P)V\) and
\(\left\|R_{0}\right\|_{\infty}\leq R_{\max}\), there exists \(\widehat{R}\in\mathcal{R}_{\hat{\Re}}\) such that element-wise it holds that:_

\[\left|R_{0}-\widehat{R}\right|\leq\bar{B}^{\pi^{E}}B^{\hat{\pi}^{E}}\zeta. \tag{4}\]

_Furthermore, \(\left\|\zeta\right\|_{\infty}\leq\frac{R_{\max}}{1-\gamma_{0}}\)._

Intuitively, this result states the existence of a reward function \(\widehat{R}\) in the estimated feasible set \(\mathcal{R}_{\hat{\Re}}\) whose error is bounded by the error in expert policy estimation. Specifically, the bound is non-zero only for state-action pairs where \(\pi^{E}(a|s)=0\) and \(\hat{\pi}^{E}(a|s)>0\), i.e., actions not taken by the expert but erroneously believed to be taken by the estimated expert policy. Thus, to zero out more entries in the reward function's error term, we need the expert data to cover more states. We refer the readers to Appendix B.2 for the detailed proof.

### Decomposing the Error in Value Function

This section decomposes the difference in value function between the two optimal policies induced by different reward-horizon pairs into two simpler error terms: the first depends only on the difference in the horizon used for evaluating the ground-truth optimal policy, while the second term exclusively captures the estimation error of the reward function. Let \(\pi^{*}_{R_{0},\gamma_{0}}\in\Pi_{\gamma_{0}}\) be the optimal policy induced by the ground-truth reward and discount factor, and \(\pi^{*}_{\widehat{R},\widehat{\gamma}}\in\Pi_{\widehat{\gamma}}\) be the optimal policy induced by the learned reward function and the corresponding \(\widehat{\gamma}\). We aim to find an upper bound for the difference between the value functions of \(\pi^{*}_{R_{0},\gamma_{0}}\) and \(\pi^{*}_{\widehat{R},\widehat{\gamma}}\) when evaluated under the ground-truth \(R_{0}\) and \(\gamma_{0}\). As directly evaluating this error is challenging, we divide it into two more manageable error terms.
In particular, we bound \(\left\|V^{\pi^{*}_{R_{0},\gamma_{0}}}_{R_{0},\gamma_{0}}-V^{\pi^{*}_{\widehat {R},\widehat{\gamma}}}_{R_{0},\gamma_{0}}\right\|\) in terms of \(\widehat{\gamma}\) and the difference in reward functions \(\left\|R_{0}-\widehat{R}\right\|\).

**Theorem 4.7**.: _Assume two partial MDPs with a shared \(\mathcal{M}=(S,A,P)\). Let \(R_{0}\) and \(\gamma_{0}\) be the non-negative ground-truth reward function and discount factor of the exact MDP, while \(\widehat{R}\in\mathbb{R}^{S\times A}_{\geq 0}\) and \(\widehat{\gamma}\) from the second MDP are estimated from data. Then, for the optimal policies \(\pi^{*}_{R_{0},\gamma_{0}}\) induced by the ground-truth \((R_{0},\gamma_{0})\) pair and \(\pi^{*}_{\widehat{R},\widehat{\gamma}}\) induced by the estimated \((\widehat{R},\widehat{\gamma})\) pair, the difference in value function is bounded as follows:_

\[\left\|V^{\pi^{*}_{R_{0},\gamma_{0}}}_{R_{0},\gamma_{0}}-V^{\pi^{*}_{\widehat {R},\widehat{\gamma}}}_{R_{0},\gamma_{0}}\right\|\leq\frac{\gamma_{0}-\widehat {\gamma}}{(1-\gamma_{0})(1-\widehat{\gamma})}\,R_{max}+\frac{2}{1-\widehat{ \gamma}}\left\|R_{0}-\widehat{R}\right\|. \tag{5}\]

The proof for Theorem 4.7 can be found in Appendix B.3. The overall error in Theorem 4.7 consists of two terms. The first error term represents the performance loss due to utilizing a smaller discount factor, \(\widehat{\gamma}<\gamma_{0}\). As \(\widehat{\gamma}\) increases, this error decreases and approaches \(0\) when \(\widehat{\gamma}\to\gamma_{0}\). The second error term arises from employing the learned reward function, \(\widehat{R}\), instead of the ground-truth reward function, \(R_{0}\). Notably, \(\widehat{R}\) is learned using a reduced \(\widehat{\gamma}\) from limited expert demonstrations. Contrary to the first error term, the second error increases with larger \(\widehat{\gamma}\) values, as the reward function estimation error is compounded over an extended horizon, leading to a greater overall loss.
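The inequality in Theorem 4.7 can be sanity-checked numerically on a random toy MDP (all quantities below are invented for illustration, and value iteration stands in for exact planning):

```python
import numpy as np

# Check the inequality of Theorem 4.7 (Eq. 5) on a random toy MDP.
rng = np.random.default_rng(2)
nS, nA = 4, 3
P = rng.dirichlet(np.ones(nS), size=(nS, nA))      # P[s, a] is a distribution
R0 = rng.uniform(0.0, 1.0, size=(nS, nA))          # non-negative ground truth
gamma0, gamma_hat = 0.95, 0.6
R_hat = np.clip(R0 + rng.normal(0, 0.1, size=(nS, nA)), 0.0, None)  # R_hat >= 0

def optimal_policy(R, gamma, iters=2000):
    V = np.zeros(nS)
    for _ in range(iters):
        Q = R + gamma * (P @ V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def policy_value(pi, R, gamma):
    idx = np.arange(nS)
    return np.linalg.solve(np.eye(nS) - gamma * P[idx, pi], R[idx, pi])

pi_star = optimal_policy(R0, gamma0)
pi_hat = optimal_policy(R_hat, gamma_hat)
# LHS: both policies evaluated under the ground-truth (R0, gamma0).
lhs = np.max(np.abs(policy_value(pi_star, R0, gamma0)
                    - policy_value(pi_hat, R0, gamma0)))
# RHS: the two terms of Eq. 5 with sup-norms.
R_max = R0.max()
rhs = ((gamma0 - gamma_hat) / ((1 - gamma0) * (1 - gamma_hat)) * R_max
       + 2.0 / (1 - gamma_hat) * np.max(np.abs(R0 - R_hat)))
```

On this instance the measured gap `lhs` stays below the bound `rhs`, as the theorem guarantees.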
Since the two error terms on the RHS of Theorem 4.7 are influenced by \(\widehat{\gamma}\) in opposite ways, the bound will be optimized at an intermediate value.

### Overall Bound on the Performance Loss

We have established i) how the effective horizon controls the complexity of the induced policy class (Theorem 4.3), ii) how the expert policy estimation error bounds the reward estimation error (Theorem 4.6), and iii) how to decompose the performance gap between the induced and ground-truth optimal policies (Theorem 4.7). In this section, we integrate these results to prove how the effective horizon affects the performance of the final policy induced by the reward function learned from limited expert state coverage.

We first deduce how the effective horizon affects the expert policy estimation error in cases of limited expert state coverage. As demonstrated in Theorem 4.3, policy class complexity increases monotonically with the effective horizon. Utilizing Hoeffding's inequality, we bound the expert policy estimation error by the policy class complexity based on i.i.d. expert samples covering \(N\) states. The resulting inequality indicates that a smaller horizon can control policy class complexity and reduce overfitting in expert policy estimation when expert state coverage is limited. For a detailed proof, please refer to Appendix B.4.

Finally, we substitute the bound on the estimation error of the expert policy into the bound on the reward function estimation error in Theorem 4.7, completing the proof of Theorem 4.1. The overall loss comprises two terms that exhibit opposing dependencies on \(\widehat{\gamma}\). This suggests the existence of an intermediate value \(0<\widehat{\gamma}<\gamma_{0}\) that minimizes the overall loss, which will be empirically demonstrated in the following section.
## 5 Experiments

In this section, we empirically examine Theorem 4.1 using both linear programming IRL (LP-IRL) (Ng & Russell, 2000) and maximum entropy IRL (MaxEnt-IRL) (Ziebart et al., 2008), adapted to the setting of partial expert coverage and varying discount factors (implementation details in Appendices D and E, respectively). Both algorithms originally utilize the ground-truth discount factor, \(\gamma_{0}\), to learn reward functions. To extend them from their original settings to jointly learn the proposed function class \((\widehat{R},\widehat{\gamma})\), we apply a cross-validation extension to optimize the discount factor in the modified IRL methods (details in Section 5.2). Specifically, we answer the following questions:

1. Can a lower \(\widehat{\gamma}<\gamma_{0}\) improve IRL policy performance?
2. How does \(\widehat{\gamma}^{*}\) change with increasing expert coverage \(N\)?
3. Is the cross-validation extension effective in finding \(\widehat{\gamma}^{*}\)?

We evaluate the performance of LP-IRL and MaxEnt-IRL on four Gridworld and Objectworld tasks with varying reward complexity. For Q.1, we measure the number of incorrectly induced actions under varying discount factors and different coverage of expert data. Our findings show that the optimal \(\widehat{\gamma}\)s across all expert coverage levels are smaller than \(\gamma_{0}\) for both algorithms. For Q.2, we plot how the optimal discount factors change as the expert coverage increases. The consistent U-shaped curves observed in all cases align with the anticipated overfitting effect implied by the second error term in Equation 2.
For Q.3, we compare the performance of policies selected via cross-validation with baseline policies selected using the oracle counts of incorrect actions (this baseline "cheats" by learning the reward function using \(100\%\) of the expert data and validating using the error counts in the unseen states). Our results indicate that the discrepancy in performance is negligible for all tasks, demonstrating the effectiveness of cross-validation in selecting \(\widehat{\gamma}^{*}\).

### Task Setup

We design four tasks of varying complexity in reward functions: Gridworld-simple, Gridworld-hard, Objectworld-linear, and Objectworld-nonlinear, adapted from Ng & Russell (2000) and Levine et al. (2011). We illustrate each task instance in the first three columns of Table 1, and more details on the task specification are in Appendix C. The ground-truth discount factor is \(\gamma_{0}=0.99\).
The Gridworld tasks provide sparse rewards only at randomly sampled goals: Gridworld-simple has fewer goals (\(4\)) and a smaller state space (\(10\times 10\) states), while Gridworld-hard has more goals (\(6\)) and a larger state space (\(15\times 15\) states). On the other hand, the Objectworld tasks have denser ground-truth rewards that are functions of nearby object features. The reward function for Objectworld-linear is linear with respect to the features of nearby objects, while that of Objectworld-nonlinear is non-linear. Intuitively, learning a complex reward function may be more susceptible to overfitting, especially when expert state coverage is sparse compared to the state space.

We consider different percentages of state coverage by expert demonstrations. We say a set of expert demonstrations \(D=\{\tau_{0},\tau_{1},\dots\}\) _covers_ a set of \(N\) states, with the set being the union of states traversed by trajectories in \(D\). A demonstration set \(D\) covers \(K\%\) of states if \(K\%=N/|S|\), with \(S\) denoting the state space. We evaluate induced policy performance by _counting errors over state sets_: the number of states where the induced and expert policies execute different actions.

### Cross Validation Extension

We use cross-validation to determine the optimal discount factor \(\widehat{\gamma}^{*}\), given the expert demonstrations \(D\) covering \(N\) states. We divide \(D\) into training and validation sets, ensuring no overlap. Next, we uniformly sample \(M\) discount factors from \((0,\gamma_{0})\), learning the reward function \(R_{\gamma}\) for each using the training set. The optimal reward-horizon pair minimizes the error count on the validation set. We randomly sample \(10\) environments per task, reporting the mean and standard deviation of the errors. For all tasks, we assign 80% of the demonstrations as the training set and 20% as the validation set.
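This selection loop can be sketched end to end. The snippet below is a stand-in, not the paper's pipeline: the MDP is a small random one, and `learn_reward` is a naive behavioral-cloning-style learner that ignores \(\gamma\), whereas the paper re-fits LP-IRL/MaxEnt-IRL for each sampled discount factor:

```python
import numpy as np

# Toy cross-validation loop over candidate discount factors.
rng = np.random.default_rng(3)
nS, nA, gamma0 = 12, 2, 0.99
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
R0 = rng.uniform(0.0, 1.0, size=(nS, nA))

def optimal_policy(R, gamma, iters=3000):
    V = np.zeros(nS)
    for _ in range(iters):
        Q = R + gamma * (P @ V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

pi_E = optimal_policy(R0, gamma0)                       # expert policy
covered = rng.choice(nS, size=8, replace=False)         # expert covers 8 states
train, val = covered[:6], covered[6:]                   # 80% / 20% split

def learn_reward(states):
    """Naive stand-in learner: reward 1 on observed expert actions, else 0."""
    R = np.zeros((nS, nA))
    R[states, pi_E[states]] = 1.0
    return R

candidates = np.linspace(0.1, gamma0, 9)                # sampled discount factors
errors = []
for g in candidates:
    pi_g = optimal_policy(learn_reward(train), g)
    errors.append(np.sum(pi_g[val] != pi_E[val]))       # error count on validation
gamma_star = candidates[int(np.argmin(errors))]
```

The selected `gamma_star` is the candidate whose induced policy makes the fewest action errors on the held-out validation states, mirroring the procedure described above.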
To better understand the effectiveness of cross-validation, we employ an oracle representing the best policy learnable from the available expert data. This oracle, considered "cheating", utilizes the entire state space (both observed and unobserved states) for validation and leverages all expert data for training. We examine whether the \(\widehat{\gamma}^{*}\)s chosen by cross-validation correspond to this oracle.

### Results

We assess the impact of the _effective horizon_ on IRL by examining both LP-IRL and MaxEnt-IRL across four tasks.1 The policy performance results are presented in Tables 1 and 2 for LP-IRL and MaxEnt-IRL, respectively. We utilize the _error counts_ metric, which quantifies the discrepancies between induced and expert policies by counting differing actions in corresponding states.

Footnote 1: LP-IRL utilizes a discount factor \(\gamma\), while MaxEnt-IRL employs a horizon \(T\). To ease the presentation, our analysis treats \(\gamma\) and \(T\) interchangeably, with findings for \(\gamma\) also applicable to \(T\), unless specified otherwise.

As illustrated in Tables 1 and 2, the optimal discount factor \(\widehat{\gamma}^{*}<\gamma_{0}\) for all four tasks across various levels of expert data coverage in both LP-IRL and MaxEnt-IRL. For low coverage, the error count curves are generally U-shaped: discrepancies with the expert policy decrease as \(\widehat{\gamma}\) increases to the "sweet spot" and then rise drastically. This confirms our error bounds in Theorem 4.1: with small \(\widehat{\gamma}\), the second error term in Equation 2, caused by overfitting, is less prominent, and increasing \(\widehat{\gamma}\) allows temporal extrapolation, reducing the overall error. However, with larger \(\widehat{\gamma}\), overfitting becomes more significant, outweighing the reduction in the first error term and increasing the overall error.
Under high data coverage, error counts either remain low (in LP-IRL) or drop initially (in MaxEnt-IRL) for small \(\widehat{\gamma}\) and strictly increase as \(\widehat{\gamma}\) grows further for both methods, implying that \(\widehat{\gamma}^{*}<\gamma_{0}\) induces the most expert-like policy, confirming our theoretical result. Interestingly, for LP-IRL, the error counts do not initially drop as \(\widehat{\gamma}\) increases. This is due to accurate step-wise behavior matching under dense expert data, making the performance gains from temporal reasoning (the first error term) negligible. This supports Spencer et al. (2021)'s insight that naive behavioral cloning excels with large expert demonstration coverage compared to IRL algorithms. However, these low error counts for small \(\widehat{\gamma}\)s are not seen in MaxEnt-IRL, as it parameterizes the reward function linearly in the state features, limiting its capability to precisely copy the step-wise actions even under high data coverage. We propose a function class that jointly learns reward-horizon pairs and empirically substantiate our analysis using a cross-validation extension for the existing IRL algorithms. As overfitting remains a challenge for IRL, especially with scarce expert data, we believe our findings offer valuable insights for the IRL community on better IRL formulations.
2307.09672
Convex Geometry of ReLU-layers, Injectivity on the Ball and Local Reconstruction
The paper uses a frame-theoretic setting to study the injectivity of a ReLU-layer on the closed ball of $\mathbb{R}^n$ and its non-negative part. In particular, the interplay between the radius of the ball and the bias vector is emphasized. Together with a perspective from convex geometry, this leads to a computationally feasible method of verifying the injectivity of a ReLU-layer under reasonable restrictions in terms of an upper bound of the bias vector. Explicit reconstruction formulas are provided, inspired by the duality concept from frame theory. All this gives rise to the possibility of quantifying the invertibility of a ReLU-layer and a concrete reconstruction algorithm for any input vector on the ball.
Daniel Haider, Martin Ehler, Peter Balazs
2023-07-18T22:54:51Z
http://arxiv.org/abs/2307.09672v1
# Convex Geometry of ReLU-Layers, Injectivity on the Ball and Local Reconstruction ###### Abstract The paper uses a frame-theoretic setting to study the injectivity of a ReLU-layer on the closed ball of \(\mathbb{R}^{n}\) and its non-negative part. In particular, the interplay between the radius of the ball and the bias vector is emphasized. Together with a perspective from convex geometry, this leads to a computationally feasible method of verifying the injectivity of a ReLU-layer under reasonable restrictions in terms of an upper bound of the bias vector. Explicit reconstruction formulas are provided, inspired by the duality concept from frame theory. All this gives rise to the possibility of quantifying the invertibility of a ReLU-layer and a concrete reconstruction algorithm for any input vector on the ball. ## 1 Introduction The Rectified Linear Unit ReLU\((s)=\max(0,s)\), \(s\in\mathbb{R}\), has become indispensable in modern neural network architecture. It is applied component-wise on the output of an affine linear function \(Ax-b\), comprising the multiplication by a weight matrix \(A\) and the shift by a bias vector \(b\). The combined mapping is called a _ReLU-layer_. This has proven to be a simple, yet effective non-linear mapping that handles fundamental problems in the training of deep neural networks well (Glorot et al., 2011; Krizhevsky et al., 2012; Goodfellow et al., 2016; Nair and Hinton, 2010). Despite its simplicity, the ReLU function still hides some mysteries and is an active topic of research (Dittmer et al., 2020). Recently, invertible network architectures have been getting a lot of attention due to their increased interpretability and the possibility of reversing the forward process analytically, which is especially interesting in a generative setting. 
This found many applications in the context of normalizing flows, offering exact and efficient likelihood estimations (Dinh et al., 2017; Donahue et al., 2017). Mathematically speaking, the forward process in such an invertible architecture must be _injective_, guaranteeing the existence of a _left-inverse_ that allows perfect reconstruction of any input. A ReLU-layer is a mapping that is designed to provide sparse output. Hence, its injectivity is an interesting property that has received only little theoretical attention in the literature. Bruna et al. characterized a ReLU-layer to be injective in terms of an admissibility condition for index sets and proved a bi-Lipschitz stability condition for an injective ReLU-layer, see Proposition 2.2 in (Bruna et al., 2014). Just recently, Puthawala et al. formulated a condition in terms of spanning sets that is equivalent to the one in (Bruna et al., 2014) (with a slight modification) and describes the injectivity of ReLU-networks consisting of many ReLU-layers, see Theorem \(2\) in (Puthawala et al., 2022). Both conditions, however, are not applicable to verify the injectivity of a ReLU-layer for a given weight matrix in practice. The presented work provides exactly that. We found the convex geometry of the weight matrix to play an essential role in the injectivity analysis for the associated ReLU-layer, using a concept that Behrmann et al. introduced in Theorem 4 of (Behrmann et al., 2018). The geometrical perspective helps profoundly to strengthen the intuition on the effect of the ReLU function. It allows us to formulate a computationally feasible method to give a sufficient condition for injectivity. This shall contribute to the enhancement of the interpretability of neural networks in terms of a way to quantify the invertibility of a ReLU-layer with corresponding exact reconstruction formulas. 
Aiming to set a rigorous foundation for future work on this topic, we formulate all results in an abstract mathematical manner, using the language of _frame theory_ which we find to be especially well-suited. In Section 2 we interpret a ReLU-layer by means of frame theory and motivate the restriction to the ball. Section 3 is dedicated to the injectivity of a ReLU-layer theoretically. In Section 4 we introduce a method to obtain an upper bound for all biases, such that the corresponding ReLU-layer is injective on the ball and its non-negative part. Explicit reconstruction formulas are stated. Finally, Section 5 demonstrates how the method can be used to analyze the injectivity behavior of a ReLU-layer in numerical experiments. ## 2 Mathematical Context ### Neural Networks meet Frame Theory The goal of this section is to link abstract frame theory with deep learning. We want to particularly emphasize that frames are a well-suited concept for the mathematical analysis of neural networks, not only in terms of notation but also due to its long usage in signal processing which is tied closely to deep learning. In this sense, we build our work upon notation and tools from frame theory for \(\mathbb{R}^{n}\), c.f. (Balazs, 2008; Casazza & Kutyniok, 2012). We shall write \[X=(x_{i})_{i\in I}\subseteq\mathbb{R}^{n}\quad\text{ with }\quad|I|=m\geq n\] to refer to a collection of \(m\) vectors \(x_{1},\ldots,x_{m}\) in \(\mathbb{R}^{n}\). Denoting the usual inner product on \(\mathbb{R}^{n}\) as \(\langle\cdot,\cdot\rangle\) we say that \(X\) constitutes a _frame_ for \(\mathbb{R}^{n}\) with _frame elements_\(x_{i}\), if there are constants \(0<A\leq B<\infty\), such that \[A\cdot\|x\|^{2}\leq\sum_{i\in I}|\langle x,x_{i}\rangle|^{2}\leq B\cdot\|x\|^ {2} \tag{1}\] holds for all \(x\in\mathbb{R}^{n}\). The constants \(A,B\) are called lower and upper frame bounds for \(X\). In \(\mathbb{R}^{n}\), a frame is equivalent to a spanning set. 
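As a quick numerical illustration of the frame inequality (1), assuming NumPy: three unit vectors at mutual angle \(120^{\circ}\) in \(\mathbb{R}^{2}\) form a unit-norm tight frame, so the middle sum equals \(\frac{3}{2}\|x\|^{2}\) for every \(x\), i.e. \(A=B=\frac{m}{n}=\frac{3}{2}\):

```python
import numpy as np

# Three unit vectors at mutual angle 120 degrees in R^2: a unit-norm tight
# frame, for which the frame inequality holds with A = B = m/n = 3/2.
X = np.array([[0.0, 1.0],
              [-np.sqrt(3) / 2, -0.5],
              [np.sqrt(3) / 2, -0.5]])

rng = np.random.default_rng(0)
x = rng.standard_normal(2)
frame_energy = np.sum((X @ x) ** 2)   # sum_i |<x, x_i>|^2
ratio = frame_energy / np.dot(x, x)   # equals 3/2 for every nonzero x
```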
The bounds \(A,B\) become important, if one is interested in the numerical properties of the operators associated with a frame: the _analysis operator_ \[C:\mathbb{R}^{n} \to\mathbb{R}^{m}\] \[x \mapsto\left(\langle x,x_{i}\rangle\right)_{i\in I},\] its adjoint, the _synthesis operator_ \[D:\mathbb{R}^{m} \to\mathbb{R}^{n}\] \[(c_{i})_{i\in I} \mapsto\sum_{i\in I}c_{i}\cdot x_{i},\] and the concatenation of analysis, followed by synthesis, the _frame operator_ \[S:\mathbb{R}^{n} \to\mathbb{R}^{n}\] \[x \mapsto\sum_{i\in I}\langle x,x_{i}\rangle\cdot x_{i}.\] If \(X\) is a frame, then \(C\) is injective, \(D\) surjective, and \(S\) bijective. In \(\mathbb{R}^{n}\) all the above operators are realized via left-multiplication of \(x\) with a corresponding matrix. In this sense, the analysis operator \(C\) can be identified with the \(m\times n\) matrix \[C=\begin{pmatrix}-x_{1}-\\ \vdots\\ -x_{m}-\end{pmatrix}.\] For the synthesis operator, we have that \(D=C^{\top}\). Recall that in matrix terminology, injectivity, and surjectivity relate to the corresponding matrix having full rank. Hence, if the weight matrix of a layer in a neural network has full rank, then it can be interpreted as the analysis operator of the frame consisting of its row vectors if \(m\geq n\) and as the synthesis operator of the frame consisting of its column vectors if \(m\leq n\). At the initialization of a neural network, the weight matrices are commonly set to be Gaussian i.i.d. matrices known to have full rank with probability \(1\)(Mehta, 2004). Hence, one can be (almost) sure to start the training with the rows, resp. columns of the weight matrices to constitute frames. Here, we concentrate on the case where \(m\geq n\) and refer to such a layer as _redundant_. The matrix associated with the frame operator is \(S=DC\). It can be used to construct the _canonical dual frame_ for \(X\), given by \(\tilde{X}=\left(S^{-1}x_{i}\right)_{i\in I}\). 
Denoting \(\tilde{D}\) as the associated synthesis operator leads to the canonical frame decomposition of \(x\in\mathbb{R}^{n}\) by \(X\), \[x=S^{-1}Sx=\sum_{i\in I}\langle x,x_{i}\rangle\cdot S^{-1}x_{i}=\tilde{D}Cx. \tag{2}\] In this way, (2) is equivalent to \(\tilde{D}\) being a left-inverse of \(C\), allowing perfect reconstruction of \(x\) from \(Cx\). To reconstruct an input vector from the output of a ReLU-layer, we will construct a left-inverse for it exactly in the spirit of (2). Finally, one can find the minimal upper and the maximal lower frame bound in (1) via the largest and smallest eigenvalue of \(S\) respectively. The ratio \(\frac{B}{A}\) of these bounds corresponds to the condition number of the linear mapping given by the analysis operator, hence the weight matrix of the network layer, indicating its numerical stability. ### ReLU-layers as Non-linear Analysis Operators In a frame-theoretic context, we define the ReLU-layer associated with a collection of vectors \(X=\left(x_{i}\right)_{i\in I}\subseteq\mathbb{R}^{n}\) and a bias vector \(\alpha\in\mathbb{R}^{m}\) as the non-linear mapping \[C_{\alpha}:\mathbb{R}^{n} \to\mathbb{R}^{m} \tag{3}\] \[x \mapsto\left(\text{ReLU}(\langle x,x_{i}\rangle-\alpha_{i}) \right)_{i\in I}.\] The notation \(C_{\alpha}\) is chosen to reflect the link to the frame analysis operator \(C\). Of course, this is equivalent to how a ReLU-layer is commonly denoted, \(\text{ReLU}(Cx-\alpha)\) where ReLU applies component-wise. For fixed \(x\), the effect of the shift by the bias \(\alpha\) and the ReLU function on the frame analysis can be interpreted as all frame elements with \(\langle x,x_{i}\rangle<\alpha_{i}\) are set to be the zero-vector. According to this observation, we introduce the notation \[I_{x}^{\alpha}:=\{i\in I:\langle x,x_{i}\rangle\geq\alpha_{i}\}, \tag{4}\] determining the index set associated with those frame elements which are not affected by the ReLU function for \(x\). 
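The operators above, the ReLU-layer \(C_{\alpha}\), and the active index set \(I_{x}^{\alpha}\) translate directly into matrix code. A minimal NumPy sketch, using the Mercedes-Benz frame that appears later as an example (concrete numbers are illustrative):

```python
import numpy as np

def relu_layer(C, alpha, x):
    """The ReLU-layer C_alpha(x) = ReLU(Cx - alpha) of (3)."""
    return np.maximum(C @ x - alpha, 0.0)

def active_set(C, alpha, x):
    """The index set I_x^alpha of (4): elements not clipped by the ReLU."""
    return np.flatnonzero(C @ x >= alpha)

# Analysis matrix C (rows = frame elements): the Mercedes-Benz frame.
C = np.array([[0.0, 1.0],
              [-np.sqrt(3) / 2, -0.5],
              [np.sqrt(3) / 2, -0.5]])
S = C.T @ C                              # frame operator S = DC
A, B = np.linalg.eigvalsh(S)[[0, -1]]    # frame bounds via eigenvalues of S

# Canonical frame decomposition (2): x = D~ C x with D~ = S^{-1} C^T.
x = np.array([0.3, -0.4])
x_rec = np.linalg.solve(S, C.T @ (C @ x))   # exact reconstruction

z = relu_layer(C, np.full(3, -0.5), x)      # non-negative layer output
```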
This perspective requires referring to sub-collections of frames very often. We write \(X_{L}=(x_{i})_{i\in L}\) for the sub-collection of \(X\) with respect to the index set \(L\subseteq I\). Analogously, we add \(L\) as a subscript to the operators associated with \(X_{L}\), e.g. \(C_{L}\) is the analysis operator of \(X_{L}\). Clearly, the case where \(L=I_{x}^{\alpha}\) plays a central role. ### Input Data on the Closed Ball One of the core ideas in this paper is the restriction of \(C_{\alpha}\) to the closed ball of radius \(r>0\) in \(\mathbb{R}^{n}\), denoted by \[\mathbb{B}_{r}=\{x\in\mathbb{R}^{n}:\|x\|\leq r\}.\] We write \(\mathbb{B}=\mathbb{B}_{1}\). Indeed, this is a very reasonable assumption when thinking of standard data normalization practices for neural networks (LeCun et al., 2012; Huang et al., 2023). It turns out that this restriction allows for a much richer analysis of the injectivity of \(C_{\alpha}\) than on all of \(\mathbb{R}^{n}\), in particular, involving the radius \(r\). Furthermore, as the output of a ReLU-layer has only non-negative entries, hence lies within \(\mathbb{R}^{n}_{+}\), the input domain of any ReLU-layer that applies to the output of a previous ReLU-layer on the ball lies within the non-negative part of \(\mathbb{B}_{r}\), denoted by \[\mathbb{B}^{+}_{r}=\mathbb{B}_{r}\cap\mathbb{R}^{n}_{+}. \tag{5}\] Similarly, we write \(\mathbb{B}^{+}=\mathbb{B}\cap\mathbb{R}^{n}_{+}\). The boundary of the unit ball, or equivalently, the \((n-1)\)-sphere is denoted by \[\mathbb{S}=\partial\mathbb{B}=\{x\in\mathbb{R}^{n}:\|x\|=1\}.\] ## 3 Injectivity of \(C_{\alpha}\) on \(\mathbb{B}_{r}\) The ReLU-layer mapping \(C_{\alpha}\) is - by design - non-linear, such that a condition for its injectivity will generally depend on the input. 
Fixing \(x\), one notices that if the sub-collection \(X_{I_{x}^{\alpha}}\) is a frame, then the analysis operator \(C_{I_{x}^{\alpha}}\) is injective, which we will use to study the injectivity of \(C_{\alpha}\). For \(\alpha\equiv 0\), Puthawala et al. refer to this property as "\(x\) having a directed spanning set", see Definition 1 in (Puthawala et al., 2022). In the following, we formulate this for general \(\alpha\) and the domain \(\mathbb{B}_{r}\) in the context of frame theory. **Definition 3.1** (\(\alpha\)-rectifying on \(\mathbb{B}_{r}\)).: A collection \(X=(x_{i})_{i\in I}\subseteq\mathbb{R}^{n}\) is called \(\alpha\)-rectifying for \(\alpha\in\mathbb{R}^{m}\) on \(\mathbb{B}_{r}\) if for all \(x\in\mathbb{B}_{r}\) the sub-collection \(X_{I_{x}^{\alpha}}=(x_{i})_{i\in I_{x}^{\alpha}}\) is a frame for \(\mathbb{R}^{n}\). An analogous definition can be formulated for \(\mathbb{B}^{+}_{r}\). Unless explicitly stated, we always refer to \(\mathbb{B}_{r}\) when writing that \(X\) is \(\alpha\)-rectifying, since it covers the case \(\mathbb{B}^{+}_{r}\). In Lemma 2 of the same paper (Puthawala et al., 2022), the authors show that the \(\alpha\)-rectifying property on \(\mathbb{R}^{n}\) characterizes the injectivity of \(C_{\alpha}\). We revisit this characterization for \(\mathbb{B}_{r}\) and \(\mathbb{B}^{+}_{r}\). Again, the frame-theoretic formulation simplifies the statement significantly. **Theorem 3.2** (Injectivity of ReLU-layers on \(\mathbb{B}_{r}\)).: _Consider \(X=(x_{i})_{i\in I}\subseteq\mathbb{R}^{n}\), \(\alpha\in\mathbb{R}^{m}\). If \(X\) is \(\alpha\)-rectifying on \(\mathbb{B}_{r}\) (resp. \(\mathbb{B}^{+}_{r}\)), then \(C_{\alpha}\) is injective on \(\mathbb{B}_{r}\) (resp. \(\mathbb{B}^{+}_{r}\))._ A proof can be found in the appendix. Hence, we can shift the question of injectivity of \(C_{\alpha}\) to the verification of the \(\alpha\)-rectifying property for a given collection of vectors \(X\). 
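Definition 3.1 quantifies over all \(x\in\mathbb{B}_{r}\), so a full verification needs the machinery of Section 4. Still, a brute-force Monte-Carlo check is a useful sanity test; the following is our own sketch, not the paper's method, and a pass is only evidence while a fail is a certificate:

```python
import numpy as np

def is_alpha_rectifying(X, alpha, radius=1.0, n_samples=2000, seed=0):
    """Monte-Carlo check of Definition 3.1: for sampled x in B_r, the active
    sub-collection X_{I_x^alpha} must span R^n (i.e. be a frame)."""
    n = X.shape[1]
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        x = rng.standard_normal(n)
        x *= radius * rng.random() ** (1.0 / n) / np.linalg.norm(x)  # uniform in B_r
        if np.linalg.matrix_rank(X[X @ x >= alpha]) < n:
            return False
    return True

X_mb = np.array([[0.0, 1.0],
                 [-np.sqrt(3) / 2, -0.5],
                 [np.sqrt(3) / 2, -0.5]])
```

For the Mercedes-Benz frame with \(\alpha\equiv-\frac{1}{2}\) the check passes, in line with the example discussed below, while a basis such as the standard basis of \(\mathbb{R}^{2}\) with \(\alpha\equiv 0\) fails on \(\mathbb{B}\).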
**Stability.** Following the lines of (Bruna et al., 2014) and again, switching from \(\mathbb{R}^{n}\) to \(\mathbb{B}_{r}\), one can show that the injectivity of \(C_{\alpha}\) on \(\mathbb{B}_{r}\) implies frame-like inequalities analogous to (1), i.e. there are constants \(0<A_{0}\leq B_{0}<\infty\) such that \[A_{0}\cdot\|x\|^{2}\leq\sum_{i\in I}\left|\text{ReLU}\left(\langle x,x_{i}\rangle-\alpha_{i}\right)\right|^{2}\leq B_{0}\cdot\|x\|^{2} \tag{6}\] for all \(x\in\mathbb{B}_{r}\). Here, \(A_{0}\) can be chosen as the smallest eigenvalue and \(B_{0}\) as the largest eigenvalue of all frame operators associated with the frames \(X_{I_{x}^{\alpha}}\) with \(x\in\mathbb{B}_{r}\). **Inclusiveness.** It is clear that if \(X\) is \(\alpha\)-rectifying, then \(X\) is \(\alpha^{\prime}\)-rectifying for all \(\alpha^{\prime}\leq\alpha\). Therefore, we call \[\alpha\text{ an {\it upper bias} for }C_{\alpha}\text{ if }X\text{ is }\alpha\text{-rectifying.}\] This perfectly reflects the role of the bias vector in a neural network: the larger the bias values, the more neurons are activated by the ReLU function, hence the "more injective" the ReLU-layer becomes in the sense that it is injective for a larger set of bias vectors. Therefore, it is of natural interest to find the largest possible upper bias for a given weight matrix. A unique maximal upper bias, however, does not exist in general. **Restriction to \(\mathbb{S}\).** It is important to notice that we may restrict the \(\alpha\)-rectifying property to unit norm vectors since the norms directly scale the upper bias values \(\alpha_{i}\) and can be re-introduced at any time. In this sense, \(X\) is \(\alpha\)-rectifying if and only if \(\overline{X}=\left(x_{i}\cdot\|x_{i}\|^{-1}\right)_{i\in I}\) is \(\overline{\alpha}\)-rectifying, where \(\overline{\alpha}_{i}=\alpha_{i}\cdot\|x_{i}\|\). Therefore, in the following we will always assume \(X\subseteq\mathbb{S}\), i.e. \(\|x_{i}\|=1\) for all \(i\in I\). 
Note that this corresponds to standard weight normalization (Salimans & Kingma, 2016). **Bias-radius interplay.** Often when studying ReLU-layers theoretically, the bias is implicitly incorporated into the linear part of the operator. However, in our work, we deliberately keep it as a shift, as the interplay of bias and input domain is of central interest. We mentioned that an upper bias \(\alpha\) favors injectivity when it is large. On the other hand, a large input data domain, i.e. a ball with large radius \(r\), offers more flexibility for normalization. However, there is a general trade-off: the larger the radius is chosen, the smaller \(\alpha\) will get, in general, and vice versa. We have the following trivial fact: Any frame is \(\alpha\)-rectifying on \(\mathbb{B}_{r}\) for \(\alpha\equiv-r\), i.e. \(\alpha_{i}=-r\) for all \(i\in I\). Hence, any redundant ReLU-layer is injective on the closed ball with any radius if the bias vector is sufficiently small. For a basis (i.e. \(m=n\)), the above fact also becomes necessary, immediately implying that a basis can never be \(\alpha\)-rectifying on \(\mathbb{R}^{n}\) for any \(\alpha\). However, the standard basis for \(\mathbb{R}^{n}\) is \(\alpha\)-rectifying on \(\mathbb{B}^{+}\) for \(\alpha\equiv 0\). This shows that taking into account the input domain is a crucial step to take when studying injectivity since it naturally adapts to situations where a frame is not \(\alpha\)-rectifying on \(\mathbb{R}^{n}\) but might be on \(\mathbb{B}_{r}\), resp. \(\mathbb{B}_{r}^{+}\). The question that we are now interested in is, _how_ to find a "good" upper bias for \(\mathbb{B}_{r}\) and \(\mathbb{B}_{r}^{+}\)? 
The Mercedes-Benz frame in \(\mathbb{R}^{2}\) (Casazza & Kutyniok, 2012), given by \[X_{mb}=\left(\begin{pmatrix}0\\ 1\end{pmatrix},\begin{pmatrix}-\sqrt{3}/2\\ -1/2\end{pmatrix},\begin{pmatrix}\sqrt{3}/2\\ -1/2\end{pmatrix}\right)\] (see Figure 1) is a particularly good example, where the optimal upper bias for \(\mathbb{B}\) can be found by looking at the geometry of the frame. Its elements determine the vertices of an equilateral triangle so that we can reduce the problem to one pair of elements by symmetry. The worst case is found by \(\langle x_{i},x_{j}\rangle=-\frac{1}{2}\). Hence, \(X_{mb}\) is \(\alpha\)-rectifying on \(\mathbb{B}\) for \(\alpha\equiv-\frac{1}{2}\). This idea can be generalized to polytopes in arbitrary dimensions. In \(\mathbb{R}^{3}\), we obtain that the Tetrahedron frame, given by \[X_{tet}=\frac{1}{\sqrt{3}}\cdot\left(\begin{pmatrix}1\\ 1\\ 1\end{pmatrix},\begin{pmatrix}1\\ -1\\ -1\end{pmatrix},\begin{pmatrix}-1\\ 1\\ -1\end{pmatrix},\begin{pmatrix}-1\\ -1\\ 1\end{pmatrix}\right)\] (see Figure 1) is \(\alpha\)-rectifying on \(\mathbb{B}\) for \(\alpha\equiv-\frac{1}{\sqrt{3}}\). In a more general setting, where the frame elements are not aligned in a regular manner, we can at least reduce the problem to consider every face individually. ## 4 Convex Polytopes and Bias Estimations In a nutshell, we estimate a "good" upper bias vector \(\alpha\) for a given set of vectors \(X\) such that the ReLU-layer mapping \(C_{\alpha}\) is injective on \(\mathbb{B}_{r}\). It turns out that the combinatorial structure of the convex polytope associated with the elements of \(X\) can be related to the \(\alpha\)-rectifying property of \(X\). To prepare the estimation, we shall introduce all building blocks of the procedure for \(\mathbb{B}_{r}\) in Section 4.1 and then deduce a version for \(\mathbb{B}_{r}^{+}\) in Section 4.2. 
For all standard results on convex polytopes, we refer to (Ziegler, 2012). Here, we are specifically interested in convex polytopes that arise as the set of all convex linear combinations of a collection of vectors \(X=(x_{i})_{i\in I}\subseteq\mathbb{S}\), \[P_{X}=\{x\in\mathbb{R}^{n}:x=\sum_{i\in I}c_{i}\cdot x_{i},c_{i}\geq 0,\sum_{i \in I}c_{i}=1\}. \tag{7}\] A face of \(P_{X}\) is any intersection of \(P_{X}\) with an affine half-space (in any dimension) such that none of the interior points of \(P_{X}\) (w.r.t. the induced topology on \(P_{X}\)) lie on its boundary. While vertices and edges are the \(0\)- and \(1\)-dimensional faces of \(P_{X}\), the \((n-1)\)-dimensional faces are called _facets_. For every face and, in particular, every facet \(F\), there are \(a\in\mathbb{R}^{n}\setminus\{0\}\) and \(b\in\mathbb{R}\) such that \[F=\{x\in P_{X}:\langle a,x\rangle=b\}, \tag{8}\] i.e. any facet lies on an affine subspace of codimension \(1\) of \(\mathbb{R}^{n}\). Furthermore, any \(x\in F\) can be written as the convex linear combination, \[x=\sum_{i\in I_{F}}c_{i}\cdot x_{i},\quad c_{i}\geq 0,\quad\sum_{i\in I_{F}}c_{i }=1.\] We shall write the index set of vertices, associated with \(F\) as \[I_{F}=\{i\in I:x_{i}\in F\}. \tag{9}\] The following lemma reveals the core idea of our approach. **Lemma 4.1**.: _Let \(F\) be a facet. If \(0\notin F\), then \(X_{I_{F}}\) is a frame._ In other words, as long as the facet does not go through the origin, the associated vertices form a frame. A proof can be found in the appendix. We call \(X\)_omnidirectional_ if \(0\) lies in the interior of \(P_{X}\)(w.r.t. the topology in \(\mathbb{R}^{n}\)), see Definition 1 in (Behrmann et al., 2018). Equivalently, there cannot be a hyperplane so that the elements in \(X\) are all accumulated on only one side of it. For the proposed bias estimation on \(\mathbb{B}_{r}\), omnidirectionality is an essential property as it allows to cover every \(x\in\mathbb{B}_{r}\) the same way. 
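Lemma 4.1 can be checked numerically. A sketch using `scipy.spatial.ConvexHull`, whose triangulated `simplices` coincide with the facet index sets \(I_{F_{j}}\) exactly when \(P_{X}\) is simplicial, as it is for the Tetrahedron frame:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Tetrahedron frame: its convex hull P_X is a simplicial polytope, so the
# triangulated `simplices` are precisely the facet index sets I_{F_j}.
X_tet = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
hull = ConvexHull(X_tet)

# Lemma 4.1: 0 is interior to P_X, so no facet contains the origin, and the
# vertex set of every facet spans R^3, i.e. forms a frame (here: a basis).
facet_ranks = [np.linalg.matrix_rank(X_tet[f]) for f in hull.simplices]
```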
For \(\mathbb{B}_{r}^{+}\) we formulate an analogous condition in Section 4.2. Moreover, if \(X\) is omnidirectional, then \(0\) cannot lie on any facet of \(P_{X}\) and Lemma 4.1 applies. Numerically, it is verified via a simple convex optimization program (Behrmann et al., 2018). Assuming a certain ordering of the facets, we write \(F_{j}\) referring to the \(j\)-th facets of \(P_{X}\). Analogous to the idea of obtaining the optimal upper biases for the Mercedes-Benz and the Tetrahedron frame, we will use the frames \(X_{I_{F_{j}}}\) for all \(j\) to estimate a bias. Letting the cone of \(F_{j}\) be denoted as \[\operatorname{cone}(F_{j})=\{tx:x\in F_{j},t\geq 0\},\] then omnidirectionality and \(X\subseteq\mathbb{S}\) provide the following properties. **Lemma 4.2**.: _If \(X\subseteq\mathbb{S}\) is omnidirectional, then the following holds._ 1. \(\bigcup_{j}I_{F_{j}}=I\)_,_ 2. \(\bigcup_{j}\operatorname{cone}(F_{j})=\mathbb{R}^{n}\)__ 3. \(X_{I_{F_{j}}}\) _is a frame for every_ \(j\) These three properties build the backbone of our approach. By (i), every frame element is a vertex of \(P_{X}\). Due to (ii), we can partition \(\mathbb{B}_{r}\) into facet-specific conical subsets where we can estimate a bias locally. And most importantly, by (iii), every sub-collection associated to a facet induces a frame. Properties (i) and (ii) are easy to see and (iii) is a direct consequence of Lemma 4.1. _Remark 4.3_.: For a facet \(F\), the vectors \(X_{I_{F}}\) will be redundant (\(m>n\)) only in rare cases. If the frame elements lie in general position on \(\mathbb{S}\), then every \(X_{I_{F}}\) is a basis (\(m=n\)) with probability \(1\)(Buchta & Muller, 1984). 
Before we introduce the upper bias estimation procedures for \(\mathbb{B}_{r}\) and \(\mathbb{B}_{r}^{+}\), we provide an explanation of why the particular grouping of the frame elements into vertices of facets is indeed suitable for the purpose of finding large upper bias values for the \(\alpha\)-rectifying property. If \(X\) is omnidirectional and \(F\) a facet of \(P_{X}\), then consistent with (8) there are \(a\in\mathbb{R}^{n}\setminus\{0\}\) and \(0\neq b\in\mathbb{R}\) such that \[\begin{array}{ll}\langle a,x_{k}\rangle=b,&\text{for $k\in I_{F}$},\\ \langle a,x_{\ell}\rangle<b,&\text{for $\ell\notin I_{F}$}.\end{array}\] In this sense, the construction of \(X_{I_{F}}\) is a natural way of selecting spanning sub-collections of \(X\) with the highest coherence possible, making this particularly useful for our purpose. ### Polytope Bias Estimation for \(\mathbb{B}_{r}\) We now introduce the _Polytope Bias Estimation_ (PBE) for \(\mathbb{B}_{r}\) with \(r>0\). The procedure estimates an upper bias, denoted as \(\alpha^{\mathbb{B}}\), such that \(X\) is \(\alpha^{\mathbb{B}}\)-rectifying on \(\mathbb{B}\). This implies that \(X\) is \((r^{-1}\cdot\alpha^{\mathbb{B}})\)-rectifying on \(\mathbb{B}_{r}\). The core idea is to partition \(\mathbb{B}\) (and \(\mathbb{S}\)) into conical pieces, \[F_{j}^{\mathbb{B}} :=\operatorname{cone}(F_{j})\cap\mathbb{B} \tag{10}\] \[F_{j}^{\mathbb{S}} :=\operatorname{cone}(F_{j})\cap\mathbb{S}. \tag{11}\] If \(X\) is omnidirectional, by Lemma 4.2, we have \[\mathbb{B}=\bigcup_{j}F_{j}^{\mathbb{B}}\quad\text{and}\quad\mathbb{S}= \bigcup_{j}F_{j}^{\mathbb{S}}. \tag{12}\] To find \(\alpha_{i}^{\mathbb{B}}\), we identify the minimal analysis coefficient \(\langle y,x_{i}\rangle\) that can occur for \(y\) on each \(F_{j}^{\mathbb{B}}\) containing \(x_{i}\), i.e. \[\alpha_{i}^{\mathbb{B}}:=\min_{\begin{subarray}{c}y\in F_{j}^{\mathbb{B}}\\ j:x_{i}\in F_{j}\end{subarray}}\langle y,x_{i}\rangle. 
\tag{13}\] We do not tackle this optimization problem directly but solve two related problems instead. On the one hand, we consider the minimal auto-correlation values on each facet, \[\alpha_{i}^{X}:=\min_{\begin{subarray}{c}\ell\in I_{F_{j}}\\ j:x_{i}\in F_{j}\end{subarray}}\langle x_{\ell},x_{i}\rangle, \tag{14}\] that are easy to compute. On the other hand, we solve \[\alpha_{i}^{\mathbb{S}}:=\min_{\begin{subarray}{c}y\in F_{j}^{\mathbb{S}}\\ j:x_{i}\in F_{j}\end{subarray}}\langle y,x_{i}\rangle \tag{15}\] via convex linear programs. Note that the sets on which the three optimization problems are posed are nested, \(F_{j}^{\mathbb{B}}\supset F_{j}^{\mathbb{S}}\supset X_{I_{F_{j}}}\), so that we immediately observe that \(\alpha_{i}^{\mathbb{B}}\leq\alpha_{i}^{\mathbb{S}}\leq\alpha_{i}^{X}\). With this, we solve (13). **Theorem 4.4**.: _(PBE for \(\mathbb{B}\)) If \(X\subseteq\mathbb{S}\) is omnidirectional, then \(X\) is \(\alpha^{\mathbb{B}}\)-rectifying on \(\mathbb{B}\) and \(\alpha_{i}^{\mathbb{B}}\), given in (13) can be computed as_ \[\alpha_{i}^{\mathbb{B}}=\begin{cases}0&\text{if $\alpha_{i}^{X}\geq 0$}\\ \alpha_{i}^{\mathbb{S}}&\text{otherwise.}\end{cases} \tag{16}\] _If \(\alpha_{i}^{X}<0\), then \(\alpha_{i}^{\mathbb{S}}\) given in (15) is the minimum over \(j:x_{i}\in F_{j}\) of the solutions of the convex linear programs_ \[\min\;\left(x_{i}^{\top}D_{I_{F_{j}}}\right)d \tag{17}\] \[\text{subject to }d\geq 0\] \[\|D_{I_{F_{j}}}d\|_{2}\leq 1,\] _where \(D_{I_{F_{j}}}\) is the synthesis operator of \(X_{I_{F_{j}}}\)._ A proof can be found in the appendix. Figure 1: Frame vectors \(X\) (blue) and their convex hulls forming convex regular polytopes \(P_{X}\). From left to right: Mercedes-Benz, Square, and Pentagon frame in \(\mathbb{R}^{2}\), Tetrahedron and Icosahedron frame in \(\mathbb{R}^{3}\). The unit ball \(\mathbb{B}\) is outlined in gray. 
The general case follows from \(\mathbb{B}_{r}=\{x\in\mathbb{R}^{n}:x=r\cdot y,y\in\mathbb{B}\}\) for \(r>0\). Hence, the minimal argument of (13) lies on \(\mathbb{S}\) or at zero, depending on the sign of the minimal correlation of a facet, given by \(\alpha_{i}^{X}\). So, a strategy to obtain \(\alpha^{\mathbb{B}}\) is to start considering the easy-to-compute \(\alpha_{i}^{X}\) by finding the smallest auto-correlation value with \(x_{i}\) among all facets that are adjacent to \(x_{i}\). Then, only if \(\alpha_{i}^{X}<0\), the convex optimization (17) has to be solved. See Algorithm 1 for a pseudo-code of the procedure. **Example 1**.: _(a) For the Tetrahedron frame \(X_{tet}\), we have \(\alpha^{X}\equiv-\frac{1}{3}\), therefore \(\alpha^{\mathbb{B}}=\alpha^{\mathbb{S}}\equiv-\frac{1}{\sqrt{3}}\). (b) For the Icosahedron frame, given by_ \[X_{ico}=\frac{1}{\sqrt{1+\varphi^{2}}}\cdot\left(\begin{pmatrix}0\\ \pm 1\\ \pm\varphi\end{pmatrix},\begin{pmatrix}\pm 1\\ \pm\varphi\\ 0\end{pmatrix},\begin{pmatrix}\pm\varphi\\ 0\\ \pm 1\end{pmatrix}\right),\] _(see Figure 1), where \(\varphi=\frac{1+\sqrt{5}}{2}\) is the golden ratio, we have \(\alpha^{X}\equiv\frac{\varphi}{1+\varphi^{2}}\approx 0.45\), therefore \(\alpha^{\mathbb{B}}\equiv 0\). Figure 2 shows the idea of the PBE for this example geometrically._ Note that \(0\geq\alpha_{i}^{\mathbb{B}}\) is reasonable to guarantee the \(\alpha\)-rectifying property on \(\mathbb{B}_{r}\) since for any upper bias \(\alpha\) and \(x=0\), it has to hold that \(\langle 0,x_{i}\rangle=0\geq\alpha_{i}\) for all \(i\) in some \(I_{F_{j}}\). ### Polytope Bias Estimation for \(\mathbb{B}_{r}^{+}\) In neural networks, ReLU-layers often succeed each other. In this context, we show that \(\mathbb{B}_{r}^{+}\) is conceptually the right input domain for a PBE for ReLU-layers that are applied to the output of a previous one. In fact, this requires knowing where the image of \(\mathbb{B}_{r}\) under \(C_{\alpha}\) lies. 
**Lemma 4.5**.: _Let \(X\) be \(\alpha\)-rectifying and \(B_{0}\) denote the largest optimal upper frame bound among \(X_{I_{x}^{\alpha}}\) with \(x\in\mathbb{B}_{r}\). Then_ \[C_{\alpha}\left(\mathbb{B}_{r}\right)\subseteq\mathbb{B}_{r\sqrt{B_{0}}}^{+}. \tag{18}\] It is easy to show that (18) is a direct consequence of the upper inequality in (6) and clearly, holds for \(x\in\mathbb{B}_{r}^{+}\) as well. Note that we may also estimate the radius of the ball as \(r\sqrt{B}\), where \(B\) is any upper frame bound of \(X\). We approach the PBE for \(\mathbb{B}_{r}^{+}\) by restricting the computations of the PBE introduced in Theorem 4.4 to only those facets, that actively contribute to the estimation. In this sense, we only consider those frame elements whose associated facets have a non-trivial intersection with \(\mathbb{R}_{+}^{n}\). We denote the corresponding index sets as \[J^{+}=\{j:F_{j}\cap\mathbb{R}_{+}^{n}\neq\emptyset\},\qquad I^{+}=\bigcup_{j \in J^{+}}I_{F_{j}}.\] According to this, instead of omnidirectionality, we only have to require \[\mathbb{R}_{+}^{n} \subseteq\bigcup_{j\in J^{+}}\mathrm{cone}(F_{j}),\quad\text{and} \tag{19}\] \[0\notin F_{j}\ \ \text{for all}\ j\in J^{+}, \tag{20}\] which we shall refer to as _non-negative omnidirectionality_. See Figure 3 (right) for an illustration. This is tailored to provide the properties in Lemma 4.2 for \(\mathbb{B}_{r}^{+}\): by (19), we have analogously to (12), \[\mathbb{B}^{+}\subseteq\bigcup_{j\in J^{+}}F_{j}^{\mathbb{B}} \tag{21}\] and condition (20) is sufficient for \(X_{I_{F_{j}}}\) being a frame for every \(j\in J^{+}\) by Lemma 4.1. With this, we have all requirements to deduce the PBE for \(\mathbb{B}_{r}^{+}\). Figure 3: Non-regular polytopes. Left: The estimated bias values \(\alpha_{i}^{\mathbb{B}}\) are computed from the largest adjacent facet of \(x_{i}\). Hence, the less regular the normalized frame elements are distributed on the sphere, the smaller \(\alpha^{\mathbb{B}}\) becomes. 
Right: The frame is non-negatively omnidirectional since \(\bigcup_{j\in J^{+}}\mathrm{cone}(F_{j})\supseteq\mathbb{R}_{+}^{n}\), but not omnidirectional. Figure 2: Geometrical intuition of the PBE for the Icosahedron frame on \(\mathbb{B}\). Consider \(x_{i}\). Left: The (blue) filled facets are used to compute \(\alpha_{i}^{\mathbb{B}}\). The (gray) darker piece (dashed border) indicates \(F_{j}^{\mathbb{B}}\). Right: Rotated perspective of the left image. The affine half-space \(\Omega_{i}=\{x\in\mathbb{R}^{n}:\langle x,x_{i}\rangle\geq\alpha_{i}^{X},i\in I _{F_{j}}\}\), indicated by the left-most area with decreasing opacity (orange) contains all vectors such that all vertices of the adjacent facets are active. Since \(\alpha_{i}^{X}\geq 0\), the brighter (yellow) half-space represents the solution \(\alpha_{i}^{\mathbb{B}}=0\). **Theorem 4.6** (PBE for \(\mathbb{B}^{+}\)).: _If \(X\subseteq\mathbb{S}\) is non-negatively omnidirectional, then \(X\) is \(\alpha^{\mathbb{B}^{+}}\)-rectifying on \(\mathbb{B}^{+}\) with_ \[\alpha_{i}^{\mathbb{B}^{+}}=\begin{cases}\alpha_{i}^{\mathbb{B}}&\text{for }i \in I^{+}\\ s_{i}&\text{else,}\end{cases} \tag{22}\] _where \(s_{i}\in\mathbb{R}\) is arbitrary._ A proof can be found in the appendix. This reduces the computational cost and improves the upper bias estimation as potentially large bias values in \(I\setminus I^{+}\) can be omitted. _Remark 4.7_.: Clearly, conditions (19) and (20) are weaker than omnidirectionality, yet harder to check numerically. Similarly, \(F_{j}\cap\mathbb{R}_{+}^{n}\neq\emptyset\) is not straightforward to verify. Indeed, it holds true for all adjacent facets of \(x_{i}\in\mathbb{R}_{+}^{n}\), however, there might be facets meeting the condition but with no vertices in \(\mathbb{R}_{+}^{n}\). The interested reader will find a continued discussion in the appendix. Finding an efficient implementation of this, however, is left as an open problem.
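While an efficient test for \(F_{j}\cap\mathbb{R}_{+}^{n}\neq\emptyset\) is left open above, a naive check is possible: since \(F_{j}\) is the convex hull of its vertices, it meets the non-negative orthant exactly when some convex combination of the vertices is componentwise non-negative, which is a linear feasibility problem. The following sketch is our own illustration (not the paper's implementation; the function name is hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

def facet_meets_nonneg_orthant(V):
    """Decide whether conv(rows of V) intersects R^n_+.

    Feasibility LP: find lambda >= 0 with sum(lambda) = 1 and
    V^T lambda >= 0 componentwise, i.e. the combination lies in R^n_+.
    """
    k, n = V.shape
    res = linprog(
        c=np.zeros(k),                    # any objective; only feasibility matters
        A_ub=-V.T, b_ub=np.zeros(n),      # -V^T lambda <= 0  <=>  V^T lambda >= 0
        A_eq=np.ones((1, k)), b_eq=[1.0],
        bounds=[(0, None)] * k,
        method="highs",
    )
    return res.status == 0                # status 0 means a feasible point was found

# A segment straddling the orthant boundary meets R^2_+ (at the origin) ...
print(facet_meets_nonneg_orthant(np.array([[1.0, -1.0], [-1.0, 1.0]])))    # True
# ... while a facet strictly inside the negative orthant does not.
print(facet_meets_nonneg_orthant(np.array([[-1.0, -0.5], [-0.5, -1.0]])))  # False
```

This runs one small LP per facet, so it is exact but not the efficient implementation asked for in Remark 4.7.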
### Remarks on the Optimality of the PBE In general, we cannot expect the proposed PBE to yield upper biases that are maximal. Estimating the error of the estimation relative to a maximal upper bias (if it exists), however, is difficult since this would require knowing the combinatorial structure (i.e. the vertex-facet index sets) of a general polytope, which has been a topic of active research for several decades. In the special cases when the polytopes are regular and simplicial (every facet has exactly \(n\) vertices), e.g. Mercedes-Benz, Tetrahedron and Icosahedron frame, we expect that the estimated upper bias is indeed maximal. It is easy to verify that the PBE is also stable to perturbations as long as the combinatorial structure is preserved. Hence, one could expect that the estimation will be more accurate the more evenly distributed the frame elements are on the sphere. See Figure 3 (left) for an illustration.
```
Get \(I_{F_{j}}\) via computing \(V_{X}\)
for \(j=1,\ldots,J\) do
    \(S_{I_{F_{j}}}^{-1}\leftarrow\left((C_{I_{F_{j}}})^{\top}C_{I_{F_{j}}}\right)^{-1}\)
    \(\overline{X}\gets X_{I_{F_{j}}}\)
    \(\tilde{D}_{I_{F_{j}}}\leftarrow\begin{pmatrix}S_{I_{F_{j}}}^{-1}\overline{x}_{1}&S_{I_{F_{j}}}^{-1}\overline{x}_{2}&\cdots&S_{I_{F_{j}}}^{-1}\overline{x}_{|I_{F_{j}}|}\end{pmatrix}\)
end for
\(z=C_{\alpha}x\)
\(\overline{z}\gets z+\alpha\)
while \(j=1,\ldots,J\) do
    if \(I_{F_{j}}\subseteq I_{x}^{\alpha}\) then
        \(\tilde{D}_{I_{F_{j}}}\overline{z}_{|I_{F_{j}}}=x\)
    end if
end while
```
**Algorithm 2** Reconstruction via Facets ### Local Reconstruction via Facets Unless \(I_{x}^{\alpha}=I\) for all \(x\in\mathbb{B}_{r}\), there cannot be only _one_ global left-inverse for \(C_{\alpha}\). We propose to systematically construct a collection of left-inverses, each associated with one facet of \(P_{X}\).
Recall that the frame operator of \(X_{I_{F_{j}}}\) is denoted by \(S_{I_{F_{j}}}\) and that its canonical dual frame is given by \(\tilde{X}_{I_{F_{j}}}=\left(S_{I_{F_{j}}}^{-1}x_{i}\right)_{i\in I_{F_{j}}}\). **Theorem 4.8**.: _Let \(X\in\mathbb{S}\) be \(\alpha\)-rectifying on \(\mathbb{B}\) and omnidirectional. For every \(x\in\mathbb{B}\) there is \(j\) such that_ \[\tilde{D}_{I_{F_{j}}}C_{\alpha}x=x, \tag{23}\] _where_ \[\tilde{D}_{I_{F_{j}}}:\mathbb{R}^{m} \rightarrow\mathbb{R}^{n}\] \[(c_{i})_{i\in I} \mapsto\sum_{i\in I_{F_{j}}}(c_{i}+\alpha_{i})\cdot S_{I_{F_{j}} }^{-1}x_{i}. \tag{24}\] In other words, \(\tilde{D}_{I_{F_{j}}}\) is a left-inverse of \(C_{\alpha}\) for all \(x\in F_{j}^{\mathbb{B}}\). By (12), every \(x\in\mathbb{B}\) lies in some \(F_{j}^{\mathbb{B}}\), hence indeed for any \(x\in\mathbb{B}\) there is a left-inverse. It is easy to see that (23) reduces to the usual canonical frame decomposition (2) of \(x\) by \(X_{I_{F_{j}}}\). _Remark 4.9_.: In general, there are infinitely many duals for \(X\)(Christensen, 2003). The canonical dual mentioned above relates to the pseudo-inverse of the associated analysis operator, hence induces the optimal inverse by means of ridge regression. ### Implementation We discuss the implementation aspects of the PBE for \(\mathbb{B}\) and the reconstruction formulas. Our imple mentations of the algorithms are publicly available under [https://github.com/danedane-haider/Alpha-rectifying-frames](https://github.com/danedane-haider/Alpha-rectifying-frames). #### 4.5.1 Pbe The vertex-facet index sets \(I_{F_{j}}\) are encoded in what is called the _vertex-facet incidence matrix_\(V_{X}\). Assuming that \(P_{X}\) has \(J\) facets, then \(V_{X}\) is the \(J\times m\) matrix with entries \[V_{X}[j,i]=\begin{cases}1&\quad\text{if }i\in I_{F_{j}}\\ 0&\quad\text{else,}\end{cases}\] indicating which vertices correspond to which facets. 
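Before turning to the Polymake-based pipeline used in the paper, the objects at play can be sketched in a few lines with SciPy's Qhull bindings (our own illustration; all variable names are ours): the vertex-facet index sets \(I_{F_{j}}\) (the rows of \(V_{X}\)), the correlations \(\alpha_{i}^{X}\) giving \(\alpha^{\mathbb{B}}\equiv 0\) for the Icosahedron frame of Example 1, and the facet-wise reconstruction of Theorem 4.8.

```python
import numpy as np
from scipy.spatial import ConvexHull

phi = (1 + np.sqrt(5)) / 2
# Icosahedron frame: all cyclic permutations of (0, +-1, +-phi), normalized.
verts = []
for s1 in (1.0, -1.0):
    for s2 in (1.0, -1.0):
        verts += [(0.0, s1, s2 * phi), (s1, s2 * phi, 0.0), (s2 * phi, 0.0, s1)]
X = np.array(verts) / np.sqrt(1 + phi**2)   # m = 12 unit vectors in R^3

hull = ConvexHull(X)          # the polytope P_X = conv(X): 20 triangular facets
I_F = hull.simplices          # vertex-facet index sets I_{F_j} (rows of V_X)

# alpha_i^X: smallest correlation of x_i with the vertices of its adjacent facets.
alpha_X = np.full(len(X), np.inf)
for facet in I_F:
    for i in facet:
        alpha_X[i] = min(alpha_X[i], float((X[facet] @ X[i]).min()))
# Here alpha_i^X = phi/(1+phi^2) ~ 0.447 > 0 for all i, so the PBE on the ball
# gives alpha_i^B = 0 (the convex optimization (17) is never needed).
alpha_B = np.zeros(len(X))

# Reconstruction via one facet (cf. Theorem 4.8): for x in F_j^B all i in I_{F_j}
# are active and x = sum_{i in I_{F_j}} (z_i + alpha_i) * S^{-1} x_i.
facet = I_F[0]
x = X[facet].mean(axis=0)                   # a point of F_j^B (facet centroid)
z = np.maximum(X @ x - alpha_B, 0.0)        # ReLU-layer output C_alpha(x)
S_inv = np.linalg.inv(X[facet].T @ X[facet])
x_rec = S_inv @ (X[facet].T @ (z[facet] + alpha_B[facet]))
print(np.allclose(x_rec, x))                # True
```

For frames in low dimension this is a convenient sanity check; the Polymake routine described next is the tool used for the actual computations.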
To compute the vertex-facet incidences, we use the routine VERTICES_IN_FACETS from the open-source software Polymake (Gawrilow and Joswig, 2000). This routine requires the vertices in homogeneous coordinates, i.e. \[C_{hom}=\begin{pmatrix}1&x_{1}^{\top}\\ \vdots&\vdots\\ 1&x_{m}^{\top}\end{pmatrix},\] where each vertex is prepended with a \(1\). Already noted in (Puthawala et al., 2022), checking the \(\alpha\)-rectifying property is probably NP-hard. The computation of \(V_{X}\) relies on convex-hull algorithms, which are also not expected to run in polynomial time for general polytopes. However, for points in general position on \(\mathbb{S}\) (i.e. no hyperplane in \(\mathbb{R}^{n}\) contains more than \(n\) of the points), the "reverse-search" algorithm is expected to finish in linear time in the number of vertices \(m\) for fixed dimension \(n\) (Assarf et al., 2016). This condition is precisely fulfilled when assuming random initialization and normalized frame elements. Polymake uses this algorithm via the command prefer "lrs";. Algorithm 1 gives step-by-step instructions to compute \(\alpha^{\mathbb{B}}\) for any omnidirectional frame \(X\subseteq\mathbb{S}\). #### 4.5.2 Reconstruction In practice, one can read off \(I_{x}^{\alpha}\) from \(z=C_{\alpha}x\) and find a facet \(F_{j}\) such that \(I_{F_{j}}\subseteq I_{x}^{\alpha}\) using the vertex-facet incidences. Note that \(F_{j}\) might not be unique with this property. Algorithm 2 describes how the systematic construction of the left-inverses \(\tilde{D}_{I_{F_{j}}}\) can be done, assuming that \(X\) is \(\alpha\)-rectifying and omnidirectional. ## 5 Numerical Experiments A series of experiments revealed that the injectivity behavior of a ReLU-layer is very sensitive to many hyperparameters and circumstances, such as the size of the layer, the depth of the network, the position of the layer within the network, initialization and normalization procedures, the optimizer and the data itself.
Here, we present a numerical experiment, where we want to focus merely on the size of the ReLU-layer, i.e. its redundancy. Therefore, the experimental setting is designed to be as simple and reduced as possible. Considering more realistic network models requires a much broader study, which goes beyond the scope of this contribution. ### Experimental Setting We train a neural network with one ReLU-layer and a soft-max output layer on the Iris data set (Fisher, 1936). For the ReLU-layer, we consider four redundancy settings \(m=|I|=10,20,60,100\). The corresponding networks are mappings from \(\mathbb{R}^{4}\rightarrow\mathbb{R}^{m}\rightarrow\mathbb{R}^{3}\). After normalization to zero mean and a variance of one, all data samples lie within the ball of radius \(r=3.1\). We use the upper biases from the PBE for \(\mathbb{B}\) (Theorem 4.4) with appropriate scaling to monitor the injectivity behavior of the ReLU-layers during training, see Figure 4. Here, optimization of a cross-entropy loss is done using stochastic gradient descent at a learning rate of \(0.5\) for \(100\) epochs. Figure 4: Averaged quantities (across \(10\) iterations) related to a ReLU-layer over \(100\) epochs of training with different redundancies \(m=|I|\). Top: Cross entropy loss on the validation set. Mid: Mean of the trained biases, \(\widehat{\beta}\) (dashed), and mean of the estimated upper biases on \(\mathbb{B}_{r}\), \(\widehat{\alpha}^{\mathbb{B}}\) (solid). Bottom: Proportion of learned bias values that are smaller than the estimations, i.e. \(\#(\beta_{i}\leq\alpha_{i}^{\mathbb{B}})/m\), indicating the injectivity trend. ### Discussion The top plot shows that in our setting, high redundancy in the ReLU-layer yields the smallest validation loss. Expectedly, high redundancy also increases the chance of the polytope \(P_{X}\) having many non-negatively correlated facets, i.e. \(\alpha_{i}^{X}\geq 0\). Hence, more bias estimations are \(0\) according to (16). 
The mid-plot shows this nicely (solid lines). Note that all learned biases \(\beta\) decrease in mean (dashed lines). The lower the redundancy, the stronger this decrease is. Since the bias estimations \(\alpha_{i}^{\mathbb{B}}\) remain almost unchanged in mean, we may conclude that lacking injectivity of low redundancy ReLU-layers (i.e. too many output values are zero) is compensated during training via the bias. The bottom plot shows another measure of the injectivity trend: the proportion of learned bias values smaller than the estimations, i.e. \(\#(\beta_{i}\leq\alpha_{i}^{\mathbb{B}})/m\). An increase in this quantity indicates that a ReLU-layer is becoming "more injective" during training. In concordance with the previous observations, layers with low redundancy show a stronger increase here as well. This could be interpreted as: high redundancy favors injectivity from the start. In this sense, the PBE can help us to better understand the role of injectivity in neural networks. In the example, we are able to see the effect of different sizes of a ReLU-layer in regard to injectivity and validation loss and, in particular, what happens when the layer is chosen too small. It remains an open question if these results are representative for other settings and which are the responsible components causing this behavior. Future numerical investigation is necessary for a better understanding. ## 6 Conclusion We presented a frame-theoretic setting to study the injectivity of a ReLU-layer on the closed ball with radius \(r>0\) in \(\mathbb{R}^{n}\) and on its non-negative part. Moreover, we introduced a systematic approach of verifying it in practice, called polytope bias estimation (PBE). This method exploits the convex geometry of the weight matrix associated with a ReLU-layer and estimates a bias vector such that the layer is injective on the ball for all biases smaller or equal to the estimation.
This allows us to give sufficient and quantified conditions for the invertibility of a ReLU-layer. Corresponding reconstruction formulas are provided. Via a straightforward implementation, the PBE allows to study the injectivity behavior of a redundant ReLU-layer and perform perfect reconstruction of the layer input where applicable. So far, our work contributes to a better understanding of the behavior of neural network layers by means of observations without interaction with the actual optimization procedure. As a possible application, the estimated upper biases from the PBE could be used to design a regularization procedure where the bias is guided toward injectivity during training. ## Acknowledgment D. Haider is recipient of a DOC Fellowship of the Austrian Academy of Sciences at the Acoustics Research Institute (A 26355). The work of M. Ehler was supported by the WWTF project CHARMED (VRG12-009) and P. Balazs was supported by the OeAW Innovation grant project FUn (IF_2019_24_Fun), and the FWF projects LoFT (P 34624) and NoMASP (P 34922). The authors would like to thank Daniel Freeman for fruitful and fun discussions and Lukas Kohldorfer for his valuable feedback.
2304.10673
Local Limit Theorems and Strong Approximations for Robbins-Monro Procedures
The Robbins-Monro algorithm is a recursive, simulation-based stochastic procedure to approximate the zeros of a function that can be written as an expectation. It is known that under some technical assumptions, Gaussian limit distributions approximate the stochastic performance of the algorithm. Here, we are interested in strong approximations for Robbins-Monro procedures. The main tool for getting them are local limit theorems, that is, studying the convergence of the density of the algorithm. The analysis relies on a version of parametrix techniques for Markov chains converging to diffusions. The main difficulty that arises here is the fact that the drift is unbounded.
Valentin Konakov, Enno Mammen
2023-04-20T23:28:33Z
http://arxiv.org/abs/2304.10673v1
# Local Limit Theorems and Strong Approximations for Robbins-Monro Procedures ###### Abstract The Robbins-Monro algorithm is a recursive, simulation-based stochastic procedure to approximate the zeros of a function that can be written as an expectation. It is known that under some technical assumptions, Gaussian limit distributions approximate the stochastic performance of the algorithm. Here, we are interested in strong approximations for Robbins-Monro procedures. The main tool for getting them are local limit theorems, that is, studying the convergence of the density of the algorithm. The analysis relies on a version of parametrix techniques for Markov chains converging to diffusions. The main difficulty that arises here is the fact that the drift is unbounded. ## 1 Introduction This paper is devoted to strong approximations for Robbins-Monro procedures. The approximations are based on the study of a local limit theorem for Robbins-Monro procedures. These algorithms have first been introduced in [26] to approximate the solution of an equation \(h(\theta)=0\), where randomly disturbed values of \(h(\theta)\) are observed at updated points \(\theta\). Since then, an extensive literature has been published on the subject, but to the best of our knowledge, a local limit theorem has never been obtained. We refer to the monograph [1] and [24] for a general mathematical discussion of these algorithms and a review of the literature. An important class of Robbins-Monro procedures are optimisation methods based on stochastic gradient descent. There is an increasing literature on their applications in the implementation of artificial neural networks and in reinforcement learning. We refer to [5, 12, 14, 21, 22] and the references therein for some recent developments and overviews.
One possible application of the results obtained in this paper is the proof of local invariance principles, that is, of the convergence in total variation for a wide class of stochastic functionals of the Robbins-Monro procedure to the functionals of a limiting diffusion process. This application will be discussed in another publication. The proof of such a result would be based on the results of this paper and the stratification method developed in the papers of Y. Davydov, see [7, 10, 8, 9]. We fix a probability space \((\Omega,\mathcal{F},\mathbb{P})\) on which all random variables we consider below are defined. Let \((\gamma_{k})_{k\geq 0}\) be a decreasing time step that will be specified later, and let \((\eta_{k})_{k\geq 0}\) be a collection of independent and identically distributed random variables. We define the following recursive procedure: \[\theta_{n+1}=\theta_{n}-\gamma_{n+1}H(\theta_{n},\eta_{n+1}),\ \theta_{0}\in \mathbb{R}^{d}, \tag{1}\] where \(H\) is a function from \(\mathbb{R}^{d}\times\mathcal{X}\) to \(\mathbb{R}^{d}\) with \(\mathcal{X}\) equal to the support of \(\eta_{i}\). Without loss of generality we can assume that \(\mathcal{X}\) is a subset of \(\mathbb{R}\). Generally, the Robbins-Monro procedure is used to approximate the zeros of the function: \(h(\theta)=\mathbb{E}[H(\theta,\eta)]\), where \(\eta\) has the same distribution as \(\eta_{k}\). Even though the general theory extends to the case of multiple zeros, in this paper, we assume that \(h\) has only one zero, \(\theta^{*}\) (i.e. \(h(\theta)=0\) iff \(\theta=\theta^{*}\)). We assume that the sequence \((\gamma_{k})_{k\geq 1}\) is chosen as \[\gamma_{k}=\frac{A}{k^{\beta}+B} \tag{2}\] with constants \(A>0\), \(B\geq 0\) and \(1/2<\beta\leq 1\). For this choice we get that \[\sum_{k\geq 1}\gamma_{k}=+\infty,\ \ \sum_{k\geq 1}\gamma_{k}^{2}<+\infty, \tag{3}\] which is usually assumed for the step sequence \((\gamma_{k})_{k\geq 1}\).
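To make the recursion (1) with step sequence (2) concrete, here is a minimal scalar simulation (our own illustration, not from the paper: the choice \(H(\theta,\eta)=\theta-\theta^{*}+\eta\), so that \(h(\theta)=\theta-\theta^{*}\) has the unique zero \(\theta^{*}\), and all constants are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

theta_star = 2.0                    # the unique zero of h
A, B, beta = 1.0, 0.0, 1.0          # step size gamma_k = A / (k^beta + B)

def H(theta, eta):
    """Noisy observation of h(theta) = theta - theta_star."""
    return theta - theta_star + eta

theta = 10.0                        # starting value theta_0
n_steps = 20000
for k in range(1, n_steps + 1):
    gamma_k = A / (k**beta + B)
    theta = theta - gamma_k * H(theta, rng.normal(scale=0.5))

print(abs(theta - theta_star))      # small: theta_n converges to theta* a.s.
```

For this linear \(h\) with \(\gamma_{k}=1/k\), the iterate reduces to a running average of the noisy observations, so the error decays at the familiar \(O(n^{-1/2})\) rate.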
Our theory can be generalized to other monotonically decreasing choices of the step sequence \(\gamma_{k}\) as long as we have that (3) holds, and that \[\frac{\sqrt{\gamma_{k}}-\sqrt{\gamma_{k+1}}}{(\gamma_{k})^{3/2}}\to\bar{\alpha} \tag{4}\] for some constant \(\bar{\alpha}\). Note that for the choice (2) we have that \(\bar{\alpha}=0\), if \(1/2<\beta<1\) and \(\bar{\alpha}=(2A(B+1))^{-1}\), if \(\beta=1\). Under appropriate assumptions, it can be shown that the convergence: \[\theta_{n}\underset{n\to+\infty}{\longrightarrow}\theta^{*}, \tag{5}\] holds almost surely, see [1]. Furthermore, Gaussian limit theorems have been stated. For a formulation of such a result we remark first that after a renormalisation the procedure (1) stabilizes around the solution of the following Ordinary Differential Equation (ODE): \[\frac{d}{dt}\bar{\theta}_{t}=-h(\bar{\theta}_{t}),\ \bar{\theta}_{0}=\theta_{0}, \ \mbox{where}\ h(\theta)=\mathbb{E}[H(\theta,\eta)]. \tag{6}\] Thus, fluctuations of the algorithm should be considered with respect to the solution \((\bar{\theta}_{t})_{t\geq 0}\) of the ODE (6). For the definition of the renormalisation, we consider a _shift_ in the indexation of the procedure that will allow us to consider \((\theta_{n})_{n\geq 0}\) in a region that is close to stationarity. Let \(N\in\mathbb{N}\), and consider a sequence \(\left(\theta_{n}^{N}\right)_{n\geq 0}=\left(\theta_{N+n}\right)_{n\geq 0}\) of shifted Robbins-Monro algorithms. These algorithms satisfy the following recurrence equation: \[\theta_{n+1}^{N}=\theta_{n}^{N}-\gamma_{n+1}^{N}H(\theta_{n}^{N},\eta_{n+1}^{N}) \tag{7}\] with starting value \(\theta_{0}^{N}\in\mathbb{R}^{d}\), where \(\eta_{n+1}^{N}=\eta_{N+n+1}\), and \(\gamma_{n+1}^{N}=\gamma_{N+n+1}\).
Set now: \[t_{0}^{N}=0,\ t_{1}^{N}=\gamma_{1}^{N},\ t_{2}^{N}=\gamma_{1}^{N}+\gamma_{2}^ {N},\ \ldots,\ t_{k}^{N}=\gamma_{1}^{N}+\cdots+\gamma_{k}^{N}\] and define for an arbitrary fixed terminal time \(T>0\): \[M(N)=\inf\{k\in\mathbb{N}\ ;\ t_{k}^{N}\geq T\}.\] Closeness of \(\theta_{n}^{N}\) and \(\bar{\theta}_{t}\) on the grid \(t_{0}^{N},\ \ldots,\ t_{M(N)}^{N}\) becomes intuitively clear because \(\bar{\theta}_{t}\) is for \(t=t_{n+1}^{N}\) close to its Euler approximation \[\bar{\theta}_{n+1}^{N}=\bar{\theta}_{n}^{N}-\gamma_{n+1}^{N}h(\bar{\theta}_{ n}^{N})\ \mbox{with}\ \bar{\theta}_{0}^{N}=\theta_{0}.\] Note that the Robbins-Monro procedure can be rewritten as a perturbed Euler scheme \[\theta_{n+1}^{N}=\theta_{n}^{N}-\gamma_{n+1}^{N}h(\theta_{n}^{N})+\varepsilon_{n }^{N},\] where \[\varepsilon_{n}^{N}=\theta_{n+1}^{N}-\theta_{n}^{N}+\gamma_{n+1}^{N}h(\theta_{ n}^{N})=-\gamma_{n+1}^{N}(H(\theta_{n}^{N},\eta_{n+1}^{N})-h(\theta_{n}^{N})).\] The centered innovations \(\varepsilon_{n}^{N}\) may be considered as "small fluctuations". On the interval \([0,T]\) we consider the re-normalized process \(U_{t}^{N}\) that is equal to : \[U_{t}^{N}=\frac{\theta_{k}^{N}-\bar{\theta}_{t_{k}^{N}}}{\sqrt{\gamma_{k}^{N}}} \tag{8}\] as long as \(t\in[t_{k}^{N},t_{k+1}^{N})\).
Under our assumptions, stated in Subsection 2.1, it can be shown that the convergence (5) holds and that the process \(U_{t}^{N}\) converges weakly to the solution \((X_{t})_{0\leq t\leq T}\) of the \(d\)-dimensional SDE: \[\mathrm{d}X_{t}^{i}=\bar{\alpha}X_{t}^{i}\mathrm{d}t-\sum_{j=1}^{d}\frac{ \partial h_{i}}{\partial x_{j}}(\bar{\theta}_{t})\cdot X_{t}^{j}\mathrm{d}t+ \sum_{j=1}^{d}R_{ij}^{1/2}(\bar{\theta}_{t})\mathrm{d}W_{t}^{j},\ i=1,...,d, \tag{9}\] or in matrix notation \[\mathrm{d}X_{t}=\left(\bar{\alpha}I-\mathcal{D}h(\bar{\theta}_{t})\right)X_{t }\mathrm{d}t+R^{1/2}(\bar{\theta}_{t})\mathrm{d}W_{t}, \tag{10}\] where \(W_{t}\) is the \(d\)-dimensional Brownian motion, where for \(\theta\in\mathbb{R}^{d}\) the matrix \(R(\theta)\) is the covariance of \(H(\theta,\eta)\), and where we write \(\mathcal{D}h(x)=(\mathrm{grad}\ h_{1},...,\mathrm{grad}\ h_{d})^{\intercal}(x )=\left(\frac{\partial h_{i}}{\partial x_{j}}(x)\right)_{1\leq i,j\leq d}\) for the \(d\times d\) valued derivative of \(h\) at the point \(x\in\mathbb{R}^{d}\). This convergence can be shown by application of results discussed in [1] or in [18]. For a motivation of drift and diffusion factor we remark that (4) holds and \(\gamma_{k}^{N}\to 0\) and \(\gamma_{k+1}^{N}/\gamma_{k}^{N}\to 1\) for \(N\to\infty\). Under additional assumptions, see Theorem 13 in [1], one can show that \(U_{t}^{N}\) has a stationary limit \[\mathrm{d}X_{t}^{*}=\left(\bar{\alpha}I-\mathcal{D}h(\theta^{*})\right)X_{t}^{*} \mathrm{d}t+R^{1/2}(\theta^{*})\mathrm{d}W_{t}. \tag{11}\] In this article, we are interested in a local limit theorem for quantifying the convergence of the densities of a truncated modification of \((U_{t}^{N})_{t\geq 0}\) towards the densities of \((X_{t})_{t\geq 0}\). This will be done for the general case where a nonstationary limit (10) applies. We will come back to a discussion of the stationary limit \(X_{t}^{*}\) in Subsection 2.7. Our main results are stated in the next section.
Throughout the paper \(C\) denotes a positive constant that is chosen large enough. At each appearance \(C\) may denote another constant. This may be the case even in the same formula. ## 2 Main Results ### Assumptions and outline of the paper Throughout the paper we will make the following assumptions. * **(A1)** The innovations \(\eta_{1},\eta_{2},...\) are i.i.d. with some distribution \(\mu\) and the step \(\gamma_{k}\) follows (2). * **(A2)** For any compact subset \(Q\) of \(\mathbb{R}^{d}\) there exist constants \(C_{Q}\) and \(q_{Q}\), possibly depending on \(Q\), such that for all \(\theta\in Q\) \[|H(\theta,x)|\leq C_{Q}(1+|x|^{q_{Q}}).\] * **(A3)** The function \(h(\theta)=\int H(\theta,x)\mu(\mathrm{d}x)\) has two bounded derivatives and the function \(H(\theta,x)\) is Lipschitz w.r.t. its first argument with a constant not depending on \(x\) in a tubular neighborhood \(\{\theta\in\mathbb{R}^{d}:\|\theta-\bar{\theta}_{t}\|\leq\delta\text{ for some }t\in[0,T]\}\) with \(\delta>0\) small enough. For the next assumption we need some notation that will be used again when we define the truncated processes. We define \(a_{N}=\ln(1/\gamma_{1}^{N})\) \[\chi_{N}(x) = \left\{\begin{array}{cl}x&\text{for }\|x\|\leq a_{N},\\ a_{N}\frac{x}{\|x\|}\phi_{N}(\|x\|)&\text{for }\|x\|>a_{N}.\end{array}\right.\] \[\phi_{N}(\|x\|) = \left\{\begin{array}{cl}k_{N}\int_{a_{N}}^{2a_{N}+1-\|x\|}\exp \left(-\frac{1}{(t-a_{N})(a_{N}+1-t)}\right)\mathrm{d}t&\text{for }a_{N}<\|x\|\leq a_{N}+1,\\ 1&\text{for }\|x\|\leq a_{N},\\ 0&\text{for }\|x\|>a_{N}+1,\end{array}\right.\] \[\alpha_{t}^{N} = \alpha_{t_{k}^{N}}^{N}\text{ for }t_{k}^{N}\leq t<t_{k+1}^{N},\] where the value \(k_{N}\) depends on \(N\) and it is equal to \((\int_{a_{N}}^{a_{N}+1}\exp(-(t-a_{N})^{-1}(a_{N}+1-t)^{-1})\mathrm{d}t)^{-1}\). Thus \(\phi_{N}(u)\) is continuous at \(u=a_{N}\). We will also make use of the fact that \(\phi_{N}\) is infinitely often differentiable.
* **(A4)** For \(\xi(\theta,v)=H(\theta,v)-h(\theta)\) with \(\theta\in\mathbb{R}^{d},v\in\mathbb{R}\), the centered random variables \(\xi(\bar{\theta}_{t}+\sqrt{\gamma_{k}^{N}}\,\chi_{N}(x),\eta_{k+1}^{N})\) have densities \(f_{t_{k}^{N},x}(z)\) that are five times continuously differentiable with derivatives that are at least of polynomial decay of order \(M>2d+6\), i.e. for \(x\in\mathbb{R}^{d}\), for \(t\in\Gamma_{N}=\{0,t_{1}^{N},...,t_{M(N)}^{N}\}\) and for multi-indices \(\nu\), \(|\nu|\leq 5\) it holds with a constant \(C>0\) that \[|D_{z}^{\nu}f_{t,x}(z)|\leq CQ_{M}(z),\] where, for \(r>d\) the function \(Q_{r}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is defined by \[Q_{r}(z)=c_{r}(1+\|z\|)^{-r},\] with \(c_{r}\) chosen such that \(\int_{\mathbb{R}^{d}}Q_{r}(z)\mathrm{d}z=1\). Furthermore, it holds for \(x,y,z\in\mathbb{R}^{d}\), \(t=t_{i}^{N}\in\Gamma_{N}\), \(1\leq i,j\leq d\) that \[\int_{\mathbb{R}^{d}}f_{t,x}(z)z_{i}\mathrm{d}z=0,\] \[\int_{\mathbb{R}^{d}}f_{t,x}(z)z_{i}z_{j}\mathrm{d}z=R_{ij}(\bar{ \theta}_{t}+\chi_{N}(x)\sqrt{\gamma_{i}^{N}}),\] \[|f_{t,x}(z)-f_{t,y}(z)|\leq C\sqrt{\gamma_{i+1}^{N}}\|x-y\|Q_{M}(z)\] for some constant \(C>0\). * **(A5)** The covariance matrix \(R(\theta)\) of \(H(\theta,\eta)\) exists and has a smallest eigenvalue bounded away from \(0\), uniformly for \(\theta\). The elements \(R_{ij}\) of the covariance matrix are absolutely bounded and are Lipschitz continuous with a uniformly valid Lipschitz constant in a tubular neighborhood of \(\bar{\theta}_{t}\) for all \(t\in\Gamma_{N}\) for \(N\) large enough. In Subsection 2.2 we introduce a truncated version \(V_{t}^{N}\) of the Robbins-Monro procedure and show that it approximates the untruncated version \(U_{t}^{N}\). More precisely, we will show that the supremum of the absolute difference of the two processes is of order \(O_{P}(\sqrt{\gamma_{1}^{N}})=O_{P}(N^{-\beta/2})\). In the following subsection 2.3 we will state our main result.
We will show that the truncated Robbins-Monro procedure can be approximated by the diffusion \(X_{t}\) in the following sense. We will give bounds on the total variation distance and Hellinger distance between the transition densities of the two processes. Furthermore, we will consider the joint distributions on an increasing grid of time points for the two processes. We will state bounds on the total variation distance and Hellinger distance for the distribution of the processes on the grid. This result allows a strong approximation of the truncated Robbins-Monro procedure by the diffusion \(X_{t}\). We will give bounds on the total variation distance and Hellinger distance between the values of the truncated Robbins-Monro procedure and the diffusion on an increasing grid of time points. For the proof of these results we define a truncated modification of the diffusion in Subsection 2.4. For the comparison of the truncated Robbins-Monro process to the truncated diffusion we will make use of the parametrix method. How this approach can be adapted to our setting will be explained in Subsection 2.5. Subsection 2.6 states our result on the comparison of the truncated processes and outlines its proof. In Subsection 2.7 we will discuss convergence of the Robbins-Monro process under an additional Lyapunov condition which allows approximation by a stationary diffusion, see (11). All proofs of our results will be given in Section 3 and in the supplement. ### Approximation of Robbins-Monro algorithm by a truncated modification In this section we introduce a truncated modification of \((U_{t}^{N})_{t\geq 0}\) for which we will show uniform convergence to the untruncated version. For a discussion of practical and theoretical aspects of truncated Robbins-Monro procedures we refer to [18]. For a motivation of how we truncate \(U_{t_{k}}^{N}\) we rewrite the process \(U_{t_{k}}^{N}\) as specified in the following lemma.
**Lemma 2.1**.: With \[\beta_{k+1}^{N} = \sqrt{\gamma_{k+1}^{N}}\left(-h(\bar{\theta}_{t_{k}^{N}})-\frac{ \bar{\theta}_{t_{k+1}^{N}}-\bar{\theta}_{t_{k}^{N}}}{\gamma_{k+1}^{N}}\right),\] \[\alpha_{t_{k}^{N}}^{N} = \frac{\sqrt{\gamma_{k}^{N}}-\sqrt{\gamma_{k+1}^{N}}}{(\gamma_{k+ 1}^{N})^{3/2}}\] the Markov chain \((U_{t_{k}^{N}}^{N})\) has the following representation: \[U_{t_{k+1}^{N}}^{N} = U_{t_{k}^{N}}^{N}+G_{N}(t_{k}^{N},U_{t_{k}^{N}}^{N})\gamma_{k+1}^ {N}U_{t_{k}^{N}}^{N}-\sqrt{\gamma_{k+1}^{N}}\xi\left(\bar{\theta}_{t_{k}^{N}}+ \sqrt{\gamma_{k}^{N}}U_{t_{k}^{N}}^{N},\eta_{k+1}^{N}\right)+\beta_{k+1}^{N}, \tag{12}\] where \[G_{N}(t_{k}^{N},x) = \alpha_{t_{k}^{N}}^{N}I-\sqrt{\frac{\gamma_{k}^{N}}{\gamma_{k+1}^ {N}}}\int_{0}^{1}\mathcal{D}h\left(\bar{\theta}_{t_{k}^{N}}+\delta x\sqrt{ \gamma_{k}^{N}}\right)\mathrm{d}\delta.\] The proof of the lemma will be given in Subsection A.1 of the supplementary material. The representation (12) motivates the following truncated process \(V_{t_{k}^{N}}^{N}\): \[V_{t_{k+1}^{N}}^{N} = V_{t_{k}^{N}}^{N}+F_{N}(t_{k}^{N},V_{t_{k}^{N}}^{N})\gamma_{k+1} ^{N}\chi_{N}(V_{t_{k}^{N}}^{N})-\sqrt{\gamma_{k+1}^{N}}\xi\left(\bar{\theta}_{ t_{k}^{N}}+\sqrt{\gamma_{k}^{N}}\chi_{N}(V_{t_{k}^{N}}^{N}),\eta_{k+1}^{N} \right),\] where \[F_{N}(t,x) = (\alpha_{t}^{N}\mathbb{I}_{\|x\|\leq a_{N}}+\bar{\alpha}\mathbb{I}_{ \|x\|>a_{N}})I-\sqrt{\frac{\gamma_{k+\mathbb{I}_{\|x\|\geq a_{N}}}^{N}}{\gamma_{ k+1}^{N}}}\int_{0}^{1}\mathcal{D}h\left(\bar{\theta}_{t}+\delta\chi_{N}(x)\sqrt{ \gamma_{k}^{N}}\right)\mathrm{d}\delta,\] and where \(I\) is the \(d\times d\) identity matrix. The term \(a_{N}\) and the functions \(\chi_{N}\) and \(\phi_{N}\) were defined after the statement of Assumption (A3). Note that in the definition of the truncated process the term \(\beta_{k+1}^{N}\) is omitted. This is done because this term is asymptotically negligible, as stated in the following lemma. 
**Lemma 2.2**.: It holds that: \[\beta_{k+1}^{N}\underset{N\rightarrow+\infty}{\longrightarrow}0.\] For a proof of the lemma see Subsection A.2 of the supplementary material. Before we will come in the next subsection to the statement of our main result we will compare the truncated and the untruncated Robbins-Monro algorithm in the following proposition. **Proposition 2.3**.: For \(C>0\) large enough it holds that: \[\mathbb{P}\left(\sup_{1\leq k\leq M(N)}\left\|U_{t_{k}^{N}}^{N}-V_{t_{k}^{N}} ^{N}\right\|>C\sqrt{\gamma_{1}^{N}}\right)\underset{N\rightarrow+\infty}{ \longrightarrow}0.\] For a proof of the proposition see Subsection 3.2. ### Main result The following theorem compares the truncated Robbins-Monro procedure and the diffusion process. It states a bound for the difference in total variation norm and Hellinger norm for the transition densities \(p_{N}\) of the truncated Robbins-Monro procedure \(V_{t}^{N}\) and the transition density \(q\) of the diffusion process \(X_{t}\). For \(x,z\in\mathbb{R}^{d}\) and \(0\leq s<t\) we denote the conditional density of \(X_{t}\) at \(z\) given \(X_{s}=x\) by \(q(s,t,x,z)\) and for \(x,z\in\mathbb{R}^{d}\) and \(s,t\in\{t_{0}^{N},\ \ldots,\ t_{M(N)}^{N}\}\) with \(s<t\) we write \(p_{N}(s,t,x,z)\) for the conditional density of \(V_{t}^{N}\) at \(z\) given \(V_{s}^{N}=x\). **Theorem 2.1**.: _For \(s,t\in\{t_{0}^{N},\ \ldots,\ t_{M(N)}^{N}\}\) with \(s<t\) and \(x\in\mathbb{R}^{d}\) with \(|x|\leq a_{N}/2\) it holds that_ \[\int_{\mathbb{R}^{d}}( \sqrt{p_{N}}-\sqrt{q})^{2}(s,t,x,z)\mathrm{d}z\leq\int_{\mathbb{ R}^{d}}|p_{N}-q|(s,t,x,z)\mathrm{d}z\] \[\leq C(t-s)^{1/2}I_{\{\frac{1}{2}<\beta<1\}}(\gamma_{1}^{N})^{ \beta^{-1}-1}+C\ln(1/\gamma_{1}^{N})^{2}(\gamma_{1}^{N})^{1/2}\] \[\leq C(t-s)^{1/2}I_{\{\frac{1}{2}<\beta<1\}}N^{-(1-\beta)}+C\ln( N)^{2}N^{-\beta/2}.\] The theorem follows directly by application of Proposition 2.5 in Subsection 2.4 and Theorem 2.2 in Subsection 2.6. 
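The first inequality in Theorem 2.1 is the generic pointwise bound \((\sqrt{p_{N}}-\sqrt{q})^{2}=p_{N}+q-2\sqrt{p_{N}q}\leq p_{N}+q-2\min(p_{N},q)=|p_{N}-q|\), valid for any two densities. The following numerical sanity check of this relation is our own illustration with two Gaussian densities, unrelated to the specific processes of the theorem:

```python
import numpy as np
from math import erf, exp, sqrt

mu = 0.5                                   # distance between the two means
z = np.linspace(-10.0, 10.0, 200001)
dz = z[1] - z[0]
p = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)          # density of N(0, 1)
q = np.exp(-(z - mu)**2 / 2) / np.sqrt(2 * np.pi)   # density of N(mu, 1)

hell_sq = np.sum((np.sqrt(p) - np.sqrt(q))**2) * dz  # int (sqrt(p) - sqrt(q))^2
tv_l1 = np.sum(np.abs(p - q)) * dz                   # int |p - q|

# Known closed forms for two unit-variance Gaussians at mean distance mu:
hell_closed = 2 * (1 - exp(-mu**2 / 8))              # via Hellinger affinity
Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))         # standard normal cdf
tv_closed = 4 * Phi(mu / 2) - 2

print(hell_sq, tv_l1)   # approx 0.0615 and 0.3948, so hell_sq <= tv_l1
```

The Riemann sums match the closed forms to several digits, and the squared Hellinger-type integral is indeed dominated by the \(\mathrm{L}_{1}\)-integral, as the theorem's chain of inequalities requires.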
The propositions can also be used for getting a result on the distributions of the truncated Robbins-Monro procedure and the diffusion process on an increasing grid of time points \(\tau_{0}^{N}=0<\tau_{1}^{N}<...<\tau_{m_{N}}^{N}\) with \(\tau_{1}^{N},...,\tau_{m_{N}}^{N}\in\{t_{0}^{N},\ \ldots,\ t_{M(N)-1}^{N}\}\) and \(\tau_{m_{N}}^{N}=t_{M(N)-1}^{N}\). For \(m_{N}\geq 1\), \(z_{1},...,z_{m_{N}},x\in\mathbb{R}^{d}\) put \(z=(z_{1},...,z_{m_{N}})\) and denote the conditional distribution of \((X_{\tau_{j}^{N}}:1\leq j\leq m_{N})\) given \(X_{0}=x\) and of \((V_{\tau_{j}^{N}}:1\leq j\leq m_{N})\) given \(V_{0}^{N}=x\) by \(Q_{x}^{m_{N}}\) or \(P_{N,x}^{m_{N}}\), respectively. We get the following corollary of Proposition 2.6 and Proposition 2.11 for the L\({}_{1}\)-distance between these measures. **Corollary 2.4**.: _Suppose that_ \[m_{N}\to\infty,\quad C^{-1}m_{N}^{-1}\leq|\tau_{j}^{N}-\tau_{j-1}^{N}|\leq Cm_{N}^ {-1}\ \text{for}\ 1\leq j\leq m_{N},\quad N^{-\beta}m_{N}\ln(N)^{4}\to 0. \tag{13}\] _With a measure \(\nu\) that dominates \(Q_{x}^{m_{N}}\) and \(P_{N,x}^{m_{N}}\) it holds for \(x\in\mathbb{R}^{d}\) with \(|x|\leq a_{N}/2\) that_ \[\int\left|\frac{\mathrm{d}Q_{x}^{m_{N}}}{\mathrm{d}\nu}-\frac{\mathrm{d}P_{N,x }^{m_{N}}}{\mathrm{d}\nu}\right|\mathrm{d}\nu\leq C\delta_{N},\] _where_ \[\delta_{N} =m_{N}^{1/4}\left(I_{\{\frac{1}{2}<\beta<1\}}(\gamma_{1}^{N})^{( \beta^{-1}-1)/2}+\sqrt{\ln(1/\gamma_{1}^{N})}(\gamma_{1}^{N})^{1/4}\right)\] \[\qquad+m_{N}\ln(1/\gamma_{1}^{N})^{2}(\gamma_{1}^{N})^{1/2}\] \[=O\bigg{(}m_{N}^{1/4}\left(I_{\{\frac{1}{2}<\beta<1\}}N^{-(1- \beta)/2}+\sqrt{\ln(N)}N^{-\beta/4}\right)\] \[\qquad+m_{N}\ln(N)^{2}N^{-\beta/2}\bigg{)}.\] In particular, we have that the upper bound \(\delta_{N}\) in the corollary converges to \(0\) if \(m_{N}\) is of the form \(m_{N}=CN^{\mu}\) with \(\mu<\beta/2\) for \(\frac{1}{2}<\beta\leq\frac{4}{5}\), \(\mu<2(1-\beta)\) for \(\frac{4}{5}\leq\beta<1\) and \(\mu<1/2\) for \(\beta=1\). 
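The rescaled and truncated recursions that Theorem 2.1 and Corollary 2.4 compare to the diffusion can be illustrated on a scalar toy problem. The sketch below is a loose stand-in for the procedure, with step sizes \(\gamma_{k}=A(k^{\beta}+B)^{-1}\), an illustrative mean field \(h(\theta)=-\theta\), Gaussian noise, and a hard truncation at level \(a_{N}\) mirroring \(\chi_{N}\); none of these specific choices come from the paper.

```python
import math
import random

A, B, beta = 1.0, 1.0, 0.75            # step sizes gamma_k = A / (k^beta + B)
a_N = 10.0                             # truncation level, mirroring chi_N
gamma = lambda k: A / (k ** beta + B)
chi = lambda x: x if abs(x) <= a_N else math.copysign(a_N, x)
h = lambda t: -t                       # toy mean field with root theta* = 0

random.seed(0)
theta, theta_det = 1.0, 1.0
for k in range(1, 5001):
    g = gamma(k)
    eta = random.gauss(0.0, 1.0)
    theta = theta + g * (h(chi(theta)) + eta)  # truncated stochastic iterate
    theta_det = theta_det + g * h(theta_det)   # noise-free Euler path of the ODE

# Since sum of gamma_k diverges for beta < 1, the deterministic path
# contracts to the root theta* = 0.
assert abs(theta_det) < 1e-2
assert math.isfinite(theta)
```

The noise-free iterate is exactly the Euler scheme for the limiting ODE, which is the heuristic behind rescaling the fluctuations by \(\sqrt{\gamma_{k}^{N}}\) as in the definition of \(U_{t}^{N}\).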
Note also that Corollary 2.4 implies that on a large enough probability space we can construct versions of \(V_{t}^{N}\) and \(X_{t}\) such that \[\mathbb{P}\left(V_{\tau_{j}^{N}}^{N}=X_{\tau_{j}^{N}}:0\leq j\leq m_{N}\right) \geq 1-C\delta_{N}.\]

### Comparison of truncated and untruncated diffusion

In this subsection we will compare \(X_{t}\) with a truncated modification \(X_{t}^{N}\) defined by \[\mathrm{d}X_{t}^{N}=F_{N}(t,X_{t}^{N})\chi_{N}(X_{t}^{N})\mathrm{d}t+R^{1/2}( \bar{\theta}_{t})\mathrm{d}B_{t}.\] The conditional density of \(X_{t}^{N}\) at \(z\) given \(X_{s}^{N}=x\) is denoted by \(q_{N}(s,t,x,z)\). The following proposition states a bound for the difference of the two densities in total variation norm. **Proposition 2.5**.: For \(0\leq s<t\leq T\) and \(x\in\mathbb{R}^{d}\) with \(|x|\leq a_{N}/2\) it holds that \[\int_{\mathbb{R}^{d}}(\sqrt{q_{N}}-\sqrt{q})^{2}(s,t,x,z)\mathrm{ d}z\leq\int_{\mathbb{R}^{d}}|q_{N}-q|(s,t,x,z)\mathrm{d}z\] \[\qquad\leq C(t-s)^{1/2}\left(I_{\{\frac{1}{2}<\beta<1\}}(\gamma_{1} ^{N})^{\beta^{-1}-1}+\ln(1/\gamma_{1}^{N})(\gamma_{1}^{N})^{1/2}\right)\] \[\qquad\leq C(t-s)^{1/2}\left(I_{\{\frac{1}{2}<\beta<1\}}N^{-(1- \beta)}+\ln(N)N^{-\beta/2}\right).\] The proof of the proposition will be given in Section A.3. The proposition can be used to obtain a result on the distributions of the diffusions on an increasing grid of time points. With \(m_{N}\geq 1\), \(z_{1},...,z_{m_{N}},x\in\mathbb{R}^{d}\), \(z\), \(\tau_{j}^{N}\) and \(Q_{x}^{m_{N}}\) defined as in the last subsection, denote the conditional distribution of \((X_{\tau_{j}^{N}}^{N}:1\leq j\leq m_{N})\) given \(X_{0}^{N}=x\) by \(Q_{N,x}^{m_{N}}\). We get the following corollary of Proposition 2.5 for the Hellinger distance and \(\mathrm{L}_{1}\)-distance between the measures \(Q_{x}^{m_{N}}\) and \(Q_{N,x}^{m_{N}}\). The proof of this result can also be found in Section A.3. **Proposition 2.6**.: Suppose that (13) holds.
With a measure \(\nu\) that dominates \(Q_{x}^{m_{N}}\) and \(Q_{N,x}^{m_{N}}\) it holds for \(x\in\mathbb{R}^{d}\) with \(|x|\leq a_{N}/2\) that \[\frac{1}{2}\int \left|\frac{\mathrm{d}Q_{x}^{m_{N}}}{\mathrm{d}\nu}-\frac{ \mathrm{d}Q_{N,x}^{m_{N}}}{\mathrm{d}\nu}\right|\mathrm{d}\nu\leq H(Q_{x}^{m_{ N}},Q_{N,x}^{m_{N}})\] \[\leq Cm_{N}^{1/4}\left(I_{\{\frac{1}{2}<\beta<1\}}(\gamma_{1}^{N})^{( \beta^{-1}-1)/2}+\sqrt{\ln(1/\gamma_{1}^{N})}(\gamma_{1}^{N})^{1/4}\right)\] \[\leq Cm_{N}^{1/4}\left(I_{\{\frac{1}{2}<\beta<1\}}N^{-(1-\beta)/2}+ \sqrt{\ln(N)}N^{-\beta/4}\right).\] In particular, we have that the upper bound in the proposition converges to \(0\) if \(m_{N}\) is of the form \(m_{N}=CN^{\mu}\) with \(\mu<\beta\) for \(\frac{1}{2}<\beta\leq\frac{2}{3}\), \(\mu<2(1-\beta)\) for \(\frac{2}{3}\leq\beta<1\) and \(\mu<1\) for \(\beta=1\).

### The parametrix method

The main tool of our proofs is the parametrix method. This method makes it possible to represent transition densities of certain processes by so-called parametrix series. A parametrix for a differential operator is often easier to construct than a fundamental solution and for many purposes it is almost as powerful. Sometimes it is also possible to construct a fundamental solution from a parametrix by iteratively improving it. The idea of parametrix representations is old and goes back to E. E. Levi, who considered non-degenerate second-order operators in non-divergent form, see [13]. It is based on perturbative theory methods. In short time, the density of an SDE with variable coefficients is a priori close to the density of an SDE with constant coefficients, for which we have good density controls. The idea of the method is to use the Kolmogorov equations satisfied by the two densities for precise estimates of their difference. In addition to Levi's approach, other versions of the parametrix method have been developed. E.g.
an approach has been proposed by [19] to obtain asymptotic expansions of the Laplacian spectrum on a manifold as a function of its curvature. This approach makes it possible to study errors of discrete approximation schemes. In the framework of inhomogeneous non-degenerate diffusion processes it has been used in [15] to obtain local limit theorems for approximating Markov processes and in [16] to get error bounds for Euler schemes. In recent years there has been some progress on the parametrix method in the literature. Without claiming to be complete, we only mention extensions to processes with jumps, to McKean-Vlasov type equations, see [23], to degenerate Kolmogorov type diffusions, see [11, 17] and to parabolic SPDEs, see [25]. We now explain the core of the method for the example of a classical Brownian diffusion without going into technical details. We just explain the "Master formula" as it appeared in the celebrated article [19]. In this example we are interested in studying Brownian SDEs of the form \[Z_{t}=z+\int_{0}^{t}b(s,Z_{s})\mathrm{d}s+\int_{0}^{t}\sigma(s,Z_{s})\mathrm{ d}W_{s}, \tag{14}\] where \((W_{s})_{s\geq 0}\) is an \(\mathbb{R}^{k}\)-valued Brownian motion on some filtered probability space \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\geq 0},\mathbb{P})\) and where the process \(Z_{t}\) is \(\mathbb{R}^{k}\)-valued. The coefficients \(b\) and \(\sigma\) are \(\mathbb{R}^{k}\)-valued or \(\mathbb{R}^{k}\times\mathbb{R}^{k}\)-valued, respectively, and under appropriate assumptions on \(b\) and \(\sigma\) a unique weak solution of (14) exists and admits a transition density/fundamental solution \(p(s,t,x,y)\). Along with equation (14), consider the equation with coefficients "frozen" at the point \(y\) \[\tilde{Z}_{t}=z+\int_{0}^{t}b(s,y)\mathrm{d}s+\int_{0}^{t}\sigma(s,y)\mathrm{d }W_{s} \tag{15}\] with Gaussian transition density \(\tilde{p}(s,t,x,y)\).
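Before the continuous-time derivation below, the telescoping mechanism behind the master formula can be seen in a finite-state, discrete-time toy of our own (not taken from the cited literature): for a "true" transition matrix \(P\) and a "frozen" proxy \(\tilde P\), one has \(P^{n}-\tilde P^{n}=\sum_{k=0}^{n-1}P^{k}(P-\tilde P)\tilde P^{n-1-k}\), the discrete analogue of the master formula with the kernel \(H\) replaced by \(P-\tilde P\).

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic(n):
    # Random row-stochastic matrix (rows sum to one).
    m = rng.random((n, n))
    return m / m.sum(axis=1, keepdims=True)

d, n = 5, 6
P = stochastic(d)        # "true" transition kernel
P_tilde = stochastic(d)  # "frozen" proxy kernel
H = P - P_tilde          # discrete analogue of the parametrix kernel

# Telescoping identity: P^n - P~^n = sum_k P^k (P - P~) P~^(n-1-k).
lhs = np.linalg.matrix_power(P, n) - np.linalg.matrix_power(P_tilde, n)
rhs = sum(
    np.linalg.matrix_power(P, k) @ H @ np.linalg.matrix_power(P_tilde, n - 1 - k)
    for k in range(n)
)
assert np.allclose(lhs, rhs)
```

Iterating the identity, exactly as done with the continuous convolution below, expands \(P^{n}\) into a finite series in powers of \(\tilde P\) and \(P-\tilde P\).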
Below we will make use of the backward and forward Kolmogorov equations: \[\frac{\partial\tilde{p}}{\partial s}+\tilde{L}\tilde{p}=0,\quad\frac{\partial p }{\partial s}+Lp=0,\quad-\frac{\partial\tilde{p}}{\partial t}+\tilde{L}^{*} \tilde{p}=0,\quad-\frac{\partial p}{\partial t}+L^{*}p=0, \tag{16}\] together with the initial conditions \[\tilde{p}(t,t,x,y)=\delta(x-y)\mbox{ and }p(t,t,x,y)=\delta(x-y). \tag{17}\] With the help of (16) and (17) we can write the basic equality for the parametrix method: \[p(s,t,x,y)-\tilde{p}(s,t,x,y)\] \[\quad=\int_{s}^{t}\mathrm{d}u\frac{\partial}{\partial u}\left[ \int_{\mathbb{R}^{k}}p(s,u,x,z)\tilde{p}(u,t,z,y)\mathrm{d}z\right]\] \[\quad=\int_{s}^{t}\mathrm{d}u\int_{\mathbb{R}^{k}}\left[\tilde{p} (u,t,z,y)L^{*}p(s,u,x,z)-p(s,u,x,z)\tilde{L}\tilde{p}(u,t,z,y)\right]\mathrm{ d}z\] \[\quad=\int_{s}^{t}\mathrm{d}u\int_{\mathbb{R}^{k}}\left[p(s,u,x,z )(L-\tilde{L})\tilde{p}(u,t,z,y)\right]\mathrm{d}z.\] This equation can be written as \[p=\tilde{p}+p\otimes H, \tag{18}\] where \(H=[L-\tilde{L}]\tilde{p}\) and the convolution type binary operation \(\otimes\) is defined by \[(f\otimes g)(s,t,x,y)=\int_{s}^{t}\mathrm{d}u\int_{\mathbb{R}^{k}}f(s,u,x,z)g( u,t,z,y)\mathrm{d}z. \tag{19}\] Iterative application of (18) gives an infinite series \[p=\sum_{r=0}^{\infty}\tilde{p}\otimes H^{(r)}, \tag{20}\] where \(\tilde{p}\otimes H^{(0)}=\tilde{p}\) and \(\tilde{p}\otimes H^{(r+1)}=(\tilde{p}\otimes H^{(r)})\otimes H\) for \(r=0,1,2,...\). An important property of representation (20) is that it allows us to express the non-Gaussian density \(p\) in terms of Gaussian densities \(\tilde{p}\). Equation (20) is the "Master formula" in our proof. We will apply it twice, to the truncated diffusion \(X_{t}^{N}\) and to the truncated Robbins-Monro process \(V_{t}^{N}\).
In the latter application the parametrix method allows us to express transition densities of the Robbins-Monro process as an expression depending on the densities of sums of independent random variables. The main idea of the proof is based on the comparison of the densities of sums with the Gaussian densities showing up in the parametrix expansion of the truncated diffusion \(X_{t}^{N}\). We will make use of infinite series expansions of \(q_{N}\) \[q_{N}(t,s,x,y) = \sum_{r=0}^{\infty}\tilde{q}_{N}\otimes H_{N}^{(r)}(t,s,x,y), \tag{21}\] where the notation on the right hand side of this equation will be explained now. The equation is based on looking at a frozen diffusion process \(\tilde{X}_{t}^{s,x,y}\) \[\tilde{X}_{t}^{s,x,y}=x+\int_{s}^{t}F_{N}(u,\theta_{u,s}^{N}(y))\chi_{N}( \theta_{u,s}^{N}(y))\mathrm{d}u+\int_{s}^{t}R^{1/2}(\bar{\theta}_{u})\mathrm{ d}B_{u},\] where the functions \(\theta_{t,s}^{N}\) (\(0\leq t\leq s\)) are the solutions of the following ordinary differential equations \[\frac{\mathrm{d}}{\mathrm{d}t}\theta_{t,s}^{N}(y)=F_{N}(t,\theta_{t,s}^{N}(y)) \chi_{N}(\theta_{t,s}^{N}(y)) \tag{22}\] with boundary condition \(\theta_{s,s}^{N}(y)=y\).
This is a Gaussian process with transition density \[\tilde{q}_{N}(t,s,x,y)=\tilde{q}_{N}^{s,x,y}(t,s,x,z)|_{z=y}=g_{\bar{\sigma}}( t,s,\theta_{t,s}^{N}(y)-x),\] where \[g_{\bar{\sigma}}(t,s,z) = \frac{1}{(2\pi)^{d/2}\sqrt{\det\bar{\sigma}(t,s)}}\exp\left(- \frac{1}{2}z^{\intercal}\bar{\sigma}(t,s)^{-1}z\right),\] \[\bar{\sigma}(t,s) = \int_{t}^{s}R(\bar{\theta}_{u})\mathrm{d}u.\] We define \[H_{N}(t,s,x,y)=(L_{t}^{N}-\tilde{L}_{t}^{N})\tilde{q}_{N}(t,s,x,y),\] where \(L_{t}^{N}\) and \(\tilde{L}_{t}^{N}\) are the following generators: \[L_{t}^{N} = \frac{1}{2}\sum_{i,j=1}^{d}R_{ij}(\bar{\theta}_{t})\frac{\partial ^{2}}{\partial x_{i}\partial x_{j}}+\sum_{i=1}^{d}\left(\sum_{j=1}^{d}\left[F_ {N}(t,x)\right]_{i,j}\left[\chi_{N}(x)\right]_{j}\right)\frac{\partial}{ \partial x_{i}},\] \[\tilde{L}_{t}^{N} = \frac{1}{2}\sum_{i,j=1}^{d}R_{ij}(\bar{\theta}_{t})\frac{\partial ^{2}}{\partial x_{i}\partial x_{j}}+\sum_{i=1}^{d}\left(\sum_{j=1}^{d}\left[F_ {N}(t,\theta_{t,s}^{N}(y))\right]_{i,j}\left[\chi_{N}(\theta_{t,s}^{N}(y)) \right]_{j}\right)\frac{\partial}{\partial x_{i}},\] where the flow \(\theta_{t,s}^{N}(y)\) is defined in (22). The convolution type operation \(\otimes\) is defined as in (19) and for \(r=1,2,...\) the \(r\)-fold convolution is given by \(g\otimes H_{N}^{(r)}=(g\otimes H_{N}^{(r-1)})\otimes H_{N}\) with \(g\otimes H_{N}^{(0)}=g\). The validity of formula (20) for our choices of \(p,\tilde{p}\), and \(H\) and the correctness of (21) for \(q_{N},\tilde{q}_{N}\), and \(H_{N}\) follow from Lemmas 3.3 and 3.4 stated in Subsection 3.1. For the proof in the next subsection we will make use of the following series expansion of \(p_{N}\) for \(l<k\) \[p_{N}(t_{l}^{N},t_{k}^{N},x,y) = \sum_{r=0}^{N}\tilde{p}_{N}\otimes_{N}\mathcal{K}_{N}^{(r)}(t_{l} ^{N},t_{k}^{N},x,y), \tag{23}\] where the notation on the right hand side of this equation will be explained now.
For this purpose we consider the following frozen Markov chain \[\tilde{V}_{t_{i+1}^{N}}^{N,y} = \tilde{V}_{t_{i}^{N}}^{N,y}+\left(\theta_{t_{i+1}^{N},t_{j}^{N}} ^{N}(y)-\theta_{t_{i}^{N},t_{j}^{N}}^{N}(y)\right)\] \[-\sqrt{\gamma_{i+1}^{N}}\xi\left(\bar{\theta}_{t_{i}^{N}}+\chi_{ N}\left(\theta_{t_{i}^{N},t_{j}^{N}}^{N}(y)\right)\sqrt{\gamma_{i}^{N}}, \eta_{i+1}\right)\] \[= \tilde{V}_{t_{i}^{N}}^{N,y}+\int_{t_{i}^{N}}^{t_{i+1}^{N}}F_{N} \left(u,\theta_{u,t_{j}^{N}}^{N}(y)\right)\chi_{N}\left(\theta_{u,t_{j}^{N}}^{ N}(y)\right)\mathrm{d}u\] \[-\sqrt{\gamma_{i+1}^{N}}\xi\left(\bar{\theta}_{t_{i}^{N}}+\chi_{ N}\left(\theta_{t_{i}^{N},t_{j}^{N}}^{N}(y)\right)\sqrt{\gamma_{i}^{N}}, \eta_{i+1}\right). \tag{24}\] By iterative application of (24) we get that for \(k<j\) \[\tilde{V}^{N,y}_{t^{N}_{j}}=\tilde{V}^{N,y}_{t^{N}_{k}}+y-\theta^{N}_{t^{N}_{k}, t^{N}_{j}}(y)-\sum_{i=k}^{j-1}\sqrt{\gamma^{N}_{i+1}}\xi_{i},\] where \[\xi_{i}=\xi\left(\bar{\theta}_{t^{N}_{i}}+\chi_{N}\left(\theta^{N}_{t^{N}_{i}, t^{N}_{j}}(y)\right)\sqrt{\gamma^{N}_{i}},\eta_{i+1}\right).\] For the transition density \(\tilde{p}^{y}_{N}(t^{N}_{k},t^{N}_{j},x,z)\) of the frozen Markov chain we have \[\tilde{p}^{y}_{N}(t^{N}_{k},t^{N}_{j},x,z) = \frac{\mathrm{d}}{\mathrm{d}z}P\left(\tilde{V}^{N,y}_{t^{N}_{j}} \in\mathrm{d}z\big{|}\tilde{V}^{N,y}_{t^{N}_{k}}=x\right)\] \[= p_{S_{N}}\left(z-x-y+\theta^{N}_{t^{N}_{k},t^{N}_{j}}(y)\right),\] where \(p_{S_{N}}\) denotes the density of \(-\sum_{i=k}^{j-1}\sqrt{\gamma^{N}_{i+1}}\xi_{i}\).
Note that \[\tilde{p}^{y}_{N}(t^{N}_{k},t^{N}_{j},x,y)=p_{S_{N}}\left(\theta^{N}_{t^{N}_{k },t^{N}_{j}}(y)-x\right).\] For a test function \(\phi\) we now define the one step generators: \[\mathcal{L}_{N}\phi(t^{N}_{i},t^{N}_{j},x,y) = \frac{1}{\gamma^{N}_{i+1}}\int_{\mathbb{R}^{d}}\left(\phi(t^{N}_ {i+1},t^{N}_{j},z,y)-\phi(t^{N}_{i+1},t^{N}_{j},x,y)\right)p_{N}(t^{N}_{i},t^{ N}_{i+1},x,z)\mathrm{d}z,\] \[\tilde{\mathcal{L}}_{N}\phi(t^{N}_{i},t^{N}_{j},x,y) = \frac{1}{\gamma^{N}_{i+1}}\int_{\mathbb{R}^{d}}\left(\phi(t^{N}_{ i+1},t^{N}_{j},z,y)-\phi(t^{N}_{i+1},t^{N}_{j},x,y)\right)\tilde{p}^{y}_{N}(t^{N}_{i}, t^{N}_{i+1},x,z)\mathrm{d}z.\] We put \[\mathcal{K}_{N}(t^{N}_{i},t^{N}_{j},x,y)=\left(\mathcal{L}_{N}-\tilde{\mathcal{ L}}_{N}\right)\tilde{p}^{y}_{N}(t^{N}_{i},t^{N}_{j},x,y)\] and with the discretized time convolution \[(f\otimes_{N}g)(t^{N}_{i},t^{N}_{j},x,y)=\sum_{k=i}^{j-1}\gamma^{N}_{k+1}\int_ {\mathbb{R}^{d}}f(t^{N}_{i},t^{N}_{k},x,z)g(t^{N}_{k},t^{N}_{j},z,y)\mathrm{d}z\] we define for \(r=1,2,...\) the \(r\)-fold convolution as \(g\otimes_{N}\mathcal{K}^{(r)}_{N}=(g\otimes_{N}\mathcal{K}^{(r-1)}_{N})\otimes _{N}\mathcal{K}_{N}\) with \(g\otimes_{N}\mathcal{K}^{(0)}_{N}=g\). With this notation one can show that (23) holds. For the proof one makes repeated use of the Markov property, see also Lemma 3.6 in Konakov and Mammen (2000).

### Comparison of the truncated version of the Robbins-Monro algorithm with the truncated diffusion

In this subsection we want to prove the following bound for the difference \(|q_{N}-p_{N}|\) of the transition density \(p_{N}\) of the truncated Robbins-Monro procedure \(V^{N}_{t^{N}_{i}}\) and of the transition density \(q_{N}\) of the truncated diffusion \(X^{N}_{t}\).
The main result of this subsection is the following theorem. **Theorem 2.2**.: _For \(i<j\) it holds that for all \(x,y\in\mathbb{R}^{d}\) with a constant \(C>0\)_ \[|q_{N}(t^{N}_{i},t^{N}_{j},x,y)-p_{N}(t^{N}_{i},t^{N}_{j},x,y)|\] \[\qquad\leq C\sqrt{\gamma^{N}_{1}}\ln^{2}(1/\gamma^{N}_{1})\mathcal{ Q}_{M-d-6}(t^{N}_{j}-t^{N}_{i},y-\theta^{N}_{t^{N}_{j},t^{N}_{i}}(x)).\] In the statement of the theorem, for a natural number \(m\) and a positive real number \(t\) we define \[\mathcal{Q}_{m}(t,x)=t^{-d/2}Q_{m}(t^{-1/2}x)\] with \(Q_{m}\) defined in (A4). We now come to the proof of Theorem 2.2. Note that by (21) and (23) we have for \(i<j\) that \[q_{N}(t_{i}^{N},t_{j}^{N},x,y)-p_{N}(t_{i}^{N},t_{j}^{N},x,y)=\sum_{r=0}^{ \infty}\tilde{q}_{N}\otimes H_{N}^{(r)}(t_{i}^{N},t_{j}^{N},x,y)-\sum_{r=0}^{N }\tilde{p}_{N}\otimes_{N}\mathcal{K}_{N}^{(r)}(t_{i}^{N},t_{j}^{N},x,y).\] With this expansion Theorem 2.2 follows immediately from the following lemmas. In the first lemma we replace in the parametrix expansion of \(q_{N}\) the convolution operation \(\otimes\) by the discrete convolution \(\otimes_{N}\). In Lemma 2.8 we show that it suffices to consider only the first \(N\) terms in the expansion. Lemma 2.9 is the heart of our argument. We replace in the parametrix expansions the Gaussian densities \(\tilde{q}_{N}\) by the densities \(\tilde{p}_{N}\) of normed sums of independent random variables. We use Edgeworth expansion arguments and local limit theorems that offer powerful tools to bound the errors of this replacement. Note that \(\tilde{q}_{N}\) is replaced by \(\tilde{p}_{N}\) at two places: in the summation and in the definition of the kernels \(H_{N}\) and \(K_{N}\). At this point we apply Lemma 3.2. The kernel \(K_{N}\) is defined as \(H_{N}\), but with \(\tilde{q}_{N}\) replaced by \(\tilde{p}_{N}\).
Finally, Lemma 2.10 bounds errors that show up by replacing the kernel \(K_{N}\) by the kernel \(\mathcal{K}_{N}\), which is used in the parametrix expansion of the Robbins-Monro algorithm. Lemma 2.9 will be proved in Subsection 3.3. The proofs of the other lemmas can be found in Subsection A.4 of the supplementary material. **Lemma 2.7**.: For \(i<j\) it holds with some constant \(C>0\) that \[\left|\sum_{r=0}^{\infty}\tilde{q}_{N}\otimes H_{N}^{(r)}(t_{i}^{N},t_{j}^{N },x,y)-\sum_{r=0}^{\infty}\tilde{q}_{N}\otimes_{N}H_{N}^{(r)}(t_{i}^{N},t_{j} ^{N},x,y)\right|\] \[\leq C\ln^{2}(1/\gamma_{1}^{N})\sqrt{\gamma_{1}^{N}}\sqrt{t_{j}^{N}-t_{i}^{N} }\bar{q}_{N}(t_{i}^{N},t_{j}^{N},x,y),\] where \(\bar{q}_{N}\) is the transition density of the diffusion \(\bar{X}_{t}^{N}\) defined in (86). Note that we have the bound (91) for \(\bar{q}_{N}\). **Lemma 2.8**.: For \(i<j\) it holds with some constant \(C>0\) that \[\left|\sum_{r=N+1}^{\infty}\tilde{q}_{N}\otimes_{N}H_{N}^{(r)}(t_ {i}^{N},t_{j}^{N},x,y)\right|\] \[\leq C\exp(-CN)\bar{q}_{N}(t_{i}^{N},t_{j}^{N},x,y).\] **Lemma 2.9**.: For \(i<j\) it holds with some constant \(C>0\) that \[\left|\sum_{r=0}^{N}\tilde{q}_{N}\otimes_{N}H_{N}^{(r)}(t_{i}^{N },t_{j}^{N},x,y)-\sum_{r=0}^{N}\tilde{p}_{N}\otimes_{N}K_{N}^{(r)}(t_{i}^{N}, t_{j}^{N},x,y)\right|\] \[\leq C\ln(1/\gamma_{1}^{N})\sqrt{\gamma_{1}^{N}}(t_{j}^{N}-t_{i}^ {N})^{1/2}\mathcal{Q}_{M-d-2}(t_{j}^{N}-t_{i}^{N},y-\theta_{t_{j}^{N},t_{i}^{N }}^{N}(x)),\] where \[K_{N}(t_{i}^{N},t_{j}^{N},x,y)=(L_{t_{i}^{N}}^{N}-\tilde{L}_{t_{i}^{N}}^{N}) \tilde{p}_{N}(t_{i+1}^{N},t_{j}^{N},x,y).\] The convolutions \(K_{N}^{(r)}\) are calculated using the convolution \(\otimes_{N}\), in contrast to \(H_{N}^{(r)}\) where as above the convolution operation \(\otimes\) is used. 
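Lemma 2.7 and the passage from \(\otimes\) to \(\otimes_{N}\) amount to discretizing time integrals of kernels with a square-root singularity in time. The effect can be illustrated on a scalar toy integrand of our own choosing (not the actual \(H\)-kernels): a right-point Riemann sum of \(u^{-1/2}\) on a uniform grid of mesh \(\gamma\) incurs an error of order \(\sqrt{\gamma}\), driven by the singularity at the left endpoint.

```python
import math

# Integral of the singular kernel u^(-1/2) over (0, T]:
T = 1.0
exact = 2.0 * math.sqrt(T)  # integral of u^(-1/2) over (0, T] equals 2*sqrt(T)

gamma = 1e-4                # uniform step size of the grid
n = int(T / gamma)
# Right-point Riemann sum on the grid {gamma, 2*gamma, ..., T}, the
# discrete-time analogue of replacing the convolution by its Riemann sum.
riemann = sum(gamma * (k * gamma) ** (-0.5) for k in range(1, n + 1))

# The discretization error near the singularity is of order sqrt(gamma).
assert abs(riemann - exact) < 2.0 * math.sqrt(gamma)
```

Away from the singularity the Riemann-sum error for smooth integrands is of order \(\gamma\); it is the singular short-time behavior that dictates the \(\sqrt{\gamma_{1}^{N}}\) (with logarithmic factors) rates appearing in Lemmas 2.7-2.10.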
**Lemma 2.10**.: For \(i<j\) it holds with some constant \(C>0\) that \[\left|\sum_{r=0}^{N}\tilde{p}_{N}\otimes_{N}K_{N}^{(r)}(t_{i}^{N},t _{j}^{N},x,y)-\sum_{r=0}^{N}\tilde{p}_{N}\otimes_{N}\mathcal{K}_{N}^{(r)}(t_{i} ^{N},t_{j}^{N},x,y)\right|\] \[\qquad\leq C\sqrt{\gamma_{1}^{N}}\ln\left(1/\gamma_{1}^{N}\right) \mathcal{Q}_{M-d-6}(t_{j}^{N}-t_{i}^{N},y-\theta_{t_{j}^{N},t_{i}^{N}}^{N}(x)).\] Theorem 2.2 can be used to obtain a result on the distributions of the truncated diffusion and truncated Robbins-Monro procedure on an increasing grid of time points. With \(m_{N}\geq 1\), \(z_{1},...,z_{m_{N}},x\in\mathbb{R}^{d}\), \(z\), \(\tau_{j}^{N}\), \(Q_{N,x}^{m_{N}}\) and \(P_{N,x}^{m_{N}}\) defined as in Subsections 2.3 and 2.4 we get the following corollary of Theorem 2.2 for the \(\mathrm{L}_{1}\)-distance between the measures \(P_{N,x}^{m_{N}}\) and \(Q_{N,x}^{m_{N}}\). The proof of this result can be found in Section 3.4. **Proposition 2.11**.: Suppose that (13) holds. With a measure \(\nu\) that dominates \(P_{N,x}^{m_{N}}\) and \(Q_{N,x}^{m_{N}}\) it holds for \(x\in\mathbb{R}^{d}\) with \(|x|\leq a_{N}/2\) that \[\int\left|\frac{\mathrm{d}Q_{N,x}^{m_{N}}}{\mathrm{d}\nu}-\frac{\mathrm{d}P_{N,x}^{m_{N}}}{\mathrm{d}\nu}\right|\mathrm{d}\nu\leq C\sqrt{\gamma_{1}^{N}}m_{ N}\ln^{2}(1/\gamma_{1}^{N})\leq Cm_{N}N^{-\beta/2}\ln^{2}(N).\] In particular, we have that the upper bound in the proposition converges to \(0\) if \(m_{N}\) is of the form \(m_{N}=CN^{\mu}\) with \(\mu<\beta\).

### Convergence of the Robbins-Monro process under an additional Lyapunov condition to a stationary diffusion

If the Lyapunov condition (A6) stated below is satisfied for the system (6), then the stationary point \(\theta^{*}\) is asymptotically stable for this system.
That is, for any \(\varepsilon>0\) there exists \(\delta>0\) with the following property: for all \(\theta_{0}\) such that \(|\theta_{0}-\theta^{*}|<\delta\), the solution \(\bar{\theta}_{t}\) of (6) with starting value \(\bar{\theta}_{0}=\theta_{0}\) is defined for all \(t>0\) and it holds that \[\sup_{t\in[0,\infty)}|\bar{\theta}_{t}-\theta^{*}|<\varepsilon.\] Moreover, \[|\bar{\theta}_{t}-\theta^{*}|\to 0\] for \(t\to\infty\), provided \(|\theta_{0}-\theta^{*}|\) is small enough.

**(A6) (Lyapunov condition)** All eigenvalues of the matrix \(\bar{\alpha}+\mathcal{D}h(\theta^{*})\) have strictly negative real parts.

The next assumption states that the mean vector field \(h(\theta)\) is "inward".

**(A7)** There exists \(\delta>0\) with \[\liminf_{k\to\infty}\left(2\delta\frac{\gamma_{k}}{\gamma_{k+1}}+\frac{\gamma _{k+1}-\gamma_{k}}{\gamma_{k+1}^{2}}\right)>0\] such that \[\langle\theta-\theta^{*},h(\theta)\rangle\leq-\delta|\theta-\theta^{*}|^{2}.\]

For \(\gamma_{k}=A(k^{\beta}+B)^{-1}\) with \(A>0,B\geq 0\) the first condition of Assumption (A7) holds for all \(\delta>0\) if \(\frac{1}{2}<\beta<1\) and for \(\delta>\beta/(2A)\) if \(\beta=1\). Under Assumptions (A1)-(A7) one can analyze \[U_{t}^{*,N}=\frac{\theta_{k}^{N}-\theta^{*}}{\sqrt{\gamma_{k}^{N}}}\text{ for }t\in[t_{k}^{N},t_{k+1}^{N})\] with limiting diffusion \(X_{t}^{*}\), see (11), similarly as \(U_{t}^{N}\) with limiting diffusion \(X_{t}\). Cutoff points for the definition of the truncated processes can be chosen similarly as in the nonstationary case. For the cutoff error of the diffusion one can apply again the results in [4], getting a bound similar to the one stated in Proposition 2.5. For the comparison of the truncated and untruncated Robbins-Monro procedure one gets the bound of Proposition 2.3 proceeding exactly as in the proof of this proposition.
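The step-size condition in Assumption (A7) can be checked numerically for \(\gamma_{k}=A(k^{\beta}+B)^{-1}\). For \(\beta=1\), \(A=1\) and \(B=0\) the expression inside the liminf simplifies exactly to \((1+1/k)(2\delta-1)\), so positivity holds precisely for \(\delta>\beta/(2A)=1/2\); for \(\frac{1}{2}<\beta<1\) the negative term vanishes like \(k^{\beta-1}\), so any \(\delta>0\) works for \(k\) large. The parameter choices in the sketch are illustrative.

```python
# Numeric check of the step-size condition in (A7) for gamma_k = A/(k^beta + B).
def expr(k, beta, delta, A=1.0, B=0.0):
    g = lambda j: A / (j ** beta + B)
    return 2 * delta * g(k) / g(k + 1) + (g(k + 1) - g(k)) / g(k + 1) ** 2

# beta = 1, A = 1: the expression equals (1 + 1/k) * (2*delta - 1),
# so positivity requires delta > 1/2.
assert expr(10 ** 6, beta=1.0, delta=0.7) > 0
assert expr(10 ** 6, beta=1.0, delta=0.3) < 0

# 1/2 < beta < 1: the negative term decays like k^(beta - 1),
# so even a small delta > 0 suffices for k large.
assert expr(10 ** 6, beta=0.75, delta=0.05) > 0
```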
Finally, one can show the local limit result of Theorem 2.2 for the truncated versions of \(U_{t}^{*,N}\) and \(X_{t}^{*}\) proceeding essentially again as for the truncated versions of \(U_{t}^{N}\) and \(X_{t}\). Note also that for this result it is not necessary that Assumption (A3) holds in a tubular neighborhood of the ODE solution. It suffices that the assumption holds in a neighborhood of the unique stationary point \(\theta^{*}\) with \(h(\theta^{*})=0\).

## 3 Proofs

### Some bounds

In this subsection we will state some bounds that will be used in the proofs in the following subsections. The proofs of the lemmas of this subsection can be found in Subsection A.6 in the supplement. The first lemma states that \(F_{N}(t_{k}^{N},x)\chi_{N}(x)\) is uniformly Lipschitz in \(x\in\mathbb{R}^{d}\) for \(N\) large enough: **Lemma 3.1**.: For \(x,y\in\mathbb{R}^{d}\) and \(N\) large enough it holds with some constant \(L>0\) that \[\|F_{N}(t_{k}^{N},x)\chi_{N}(x)-F_{N}(t_{k}^{N},y)\chi_{N}(y)\|\leq L\|x-y\|. \tag{25}\] Furthermore, in the following subsections we will make use of the inequalities stated in the next lemma. **Lemma 3.2**.: For \(0\leq|\nu|\leq 4\) and \(z,y\in\mathbb{R}^{d}\) it holds that \[|D_{z}^{\nu}(\tilde{p}_{N}-\tilde{q}_{N})(t_{i}^{N},t_{j}^{N},z, y)|\leq C\sqrt{\gamma_{1}^{N}}\left((t_{j}^{N}-t_{i}^{N})^{-(|\nu|+1)/2}+(t_{j}^{N }-t_{i}^{N})^{1-|\nu|/2}a_{N}\right) \tag{26}\] \[\qquad\times\mathcal{Q}_{M-d-1}(t_{j}^{N}-t_{i}^{N},\theta_{t_{i }^{N},t_{j}^{N}}^{N}(y)-z),\] \[|D_{z}^{\nu}\varphi(t_{i}^{N},t_{j}^{N},z,y)|\leq C(t_{j}^{N}-t_{ i}^{N})^{-|\nu|/2}\mathcal{Q}_{M-d-1}(t_{j}^{N}-t_{i}^{N},\theta_{t_{i}^{N},t_{j} ^{N}}^{N}(y)-z) \tag{27}\] for \(\varphi=\tilde{q}_{N}\) and \(\varphi=\tilde{p}_{N}\).
We now compare the flow \(\theta_{t,s}^{N}\) defined in (22) with the flow \(\theta_{t,s}\) (\(0\leq t\leq s\)) that is defined as the solutions of the following ordinary differential equations \[\frac{\mathrm{d}}{\mathrm{d}t}\theta_{t,s}(y)=\left(\bar{\alpha}I+\mathcal{D} h(\bar{\theta}_{t})\right)\theta_{t,s}(y) \tag{28}\] with boundary condition \(\theta_{s,s}(y)=y\). The following lemma collects bounds for and between \(\theta_{t,T}(y)\), \(\theta_{t,T}^{N}(y)\), and \(y\). **Lemma 3.3**.: For all \(t,t_{k}^{N}\in[0,T]\) and for all \(x,y\in\mathbb{R}^{d}\) the following bounds hold with a constant \(C>1\), depending only on \(T\). \[C^{-1}\|\theta_{s,t}^{N}(y)-x\|\leq\|\theta_{t,s}^{N}(x)-y\|\leq C \|\theta_{s,t}^{N}(y)-x\|, \tag{29}\] \[C^{-1}\|\theta_{s,t}^{N}(y)\|\leq\|y\|\leq C\|\theta_{s,t}^{N}(y)\|\] (30) \[C^{-1}\|y\|\leq\|\theta_{t,T}(y)\|\leq C\|y\|,\] (31) \[\|\theta_{t,T}^{N}(y)-\theta_{t,T}(y)\|\leq C(T-t)\left(\ln(1/ \gamma_{1}^{N})\sqrt{\gamma_{1}^{N}}\mathbb{I}_{\beta=1}+(\gamma_{1}^{N})^{-1+ 1/\beta}\mathbb{I}_{1/2<\beta<1}\right). \tag{32}\] **Lemma 3.4**.: For all \(t,v\in[0,T]\) and for all \(x,y\in\mathbb{R}^{d}\) the following bounds hold with some constant \(C>0\) \[\left|H^{(r)}(t,v,x,y)\right|\leq C^{r}\frac{\Gamma^{r}(1/2)}{ \Gamma(r/2)}(v-t)^{(r-d-2)/2}\exp\left(-\frac{(x-\theta_{t,v}^{N}(y))^{2}}{C|v -t|}\right), \tag{33}\] \[\left|H^{(r)}_{N}(t,v,x,y)\right|\leq C^{r}\frac{\Gamma^{r}(1/2)} {\Gamma(r/2)}(v-t)^{(r-d-2)/2}\exp\left(-\frac{(x-\theta_{t,v}^{N}(y))^{2}}{C|v -t|}\right),\] (34) \[\left|\left(H^{(r)}-H^{(r)}_{N}\right)(t,v,x,y)\right|\] (35) \[\qquad\leq(r+1)C^{r+1}\frac{\Gamma^{r}(1/2)}{\Gamma(r/2)}\ln^{2} (1/\gamma_{1}^{N})\sqrt{\gamma_{1}^{N}}(v-t)^{(r-d-2)/2}(1+|y|)\exp\left(- \frac{(x-\theta_{t,v}^{N}(y))^{2}}{C|v-t|}\right).\] The following bound follows from Theorem 1.2 in [20]. 
**Lemma 3.5**.: For \(s<t\) and \(x,y\in\mathbb{R}^{d}\) it holds that \[q_{N}(s,t,x,y)\leq C(t-s)^{-d/2}\exp\left(-C\frac{(y-\theta_{t,s}^{N}(x))^{2} }{t-s}\right).\] We conclude this subsection by stating the following simple lemma without proof. **Lemma 3.6**.: For \(r\geq 1\) and \(t>0\), \(z,\delta\in\mathbb{R}^{d}\) it holds that \[\mathcal{Q}_{r}(t,z+\delta) \leq C\mathcal{Q}_{r}(t,z)(1+\|t^{-1/2}\delta\|)^{r}, \tag{36}\] \[\|t^{-1/2}z\|\mathcal{Q}_{r}(t,z) \leq C\mathcal{Q}_{r-1}(t,z). \tag{37}\]

### Proof of Proposition 2.3

We introduce the exit time \[\tau_{a_{N}}^{N}=\inf\{k\in[1,M(N)]:\|V_{t_{k}^{N}}^{N}\|\geq a_{N}\}\] and consider the processes \(U_{t_{k}^{N}}^{N}\) and \(V_{t_{k}^{N}}^{N}\) for \(k\leq\tau_{a_{N}}^{N}\). Note that \(U_{t_{0}^{N}}^{N}=V_{t_{0}^{N}}^{N}\) and \(U_{t_{1}^{N}}^{N}=V_{t_{1}^{N}}^{N}+\beta_{1}^{N}\). Furthermore, for \(1\leq k<\tau_{a_{N}}^{N}\) we have because of \(F_{N}(t,x)=G_{N}(t,x)\) for \(\|x\|\leq a_{N}\) that \[\left\|U_{t_{k+1}^{N}}^{N}-V_{t_{k+1}^{N}}^{N}\right\|\leq(1+L\gamma_{k+1}^{N })\left\|U_{t_{k}^{N}}^{N}-V_{t_{k}^{N}}^{N}\right\|+C\sqrt{\gamma_{k+1}^{N} \gamma_{k}^{N}}\left\|U_{t_{k}^{N}}^{N}-\chi_{N}(V_{t_{k}^{N}}^{N})\right\|+ \left\|\beta_{k+1}^{N}\right\|,\] where \(C/2\) is a bound for the Lipschitz constants of \(h\) and of \(H\) in its first argument, see Assumption (A1). We now use that \(\left\|U_{t_{k}^{N}}^{N}-\chi_{N}(V_{t_{k}^{N}}^{N})\right\|=\left\|U_{t_{k}^{ N}}^{N}-V_{t_{k}^{N}}^{N}\right\|\) for \(k\leq\tau_{a_{N}}^{N}\), that \(\gamma_{k}\) is monotone decreasing, and that \(\left\|\beta_{k+1}^{N}\right\|\) is bounded by a constant times \((\gamma_{k+1}^{N})^{3/2}\), see the proof of Lemma 2.2. This gives that: \[\left\|U_{t_{k+1}^{N}}^{N}-V_{t_{k+1}^{N}}^{N}\right\|\leq(1+C^{\prime}\gamma_{ k}^{N})\left\|U_{t_{k}^{N}}^{N}-V_{t_{k}^{N}}^{N}\right\|+C^{\prime}(\gamma_{ k}^{N})^{3/2},\] where \(C^{\prime}>0\) is a new constant.
Now, by definition of \(M(N)\) we have that \(\gamma_{1}^{N}+...+\gamma_{M(N)-1}^{N}<T\). Because of \(k<M(N)\) for \(k<\tau_{a_{N}}^{N}\) we have that \[\left\|U_{t_{k+1}^{N}}^{N}-V_{t_{k+1}^{N}}^{N}\right\| \leq (1+C^{\prime}\gamma_{k}^{N})\cdot...\cdot(1+C^{\prime}\gamma_{1} ^{N})\sum_{l=1}^{k}C^{\prime}(\gamma_{l}^{N})^{3/2}\] \[\leq C^{\prime\prime}(\gamma_{1}^{N})^{1/2}\] with \(C^{\prime\prime}=\exp(C^{\prime}T)C^{\prime}T\). We conclude that \[\mathbb{P}\left(\sup_{1\leq k\leq M(N)}\left\|U_{t_{k}^{N}}^{N}-V _{t_{k}^{N}}^{N}\right\|>C^{\prime\prime}\sqrt{\gamma_{1}^{N}}\right)\leq \mathbb{P}\left(\tau_{a_{N}}^{N}<M(N)\right)\] \[\qquad=\mathbb{P}\left(\sup_{1\leq k\leq M(N)-1}\left\|V_{t_{k}^ {N}}^{N}\right\|>a_{N}\right)\] \[\qquad\leq\mathbb{P}\left(\sup_{1\leq k\leq M(N)}\left\|V_{t_{k}^ {N}}^{N}\right\|>a_{N}\right).\] As mentioned after the statement of our assumptions the process \(U_{t}^{N}\) converges in distribution to the diffusion \((X_{t}:0\leq t\leq T)\) defined in (9). The same holds for the process \(V_{t}^{N}\), see Lemma 11.2.1, Theorem 10.2.2 and Theorem 11.2.3 in [27]. Now for any \(\epsilon>0\) there exists a level \(K_{\epsilon}\) with \[\mathbb{P}(\sup_{0\leq t\leq T}\left\|X_{t}\right\|\geq K_{\epsilon})\leq\epsilon.\] This shows that the upper bound \(\mathbb{P}\left(\sup_{1\leq k\leq M(N)}\left\|V_{t_{k}^{N}}^{N}\right\|>a_{N}\right)\) converges to \(0\) because of \(a_{N}\to\infty\) for \(N\to\infty\), which concludes the proof of the proposition.
\(\Box\)

### Proof of Lemma 2.9

We will show \[|(H_{N}-K_{N})(t_{i}^{N},t_{j}^{N},z,y)| \leq C|z-\theta_{t_{i}^{N},t_{j}^{N}}^{N}(y)|\;|\nabla_{z}(\tilde{p} _{N}-\tilde{q}_{N})(t_{i}^{N},t_{j}^{N},z,y)|, \tag{38}\] \[|H_{N}(t_{i}^{N},t_{j}^{N},z,y)| \leq C\mathcal{Q}_{M-d-1}(t_{j}^{N}-t_{i}^{N},z-\theta_{t_{i}^{N},t_ {j}^{N}}^{N}(y)),\] (39) \[|(\tilde{p}_{N}\otimes_{N}K_{N}^{(r)})(t_{i}^{N},t_{j}^{N},z,y)| \leq \frac{(C(t_{j}^{N}-t_{i}^{N}))^{r}}{r!}\mathcal{Q}_{M-d-1}(t_{j}^ {N}-t_{i}^{N},z-\theta_{t_{i}^{N},t_{j}^{N}}^{N}(y)), \tag{40}\] and with some constant \(\bar{c}\) and with \(m=M-d-5-\gamma\) \[\int_{\mathbb{R}^{d}}\mathcal{Q}_{m}(t_{k}^{N}-t_{i}^{N},z-\theta _{t_{k}^{N},t_{i}^{N}}^{N}(x))\mathcal{Q}_{m}(t_{j}^{N}-t_{k}^{N},y-\theta_{t_ {j}^{N},t_{k}^{N}}^{N}(z))\mathrm{d}z \tag{41}\] \[\qquad\leq\bar{c}\mathcal{Q}_{m}(t_{j}^{N}-t_{i}^{N},y-\theta_{t_ {j}^{N},t_{i}^{N}}^{N}(x))\] for all \(1\leq i<k\leq j\) and \(x,y\in\mathbb{R}^{d}\). In the proof of the lemma we will make repeated use of Lemma 3.2. We will use it to bound the right hand side of (38) and in the proof of (40). At both places we replace the Gaussian densities \(\tilde{q}_{N}\) by the densities \(\tilde{p}_{N}\) of normed sums of independent random variables. By application of the lemma, with the help of (37), we get from (38) that \[|(H_{N}-K_{N})(t_{i}^{N},t_{j}^{N},z,y)| \leq C\left(\frac{\sqrt{\gamma_{1}^{N}}}{\sqrt{t_{j}^{N}-t_{i}^{N}}}+ (t_{j}^{N}-t_{i}^{N})a_{N}\sqrt{\gamma_{1}^{N}}\right)\] \[\times\mathcal{Q}_{M-d-2}(t_{j}^{N}-t_{i}^{N},\theta_{t_{i}^{N}, t_{j}^{N}}^{N}(y)-z). \tag{42}\] We now show that (39)-(42) imply the statement of the lemma.
For a proof of this claim we write \[\tilde{q}_{N}\otimes_{N}H_{N}^{(r+1)}(t_{i}^{N},t_{j}^{N},x,y)- \tilde{p}_{N}\otimes_{N}K_{N}^{(r+1)}(t_{i}^{N},t_{j}^{N},x,y)=I+II, \tag{43}\] where \[I = (\tilde{q}_{N}\otimes_{N}H_{N}^{(r)}-\tilde{p}_{N}\otimes_{N}K_{N }^{(r)})\otimes_{N}H_{N}(t_{i}^{N},t_{j}^{N},x,y),\] \[II = \tilde{p}_{N}\otimes_{N}K_{N}^{(r)}\otimes_{N}(H_{N}-K_{N})(t_{i }^{N},t_{j}^{N},x,y).\] For a discussion of the second term II note that we get directly from (40)-(42) that \[II \leq \sum_{k=i}^{j-1}\gamma_{k+1}^{N}\int_{\mathbb{R}^{d}}|\tilde{p}_ {N}\otimes_{N}K_{N}^{(r)}|(t_{i}^{N},t_{k}^{N},x,z)|H_{N}-K_{N}|(t_{k}^{N},t_{ j}^{N},z,y)\mathrm{d}z\] \[\leq a_{N}\sqrt{\gamma_{1}^{N}}\frac{C^{r+1}}{r!}\sum_{k=i}^{j-1} \frac{(t_{k}^{N}-t_{i}^{N})^{r}}{\sqrt{t_{j}^{N}-t_{k}^{N}}}\gamma_{k+1}^{N} \int_{\mathbb{R}^{d}}\mathcal{Q}_{M-d-1}(t_{k}^{N}-t_{i}^{N},x-\theta_{t_{i}^ {N},t_{k}^{N}}^{N}(z))\] \[\times\mathcal{Q}_{M-d-1}(t_{j}^{N}-t_{k}^{N},\theta_{t_{k}^{N}, t_{j}^{N}}^{N}(y)-z)\mathrm{d}z\] \[\leq a_{N}\sqrt{\gamma_{1}^{N}}\frac{C^{r+1}}{r!}\sum_{k=i}^{j-1} \frac{(t_{k}^{N}-t_{i}^{N})^{r}}{\sqrt{t_{j}^{N}-t_{k}^{N}}}\gamma_{k+1}^{N} \mathcal{Q}_{M-d-1}(t_{j}^{N}-t_{i}^{N},y-\theta_{t_{j}^{N},t_{i}^{N}}^{N}(x))\] \[\leq a_{N}\sqrt{\gamma_{1}^{N}}\frac{C^{r+1}}{r!}(t_{j}^{N}-t_{i}^{N} )^{r+\frac{1}{2}}B\left(\frac{1}{2},r+1\right)\mathcal{Q}_{M-d-1}(t_{j}^{N}-t_ {i}^{N},y-\theta_{t_{j}^{N},t_{i}^{N}}^{N}(x))\] \[\leq a_{N}\sqrt{\gamma_{1}^{N}}\frac{C^{r+2}}{\Gamma\left(r+\frac{3}{ 2}\right)}(t_{j}^{N}-t_{i}^{N})^{r+\frac{1}{2}}\mathcal{Q}_{M-d-1}(t_{j}^{N}-t _{i}^{N},y-\theta_{t_{j}^{N},t_{i}^{N}}^{N}(x)).\] We can apply this inequality to show that \[\sum_{r=0}^{\infty}\left|\tilde{p}_{N}\otimes_{N}K_{N}^{(r)} \otimes_{N}(H_{N}-K_{N})(t_{i}^{N},t_{j}^{N},x,y)\right| \tag{44}\] \[\leq a_{N}(t_{j}^{N}-t_{i}^{N})^{1/2}\sqrt{\gamma_{1}^{N}} \mathcal{Q}_{M-d-1}(t_{j}^{N}-t_{i}^{N},y-\theta_{t_{j}^{N},t_{i}^{N}}^{N}(x)).\] Iterative application of (43)
gives:
\[\left|\sum_{r=0}^{N}\tilde{q}_{N}\otimes_{N}H_{N}^{(r)}(t_{i}^{N},t_{j}^{N},x,y)-\tilde{p}_{N}\otimes_{N}K_{N}^{(r)}(t_{i}^{N},t_{j}^{N},x,y)\right|\]
\[\qquad=\left|\sum_{r=0}^{N}\tilde{q}_{N}\otimes_{N}(H_{N}^{(r)}-H_{N}^{(r),\otimes_{N}})(t_{i}^{N},t_{j}^{N},x,y)\right.\]
\[\qquad\qquad+\sum_{r=0}^{N}(\tilde{q}_{N}-\tilde{p}_{N})\otimes_{N}H_{N}^{(r),\otimes_{N}}(t_{i}^{N},t_{j}^{N},x,y)\]
\[\qquad\qquad+\sum_{r=0}^{N}\sum_{k=0}^{r-1}\tilde{p}_{N}\otimes_{N}K_{N}^{(k)}\otimes_{N}(H_{N}-K_{N})\otimes_{N}H_{N}^{(r-1-k),\otimes_{N}}(t_{i}^{N},t_{j}^{N},x,y)\Bigg{|}\]
\[\leq|\tilde{q}_{N}|\otimes_{N}\sum_{r=0}^{N}\left|H_{N}^{(r)}-H_{N}^{(r),\otimes_{N}}\right|(t_{i}^{N},t_{j}^{N},x,y)\]
\[\qquad\qquad+|\tilde{q}_{N}-\tilde{p}_{N}|\otimes_{N}\sum_{r=0}^{N}\left|H_{N}^{(r),\otimes_{N}}\right|(t_{i}^{N},t_{j}^{N},x,y)\]
\[\qquad\qquad+\sum_{k\geq 0}\left|\tilde{p}_{N}\otimes_{N}K_{N}^{(k)}\otimes_{N}(H_{N}-K_{N})\right|\otimes_{N}\sum_{r=0}^{N}\left|H_{N}^{(r),\otimes_{N}}\right|(t_{i}^{N},t_{j}^{N},x,y).\]
The first term of this upper bound can be bounded as follows:
\[|\tilde{q}_{N}|\otimes_{N}\sum_{r=0}^{N}\left|H_{N}^{(r)}-H_{N}^{(r),\otimes_{N}}\right|(t_{i}^{N},t_{j}^{N},x,y)\leq C\ln^{2}(1/\gamma_{1}^{N})\sqrt{\gamma_{1}^{N}}\sqrt{t_{j}^{N}-t_{i}^{N}}\bar{q}_{N}(t_{i}^{N},t_{j}^{N},x,y).\tag{45}\]
This inequality can be shown by arguments similar to those used for (56) in the proof of Lemma 2.7. The statement of the lemma now follows by application of (26), (33), (44), (45) and (61). It remains to show (38)-(41).
For a proof of (38) note that \[(H_{N}-K_{N})(t_{i}^{N},t_{j}^{N},z,y)=(L_{t_{i}^{N}}^{N}-\tilde{ L}_{t_{i}^{N}}^{N})(\tilde{q}_{N}-\tilde{p}_{N})(t_{i}^{N},t_{j}^{N},z,y)\] \[\qquad=\left(\sum_{i=1}^{d}\left(\sum_{j=1}^{d}[F_{N}(t_{i}^{N},z )]_{i,j}[\chi_{N}(z)]_{j}\right)-\sum_{i=1}^{d}\left(\sum_{j=1}^{d}[F_{N}(t_{i }^{N},\theta_{t_{i}^{N},t_{j}^{N}}^{N}(y))]_{i,j}[\chi_{N}(\theta_{t_{i}^{N}, t_{j}^{N}}^{N}(y))]_{j}\right)\right)\] \[\qquad\qquad\times\frac{\partial}{\partial z_{i}}(\tilde{q}_{N}- \tilde{p}_{N})(t_{i}^{N},t_{j}^{N},z,y).\] With the help of (25) this shows (38). Claim (39) follows directly by application of (34). We now come to the proof of (41). For this purpose we show for \(0\leq t<s<u\leq T\), \(x,y\in\mathbb{R}^{d}\) that with some constant \(C>0\) \[I(s,t,u,x,y)\leq C\mathcal{Q}_{m}(u-t,y-\theta_{u,t}^{N}(x)),\] where \[I(s,t,u,x,y)=\int_{\mathbb{R}^{d}}\mathcal{Q}_{m}(s-t,z-\theta_{s,t}^{N}(x)) \mathcal{Q}_{m}(u-s,y-\theta_{u,s}^{N}(z))\mathrm{d}z.\] This is equivalent to (41). For the proof of this claim note first that we get from (30) that \[I(s,t,u,x,y)\leq C\int_{\mathbb{R}^{d}}\mathcal{Q}_{m}(s-t,z-\theta_{s,t}^{N}(x)) \mathcal{Q}_{m}(u-s,z-\theta_{s,u}^{N}(y))\mathrm{d}z.\] We now consider two cases: I. \(\|y-\theta_{u,t}^{N}(x)\|\leq\sqrt{u-t}\) and II. \(\|y-\theta_{u,t}^{N}(x)\|>\sqrt{u-t}\). We start by considering case I. We make the additional assumption that \(s-t\geq\frac{1}{2}(u-t)\). The case \(s-t\leq\frac{1}{2}(u-t)\) can be treated with the same type of arguments and for this reason its discussion is omitted. 
Under this assumption we get:
\[\mathcal{Q}_{m}(s-t,z-\theta_{s,t}^{N}(x))=\frac{1}{(s-t)^{d/2}}Q_{m}\left(\frac{z-\theta_{s,t}^{N}(x)}{\sqrt{s-t}}\right)\]
\[\qquad\leq\frac{2^{d/2}}{(u-t)^{d/2}}c_{m}\leq\frac{2^{d/2}}{(u-t)^{d/2}}c_{m}\frac{2^{m}}{\left(1+\frac{\|y-\theta_{u,t}^{N}(x)\|}{\sqrt{u-t}}\right)^{m}}\]
\[\qquad=2^{m+d/2}\mathcal{Q}_{m}(u-t,y-\theta_{u,t}^{N}(x)).\]
This gives in case I the following bound for \(I(s,t,u,x,y)\):
\[I(s,t,u,x,y)\leq C\mathcal{Q}_{m}(u-t,y-\theta_{u,t}^{N}(x))\int_{\mathbb{R}^{d}}\mathcal{Q}_{m}(u-s,z-\theta_{s,u}^{N}(y))\mathrm{d}z\leq C\mathcal{Q}_{m}(u-t,y-\theta_{u,t}^{N}(x)),\]
which shows (41) for case I. We now consider case II: \(\|y-\theta_{u,t}^{N}(x)\|>\sqrt{u-t}\). We define the following two sets:
\[A_{1} = \{z\in\mathbb{R}^{d}:\|z-\theta_{s,t}^{N}(x)\|\geq\frac{1}{2}\|\theta_{s,u}^{N}(y)-\theta_{s,t}^{N}(x)\|\},\]
\[A_{2} = \{z\in\mathbb{R}^{d}:\|z-\theta_{s,u}^{N}(y)\|\geq\frac{1}{2}\|\theta_{s,u}^{N}(y)-\theta_{s,t}^{N}(x)\|\}.\]
We have \(A_{1}\cup A_{2}=\mathbb{R}^{d}\). We only consider values of \(z\) in \(A_{2}\). For such \(z\) we get by application of (30)
\[\|z-\theta_{s,u}^{N}(y)\|\geq\frac{1}{2}\|\theta_{s,u}^{N}(y)-\theta_{s,t}^{N}(x)\|=\frac{1}{2}\|\theta_{s,u}^{N}(y)-\theta_{s,u}^{N}(\theta_{u,t}^{N}(x))\|\geq C\|y-\theta_{u,t}^{N}(x)\|.\]
This gives in case II
\[\int_{A_{2}}\mathcal{Q}_{m}(s-t,z-\theta_{s,t}^{N}(x))\mathcal{Q}_{m}(u-s,z-\theta_{s,u}^{N}(y))\mathrm{d}z\]
\[\qquad\leq\int_{A_{2}}\mathcal{Q}_{m}(s-t,z-\theta_{s,t}^{N}(x))\frac{c_{m}(u-s)^{(m-d)/2}}{\|\theta_{s,u}^{N}(y)-z\|^{m}}\mathrm{d}z\]
\[\qquad\leq\int_{A_{2}}\mathcal{Q}_{m}(s-t,z-\theta_{s,t}^{N}(x))\mathrm{d}z\frac{c_{m}(u-s)^{(m-d)/2}}{C^{m}\|y-\theta_{u,t}^{N}(x)\|^{m}}\]
\[\qquad\leq c_{m}(u-t)^{-d/2}C^{-m}2^{m}\left(1+\|y-\theta_{u,t}^{N}(x)\|\right)^{-m}\]
\[\qquad\leq\mathcal{Q}_{m}(u-t,y-\theta_{u,t}^{N}(x)).\]
The same bound can be shown for integrals over the set \(A_{1}\).
This completes the proof of (41) for case II. It remains to show (40). For a proof of this claim note that \[K_{N}(t_{i}^{N},t_{j}^{N},z,y)=(L_{t_{i}^{N}}^{N}-\tilde{L}_{t_{i}^{ N}}^{N})\tilde{p}_{N}(t_{i}^{N},t_{j}^{N},z,y) \tag{46}\] \[\qquad=\left(\sum_{i=1}^{d}\left(\sum_{j=1}^{d}[F_{N}(t_{i}^{N},z )]_{i,j}[\chi_{N}(z)]_{j}\right)-\sum_{i=1}^{d}\left(\sum_{j=1}^{d}[F_{N}(t_{i} ^{N},\theta_{t_{i}^{N},t_{j}^{N}}^{N}(y))]_{i,j}[\chi_{N}(\theta_{t_{i}^{N},t_{ j}^{N}}^{N}(y))]_{j}\right)\right)\] \[\qquad\qquad\times\left(\frac{\partial}{\partial z_{i}}(\tilde{p }_{N}-\tilde{q}_{N})+\frac{\partial}{\partial z_{i}}\tilde{q}_{N}\right)(t_{i} ^{N},t_{j}^{N},z,y)\] \[\qquad\leq C\left(\frac{\sqrt{\gamma_{1}^{N}}}{\sqrt{t_{j}^{N}-t_ {i}^{N}}}+(t_{j}^{N}-t_{i}^{N})a_{N}\sqrt{\gamma_{1}^{N}}\right)\mathcal{Q}_{ M-d-2}(t_{j}^{N}-t_{i}^{N},\theta_{t_{i}^{N},t_{j}^{N}}^{N}(y)-z)\] \[\qquad\qquad+C\tilde{q}_{N}(t_{i}^{N},t_{j}^{N},z,y)\] \[\qquad\leq C\mathcal{Q}_{M-d-2}(t_{j}^{N}-t_{i}^{N},\theta_{t_{i }^{N},t_{j}^{N}}^{N}(y)-z),\] where again Lemma 3.2 has been used. 
With the help of (41) this gives that \[\left|(\tilde{p}_{N}\otimes_{N}K_{N})(t_{i}^{N},t_{j}^{N},x,z) \right|\leq\sum_{k=i}^{j-1}\gamma_{k+1}^{N}\int_{\mathbb{R}^{d}}|\tilde{p}_{N }(t_{i}^{N},t_{k}^{N},x,v)||K_{N}(t_{k}^{N},t_{j}^{N},v,z)|\mathrm{d}v\] \[\qquad\leq C\sum_{k=i}^{j-1}\gamma_{k+1}^{N}\int_{\mathbb{R}^{d}} \mathcal{Q}_{M-d-2}(t_{k}^{N}-t_{i}^{N},v-\theta_{t_{k}^{N},t_{i}^{N}}^{N}(x) )\mathcal{Q}_{M-d-2}(t_{j}^{N}-t_{k}^{N},\theta_{t_{k}^{N},t_{j}^{N}}^{N}(z)-v )\mathrm{d}v\] \[\qquad\leq C\mathcal{Q}_{M-d-2}(t_{j}^{N}-t_{i}^{N},z-\theta_{t_{ j}^{N},t_{i}^{N}}^{N}(x))\sum_{k=i}^{j-1}\gamma_{k+1}^{N}\] \[\qquad\leq C\mathcal{Q}_{M-d-2}(t_{j}^{N}-t_{i}^{N},z-\theta_{t_{ j}^{N},t_{i}^{N}}^{N}(x))(t_{j}^{N}-t_{i}^{N}).\] Similarly we get that \[\left|(\tilde{p}_{N}\otimes_{N}K_{N}^{(2)})(t_{i}^{N},t_{j}^{N},x,z)\right|\leq\sum_{k=i}^{j-1}\gamma_{k+1}^{N}\int_{\mathbb{R}^{d}}|(\tilde{p }_{N}\otimes_{N}K_{N})(t_{i}^{N},t_{k}^{N},x,v)||K_{N}(t_{k}^{N},t_{j}^{N},v,z) |\mathrm{d}v\] \[\qquad\leq C^{2}\sum_{k=i}^{j-1}\gamma_{k+1}^{N}(t_{k}^{N}-t_{i}^ {N})\int_{\mathbb{R}^{d}}\mathcal{Q}_{M-d-2}(t_{k}^{N}-t_{i}^{N},v-\theta_{t_{ k}^{N},t_{i}^{N}}^{N}(x))\] \[\qquad\qquad\times\mathcal{Q}_{M-d-2}(t_{j}^{N}-t_{k}^{N},\theta_{ t_{k}^{N},t_{j}^{N}}^{N}(z)-v)\mathrm{d}v\] \[\qquad\leq\frac{(C(t_{j}^{N}-t_{i}^{N}))^{2}}{2!}\mathcal{Q}_{M-d -2}(t_{j}^{N}-t_{i}^{N},z-\theta_{t_{j}^{N},t_{i}^{N}}^{N}(x)),\] where \(C\) is the same constant as in the last inequality. 
By induction we conclude that
\[\left|(\tilde{p}_{N}\otimes_{N}K_{N}^{(r+1)})(t_{i}^{N},t_{j}^{N},x,z)\right|\leq\sum_{k=i}^{j-1}\gamma_{k+1}^{N}\int_{\mathbb{R}^{d}}|(\tilde{p}_{N}\otimes_{N}K_{N}^{(r)})(t_{i}^{N},t_{k}^{N},x,v)||K_{N}(t_{k}^{N},t_{j}^{N},v,z)|\mathrm{d}v\]
\[\qquad\leq\frac{C^{r+1}}{r!}\sum_{k=i}^{j-1}\gamma_{k+1}^{N}(t_{k}^{N}-t_{i}^{N})^{r}\int_{\mathbb{R}^{d}}\mathcal{Q}_{M-d-2}(t_{k}^{N}-t_{i}^{N},v-\theta_{t_{k}^{N},t_{i}^{N}}^{N}(x))\mathcal{Q}_{M-d-2}(t_{j}^{N}-t_{k}^{N},\theta_{t_{k}^{N},t_{j}^{N}}^{N}(z)-v)\mathrm{d}v\]
\[\qquad\leq\frac{C^{r+1}}{r!}\int_{0}^{t_{j}^{N}-t_{i}^{N}}u^{r}\mathrm{d}u\,\mathcal{Q}_{M-d-2}(t_{j}^{N}-t_{i}^{N},z-\theta_{t_{j}^{N},t_{i}^{N}}^{N}(x))\]
\[\qquad\leq\frac{(C(t_{j}^{N}-t_{i}^{N}))^{r+1}}{(r+1)!}\mathcal{Q}_{M-d-2}(t_{j}^{N}-t_{i}^{N},z-\theta_{t_{j}^{N},t_{i}^{N}}^{N}(x)),\]
which shows (40) and concludes the proof of Lemma 2.9.

### Proof of Proposition 2.11

Using a telescoping sum, and setting \(x_{0}=x\), \(x_{m_{N}}=y\), \(\prod_{k=1}^{0}\ldots=1\), and \(\prod_{k=l+1}^{l}\ldots=1\), we have
\[\int\left|\frac{\mathrm{d}Q_{N,x}^{m_{N}}}{\mathrm{d}\nu}-\frac{\mathrm{d}P_{N,x}^{m_{N}}}{\mathrm{d}\nu}\right|\mathrm{d}\nu \tag{47}\]
\[=\int_{\mathbb{R}^{d(m_{N}-1)}}\left|\prod_{i=1}^{m_{N}}q_{N}(\tau_{i-1}^{N},\tau_{i}^{N},x_{i-1},x_{i})-\prod_{i=1}^{m_{N}}p_{N}(\tau_{i-1}^{N},\tau_{i}^{N},x_{i-1},x_{i})\right|\mathrm{d}x_{1}\ldots\mathrm{d}x_{m_{N}}\]
\[\leq\sum_{i=1}^{m_{N}}\int_{\mathbb{R}^{d(m_{N}-1)}}\left|q_{N}(\tau_{i-1}^{N},\tau_{i}^{N},x_{i-1},x_{i})-p_{N}(\tau_{i-1}^{N},\tau_{i}^{N},x_{i-1},x_{i})\right|\]
\[\qquad\times\prod_{k=1}^{i-1}q_{N}(\tau_{k-1}^{N},\tau_{k}^{N},x_{k-1},x_{k})\prod_{l=i+1}^{m_{N}}p_{N}(\tau_{l-1}^{N},\tau_{l}^{N},x_{l-1},x_{l})\mathrm{d}x_{1}\ldots\mathrm{d}x_{m_{N}}\]
\[\leq\sum_{i=1}^{m_{N}}\int_{\mathbb{R}^{d(m_{N}-1)}}\left|q_{N}(\tau_{i-1}^{N},\tau_{i}^{N},x_{i-1},x_{i})-p_{N}(\tau_{i-1}^{N},\tau_{i}^{N},x_{i-1},x_{i})\right|
\qquad\times q_{N}(0,\tau_{i-1}^{N},x,x_{i-1})p_{N}(\tau_{i}^{N},T,x_{i},y)\mathrm{d}x_{i-1}\ \mathrm{d}x_{i}\ \mathrm{d}y\]
\[=\sum_{i=1}^{m_{N}}\int_{\mathbb{R}^{d(m_{N}-1)}}\left|q_{N}(\tau_{i-1}^{N},\tau_{i}^{N},x_{i-1},x_{i})-p_{N}(\tau_{i-1}^{N},\tau_{i}^{N},x_{i-1},x_{i})\right|\] (48)
\[\qquad\times q_{N}(0,\tau_{i-1}^{N},x,x_{i-1})\mathrm{d}x_{i-1}\ \mathrm{d}x_{i}.\]
Now by Lemma 3.5 we have that
\[q_{N}(0,\tau_{i-1}^{N},x,x_{i-1}) \leq C(\tau_{i-1}^{N})^{-d/2}\exp(-C(x_{i-1}-\theta_{\tau_{i-1}^{N},0}^{N}(x))^{2}/\tau_{i-1}^{N})\leq C\mathcal{Q}_{M-d-6}(\tau_{i-1}^{N},x_{i-1}-\theta_{\tau_{i-1}^{N},0}^{N}(x)).\]
Furthermore, we have by Theorem 2.2 and (13)
\[\left|q_{N}(\tau_{i-1}^{N},\tau_{i}^{N},x_{i-1},x_{i})-p_{N}(\tau_{i-1}^{N},\tau_{i}^{N},x_{i-1},x_{i})\right|\]
\[\qquad\leq C\sqrt{\gamma_{1}^{N}}\ln^{2}(1/\gamma_{1}^{N})\mathcal{Q}_{M-d-6}(\tau_{i}^{N}-\tau_{i-1}^{N},x_{i}-\theta_{\tau_{i}^{N},\tau_{i-1}^{N}}^{N}(x_{i-1})).\]
Now the statement of the proposition follows by application of (41).
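The first inequality in (47) rests on the standard telescoping identity for products; with \(a_{i}=q_{N}(\tau_{i-1}^{N},\tau_{i}^{N},x_{i-1},x_{i})\) and \(b_{i}\) the corresponding \(p_{N}\)-factors, it reads:

```latex
\prod_{i=1}^{m_N} a_i-\prod_{i=1}^{m_N} b_i
  =\sum_{i=1}^{m_N}\Big(\prod_{k=1}^{i-1} a_k\Big)\,(a_i-b_i)\,\Big(\prod_{l=i+1}^{m_N} b_l\Big).
```

Taking absolute values and integrating out all variables other than \(x_{i-1}\) and \(x_{i}\) collapses the two products, by the Chapman-Kolmogorov relations for \(q_{N}\) and \(p_{N}\), to the factors \(q_{N}(0,\tau_{i-1}^{N},x,x_{i-1})\) and \(p_{N}(\tau_{i}^{N},T,x_{i},y)\) appearing above, and the latter integrates out to one in \(y\), which yields (48).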
2307.07887
Handwritten and Printed Text Segmentation: A Signature Case Study
While analyzing scanned documents, handwritten text can overlap with printed text. This overlap causes difficulties during the optical character recognition (OCR) and digitization process of documents, and subsequently, hurts downstream NLP tasks. Prior research either focuses solely on the binary classification of handwritten text or performs a three-class segmentation of the document, i.e., recognition of handwritten, printed, and background pixels. This approach results in the assignment of overlapping handwritten and printed pixels to only one of the classes, and thus, they are not accounted for in the other class. Thus, in this research, we develop novel approaches to address the challenges of handwritten and printed text segmentation. Our objective is to recover text from different classes in their entirety, especially enhancing the segmentation performance on overlapping sections. To support this task, we introduce a new dataset, SignaTR6K, collected from real legal documents, as well as a new model architecture for the handwritten and printed text segmentation task. Our best configuration outperforms prior work on two different datasets by 17.9% and 7.3% on IoU scores. The SignaTR6K dataset is accessible for download via the following link: https://forms.office.com/r/2a5RDg7cAY.
Sina Gholamian, Ali Vahdat
2023-07-15T21:49:22Z
http://arxiv.org/abs/2307.07887v3
# Handwritten and Printed Text Segmentation: A Signature Case Study

###### Abstract

While analyzing scanned documents, handwritten text can overlap with printed text. This overlap causes difficulties during the optical character recognition (OCR) and digitization process of documents, and subsequently, hurts downstream NLP tasks. Prior research either focuses solely on the binary classification of handwritten text or performs a three-class segmentation of the document, i.e., recognition of handwritten, printed, and background pixels. This approach results in the assignment of overlapping handwritten and printed pixels to only one of the classes, and thus, they are not accounted for in the other class. Thus, in this research, we develop novel approaches to address the challenges of handwritten and printed text segmentation. Our objective is to recover text from different classes in their entirety, especially enhancing the segmentation performance on overlapping sections. To support this task, we introduce a new dataset, SignaTR6K, collected from real legal documents, as well as a new model architecture for the handwritten and printed text segmentation task. Our best configuration outperforms prior work on two different datasets by 17.9% and 7.3% on IoU scores. The SignaTR6K dataset is accessible for download via the following link: [https://forms.office.com/r/2a5RDg7CAY](https://forms.office.com/r/2a5RDg7CAY).

## 1 Introduction

For various purposes, the digitization of hard-copy documents and associated challenges are an active area of research in both academia [33, 19, 35, 18, 11, 28] and industry [26]. This digital transformation involves scanning paper documents through an OCR process, making their text accessible for downstream natural language processing (NLP) tasks, such as named entity recognition (NER).
The documents of interest can originate from a variety of domains, including historical documents [35], legal and court-issued documents [26], business contracts [33], and medical records and prescriptions [10]. Although various studies, as cited above, have been conducted, there is yet a considerable gap between current approaches and human-level performance in mixed-text scenarios (i.e., when handwritten and printed text overlap). For instance, attorneys frequently sign legal documents, resulting in their signatures overlapping with their information. This overlap hampers the performance of OCR tools in character recognition, subsequently making it challenging for downstream tasks to accurately identify information linked to the attorneys and their associated law firms. Figure 1 provides an illustration of handwritten text (HT) overlapping with printed text (PT) in court documents.

Extracting parties' names is a crucial step in the named-entity recognition (NER) task for legal and court documents [32]. When attorneys and other involved parties sign these documents, which are later scanned using OCR tools, their signatures often obscure the details of names and law firms. Consequently, the semantic segmentation of handwritten elements, such as lawyers' signatures and accompanying handwritten notes, and printed text detailing the lawyer and the law firm's information, becomes vital. In this research, we aim to address the challenges of HT and PT segmentation, as there is still a large gap between human-level performance and the existing approaches for this task.

Figure 1: Court documents are first printed and then signed or annotated by various parties. This results in handwritten text overlapping the underlying printed information, leading to performance degradation in downstream tasks, such as named entity recognition (NER). This is a fabricated example, though very similar to the original documents, to protect personally identifiable information (PII).
Our focus in this effort is to improve the segmentation performance in overlapping regions for scanned legal documents, and to aid in this endeavor, we also introduce a new dataset. In summary, our research makes the following contributions:

* We introduce a new dataset, SignaTR6K (pronounced as Signature 6K)1, derived from 200 pixel-level manually annotated crops of images from genuine legal documents. The dataset comprises signatures, handwritten text, and printed text, which frequently overlap. With data augmentation, we have created a dataset of sizes 5169, 530, and 558 for training, validation, and testing, respectively, that we release to the public, to facilitate dataset availability for future research and to aid in the training and evaluation of deep-learning segmentation models. Footnote 1: Available at [https://forms.office.com/r/2a5RDg7cAY](https://forms.office.com/r/2a5RDg7cAY).
* We propose a novel architecture that integrates both _semantic segmentation features_ and _fine features_, enhancing the performance of text segmentation over previous methods. Moreover, we introduce a new loss function, termed _Fusion loss_, that, comparatively, is stable and converges to optimal loss values and Intersection over Union (IoU) scores.
* Lastly, we conduct an extensive quantitative and visual evaluation of different variations of our approach against prior work on two distinct datasets, and illustrate our approach's superior performance in the text segmentation task, especially in challenging scenarios where printed and handwritten text overlap.

## 2 Background and Literature Review

The text segmentation problem is defined as follows: given a scanned document possibly containing handwritten, printed, and background (i.e., blank) pixels, the task is to assign each pixel to its appropriate class. Scanned documents can originate from various sources, such as hard-copy paper documents or microfilms [28, 35].
Formally, for a given document \(D\), assume there exist three classes, handwritten text \(HT\), printed text \(PT\), and background \(BG\); then for each pixel \(p_{i}\): \(\forall p_{i}\in D:p_{i}(c)=\mathit{True}\ \text{if}\ p_{i}\in c\ \text{and}\ c\in\{HT,PT,BG\}\) (1). A situation may arise where a pixel belongs to two classes, i.e., both the \(HT\) and \(PT\) classes, when handwritten text overlaps with printed text; a single-label three-class formulation cannot handle such cases. Several previous studies have examined the segmentation of handwritten and printed text [11, 18, 28, 35]; however, they have inherent limitations. Some focus solely on binary classification, determining if a pixel is handwritten or not [18], whereas others adopt a three-class formulation of the problem, classifying pixels as handwritten, printed, or background [11, 28, 35]. This exclusive assignment of pixels to three different classes prevents any machine learning segmentation model from properly detecting that pixels in the overlapping areas belong to both the handwritten and printed classes. In our use case depicted in Figure 1, the OCR process achieves insufficient performance due to the overlapping of handwritten text and signatures that overlay the printed text. As such, to improve the quality of document digitization, and subsequently a wide range of downstream document understanding and natural language processing (NLP) tasks, it is vital to devise image processing approaches that can understand and properly segment the different layers of text, i.e., printed and handwritten text.

**Models.** Prior literature has applied various approaches for the identification and separation of handwritten and printed text. Early approaches [12, 20] formulated the problem as a binary classification task using KNN and SVM and focusing on connected components (CCs) (i.e., groups of pixels). More recently, Li et al.
[22] employed conditional random fields (CRFs), formulating both unary and pairwise potentials for adjacent connected components with a convolutional neural network (CNN) architecture, for the separation of CCs. The limitation of CC-based approaches is that they determine the class membership for the entire component rather than at the pixel level. Consequently, pixel-level segmentation methods were introduced, leveraging Markov random fields (MRFs) and MLPs [27, 30] for pixel-level classification in PT and HT segmentation. Following the success of encoder-decoder architectures in object segmentation [29], more recent works [18, 11, 28, 35] have predominantly adopted a U-Net based architecture [29], which comprises an encoder-decoder network, for HT and PT segmentation. In addition to our contribution of releasing a manually annotated dataset of legal documents with overlapping text, SignaTR6K, our methodology distinguishes itself from prior works in several significant ways. Firstly, we approach the segmentation problem with a four-class formulation, allowing overlapping pixels to be assigned to a new distinct class (OV), which signifies the presence of both HT and PT layers, leading to enhanced segmentation performance. Secondly, we introduce a novel architecture, the Mixed Feature Model (MFM), that combines a Fine Feature Path (FFP) with a Semantic Segmentation Path (SSP) and improves the performance by capitalizing on both high-level and low-level features. Notably, existing U-Net style architectures [18, 11, 28, 35] are limited to leveraging only the SSP path. Further, we present a new loss function, termed Fusion Loss, that converges faster and is more stable compared to prior losses for the HT and PT segmentation task. In addition, we introduce a post-processing heuristic based on Conditional Random Fields (CRFH) to carry out relabeling, resulting in further enhancement of text segmentation performance.
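As a concrete illustration of the four-class formulation, the color-coded ground truth described later (red = PT, green = HT, blue = BG, yellow = OV) can be mapped to and from per-pixel class indices. The sketch below is ours, not code from the paper, and the particular index assignment is an arbitrary choice:

```python
import numpy as np

# Color convention from the dataset figures: red = PT, green = HT,
# blue = BG, and yellow (red + green) = OV, the overlap class.
# The integer class indices below are our own illustrative choice.
BG, HT, PT, OV = 0, 1, 2, 3

def rgb_to_classes(gt):
    """Map an HxWx3 uint8 ground-truth image to HxW class indices."""
    r = gt[..., 0] > 127
    g = gt[..., 1] > 127
    classes = np.full(gt.shape[:2], BG, dtype=np.int64)
    classes[g & ~r] = HT
    classes[r & ~g] = PT
    classes[r & g] = OV          # yellow: both red and green channels on
    return classes

def classes_to_rgb(classes):
    """Inverse mapping; OV pixels light up both the HT and PT channels."""
    rgb = np.zeros(classes.shape + (3,), dtype=np.uint8)
    rgb[..., 0][np.isin(classes, [PT, OV])] = 255   # red channel
    rgb[..., 1][np.isin(classes, [HT, OV])] = 255   # green channel
    rgb[..., 2][classes == BG] = 255                # blue channel
    return rgb
```

Note that decoding an `OV` prediction activates both the red and green channels, which is exactly the post-processing step the four-class model needs to render overlap pixels in yellow.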
Additional details related to background and relevant works can be found in the supplementary.

## 3 SignaTR6K Dataset

Successfully training and testing a segmentation model requires access to high-quality labeled datasets. Due to the limited availability of public data that contains both handwritten and printed text, previous research has predominantly sought to synthetically generate such data. To achieve this, researchers have combined datasets that either exclusively contain printed or handwritten text, or those with non-overlapping text, such as IAM [25], RIMES [3], PRImA [9], CVL [21], Scanned Questionnaire [4], and WGM-SYN [35]. The scanned documents can originate from diverse domains, each possessing its unique fonts, characters, and quality. Whether the original documents have poor quality, as in the case of archival documents [35], or the scanning process results in lower resolution or loss of contrast, these factors compound the complexities of the text segmentation task. In addition, in some cases errors are introduced during the automated text labeling process [28]. Given these challenges, the absence of a precisely and manually annotated and verified dataset from real sources that contains various patterns of overlapping printed and handwritten text significantly impedes the supervised machine learning (ML) process. To address this gap, we introduce a new dataset, **SignaTR6K** (pronounced as Signature 6K), that is derived from 200 original legal documents from Thomson Reuters Legal Content Services [6]. The dataset features overlapping printed and handwritten text and hand-drawn signatures, as depicted in Figure 1. The documents originate from different organizations, including law firms and courts, each having its distinct fonts and document formats. Furthermore, the annotations are made by different individuals, ensuring a diverse range of printed and handwritten styles. Importantly, each document has been manually labeled and verified.
Figure 2 displays an original crop from SignaTR6K, which includes both a signature and the information of the signing party. Due to the presence of personally identifiable information (PIIs), we cropped the images to ensure no visible PIIs. Figure 2(a) illustrates the original image with both printed text and an overlaid handwritten signature. Figure 2(b) presents only the printed layer pixels, while Figure 2(c) contains only the manually annotated handwritten layer pixels. Any pixels absent from both the printed and handwritten channels are designated as background pixels. It is also evident that in overlapping scenarios, some pixels belong to both the handwritten and printed layers.

**Dataset Generation.** As hand-annotating a vast collection of real documents is time-consuming and expensive, in order to expand the size of the dataset and make it adequate for training a deep learning model, we turn to data synthesis and augmentation techniques [18]. The synthetic approaches include general augmentation of crops, along with shifting, magnifying, and rotating operations. We also overlay handwritten and printed pixels from different crops to generate new real-like examples of overlapping text. For this purpose, from the 200 distinct original document samples, we set aside 16 crops, ensuring mutually exclusive samples in the test set. These crops are augmented only for the test set, and we create the training and validation sets from the remaining 184 samples. Following this approach and after excluding generated samples with visible PIIs, we have curated a dataset with training, validation, and testing sets of sizes 5169, 530, and 558 samples, respectively. Each sample is a pair of a grayscale crop and its manually annotated ground truth. Each image is 256 by 256 pixels in size, with three channels (RGB), and typically contains several HT and PT overlaps.
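The overlay step of the augmentation described above can be sketched in a few lines: for grayscale crops with dark ink on a white background, a pixel-wise minimum superimposes the two layers, and pixels inked in both layers yield the overlap class. This is an illustrative sketch under our own assumptions (ink threshold, function names), not the paper's exact pipeline:

```python
import numpy as np

def overlay_layers(printed, handwritten):
    """Superimpose a printed-only and a handwritten-only grayscale crop
    (uint8, dark ink on a white background): the pixel-wise minimum keeps
    the darker ink wherever the two layers meet."""
    return np.minimum(printed, handwritten)

def overlap_mask(printed, handwritten, ink_threshold=200):
    """Pixels inked in both layers belong to the overlap (OV) class.
    The threshold separating ink from background is an assumed value."""
    return (printed < ink_threshold) & (handwritten < ink_threshold)
```

Pairing layers drawn from different source crops in this way yields new, real-like overlapping samples whose OV ground truth follows directly from the mask.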
Four examples from the SignaTR6K dataset are presented in Figure 3, with their grayscale crops and corresponding ground truth (GT) labels. HT is indicated in green, PT in red, and BG in blue. Overlapping HT and PT pixels combine the green and red channel values, resulting in a yellow appearance. We envision this dataset can be utilized for model training from scratch or for further fine-tuning of a pre-trained classification or segmentation model for specific tasks. The SignaTR6K dataset is freely available for public download.

## 4 Approach and Methodology

In this section, we detail our approach, the rationale behind the chosen architecture, the Semantic Segmentation Path (SSP) and Fine Feature Path (FFP), the various loss functions employed, and our novel _Fusion_ loss.

Figure 3: Examples from the SignaTR6K dataset, with the top row showing the crops and the bottom row their ground truth annotations. Red: class \(PT\), printed; Green: class \(HT\), handwritten; and Blue: class \(BG\), background.

Figure 2: This figure depicts a real crop from a legal document. (a) The original image containing printed text and an overlaid handwritten signature; (b) printed layer pixels annotated only; (c) handwritten layer pixels annotated only. Any pixels not present in the printed or handwritten layers are marked as background.

### Model Architecture

The prevalent architecture for object segmentation applies a U-Net design [29, 24]. This architecture leverages a Fully Convolutional Network (FCN), i.e., one without the fully connected layers that are typically present at the end of Convolutional Neural Networks (CNNs). The U-Net architecture consists of two main parts, an encoder and a decoder. In the encoder segment, the original image (in our context a document crop) is fed into the model. It then undergoes a series of convolutions and max-pooling layers, extracting features from the image and reducing its dimensions.
Conversely, the decoder processes the down-sampled image from the encoder using convolutions and up-sampling layers that eventually restore the image back to its original input size with the same number of channels. The final output of the decoder is a pixel-level labeling of the original image. The Semantic Segmentation Path (SSP) in Figure 4 shows this part of the fully convolutional network. In addition to the encoder-decoder architecture in U-Net, there are also skip-connections that bring the feature maps directly from an encoder stage to the corresponding decoder stage, i.e., the decoder stage that has the same image size as the encoder stage. These skip-connections improve segmentation performance by re-accessing early-stage features that may be lost in the encoder's final output due to down-sampling.

### Four-Class Formulation

As mentioned earlier, prior approaches have either considered binary classification of HT or a three-class formulation of the problem. Binary classification detects only one type of text, while the three-class formulation results in overlapping pixels being assigned solely to either the HT or PT class, impairing performance. Therefore, we propose a four-class formulation of the segmentation task, in which the fourth class, _overlap_ (\(OV\)), models pixels that belong to both the HT and PT classes, specifically enabling the detection of overlapping areas. Expanding Formula (1) to this four-class formulation, for a given document \(D\) with four classes, handwritten text \(HT\), printed text \(PT\), background \(BG\), and overlap \(OV\), each pixel \(p_{i}\) satisfies: \(\forall p_{i}\in D:p_{i}(c)=\mathit{True}\ \text{if}\ p_{i}\in c\ \text{and}\ c\in\{HT,PT,BG,OV\}\). Overlapping pixels are highlighted in yellow in the ground truth, as seen in Figure 3. Our four-class single-label classification employs a Softmax activation function in the final layer, which ensures that only one output for each pixel is activated (Figure 4).
Since the output image comprises three channels, when the \(OV\) class is predicted, during a post-processing step we turn on the pixels for both the \(HT\) and \(PT\) channels, resulting in the yellow color in the output image. In addition to the four-class formulation, we also explored three-class and multi-label formulations. In this scenario, instead of a Softmax activation, we applied Sigmoid activations for each class (i.e., three separate sigmoids). However, formulating the problem as a multi-label classification introduces added complexity and degrees of freedom. This can lead to undesirable scenarios, such as the simultaneous activation of pixels for \(HT\) and \(BG\). Consequently, the multi-label approach did not yield results comparable to those of the four-class formulation.

Figure 4: Our architecture proposal showing the FFP and SSP model paths. The outputs of FFP and SSP are concatenated prior to the final round of convolutions. A Softmax activation then selects one of the four classes for each pixel. The model's final output consists of pixel-level annotations of the input image. BN: BatchNorm.

### Semantic Segmentation Path (SSP)

The semantic segmentation path (SSP) of our model leverages a U-Net based architecture, with down-sampling in the encoder stages (i.e., the backbone) and up-sampling in the decoder stages. The U-Net architecture works well in capturing high-level image features. In this architecture, the encoder and decoder maintain a symmetrical architecture with a similar number of stages. For example, in the SSP, if the encoder goes through four down-sampling stages, i.e., the input image size changes from 256*256 to 16*16 (256\(\rightarrow\)128\(\rightarrow\)64\(\rightarrow\)32\(\rightarrow\)16), correspondingly, the decoder undergoes four up-sampling stages, from 16*16 to 256*256. Thus, the final output retains the original input image's pixel dimensions. For the SSP, we explored a variety of network sizes.
As a baseline and for comparison, we implemented FCN-light [11, 28, 35]. We then improved on the SSP architecture by using VGG16 [31], InceptionV3 [34], and ResNet34 [15] as backbones of the SSP and observed an improvement in performance. In particular, the ResNet34 and InceptionV3 backbones outperform the prior work (FCN) due to their larger number of learnable parameters. Additionally, the inclusion of residual connections and varied convolution sizes allows the network to better carry over the low-level features of text to the later stages of the segmentation network. This observation further inspired us to introduce the FFP network, which similarly incorporates residual connections and avoids down-sampling.

### Fine Feature Path (FFP)

The semantic segmentation path excels when segmenting distinct objects. In this path, down-sampling layers are rapidly applied to the input image to capture high-level features and patterns. However, this rapid processing can lead to the loss of fine, or low-level, features. For our application, low-level features are crucial due to the intertwined nature of printed and handwritten text. To address this, we introduce a path parallel to the SSP, termed the fine feature path (FFP), which avoids down-sampling and instead incorporates a convolution block with residual connections. Note that, while the FFP aids in capturing fine features without down-sampling, on its own, i.e., without the SSP that includes down-sampling, it is insufficient for text segmentation, as the absence of high-level features means the model will not be able to detect high-level patterns irrespective of their pixel locations in the image. In Figure 4, the fine feature block of the FFP is repeated \(N_{x}\) times; in our implementation, \(N_{x}=4\). In addition, each stage of the FFP itself implements residual connections, which provide the flexibility to either bypass the block or use its output, leading to improved results, as we will discuss later in the paper.
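One FFP stage as described above (3 × 3 convolutions with BatchNorm, no down-sampling, and a residual connection that lets the stage be bypassed) can be sketched as follows. This is a minimal PyTorch sketch under our own assumptions about channel width and layer ordering, not the authors' implementation:

```python
import torch
import torch.nn as nn

class FineFeatureBlock(nn.Module):
    """One FFP stage: 3x3 convolutions with BatchNorm/ReLU, no
    down-sampling, and a residual connection around the block."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection: the stage can be bypassed or used.
        return self.act(x + self.body(x))

class FineFeaturePath(nn.Module):
    """FFP: N_x stacked fine-feature blocks; spatial size is preserved."""
    def __init__(self, in_channels=3, channels=64, n_blocks=4):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(
            *[FineFeatureBlock(channels) for _ in range(n_blocks)])

    def forward(self, x):
        return self.blocks(self.stem(x))
```

Because no stride or pooling ever shrinks the feature maps, the FFP output keeps the full input resolution and can be concatenated with the up-sampled SSP output before the final convolutions.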
Similar architectures employing residual blocks have shown improved performance in fine object segmentation tasks, e.g., road segmentation [37]. Table 1 details the FFP architecture for four stages (\(N_{x}=4\)) and the connections between layers and convolution sizes.

### Mixed Feature Model (MFM)

The Mixed Feature Model (MFM) serves as the umbrella model containing two parallel paths: SSP and FFP, as depicted in Figure 4. The objective of MFM is to capture the image's low-level features alongside its high-level features. The outputs of SSP and FFP are concatenated before the final layer convolution, producing the output of the MFM model. Additional details regarding the architecture, inputs, outputs, and convolution layer sizes of the MFM can be found in the supplementary.

### CRF Post-Processing and CRF Heuristic

Prior research [11, 28, 35] has leveraged dense Conditional Random Fields (CRFs) as a post-processing step to re-label pixels based on their neighboring pixels. In our architectural exploration, we found that, while CRF post-processing is intended to improve segmentation performance, in practice it often hurts performance by aggressively re-labeling pixels to incorrect classes. One issue arises due to the imbalanced nature of pixels across classes, with background pixels being predominant. Consequently, many pixels are mistakenly relabeled from HT or PT to BG, or from HT to PT, which is undesirable. Based on this observation, we designed a heuristic for CRF post-processing to contain this unfavorable behavior by only allowing BG pixels to be relabeled to HT or PT pixels, and not vice versa. As we will discuss in the experimentation section, this heuristic further improves segmentation performance.

### Loss Functions

Due to the nature of the scanned documents and the amount of text on each page, the number of background pixels (i.e., white blank pixels) surpasses that of handwritten and printed pixels.
This leads to a class imbalance problem [38], posing the risk of predicting the majority of pixels as background while still minimizing the loss value. As a result, we investigated various loss functions and weight assignments to different classes in the loss function to evaluate their impact on segmentation performance. Additionally, during this exploration, we observed that different loss functions achieve different IoU (Intersection over Union) scores.

[Table 1: The FFP architecture for four stages, listing the layer groups, connections, and convolution sizes (3 \(\times\) 3 convolutions with 64 filters, preserving the 256 \(\times\) 256 spatial resolution).]
Consequently, we introduce a new loss function, Fusion Loss, to incorporate the benefits of various losses. In the following, we discuss the different loss functions we explored. Cross-Entropy Loss. The multi-class cross-entropy loss is used for classification tasks involving more than two classes. It assumes that for each data point, only one class can be the ground truth, i.e., multi-class single-label classification. The standard form of cross-entropy loss for a single data point in multi-class segmentation with \(M\) classes is defined as: \(\mathcal{L}_{CE}(gt,pr)=-\sum_{m=1}^{M}gt_{m}\cdot\log(pr_{m})\), where \(pr\) and \(gt\) represent the prediction and ground truth, respectively. The final loss is the average of the losses for all data points in a training set. Similarly, the weighted version (WCE) is computed as \(\mathcal{L}_{WCE}(gt,pr)=-\sum_{m=1}^{M}w_{m}\cdot gt_{m}\cdot\log(pr_{m})\), with \(w_{m}\) being the weight assigned to each class in the loss calculation. Focal Loss. The focal loss [23] was introduced to address the class-imbalance problem during training of object detection tasks. The focal loss puts the focus of the learning algorithm on the incorrectly classified examples by applying a modulating term \((1-pr)^{\gamma}\) to the cross-entropy loss. This scaling factor dynamically weighs down the contribution of easy and correctly classified samples, allowing the training loop to concentrate on the harder and incorrectly classified samples. The focal loss is calculated as: \(\mathcal{L}_{Focal}(gt,pr)=-\sum_{m=1}^{M}gt_{m}\cdot(1-pr_{m})^{\gamma}\log(pr_{m})\), where \(\gamma\) is the modulating/focusing factor, and we set \(\gamma=2\) as per the original paper.
Similarly, the weighted version of the focal loss can be expressed as: \(\mathcal{L}_{WF}(gt,pr)=-\sum_{m=1}^{M}gt_{m}\cdot\alpha_{m}\cdot(1-pr_{m})^{\gamma}\log(pr_{m})\), where \(\alpha_{m}\) is analogous to \(w_{m}\), assigning weights to each class such that \(0<\alpha_{m}<1\) for all \(m\) and \(\sum_{m=1}^{M}\alpha_{m}=1\). Dice Loss. Dice loss is given by \(\mathcal{L}_{Dice}(precision,recall)=1-\frac{2}{M}\sum_{m=1}^{M}\frac{precision_{m}\cdot recall_{m}}{precision_{m}+recall_{m}}\). Intuitively, it aims to maximize the F-Score while minimizing the loss, i.e., \(\mathcal{L}_{Dice}=1-F_{Score}\). Accordingly, the weighted version of the dice loss is defined as \(\mathcal{L}_{WD}(precision,recall)=1-\frac{2}{M}\sum_{m=1}^{M}\frac{w_{m}\cdot precision_{m}\cdot recall_{m}}{precision_{m}+recall_{m}}\). Fusion Loss. Given our observation that different loss functions adversely affect different classes, we introduce a new loss targeted at maximizing the performance of all the classes. For example, the dice loss performs better on the background class, as in its formulation, it aims to maximize the F-Score. Because the majority of pixels are attributed to the background class, having higher values of correctly classified background pixels achieves a higher F-Score and lower loss values. However, this does not necessarily yield higher IoU scores for PT and HT classes. In contrast, the weighted versions of the cross-entropy and focal losses focus on the handwritten and printed classes. As such, with the Fusion loss, we aim to combine the behaviors of various losses, and we define the Fusion loss as the sum of the three weighted losses: \(\mathcal{L}_{Fusion}=\mathcal{L}_{WF}+\mathcal{L}_{WCE}+\mathcal{L}_{WD}\).

## 5 Experiments and Results

Evaluation Metric. Intersection over Union (IoU) is a commonly used metric to measure the performance of a segmentation task [36], and in particular text segmentation [33, 18, 11, 35].
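As an illustration, the weighted losses and their Fusion combination defined above can be sketched in numpy; using soft precision/recall for the dice term and the example class weights are our assumptions here:

```python
import numpy as np

def fusion_loss(gt, pr, w, gamma=2.0, eps=1e-7):
    """Sketch of the Fusion loss: weighted focal + weighted CE + weighted dice.

    gt, pr: one-hot ground truth and softmax predictions, shape (N, M);
    w: per-class weights, shape (M,). Exact weight values are assumptions.
    """
    pr = np.clip(pr, eps, 1.0 - eps)
    wce = -np.mean(np.sum(w * gt * np.log(pr), axis=-1))
    wfocal = -np.mean(np.sum(w * gt * (1.0 - pr) ** gamma * np.log(pr), axis=-1))
    # soft per-class precision/recall computed from predicted probabilities
    tp = np.sum(gt * pr, axis=0)
    precision = tp / (np.sum(pr, axis=0) + eps)
    recall = tp / (np.sum(gt, axis=0) + eps)
    wdice = 1.0 - 2.0 * np.sum(w * precision * recall / (precision + recall + eps)) / len(w)
    return wfocal + wce + wdice
```

A perfect prediction drives all three terms toward zero, while misclassifying the minority HT/PT classes is penalized by the weighted CE and focal terms even when the dice term stays small.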
In our context, we use IoU to measure the pixel-level performance of the model output versus the ground truth. For a class \(c\), let \(TP_{c}\) represent the correctly predicted pixels for that class, \(FP_{c}\) denote the pixels incorrectly predicted as belonging to class \(c\), and \(FN_{c}\) represent the pixels that are incorrectly not predicted for class \(c\); then the Intersection over Union for class \(c\) is calculated as: \(IoU_{c}=\frac{TP_{c}}{TP_{c}+FP_{c}+FN_{c}}\). For each experiment, we calculate the IoU for all three classes, HT, PT, and BG. In the four-class formulation, pixels attributed to the overlap (OV) class are converted in a post-processing step to both HT and PT classes, contributing to the IoU calculations for both. We also calculate the mean IoU, which is the average of the IoUs for these three classes: \(IoU_{Mean}=\frac{IoU_{HT}+IoU_{PT}+IoU_{BG}}{3}\). Experiments. We perform 50 epochs of training with a batch size of 8. For problem formulation, we use both three-class and four-class implementations. The experiments are conducted on two datasets: SignaTR6K and WGM-SYN [35]. The IoU values for FCN-light and WGM-MOD are as reported in [35] for the three-class formulation, which was performed on the same dataset, WGM-SYN. However, for the SignaTR6K dataset, we retrain an FCN-based model for the three-class formulation to ensure a fair comparison on the new dataset. We run experiments for a variety of architectures and in three post-processing configurations. We structure experiments incrementally, i.e., adding one improvement at a time. This approach aids in understanding and isolating the impact of each added improvement, i.e., ablation study. Due to space constraints, the table detailing the complete list of experiment parameters and configurations is relegated to the supplementary. For the SSP, we evaluate three backbones: VGG16, InceptionV3, and ResNet34.
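A minimal sketch of the per-class and mean IoU computation described under Evaluation Metric; the class index assignment is an assumption for illustration:

```python
import numpy as np

def class_iou(pred: np.ndarray, gt: np.ndarray, cls: int) -> float:
    """IoU for one class: TP / (TP + FP + FN) over per-pixel label maps."""
    tp = np.sum((pred == cls) & (gt == cls))
    fp = np.sum((pred == cls) & (gt != cls))
    fn = np.sum((pred != cls) & (gt == cls))
    return tp / (tp + fp + fn)

def mean_iou(pred, gt, classes=(0, 1, 2)):
    # (0, 1, 2) standing for HT, PT, BG is an assumed index order
    return sum(class_iou(pred, gt, c) for c in classes) / len(classes)
```

In the four-class formulation, the OV pixels are first converted to both HT and PT (as in the post-processing step above) before these per-class IoUs are computed.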
Our experimentation also includes three variations of MFM, which incorporates both FFP and SSP, with three SSP variations: VGG16, InceptionV3, and ResNet34. For our implementations, we use the Segmentation Models Library available in TensorFlow [16]. Post-Processing Configurations. We experiment with three distinct configurations: without post-processing, with CRF post-processing, and with CRFH post-processing. In the scenario without post-processing, the IoU calculation is performed directly on the output of the model (FCN-light, SSP, and MFM). For the CRF post-processing approach, all classes are permissible for relabeling. However, with the CRF post-processing with heuristic (CRFH), as explained in Section 4.6, only the background class pixels (BG) are allowed to be relabeled as HT or PT classes. Weight Initialization. For both FCN-light and SSP models, model weights start from their initial random values at the beginning of the training. We train the MFM configurations last so that we can reuse the weights from the SSP configurations. Thus, for the MFM trainings, we apply transfer learning by initializing with weights from the corresponding, previously trained SSP. For example, when training an MFM (FFP+SSP) with an SSP using the ResNet34 backbone, we initialize its SSP path with the available weights from the SSP - ResNet34. Lastly, although we have aimed for consistent IoU results by fixing the random seeds, the inherent non-determinism associated with GPU execution remains. As such, we have run our experiments multiple times to validate the IoU value improvements across configurations. Results. Tables 6 and 3 show the IoU values for the WGM-SYN and SignaTR6K datasets. The overall trend across the results indicates that transitioning from the three-class formulation to the four-class formulation improves the IoU scores. Furthermore, employing larger model backbones generally, but not always, improves the segmentation performance.
Among all the model architectures evaluated, the ResNet34 and InceptionV3 backbones achieve the highest performance, which we attribute to their residual connections and varied-size convolutions, as they can better capture fine features from the image. We provide a summary of the trends in the results here, while a more detailed version of the result tables can be found in the supplementary material.

[Table: IoU results (%) on the WGM-SYN dataset for the three-class prior work (FCN-light [11] and WGM-MOD [35]) and our four-class models (FCN-light, SSP, and MFM variants) across the evaluated loss functions, without post-processing, with CRF, and with CRFH.]

* Prior work [11, 35], which implements the three-class formulation of HT and PT segmentation, generally exhibits the lowest performance. Transitioning from the three-class to the four-class formulation with the same backbone, i.e., FCN-light, with the same number of model parameters and WCE loss, improves the average IoU values by 8.0% (50\(\rightarrow\)54.02) for the WGM-SYN dataset, and 2.6% (81.09\(\rightarrow\)83.15) for the SignaTR6K dataset.
* Applying CRF post-processing generally degrades the results. In our research, we observe that CRF post-processing is dataset dependent and decreases the segmentation performance in some cases. CRF post-processing relabels aggressively and incorrectly converts HT and PT pixels to background ones, or OV pixels to PT ones. For example, Figure 6(i) illustrates how OV pixels are wrongly relabeled to the PT class by CRF post-processing. Contrary to CRF post-processing, our CRF heuristic generally improves the IoU values. For example, for the SignaTR6K dataset, for FCN-light with the four-class implementation and CE loss, CRFH improves the HT IoU by 2.9% (89.21\(\rightarrow\)91.81).
* Moving from the FCN-light architecture to larger models, i.e., SSP and MFM models, generally improves the results, except for the VGG16 backbone. For VGG16, we rationalize that having a deep network without residual connections or varied-size convolutions, as seen in the ResNet34 and InceptionV3 architectures, hurts the model's ability to capture fine features. In addition, we observe that adding FFP helps to improve the performance of the VGG model, as we compare SSP and MFM performance values for both datasets. This also confirms the FFP's ability to capture low-level features that are missed by the SSP path.
* The MFM model with ResNet34 and InceptionV3 backbones generally achieves the best results.
MFM-ResNet34 with Fusion loss and CRFH achieves the best result for the WGM-SYN dataset (58.93%), while MFM-InceptionV3 with CE loss and CRFH achieves the best mean IoU for the SignaTR6K dataset (89.10%). Overall, the best-performing model from our designs improves on the mean IoU performance of the best-performing prior work [35] by 17.9% (50\(\rightarrow\)58.93) and 7.3% (83.02\(\rightarrow\)89.10) for the WGM-SYN and SignaTR6K datasets, respectively.
* Although using skip connections in the SSP helps with the segmentation performance and thus a less pronounced improvement is observed in MFM, our FFP design is agnostic to the SSP implementation. This distinction becomes clear when VGG16 is employed as the SSP; the addition of FFP boosts the IoU scores by 7% and 8% on the WGM-SYN and SignaTR6K datasets, respectively. The FCN-based model with 295K parameters outperforms VGG16 with 30M parameters. Thus, larger models do not necessarily outperform smaller ones consistently, and integrating FFP can bring significant improvements in IoU scores. Our findings show that, irrespective of the SSP size and architecture, incorporating FFP improves the performance, and it is essential for segmentation tasks involving fine objects, e.g., text.

Figure 5: Convergence speed of different loss functions and normalized loss values on the validation set of the SignaTR6K dataset.

[Table: IoU results (%) on the SignaTR6K dataset for the three-class and four-class FCN-based models and our four-class MFM variants (InceptionV3 and ResNet34 backbones) across the evaluated loss functions, without post-processing, with CRF, and with CRFH.]

Loss Functions. We also perform an analysis to confirm the stability and convergence of the Fusion loss compared to other losses. Figures 5(a) and 5(b) compare different loss functions on the validation set of the SignaTR6K dataset.
In Figure 5(a), we observe that the Fusion loss is generally stable while converging to its minimum loss. Additionally, in Figure 5(b), we observe that the Fusion loss reaches its maximum IoU around Epoch 15, generally faster than most other losses, and stays close to the maximum IoU values on the validation set. It is important to highlight that the maximum IoU on the validation set for Focal and CE losses does not necessarily imply the best performance on the test set. Overall, Fusion loss shows a stable behavior and achieves a better performance on the test set, whereas, for comparison, Focal loss shows instability at some epochs during the training. Visual Comparison. Figure 6 shows sample model outputs for the SignaTR6K dataset, with two rectangular regions of interest highlighted to showcase the differences between various models. Figure 6(b) shows the ground truth, and a visual trend indicates that the performance on the regions of interest improves from Figure 6(c) to Figure 6(p). It is also visually noticeable that CRF post-processing aggressively relabels pixels (Figure 6(i)), whereas CRFH generally improves the results. More visual comparisons on the WGM-SYN and SignaTR6K datasets can be found in the supplementary. Limitations. Compared to prior work, the complexity and training cost of our approach present some limitations. Our MFM model, being larger than FCN-light, requires greater GPU memory sizes for training and takes longer to train. However, we believe some of the limitations on the training time can be offset by transferring weights from SSP models to MFM ones. Additionally, while our approach outperforms prior work on the WGM-SYN dataset, the mean IoU performance is lower compared to the SignaTR6K dataset. This indicates potential limitations in our method's efficacy for lower-quality original documents, those undergoing low-quality scanning processes (like historical documents), or those with errors from automated labeling.
We also attribute the improved performance on the SignaTR6K dataset to its higher quality and our thorough manual annotation, which have resulted in better model training and improved IoU results.

## 6 Conclusion

Segmentation of handwritten text (HT) and printed text (PT) is vital for the digitization and understanding of scanned documents. The complexity increases with the overlapping of different text types. In this research, we introduced SignaTR6K, a new open-source dataset with high-quality, manual pixel-level annotations, sourced from original legal documents. Additionally, we proposed a novel four-class formulation and a new architecture for the segmentation task. Our design leverages both the Fine Feature Path (FFP) and the Semantic Segmentation Path (SSP) to create the Mixed Feature Model (MFM), which incorporates both high-level and low-level features and improves on the text segmentation performance. We also introduced a CRF-based post-processing heuristic (CRFH) that further improves the model output, and included a new loss function, Fusion loss, that combines the advantages of different loss functions and achieves faster convergence and higher stability compared to most of the evaluated losses. In conclusion, our designs outperform the prior work in mean IoU scores by 17.9% and 7.3% on the WGM-SYN and SignaTR6K datasets, respectively.

Figure 6: Example results on the test set of the SignaTR6K dataset for our approach compared to the ground truth and prior works.
(a) Input image; (b) Ground truth; (c) & (d) 3-class FCN-based [11, 35] with CE loss without (c) and with (d) CRF post-processing; (e), (f), & (g) Our FCN-based 4-class formulation with CE loss without CRF (e), with CRF (f), and with CRFH (g); (h), (i), & (j) SSP-ResNet34 with CE loss without CRF (h), with CRF (i), and with CRFH (j); (k), (l), & (m) MFM-ResNet34 with CE loss without CRF (k), with CRF (l), and with CRFH (m); (n), (o), & (p) MFM-ResNet34 with Fusion loss without CRF (n), with CRF (o), and with CRFH (p).
2305.00749
Robust Low-Tubal-rank tensor recovery Using Discrete Empirical Interpolation Method with Optimized Slice/Feature Selection
In this paper, we extend the Discrete Empirical Interpolation Method (DEIM) to the third-order tensor case based on the t-product and use it to select important/ significant lateral and horizontal slices/features. The proposed Tubal DEIM (TDEIM) is investigated both theoretically and numerically. The experimental results show that the TDEIM can provide more accurate approximations than the existing methods. An application of the proposed method to the supervised classification task is also presented.
Salman Ahmadi-Asl, Anh-Huy Phan, Cesar F. Caiafa, Andrzej Cichocki
2023-05-01T10:09:48Z
http://arxiv.org/abs/2305.00749v2
Robust Low-Tubal-rank tensor recovery Using Discrete Empirical Interpolation Method with Optimized Slice/Feature Selection

###### Abstract

In this paper, we extend the Discrete Empirical Interpolation Method (DEIM) to the third-order tensor case based on the t-product and use it to select important/significant lateral and horizontal slices/features. The proposed Tubal DEIM (TDEIM) is investigated both theoretically and numerically. The experimental results show that the TDEIM can provide more accurate approximations than the existing methods. An application of the proposed method to the supervised classification task is also presented.

Keywords: Cross tensor approximation, DEIM sampling, tubal product. MSC: 15A69, 46N40, 15A23.

## 1 Introduction

The singular value decomposition (SVD) [1; 2; 3; 4] is costly for the computation of low-rank approximations of large-scale data matrices. To address this problem, matrix CUR (MCUR) or cross approximation methods have been proposed, in which a fast low-rank approximation of a data matrix is computed using some selected columns and rows [5; 6]. The MCUR approximation is not only useful in terms of time complexity but can also provide interpretable approximations, as the factor matrices preserve the properties of the original data matrix, such as nonnegativity, sparsity, or the elements being integers [7]. There are several approaches to sample columns and rows of a matrix. Generally, one can categorize these sampling methods into _randomized_ and _deterministic_ methods. Randomized algorithms based on the uniform, length-squared, and leverage-score probability distributions [7] have been widely used in the literature for sampling the columns/rows of a large-scale data matrix. On the other hand, the maxvolume [8] and the discrete empirical interpolation method (DEIM) [9] are two well-known deterministic sampling algorithms.
The first method samples columns/rows in such a way that the volume1 of the intersection matrix is maximized. It was shown that this method can provide almost optimal approximations. The DEIM method uses a basis, e.g., the top left singular vectors, to sample rows of a matrix. The columns of the matrix \(\mathbf{X}^{T}\) are sampled and treated as the rows of the matrix \(\mathbf{X}\). The DEIM is indeed an interpolative MCUR method, as the approximation obtained by this method matches the actual columns/rows of the original data matrix in the sampled indices. We should point out that the Cross2D [10; 5] is also a deterministic sampling approach2 which sequentially interpolates a given matrix in new sampled columns and rows. However, in contrast to the Cross2D, which for different runs can provide different MCUR approximations, the DEIM method for a given fixed basis matrix always gives the same MCUR approximation. Another benefit of the DEIM method is that for two given data matrices \(\mathbf{X},\mathbf{Y}\), if we can find a shared basis matrix for them, for example by the Generalized SVD (GSVD) [11; 12], it can be used to sample columns of the mentioned matrices. Here, the sampled columns of the matrices \(\mathbf{X},\ \mathbf{Y}\) have the same indices because the DEIM method employs the same basis for the two datasets. Indeed, the authors in [13; 14] use this trick for generalizing the MCUR to the tensor case. Footnote 1: The volume of a square matrix is defined as the absolute value of the determinant of a matrix, whereas the volume of a rectangular matrix is defined as the multiplication of its singular values. Footnote 2: Except the first stage of the algorithm where the first column is selected randomly, all other stages are performed deterministically. The MCUR methods are generalized to tensors in [15; 16; 17; 18; 19].
The methods in [15; 17; 18] are related to the Tucker model [20; 21], the method in [16] is for the Tensor Train model [22], and the approach proposed in [19] is for the tensor/tubal SVD (t-SVD) [23]. In this paper, we focus on the tensor SVD (t-SVD), which is defined based on the tubal product (t-product). The t-SVD has similar properties to the classical SVD. In particular, in contrast to the Tucker decomposition [20; 21] or Canonical polyadic decomposition [24; 25], its truncation provides the best low tubal rank approximation in the least-squares sense. The tubal leverage score sampling and uniform sampling are used in [19] and [26; 27], respectively, for sampling lateral and horizontal slices. To the best of our knowledge, the DEIM method has not yet been generalized to the tensor case based on the t-product, while its better approximation accuracy and robustness with respect to rank variation have been shown for matrices [9]. Note that the DEIM sampling is used in [28] to sample fibers for the computation of a low Tucker rank approximation, but our work differs from that as it is for the t-SVD model. Also, there is no paper comparing different sampling approaches for lateral/horizontal slices. This paper aims to investigate these issues more deeply, and motivated by the work [9], we extend the DEIM method to the t-product, which we refer to as the tubal DEIM (TDEIM). The TDEIM selects the indices of important lateral and horizontal slices. The proposed method outperforms all baseline sampling algorithms including _the top leverage scores method_, _the leverage score sampling_ and _the uniform sampling without replacement_. The key contributions of this work are stated as follows: * Extending the discrete empirical interpolation method (DEIM) to the tensor tubal case based on the t-product and using it to select important/optimized lateral and horizontal slices/features.
* Developing a new hybrid TDEIM algorithm that uses the tubal leverage scores for lateral/horizontal slice sampling.
* Extensive simulations on synthetic and real-world datasets with an application to the supervised classification task to evaluate the performance and efficiency of the proposed algorithms.

This paper is organized as follows. We first present some basic tensor concepts in Section 2. The t-SVD is introduced in Section 3 with some intuitions behind this model. The matrix and tensor cross approximation methods are outlined in Section 4. The Discrete Empirical Interpolation Method (DEIM) is presented in Section 5 and its extension to tensors is studied in Section 6. The computer simulation results are presented in Section 7 and a conclusion is given in Section 8.

## 2 Preliminaries

To present the main materials, we first need to introduce the basic notations and definitions. An underlined bold capital letter, a bold capital letter, and a bold lowercase letter denote a tensor, a matrix, and a vector, respectively. The subtensors generated by fixing all but two modes are called slices. For the special case of a third-order tensor \(\underline{\mathbf{X}}\), we call the slices \(\underline{\mathbf{X}}(:,:,k),\)\(\underline{\mathbf{X}}(:,j,:),\)\(\underline{\mathbf{X}}(i,:,:)\) frontal, lateral, and horizontal slices, respectively. For brevity of presentation, we sometimes use the notation \(\underline{\mathbf{X}}_{i}\) for a frontal slice \(\underline{\mathbf{X}}(:,:,i)\). Similarly, fibers are generated by fixing all but one mode. For a third-order tensor \(\underline{\mathbf{X}}\), the special type of fiber generated by fixing the first and second modes, e.g. \(\underline{\mathbf{X}}(i,j,:)\), is called a tube. The notation "conj" denotes the component-wise complex conjugate of a matrix. The notation \(\|.\|_{F}\) stands for the Frobenius norm of tensors/matrices and \(|.|\) is used to denote the absolute value of a number.
The symbol \(\|.\|_{2}\) denotes the spectral norm of matrices or the Euclidean norm of vectors. The Moore-Penrose inverse is denoted by \(\dagger\). We use MATLAB notation to denote a subset of a matrix or tensor. For example, for a given data matrix \(\mathbf{X}\), by \(\mathbf{X}(:,\mathcal{J})\) and \(\mathbf{X}(\mathcal{I},:)\) we mean two matrices that sample a subset of the columns and rows of the matrix \(\mathbf{X}\), respectively, where \(\mathcal{I}\subset\{1,2,\ldots,I_{1}\}\) and \(\mathcal{J}\subset\{1,2,\ldots,I_{2}\}\).

**Definition 1**.: **(t-product)** Let \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(\underline{\mathbf{Y}}\in\mathbb{R}^{I_{2}\times I_{4}\times I_{3}}\), the t-product \(\underline{\mathbf{X}}*\underline{\mathbf{Y}}\in\mathbb{R}^{I_{1}\times I_{4 }\times I_{3}}\) is defined as follows \[\underline{\mathbf{C}}=\underline{\mathbf{X}}*\underline{\mathbf{Y}}=\mathrm{ fold}\left(\mathrm{circ}\left(\underline{\mathbf{X}}\right)\mathrm{unfold}\left( \underline{\mathbf{Y}}\right)\right), \tag{1}\] where \[\mathrm{circ}\left(\underline{\mathbf{X}}\right)=\begin{bmatrix}\underline{ \mathbf{X}}(:,:,1)&\underline{\mathbf{X}}(:,:,I_{3})&\cdots&\underline{\mathbf{ X}}(:,:,2)\\ \underline{\mathbf{X}}(:,:,2)&\underline{\mathbf{X}}(:,:,1)&\cdots&\underline{ \mathbf{X}}(:,:,3)\\ \vdots&\vdots&\ddots&\vdots\\ \underline{\mathbf{X}}(:,:,I_{3})&\underline{\mathbf{X}}(:,:,I_{3}-1)&\cdots& \underline{\mathbf{X}}(:,:,1)\end{bmatrix},\] and \[\mathrm{unfold}(\underline{\mathbf{Y}})=\begin{bmatrix}\underline{\mathbf{Y} }(:,:,1)\\ \underline{\mathbf{Y}}(:,:,2)\\ \vdots\\ \underline{\mathbf{Y}}(:,:,I_{3})\end{bmatrix},\quad\underline{\mathbf{Y}}= \mathrm{fold}\left(\mathrm{unfold}\left(\underline{\mathbf{Y}}\right)\right).\] As described in [29, 23], the t-product is performed via the Discrete Fourier Transform (DFT), and in [30] it was suggested to use any invertible transformation rather than the DFT.
Later, noninvertible and even nonlinear transformations were used in [31] and [32], respectively. The advantage of using such unitary transformations is the possibility of computing the t-SVD of a data tensor with a lower tubal rank [33, 31] (to be discussed later). The MATLAB command \(\mathrm{fft}(\underline{\mathbf{X}},[],3)\) computes the DFT of all tubes of the data tensor \(\underline{\mathbf{X}}\). The fast version of the t-product is summarized in Algorithm 1, where the DFT of only the first \(\lceil\frac{I_{3}+1}{2}\rceil\) frontal slices is needed, while the original version processes all frontal slices [29, 23].

**Definition 2**.: (**Transpose**) The transpose of a tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), denoted by \(\underline{\mathbf{X}}^{T}\in\mathbb{R}^{I_{2}\times I_{1}\times I_{3}}\), is produced by applying the transpose to all frontal slices of the tensor \(\underline{\mathbf{X}}\) and reversing the order of the transposed frontal slices from the second to the last one.

**Definition 3**.: (**Identity tensor**) The identity tensor \(\underline{\mathbf{I}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is the tensor whose first frontal slice is the identity matrix of size \(I_{1}\times I_{1}\) and whose other frontal slices are all zero. It is easy to show that \(\underline{\mathbf{I}}*\underline{\mathbf{X}}=\underline{\mathbf{X}}\) and \(\underline{\mathbf{X}}*\underline{\mathbf{I}}=\underline{\mathbf{X}}\) for all tensors of conforming sizes.

**Definition 4**.: (**Orthogonal tensor**) A tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is orthogonal if \(\underline{\mathbf{X}}^{T}*\underline{\mathbf{X}}=\underline{\mathbf{X}}* \underline{\mathbf{X}}^{T}=\underline{\mathbf{I}}\).

**Definition 5**.: (**f-diagonal tensor**) If all frontal slices of a tensor are diagonal, then the tensor is called f-diagonal.
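As a concrete illustration, the fast t-product of Algorithm 1 can be sketched in NumPy (a minimal sketch; the function name `t_product` is ours, and the half-slice shortcut assumes real input tensors, which is what makes the conjugate-symmetry fill-in valid):

```python
import numpy as np

def t_product(X, Y):
    """t-product of X (I1 x I2 x I3) with Y (I2 x I4 x I3), computed in the
    Fourier domain: only the first ceil((I3+1)/2) frontal slices are
    multiplied; the rest follow by conjugate symmetry for real data."""
    I1, _, I3 = X.shape
    I4 = Y.shape[1]
    Xh = np.fft.fft(X, axis=2)            # DFT of every tube, as in fft(X,[],3)
    Yh = np.fft.fft(Y, axis=2)
    Ch = np.empty((I1, I4, I3), dtype=complex)
    half = (I3 + 2) // 2                  # ceil((I3 + 1) / 2)
    for i in range(half):                 # facewise products in the Fourier domain
        Ch[:, :, i] = Xh[:, :, i] @ Yh[:, :, i]
    for i in range(half, I3):             # remaining slices by conjugate symmetry
        Ch[:, :, i] = np.conj(Ch[:, :, I3 - i])
    return np.real(np.fft.ifft(Ch, axis=2))
```

The second loop mirrors line 7 of Algorithm 1 (`conj` of slice \(I_{3}-i+2\) in MATLAB's 1-based indexing).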
**Definition 6**.: (**Inverse of a tensor**) The inverse of the tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\), denoted by \(\underline{\mathbf{X}}^{-1}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\), is the unique tensor satisfying the following equations \[\underline{\mathbf{X}}^{-1}*\underline{\mathbf{X}}=\underline{\mathbf{X}}* \underline{\mathbf{X}}^{-1}=\underline{\mathbf{I}},\] where \(\underline{\mathbf{I}}\) is the identity tensor of size \(I_{1}\times I_{1}\times I_{3}\). The inverse of a tensor can be computed very fast in the Fourier domain, as presented in Algorithm 2. ``` Input : Two data tensors \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}},\)\(\underline{\mathbf{Y}}\in\mathbb{R}^{I_{2}\times I_{4}\times I_{3}}\) Output : t-product \(\underline{\mathbf{C}}=\underline{\mathbf{X}}*\underline{\mathbf{Y}}\in \mathbb{R}^{I_{1}\times I_{4}\times I_{3}}\) 1\(\widehat{\underline{\mathbf{X}}}=\mathrm{fft}\left(\underline{\mathbf{X}},[ ],3\right)\); 2\(\widehat{\underline{\mathbf{Y}}}=\mathrm{fft}\left(\underline{\mathbf{Y}},[ ],3\right)\); 3for\(i=1,2,\ldots,\lceil\frac{I_{3}+1}{2}\rceil\)do 4\(\widehat{\underline{\mathbf{C}}}\left(:,:,i\right)=\widehat{\underline{ \mathbf{X}}}\left(:,:,i\right)\widehat{\underline{\mathbf{Y}}}\left(:,:,i \right)\); 5 end for 6for\(i=\lceil\frac{I_{3}+1}{2}\rceil+1,\ldots,I_{3}\)do 7\(\widehat{\underline{\mathbf{C}}}\left(:,:,i\right)=\mathrm{conj}(\widehat{ \underline{\mathbf{C}}}\left(:,:,I_{3}-i+2\right))\); 8 end for 9\(\underline{\mathbf{C}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{C}}},[ ],3\right)\); ``` **Algorithm 1**Fast t-product of two tensors [29, 34] The following identity \[\|\underline{\mathbf{X}}\|_{F}^{2}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\|\widehat{ \underline{\mathbf{X}}}(:,:,i)\|_{F}^{2}, \tag{2}\] is useful in our error analysis, where \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) is a given data tensor and \(\widehat{\underline{\mathbf{X}}}(:,:,i)\) is
the \(i\)-th frontal slice of the tensor \(\widehat{\underline{\mathbf{X}}}=\mathrm{fft}(\underline{\mathbf{X}},[],3)\), see [35]. We will use this identity in our theoretical analyses.

## 3 Tensor decompositions based on the t-product and tubal leverage-scores

The tensor SVD (t-SVD) is a viable tensor decomposition that represents a tensor as the t-product of three tensors. The first and last tensors are orthogonal, while the middle tensor is f-diagonal. The generalization of the t-SVD to tensors of order higher than 3 is done in [36]. Let \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\); then the t-SVD gives the following model \[\underline{\mathbf{X}}\approx\underline{\mathbf{U}}*\underline{\mathbf{S}}* \underline{\mathbf{V}}^{T},\] where \(\underline{\mathbf{U}}\in\mathbb{R}^{I_{1}\times R\times I_{3}},\)\(\underline{\mathbf{S}}\in\mathbb{R}^{R\times R\times I_{3}},\) and \(\underline{\mathbf{V}}\in\mathbb{R}^{I_{2}\times R\times I_{3}}\). The tensors \(\underline{\mathbf{U}}\) and \(\underline{\mathbf{V}}\) are orthogonal, while the tensor \(\underline{\mathbf{S}}\) is f-diagonal. We refer to \(\underline{\mathbf{U}}\) and \(\underline{\mathbf{V}}\) as the \(R\) leading left and right singular lateral slices of the tensor \(\underline{\mathbf{X}}\), respectively. The procedure for computing the t-SVD is presented in Algorithm 3. As can be seen, Algorithm 3 only needs the SVD of the first \(\lceil\frac{I_{3}+1}{2}\rceil\) slices in the Fourier domain. This idea was suggested in [37; 34], taking into account the special structure of the discrete Fourier transform, while the original t-SVD algorithm developed in [29; 23] involves the SVD of all frontal slices. Note that this trick is applicable only to real tensors; for complex tensors, we need to compute the SVD of all frontal slices in the Fourier domain. Naturally, we should utilize this idea to skip redundant computations.
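A minimal NumPy sketch of the truncated t-SVD along these lines, assuming a real input tensor so that only the first \(\lceil\frac{I_{3}+1}{2}\rceil\) Fourier slices need an SVD (the function name `t_svd` is ours); it also implicitly relies on identity (2):

```python
import numpy as np

def t_svd(X, R):
    """Truncated t-SVD of a real tensor X: X ~ U * S * V^T with tubal rank R.
    SVDs are taken only for the first ceil((I3+1)/2) frontal slices in the
    Fourier domain; the remaining slices follow by conjugate symmetry."""
    I1, I2, I3 = X.shape
    Xh = np.fft.fft(X, axis=2)
    Uh = np.empty((I1, R, I3), dtype=complex)
    Sh = np.zeros((R, R, I3), dtype=complex)
    Vh = np.empty((I2, R, I3), dtype=complex)
    half = (I3 + 2) // 2                          # ceil((I3 + 1) / 2)
    for i in range(half):
        u, s, vt = np.linalg.svd(Xh[:, :, i], full_matrices=False)
        Uh[:, :, i] = u[:, :R]
        Sh[:, :, i] = np.diag(s[:R])
        Vh[:, :, i] = vt[:R, :].conj().T
    for i in range(half, I3):                     # conjugate symmetry for real X
        Uh[:, :, i] = np.conj(Uh[:, :, I3 - i])
        Sh[:, :, i] = np.conj(Sh[:, :, I3 - i])
        Vh[:, :, i] = np.conj(Vh[:, :, I3 - i])
    to_real = lambda T: np.real(np.fft.ifft(T, axis=2))
    return to_real(Uh), to_real(Sh), to_real(Vh)
```

For \(R=\min(I_{1},I_{2})\) the factorization reproduces \(\underline{\mathbf{X}}\) exactly (up to rounding); smaller \(R\) gives the best tubal rank \(R\) approximation.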
The tubal leverage scores of lateral and horizontal slices can be defined for a third-order tensor [19]. The tubal leverage scores of the horizontal slices with respect to \(\underline{\mathbf{U}}\) are defined as \(l_{i}=\|\underline{\mathbf{U}}(i,:,:)\|_{F}^{2},\ i=1,2,\ldots,I_{1}.\) The tubal leverage scores of the lateral slices can be computed similarly for the tensor \(\underline{\mathbf{V}}\); here, we consider \(l_{j}^{\prime}=\|\underline{\mathbf{V}}(j,:,:)\|_{F}^{2},\ j=1,2,\ldots,I_{2}.\) The important lateral/horizontal slices can be selected according to the highest tubal leverage scores or using the probability distributions \[P_{i}=\frac{l_{i}}{R},\ i=1,2,\ldots,I_{1},\quad P_{j}=\frac{l_{j}^{\prime}}{R },\ j=1,2,\ldots,I_{2}, \tag{3}\] where the fractions \(P_{i}\) and \(P_{j}\) are the probabilities of selecting the \(i\)-th horizontal slice and the \(j\)-th lateral slice, respectively. It is obvious that \[\sum_{i=1}^{I_{1}}\|\underline{\mathbf{U}}(i,:,:)\|_{F}^{2}=\sum_{j=1}^{I_{2} }\|\underline{\mathbf{V}}(j,:,:)\|_{F}^{2}=R,\] so the fractions in (3) indeed define probability distributions. The authors in [19] used the tubal leverage scores to sample horizontal and lateral slices for low tubal rank approximation. We will use the tubal leverage scores in Section 7 as a baseline method to sample horizontal and lateral slices.

## 4 Matrix and tensor CUR approximation methods

Matrix CUR or cross approximation is a popular method for fast low-rank matrix approximation with interpretable factor matrices and linear computational complexity [6]. It samples individual columns and rows of a data matrix, so it can preserve properties of the original data matrix such as nonnegativity or sparsity. Let \(\mathbf{X}\in\mathbb{R}^{I_{1}\times I_{2}}\) be a given data matrix.
The CUR approximation seeks an approximation of the form \(\mathbf{X}\approx\mathbf{C}\mathbf{U}\mathbf{R}\) where \(\mathbf{C}=\mathbf{X}(:,\mathcal{J}),\)\(\mathbf{R}=\mathbf{X}(\mathcal{I},:)\), \(\mathcal{I}\subset\{1,2,\ldots,I_{1}\}\) and \(\mathcal{J}\subset\{1,2,\ldots,I_{2}\}\). The optimal middle matrix is \[\mathbf{U}=\mathbf{C}^{\dagger}\mathbf{X}\mathbf{R}^{\dagger}. \tag{4}\] This procedure requires one or two passes over the data matrix \(\mathbf{X}\), depending on how the indices are sampled. For instance, for uniform sampling we do not need to view the whole data matrix, while for leverage-score or DEIM sampling we need access to the whole data matrix. However, computing the middle matrix \(\mathbf{U}\) in (4) always requires access to the whole data matrix \(\mathbf{X}\), which is prohibitive for extremely large-scale matrices. To ease the computational complexity, it was proposed to use the Moore-Penrose pseudoinverse of the intersection matrix obtained by crossing the sampled columns and rows, i.e., \(\mathbf{U}=(\mathbf{X}(\mathcal{I},\mathcal{J}))^{\dagger}\), and to consider the CUR approximation \[\mathbf{X}\approx\mathbf{C}\mathbf{U}\mathbf{R}. \tag{5}\] It is demonstrated in [9] that the latter approximation indeed interpolates the data matrix \(\mathbf{X}\) at the sampled column and row indices. More precisely, if \(\mathbf{W}=\mathbf{X}-\mathbf{C}\mathbf{U}\mathbf{R}\), then \(\mathbf{W}(\mathcal{I},:)=0\) and \(\mathbf{W}(:,\mathcal{J})=0\). This does not necessarily hold if we use the middle matrix \(\mathbf{U}\) computed via (4). Moreover, it is proved in [38] that for a data matrix \(\mathbf{X}\) with exact rank \(R\), the CUR approximation in (5) is exact provided that \(\mathrm{rank}(\mathbf{X})=\mathrm{rank}(\mathbf{X}(\mathcal{I},\mathcal{J}))\).
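To illustrate this exactness property, a small NumPy check (a minimal sketch; the index sets below are an arbitrary illustrative choice, and any choice keeping the intersection matrix full-rank works):

```python
import numpy as np

rng = np.random.default_rng(0)
# a 60 x 40 matrix of exact rank 5
X = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
rows = [3, 10, 25, 41, 57]                # sampled row indices (illustrative)
cols = [0, 7, 19, 28, 33]                 # sampled column indices (illustrative)
C, Rm = X[:, cols], X[rows, :]
U = np.linalg.pinv(X[np.ix_(rows, cols)]) # pseudoinverse of the intersection, Eq. (5)
# exact recovery, since rank(X) = rank of the 5 x 5 intersection matrix
assert np.allclose(X, C @ U @ Rm)
```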
The columns and rows may be selected either deterministically or randomly, for which additive or relative approximation errors can be achieved. In the deterministic case, it is known that the columns or rows with maximum volume can provide almost optimal solutions [8]. The discrete empirical interpolation method (DEIM) is another type of deterministic method for selecting the columns and rows of a matrix that relies on the top singular vectors [9]. Sampling methods based on a prior probability distribution are also widely used in the literature, using uniform, length-squared, or leverage-score probability distributions; see [7] for an overview of these sampling approaches. It has been demonstrated that sampling columns with the leverage-score probability distribution can provide approximations with relative error accuracy, which is of more interest in practice [39]. The MCUR was extended to the tensor case for different types of tensor decompositions. For example, the authors in [7] proposed to sample some columns of the unfolding matrices randomly to approximate the factor matrices of the Tucker decomposition. It was also suggested to use the Cross2D method to sample the fibers deterministically instead of randomly. The DEIM and leverage-score sampling methods are also used in [28] to sample columns of the unfolding matrices to approximate the factor matrices. The MCUR is used in [16] to compute low-rank approximations of the unfolding matrices when computing the TT approximation. The cross approximation is generalized based on the t-product in [19], where some horizontal and lateral slices are selected according to the tubal leverage scores. Uniform sampling without replacement is used in [26; 27] for image/video completion and compression. However, these are the only works on tubal CUR approximation, and this problem has not been investigated extensively.
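As an illustration of the leverage-score column sampling mentioned above, a minimal NumPy sketch (the function name `leverage_score_columns` is ours):

```python
import numpy as np

def leverage_score_columns(X, R, c, rng=None):
    """Sample c distinct column indices of X according to the rank-R
    leverage-score distribution p_j = ||V(j, :)||^2 / R, built from the
    top-R right singular vectors of X."""
    rng = rng or np.random.default_rng()
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = np.sum(Vt[:R, :] ** 2, axis=0)       # leverage scores; sum equals R
    return rng.choice(X.shape[1], size=c, replace=False, p=scores / R)
```

Columns with large scores are kept with higher probability, which is what yields the relative-error guarantees cited above.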
In this paper, we extend the DEIM method to the tensor case based on the t-product and extensively compare it with the known sampling algorithms. The results show that the proposed sampling method gives more accurate approximations than the baseline methods, as was previously reported for the matrix case.

## 5 Discrete Empirical Interpolation Method (DEIM) for column/row sampling

The DEIM is a building block in our formulation. It is the discrete counterpart of the continuous _Empirical Interpolation Method_ [40; 41] for model order reduction of nonlinear dynamical systems. The DEIM method captures important variables of the underlying dynamical system. It was later used to sample important columns/rows of matrices to compute a low-rank matrix approximation [9]. To describe the DEIM method, let us introduce the _interpolatory projector_, which plays an important role in our analysis.

**Definition 7**.: Assume that \(\mathbf{U}\in\mathbb{R}^{I_{1}\times R}\) is a full-rank matrix and \(\mathbf{p}\in\mathbb{N}^{R}\) is a set of distinct indices. The interpolatory projector \(\mathcal{P}\) is an oblique projector onto the range of \(\mathbf{U}\), defined as \[\mathcal{P}=\mathbf{U}\left(\mathbf{P}^{T}\mathbf{U}\right)^{-1}\mathbf{P}^{T} \tag{6}\] where \(\mathbf{P}=\mathbf{I}(:,\mathbf{p})\) and \(\mathbf{I}\) is the identity matrix of size \(I_{1}\times I_{1}\). Let \(\mathbf{y}=\mathcal{P}\mathbf{x}\); then, as shown in [9], an important property of the operator \(\mathcal{P}\) is that it preserves the elements of \(\mathbf{x}\) with the indices \(\mathbf{p}\), i.e. \[\mathbf{y}(\mathbf{p})=\mathbf{P}^{T}\mathbf{y}=\mathbf{P}^{T}\mathbf{U} \left(\mathbf{P}^{T}\mathbf{U}\right)^{-1}\mathbf{P}^{T}\mathbf{x}=\mathbf{x} (\mathbf{p}). \tag{7}\] This justifies the name interpolatory, as the operator \(\mathcal{P}\) interpolates a vector \(\mathbf{x}\) at the index set \(\mathbf{p}\).
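The definition and the interpolation property (7) can be checked numerically with a small NumPy sketch (the sizes and the index set below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
I1, R = 8, 3
U, _ = np.linalg.qr(rng.standard_normal((I1, R)))  # a full-rank basis
p = np.array([1, 4, 6])                            # distinct indices
P = np.eye(I1)[:, p]                               # P = I(:, p)
Proj = U @ np.linalg.inv(P.T @ U) @ P.T            # interpolatory projector, Eq. (6)
x = rng.standard_normal(I1)
y = Proj @ x
assert np.allclose(y[p], x[p])                     # interpolation property, Eq. (7)
assert np.allclose(Proj @ Proj, Proj)              # Proj is an (oblique) projector
```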
The DEIM algorithm iteratively samples the columns/rows according to the columns of a given basis matrix. This method is summarized in Algorithm 4. For a given data matrix \(\mathbf{X}\in\mathbb{R}^{I_{1}\times I_{2}}\), the DEIM algorithm uses a basis \(\mathbf{U}=[\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{R}]\in\mathbb{R }^{I_{1}\times R}\) to sample the indices of important rows of \(\mathbf{X}\). It starts from the first column \(\mathbf{u}_{1}\) and selects the index of an element with maximum absolute value, that is \[|\mathbf{u}_{1}(p_{1})|=\|\mathbf{u}_{1}\|_{\infty},\] and sets \(\mathbf{p}=[p_{1}]\). Then, a new index, \(p_{2}\), is selected by first computing the residual \[\mathbf{r}_{1}=\mathbf{u}_{2}-\mathcal{P}_{1}\mathbf{u}_{2},\] where \(\mathcal{P}_{1}=\mathbf{u}_{1}\left(\mathbf{P}_{1}^{T}\mathbf{u}_{1}\right)^ {-1}\mathbf{P}_{1}^{T}\) is the interpolatory projector for \(\mathbf{p}\) onto the range of \(\mathbf{u}_{1}\). We select an index of \(\mathbf{r}_{1}\) with maximum absolute value, i.e. \(|\mathbf{r}_{1}(p_{2})|=\|\mathbf{r}_{1}\|_{\infty}\), and update the index set as \(\mathbf{p}=[p_{1},p_{2}]\). This procedure is continued by eliminating the direction of the so-called _interpolatory projection_ onto the former basis vectors from the next one and again finding the index of the entry with the largest magnitude in the residual vector. To be more precise, assume that we have already sampled \((j-1)\) row indices as \(\mathbf{p}_{j-1}=[p_{1},p_{2},\ldots,p_{j-1}]\) and we need to select the \(j\)-th row index. The residual \(\mathbf{r}_{j}\) is computed as \[\mathbf{r}_{j}=\mathbf{u}_{j}-\mathcal{P}_{j-1}\mathbf{u}_{j}, \tag{8}\] where \[\mathcal{P}_{j-1} = \mathbf{U}_{j-1}\left(\mathbf{P}_{j-1}^{T}\mathbf{U}_{j-1}\right) ^{-1}\mathbf{P}_{j-1}^{T}, \tag{9}\] \[\mathbf{U}_{j-1} = [\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{j-1}],\] (10) \[\mathbf{P}_{j-1} = \mathbf{I}(:,\mathbf{p}_{j-1}).
\tag{11}\] The \(j\)-th index \(p_{j}\) is then chosen such that \(|\mathbf{r}_{j}(p_{j})|=\|\mathbf{r}_{j}\|_{\infty}\). Observe that the DEIM sampling method requires the matrix \(\mathbf{P}_{j-1}^{T}\mathbf{U}_{j-1}\) to be nonsingular at each iteration. This is demonstrated in [9] under the condition that \(\mathbf{U}\) is of full rank, which is our assumption. Note that the DEIM method is basis dependent, but for two different bases \(\mathbf{Q}\) and \(\mathbf{U}\) with \(\mathrm{Range}(\mathbf{U})=\mathrm{Range}(\mathbf{Q})\), the DEIM approach provides the same indices [41], since \[\mathbf{U}\left(\mathbf{P}^{T}\mathbf{U}\right)^{-1}\mathbf{P}^{T}=\mathbf{Q} \left(\mathbf{P}^{T}\mathbf{Q}\right)^{-1}\mathbf{P}^{T}.\]

**Remark 1**.: [9] At iteration \(j\), we have \(\mathbf{r}_{j}(\mathbf{p}_{j-1})=0\), because \(\mathcal{P}_{j-1}\mathbf{u}_{j}\) matches \(\mathbf{u}_{j}\) at the indices \(\mathbf{p}_{j-1}\). This guarantees that each iteration samples distinct indices. The error bound of the approximation obtained by the DEIM is presented in the next lemma.

**Lemma 2**.: [9; 28] Assume \(\mathbf{P}^{T}\mathbf{U}\) is invertible and let \(\mathcal{P}\) be the interpolatory projector \(\mathcal{P}=\mathbf{U}(\mathbf{P}^{T}\mathbf{U})^{-1}\mathbf{P}^{T}\). If \(\mathbf{U}^{T}\mathbf{U}=\mathbf{I}\), then any \(\mathbf{X}\in\mathbb{R}^{I_{1}\times I_{2}}\) satisfies \[\|\mathbf{X}-\mathcal{P}\mathbf{X}\|_{F}^{2}\leq\|(\mathbf{P}^{T}\mathbf{U})^ {-1}\|_{2}^{2}\|(\mathbf{I}-\mathbf{U}\mathbf{U}^{T})\mathbf{X}\|_{F}^{2}. \tag{12}\] Additionally, if \(\mathbf{U}\) consists of the \(R\) leading left singular vectors of \(\mathbf{X}\), then \[\|\mathbf{X}-\mathcal{P}\mathbf{X}\|_{F}^{2}\leq\|(\mathbf{P}^{T}\mathbf{U})^ {-1}\|_{2}^{2}\|(\mathbf{I}-\mathbf{U}\mathbf{U}^{T})\mathbf{X}\|_{F}^{2}\leq \|(\mathbf{P}^{T}\mathbf{U})^{-1}\|_{2}^{2}\sum_{t>R}\sigma_{t}^{2}.
\tag{13}\] The same result can be stated for the column selection process as follows: \[\|\mathbf{X}-\mathbf{X}\mathcal{W}\|_{F}^{2}\leq\|(\mathbf{V}^{T}\mathbf{Q})^ {-1}\|_{2}^{2}\|\mathbf{X}(\mathbf{I}-\mathbf{V}\mathbf{V}^{T})\|_{F}^{2}\leq \|(\mathbf{V}^{T}\mathbf{Q})^{-1}\|_{2}^{2}\sum_{t>R}\sigma_{t}^{2}, \tag{14}\] where \(\mathcal{W}=\mathbf{Q}(\mathbf{V}^{T}\mathbf{Q})^{-1}\mathbf{V}^{T}\) and \(\mathbf{Q}=[\mathbf{e}_{q_{1}},\mathbf{e}_{q_{2}},\ldots,\mathbf{e}_{q_{j}}]\) is a collection of standard unit vectors corresponding to the column index set \(\mathbf{q}=[q_{1},q_{2},\ldots,q_{j}]\), and \(\mathbf{V}\) is a basis for the row space of the matrix \(\mathbf{X}\). We see that the quantities \[\eta_{p}=\|(\mathbf{P}^{T}\mathbf{U})^{-1}\|_{2}^{2},\quad\eta_{q}=\|(\mathbf{ V}^{T}\mathbf{Q})^{-1}\|_{2}^{2}, \tag{15}\] play important roles in the upper error bounds. So, the conditioning of the problem heavily depends on these quantities, and we are interested in sampling algorithms that keep these quantities as small as possible. For upper bounds on the mentioned quantities, see [9]. The next lemma gives the upper bound on a CUR approximation obtained by DEIM row/column selection.

**Lemma 3**.: [9] Suppose that for \(\mathbf{X}\in\mathbb{R}^{I_{1}\times I_{2}}\) and the row and column index sets \(\mathbf{p}\) and \(\mathbf{q}\), the matrices \(\mathbf{C}=\mathbf{X}(:,\mathbf{q})=\mathbf{X}\mathbf{Q}\) and \(\mathbf{R}=\mathbf{X}(\mathbf{p},:)=\mathbf{P}^{T}\mathbf{X}\) are full-rank, where \[\mathbf{P}=[\mathbf{e}_{p_{1}},\mathbf{e}_{p_{2}},\ldots,\mathbf{e}_{p_{j}}], \quad\mathbf{Q}=[\mathbf{e}_{q_{1}},\mathbf{e}_{q_{2}},\ldots,\mathbf{e}_{q_{ j}}],\] with finite error constants \(\eta_{p}\) and \(\eta_{q}\) defined in (15), and set \(\mathbf{U}=\mathbf{C}^{\dagger}\mathbf{X}\mathbf{R}^{\dagger}\), where \(1\leq R<\min(I_{1},I_{2})\). Then \[\|\mathbf{X}-\mathbf{C}\mathbf{U}\mathbf{R}\|_{2}^{2}\leq(\eta_{p}+\eta_{q}) \sigma_{R+1}^{2}.
\tag{16}\] Using the sub-multiplicative property of the Frobenius norm, we have \[\|\mathbf{X}-\mathbf{C}\mathbf{U}\mathbf{R}\|_{F}^{2}\leq(\eta_{p}+\eta_{q}) \sum_{t>R}\sigma_{t}^{2}. \tag{17}\]

## 6 Tubal Discrete Empirical Interpolation Method (TDEIM) for lateral/horizontal slice sampling

It is empirically shown in [9] that the DEIM algorithm outperforms the leverage-score sampling method, one of the best sampling approaches. This motivates us to generalize it to the tensor case based on the t-product. In this section, we discuss how to perform this generalization properly. A link between the tubal DEIM and the tubal leverage score sampling is also studied. We call the extended method the tubal DEIM (TDEIM). Similar to the DEIM, a key concept of the TDEIM is the _interpolatory projector_, which we now define. For a given set of \(R\) indices \(\mathbf{s}\in\mathbb{N}^{R}\) and a full tubal-rank tensor3\(\underline{\mathbf{U}}\in\mathbb{R}^{I_{1}\times R\times I_{3}}\), consider \(\underline{\mathbf{S}}=\underline{\mathbf{I}}(:,\mathbf{s},:)\in\mathbb{R}^{ I_{1}\times R\times I_{3}}\) as the selection tensor, where \(\underline{\mathbf{I}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is an identity tensor. The tensorial oblique projection operator is defined as follows Footnote 3: A tensor with linearly independent lateral slices; for example, the tensor \(\underline{\mathbf{U}}\) obtained from the t-SVD can be used. \[\underline{\mathcal{P}}=\underline{\mathbf{U}}*(\underline{\mathbf{S}}^{T}* \underline{\mathbf{U}})^{-1}*\underline{\mathbf{S}}^{T}. \tag{18}\]

**Definition 8**.: (**Horizontal slice sampling**) Let \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) be a given tensor and \(\mathbf{s}\in\mathbb{N}^{R}\) be an index set.
The subtensor \(\underline{\mathbf{X}}(\mathbf{s},:,:)\in\mathbb{R}^{R\times I_{2}\times I_{3}}\) that collects some horizontal slices of the tensor \(\underline{\mathbf{X}}\) is referred to as the horizontal slice sampling tensor. Assume \(\underline{\mathbf{S}}=\underline{\mathbf{I}}(:,\mathbf{s},:)\in\mathbb{R}^{I_ {1}\times R\times I_{3}}\); then the horizontal slice sampling is equivalent to \(\underline{\mathbf{X}}(\mathbf{s},:,:)=\underline{\mathbf{S}}^{T}*\underline{ \mathbf{X}}\). Given an arbitrary tensor \(\underline{\mathbf{G}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(\underline{\mathbf{X}}=\underline{\mathcal{P}}*\underline{\mathbf{G}}\), we have \[\underline{\mathbf{X}}(\mathbf{s},:,:) = \underline{\mathbf{S}}^{T}*\underline{\mathbf{X}}=\underline{ \mathbf{S}}^{T}*\underline{\mathbf{U}}*(\underline{\mathbf{S}}^{T}*\underline{ \mathbf{U}})^{-1}*\underline{\mathbf{S}}^{T}*\underline{\mathbf{G}} \tag{19}\] \[= \underline{\mathbf{S}}^{T}*\underline{\mathbf{G}}=\underline{ \mathbf{G}}(\mathbf{s},:,:).\] This means that the projection operator \(\underline{\mathcal{P}}\) preserves the horizontal slices of \(\underline{\mathbf{G}}\) specified by the index set \(\mathbf{s}\). Let us describe the TDEIM for sampling horizontal slices of a given tensor. The TDEIM begins with the first lateral slice of the basis tensor, i.e. \(\underline{\mathbf{U}}(:,1,:)\), and selects the index of the tube with the largest Euclidean norm, i.e. \(s_{1}=\arg\max_{1\leq i\leq I_{1}}\|\underline{\mathbf{U}}(i,1,:)\|_{2}\). The index of the first sampled horizontal slice is thus \(s_{1}\). The subsequent indices are selected as the indices of the tubes with maximum Euclidean norm in the residual lateral slice, which is computed by removing from the next basis lateral slice its interpolatory projection onto the previously considered ones.
To be more specific, let the selected indices be \(\mathbf{s}_{j-1}=\{s_{1},s_{2},\ldots,s_{j-1}\}\) and suppose we want to select the new index \(s_{j}\). To do so, we compute the residual slice \[\underline{\mathbf{R}}(:,j,:)=\underline{\mathbf{U}}(:,j,:)-\underline{ \mathcal{P}}_{j-1}*\underline{\mathbf{U}}(:,j,:),\] where \(\underline{\mathcal{P}}_{j-1}=\underline{\mathbf{U}}^{j-1}*(\underline{ \mathbf{S}}^{j-1}{}^{T}*\underline{\mathbf{U}}^{j-1})^{-1}*\underline{\mathbf{ S}}^{j-1}{}^{T}\), \(\underline{\mathbf{S}}^{j-1}=\underline{\mathbf{I}}(:,\mathbf{s}_{j-1},:)\), and \(\underline{\mathbf{U}}^{j-1}=\underline{\mathbf{U}}(:,1\!:\!j\!-\!1,:)\) collects the first \(j-1\) lateral slices of the basis tensor. Then, the new horizontal slice index \(s_{j}\) is chosen as the index of the tube of the residual with maximum Euclidean norm, i.e. \[s_{j}=\arg\max_{1\leq i\leq I_{1}}\|\underline{\mathbf{R}}(i,j,:)\|_{2}.\] The same process can be carried out to select lateral slices, where the basis tensor \(\underline{\mathbf{V}}\) should be used. The TDEIM method for sampling horizontal slices is summarized in Algorithm 5. With a slight modification, Algorithm 5 can be used for lateral slice sampling. Similar to Lemma 2, the next lemma gives the error bound of the approximation yielded by the TDEIM method.

**Lemma 4**.: Assume \(\underline{\mathcal{P}}=\underline{\mathbf{U}}*(\underline{\mathbf{S}}^{T}* \underline{\mathbf{U}})^{-1}*\underline{\mathbf{S}}^{T}\) and \(\underline{\mathbf{S}}^{T}*\underline{\mathbf{U}}\) is invertible.
If \(\underline{\mathbf{U}}^{T}*\underline{\mathbf{U}}=\underline{\mathbf{I}}\), then any tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) satisfies \[\|\underline{\mathbf{X}}-\underline{\mathcal{P}}*\underline{\mathbf{X}}\|_{F}^{ 2}\leq\frac{1}{I_{3}}\max_{i}\Big{(}\|(\widehat{\mathbf{S}}_{i}^{T}\widehat{ \mathbf{U}}_{i})^{-1}\|_{2}^{2}\Big{)}\sum_{i=1}^{I_{3}}\|(\mathbf{I}-\widehat{ \mathbf{U}}_{i}\widehat{\mathbf{U}}_{i}^{T})\widehat{\mathbf{X}}_{i}\|_{F}^{2}, \tag{20}\] where \(\widehat{\underline{\mathbf{X}}}=\mathrm{fft}(\underline{\mathbf{X}},[],3)\), \(\widehat{\underline{\mathbf{U}}}=\mathrm{fft}(\underline{\mathbf{U}},[],3)\) and \(\widehat{\underline{\mathbf{S}}}=\mathrm{fft}(\underline{\mathbf{S}},[],3)\). Here, \(\widehat{\mathbf{X}}_{i}=\widehat{\underline{\mathbf{X}}}(:,:,i)\), \(\widehat{\mathbf{S}}_{i}=\widehat{\underline{\mathbf{S}}}(:,:,i)\) and \(\widehat{\mathbf{U}}_{i}=\widehat{\underline{\mathbf{U}}}(:,:,i)\). Moreover, if \(\underline{\mathbf{U}}\) contains the \(R\) leading left singular lateral slices of the tensor \(\underline{\mathbf{X}}\), then \[\|\underline{\mathbf{X}}-\underline{\mathcal{P}}*\underline{\mathbf{X}}\|_{F}^{ 2}\leq\frac{1}{I_{3}}\max_{i}\Big{(}\|(\widehat{\mathbf{S}}_{i}^{T}\widehat{ \mathbf{U}}_{i})^{-1}\|_{2}^{2}\Big{)}\sum_{i=1}^{I_{3}}\sum_{t>R}(\sigma_{t}^{ i})^{2}, \tag{21}\] where \(\sigma_{t}^{i}\) are the \(t\)-th largest singular values of the frontal slice \(\widehat{\mathbf{X}}_{i}=\widehat{\underline{\mathbf{X}}}(:,:,i)\).
Proof.: From identity (2), we have \[\|\underline{\mathbf{X}}-\underline{\mathcal{P}}*\underline{\mathbf{X}}\|_{F}^{ 2}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\|\widehat{\mathbf{X}}_{i}-\widehat{ \mathcal{P}}_{i}\widehat{\mathbf{X}}_{i}\|_{F}^{2}, \tag{22}\] where \(\widehat{\underline{\mathcal{P}}}=\mathrm{fft}(\underline{\mathcal{P}},[],3)\) and \(\widehat{\mathcal{P}}_{i}=\widehat{\underline{\mathcal{P}}}(:,:,i)\). Using Lemma 2, we arrive at \[\|\underline{\mathbf{X}}-\underline{\mathcal{P}}*\underline{\mathbf{X}}\|_{F}^{ 2} = \frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\|\widehat{\mathbf{X}}_{i}-\widehat{ \mathcal{P}}_{i}\widehat{\mathbf{X}}_{i}\|_{F}^{2} \leq \frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\|(\widehat{\mathbf{S}}_{i}^{T}\widehat{ \mathbf{U}}_{i})^{-1}\|_{2}^{2}\,\|(\mathbf{I}-\widehat{\mathbf{U}}_{i}\widehat{ \mathbf{U}}_{i}^{T})\widehat{\mathbf{X}}_{i}\|_{F}^{2} \leq \frac{1}{I_{3}}\max_{i}\Big{(}\|(\widehat{\mathbf{S}}_{i}^{T}\widehat{ \mathbf{U}}_{i})^{-1}\|_{2}^{2}\Big{)}\sum_{i=1}^{I_{3}}\|(\mathbf{I}-\widehat{ \mathbf{U}}_{i}\widehat{\mathbf{U}}_{i}^{T})\widehat{\mathbf{X}}_{i}\|_{F}^{2},\] taking into account that \(\widehat{\mathcal{P}}_{i}=\widehat{\mathbf{U}}_{i}(\widehat{\mathbf{S}}_{i}^{T }\widehat{\mathbf{U}}_{i})^{-1}\widehat{\mathbf{S}}_{i}^{T}\), where \(\widehat{\underline{\mathbf{S}}}=\mathrm{fft}(\underline{\mathbf{S}},[],3)\) and \(\widehat{\mathbf{S}}_{i}=\widehat{\underline{\mathbf{S}}}(:,:,i)\). So, the proof of the first part is completed.
It suffices to note that \(\|(\mathbf{I}-\widehat{\mathbf{U}}_{i}\widehat{\mathbf{U}}_{i}^{T})\widehat{ \mathbf{X}}_{i}\|_{F}^{2}=\sum_{t>R}(\sigma_{t}^{i})^{2}\) and substitute it into the last inequality to obtain the second part of the lemma. Similar to the matrix case, let us introduce two quantities as follows \[\tilde{\eta}_{p}=\frac{1}{I_{3}}\max_{i}\Big{(}\|(\widehat{\mathbf{S}}_{i}^{T} \widehat{\mathbf{U}}_{i})^{-1}\|_{2}^{2}\Big{)},\quad\tilde{\eta}_{q}=\frac{1}{ I_{3}}\max_{i}\Big{(}\|(\widehat{\mathbf{V}}_{i}^{T}\widehat{\mathbf{Q}}_{i})^{-1} \|_{2}^{2}\Big{)}, \tag{23}\] that will be used in the next theorem. Here, \(\underline{\mathbf{V}}\in\mathbb{R}^{I_{2}\times R\times I_{3}}\) is a tensor basis for the subspace of lateral slices of the original data tensor, \(\underline{\mathbf{Q}}=\underline{\mathbf{I}}(:,\mathbf{s}_{j-1},:)\) is a tensor of some sampled lateral slices of the identity tensor specified by the index set \(\mathbf{s}_{j-1}=\{s_{1},s_{2},\ldots,s_{j-1}\}\), \(\widehat{\underline{\mathbf{Q}}}=\mathrm{fft}(\underline{\mathbf{Q}},[],3)\), \(\widehat{\underline{\mathbf{V}}}=\mathrm{fft}(\underline{\mathbf{V}},[],3)\), and \(\widehat{\mathbf{Q}}_{i}=\widehat{\underline{\mathbf{Q}}}(:,:,i)\), \(\widehat{\mathbf{V}}_{i}=\widehat{\underline{\mathbf{V}}}(:,:,i)\).

**Theorem 5**.: Suppose \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(1\leq R<\min(I_{1},I_{2})\).
Assume that the horizontal slice and lateral slice index sets \(\mathbf{p}\) and \(\mathbf{q}\) give full tubal rank tensors \(\underline{\mathbf{C}}=\underline{\mathbf{X}}(:,\mathbf{q},:)=\underline{\mathbf{X}}\ast\underline{\mathcal{Q}}\) and \(\underline{\mathbf{R}}=\underline{\mathbf{X}}(\mathbf{p},:,:)=\underline{\mathcal{P}}\ast\underline{\mathbf{X}}\), where \(\underline{\mathcal{P}}\) and \(\underline{\mathcal{Q}}\) are tensorial interpolation projectors for horizontal and lateral slice sampling, respectively, with corresponding finite error constants \(\tilde{\eta}_{p},\ \tilde{\eta}_{q}\) defined in (23), and set \(\underline{\mathbf{U}}=\underline{\mathbf{C}}^{\dagger}\ast\underline{\mathbf{X}}\ast\underline{\mathbf{R}}^{\dagger}\). Then \[\|\underline{\mathbf{X}}-\underline{\mathbf{C}}\ast\underline{\mathbf{U}}\ast\underline{\mathbf{R}}\|_{F}^{2}\leq(\tilde{\eta}_{p}+\tilde{\eta}_{q})\sum_{i=1}^{I_{3}}\sum_{t>R}(\sigma_{t}^{i})^{2}, \tag{24}\] where \(\sigma_{t}^{i}\) is the \(t\)-th largest singular value of the frontal slice \(\underline{\widehat{\mathbf{X}}}_{i}=\underline{\widehat{\mathbf{X}}}(:,:,i)\). 
Proof.: Using identity (2), we have \[\|\underline{\mathbf{X}}-\underline{\mathbf{C}}\ast\underline{\mathbf{U}}\ast\underline{\mathbf{R}}\|_{F}^{2} = \frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\|\underline{\widehat{\mathbf{X}}}_{i}-\underline{\widehat{\mathbf{C}}}_{i}\,\underline{\widehat{\mathbf{U}}}_{i}\,\underline{\widehat{\mathbf{R}}}_{i}\|_{F}^{2},\] where \(\underline{\widehat{\mathbf{C}}}=\mathrm{fft}(\underline{\mathbf{C}},[],3),\) \(\underline{\widehat{\mathbf{U}}}=\mathrm{fft}(\underline{\mathbf{U}},[],3),\) \(\underline{\widehat{\mathbf{R}}}=\mathrm{fft}(\underline{\mathbf{R}},[],3)\) and \(\underline{\widehat{\mathbf{C}}}_{i}=\underline{\widehat{\mathbf{C}}}(:,:,i),\) \(\underline{\widehat{\mathbf{U}}}_{i}=\underline{\widehat{\mathbf{U}}}(:,:,i),\) \(\underline{\widehat{\mathbf{R}}}_{i}=\underline{\widehat{\mathbf{R}}}(:,:,i).\) Then, the result can be readily concluded from Lemma 3. Theorem 5 shows that the TDEIM approximation with the middle tensor defined above provides an approximation within a factor \(\tilde{\eta}_{p}+\tilde{\eta}_{q}\) of the best tubal rank \(R\) approximation. It also indicates that the conditioning of the problem depends on these two quantities, so the lateral/horizontal slices should be selected in such a way that these quantities are controlled. In the simulations (Section 7), we will show that the proposed TDEIM provides lower values for the quantities \(\tilde{\eta}_{p}\) and \(\tilde{\eta}_{q}\) and that they change smoothly. ### Faster TDEIM Algorithm Although the TDEIM can be used to choose the indices of lateral and horizontal slices, its primary restriction is that the maximum number of indices that can be chosen must match the specified tubal rank \(R\) of the tensor \(\underline{\mathbf{X}}\). 
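The t-product machinery behind Theorem 5 (frontal-slice products in the Fourier domain, identity (2), and the \(\underline{\mathbf{C}}\ast\underline{\mathbf{U}}\ast\underline{\mathbf{R}}\) form) can be checked numerically. The NumPy sketch below is an illustration rather than the authors' MATLAB code; for brevity it samples the slice indices uniformly at random instead of with TDEIM, which suffices for a tensor of exact tubal rank \(R\):

```python
import numpy as np

def tprod(A, B):
    # t-product: slice-wise matrix products in the Fourier domain along mode 3
    Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.real(np.fft.ifft(np.einsum('ijk,jlk->ilk', Ah, Bh), axis=2))

def tpinv(A):
    # Moore-Penrose pseudoinverse under the t-product (slice-wise pinv)
    Ah = np.fft.fft(A, axis=2)
    Ph = np.stack([np.linalg.pinv(Ah[:, :, i]) for i in range(A.shape[2])], axis=2)
    return np.real(np.fft.ifft(Ph, axis=2))

rng = np.random.default_rng(0)
I1, I2, I3, R = 20, 25, 4, 3
X = tprod(rng.standard_normal((I1, R, I3)),
          rng.standard_normal((R, I2, I3)))      # exact tubal rank R

# identity (2): the Frobenius norm is preserved up to 1/I3 in the Fourier domain
assert np.isclose(np.linalg.norm(X) ** 2,
                  np.linalg.norm(np.fft.fft(X, axis=2)) ** 2 / I3)

q = rng.choice(I2, R, replace=False)             # lateral slice indices
p = rng.choice(I1, R, replace=False)             # horizontal slice indices
C, Rt = X[:, q, :], X[p, :, :]
U = tprod(tpinv(C), tprod(X, tpinv(Rt)))         # middle tensor C^dagger * X * R^dagger
err = np.linalg.norm(X - tprod(C, tprod(U, Rt))) / np.linalg.norm(X)
```

For an exactly tubal-rank-\(R\) tensor with full tubal rank \(\underline{\mathbf{C}}\) and \(\underline{\mathbf{R}}\), the reconstruction is exact up to round-off, so `err` sits at machine-precision level; for general tensors the error obeys the bound (24).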
Using a larger \(R\), we need to compute a larger tensor basis \(\underline{\mathbf{U}}\in\mathbb{R}^{I_{1}\times R\times I_{3}}\), and this requires higher computational complexity and memory resources, which makes the algorithm prohibitive for big data tensors. It is suggested in [42] to combine the DEIM with the leverage scores approach to find more than \(R\) column/row indices. Let \(\mathbf{X}\in\mathbb{R}^{I_{1}\times I_{2}}\) be given. The idea is to start with a small rank \(R\), where \(R<R^{\prime}<\min(I_{1},I_{2})\), and apply the DEIM to select \(R\) rows of the matrix \(\mathbf{X}\). ``` Input : \(\mathbf{U}\in\mathbb{R}^{I_{1}\times R}\) with \(R\leq I_{1}\) (linearly independent columns) Output : Indices \(\mathbf{s}\in\mathbb{N}^{R}\) with distinct entries in \(\{1,2,\ldots,I_{1}\}\)
1 \(\mathbf{u}=\mathbf{U}(:,1)\);
2 \(s_{1}=\arg\max_{1\leq i\leq I_{1}}|\mathbf{u}_{i}|\)
3 for \(j=2,3,\ldots,R\) do
4   \(\mathbf{u}=\mathbf{U}(:,j)\);
5   \(\mathbf{c}=\mathbf{U}(\mathbf{s},1:j-1)^{-1}\mathbf{u}(\mathbf{s})\);
6   \(\mathbf{r}=\mathbf{u}-\mathbf{U}(:,1:j-1)\mathbf{c}\);
7   \(s_{j}=\arg\max_{1\leq i\leq I_{1}}|\mathbf{r}_{i}|\)
8 end for ``` **Algorithm 4** DEIM index selection for row selection [9] ``` Input : \(\underline{\mathbf{U}}\in\mathbb{R}^{I_{1}\times R\times I_{3}}\) with \(R\leq I_{1}\) (linearly independent lateral slices) Output : Indices \(\mathbf{s}\in\mathbb{N}^{R}\) with distinct entries in \(\{1,2,\ldots,I_{1}\}\)
1 \(s_{1}=\arg\max_{1\leq i\leq I_{1}}\|\underline{\mathbf{U}}(i,1,:)\|_{2}\)
2 for \(j=2,3,\ldots,R\) do
3   \(\underline{\mathbf{C}}=\underline{\mathbf{U}}(\mathbf{s},1:j-1,:)^{-1}*\underline{\mathbf{U}}(\mathbf{s},j,:)\);
4   \(\underline{\mathbf{R}}=\underline{\mathbf{U}}(:,j,:)-\underline{\mathbf{U}}(:,1:j-1,:)*\underline{\mathbf{C}}\);
5   \(s_{j}=\arg\max_{1\leq i\leq I_{1}}\|\underline{\mathbf{R}}(i,j,:)\|_{2}\)
6 end for ``` **Algorithm 5** Proposed Tubal DEIM (TDEIM) index selection approach for horizontal slice selection 
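Algorithm 4 translates almost line for line into NumPy. The sketch below is an illustration, not the paper's MATLAB code; it also evaluates \(\|(\mathbf{S}^{T}\mathbf{U})^{-1}\|_{2}^{2}\), the matrix analogue of the error constants in (23). DEIM guarantees the selected indices are distinct and that \(\mathbf{U}(\mathbf{s},:)\) is invertible:

```python
import numpy as np

def deim(U):
    """DEIM row-index selection (Algorithm 4): pick the row where the
    interpolation residual of each new basis vector is largest."""
    I1, R = U.shape
    s = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, R):
        u = U[:, j]
        c = np.linalg.solve(U[s, :j], u[s])   # U(s, 1:j-1)^{-1} u(s)
        r = u - U[:, :j] @ c                  # interpolation residual
        s.append(int(np.argmax(np.abs(r))))
    return np.array(s)

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((50, 6)))     # orthonormal basis, R = 6
s = deim(U)
eta = np.linalg.norm(np.linalg.inv(U[s, :]), 2) ** 2  # error-constant analogue
```

Because the residual vanishes at the previously chosen rows, the `argmax` always finds a fresh index, which is what keeps \(\mathbf{U}(\mathbf{s},1:j-1)\) invertible at every step.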
Then, \(R^{\prime}-R\) more indices are selected according to the leverage scores of the residuals. Since \(R<R^{\prime}\), we only compute a smaller set of \(R\) left singular vectors rather than \(R^{\prime}\), so we can speed up the computations. We can straightforwardly generalize this idea to tensors, and this method is summarized in Algorithm 6. Indeed, we need to only compute the left singular tensor \(\underline{\mathbf{U}}\) of size \(I_{1}\times R\times I_{3}\) rather than \(I_{1}\times R^{\prime}\times I_{3}\), and the rest of the indices are sampled using the tubal leverage scores of the residual tensor \(\underline{\mathbf{U}}\) in Lines 5-7 of Algorithm 6. **Remark 6**.: The basis tensors \(\underline{\mathbf{U}}\) and \(\underline{\mathbf{V}}\) required in Algorithms 4 and 5 can be computed very fast through the randomized truncated t-SVD [35; 43; 44]. This version can be regarded as a randomized version of the TDEIM algorithm. ``` Input : \(\underline{\mathbf{U}}\in\mathbb{R}^{I_{1}\times R\times I_{3}}\) and \(\underline{\mathbf{V}}\in\mathbb{R}^{I_{2}\times R\times I_{3}}\) with a target tubal rank \(R\) with \(R\leq R^{\prime}\leq\min\{I_{1},I_{2}\}\) Output : Index sets \(\mathbf{s}\in\mathbb{N}^{R}\) and \(\mathbf{p}\in\mathbb{N}^{R}\) with distinct entries in \(\{1,2,\ldots,I_{1}\}\) and \(\{1,2,\ldots,I_{2}\}\) for the selected horizontal and lateral slices
1 for \(j=1,2,\ldots,R\) do
2   \(\mathbf{s}(j)=\arg\max_{1\leq i\leq I_{1}}\|\underline{\mathbf{U}}(i,j,:)\|\); \(\underline{\mathbf{U}}=\underline{\mathbf{U}}-\underline{\mathbf{U}}(\mathbf{s},1:j-1,:)^{-1}*\underline{\mathbf{U}}(\mathbf{s},j,:)\);
3 end for
4 Compute the tubal leverage scores \(l_{i}=\|\underline{\mathbf{U}}(i,:,:)\|^{2},\;\;i=1,2,\ldots,I_{1}\), and sort \(l\) in non-increasing order;
5 Delete components in \(l\) corresponding to the indices in \(\mathbf{s}\);
6 Sample \(s^{\prime}=R^{\prime}-R\) indices corresponding to the \(R^{\prime}-R\) largest entries of \(l\); 
7 \(\mathbf{s}=[\mathbf{s};\mathbf{s}^{\prime}]\);
8 Perform 1-8 on \(\underline{\mathbf{V}}\) to get index set \(\mathbf{p}\) ``` **Algorithm 6** Proposed hybrid tubal Leverage Scores and TDEIM (HTDEIM) ## 7 Experiments This section presents numerical results to demonstrate the performance of our proposed algorithms. We implemented and ran the proposed algorithms in MATLAB on a computer with 2.60 GHz Intel(R) Core(TM) i7-5600U processor and 8GB memory. We consider four experiments. We use synthetic and optimization-based tensors in the first simulation. The second and third examples are for image and video approximations, respectively. In the last experiment, we use the proposed tubal sampling approach to compute a light-weight model for the supervised classification task on the MNIST dataset. We mainly compare the results achieved by the proposed TDEIM with three sampling algorithms as follows: * Top tubal leverage scores [19] * Tubal leverage score sampling [19] * Uniform sampling without replacement [26] The _top leverage score_ method first computes the tubal leverage scores of horizontal and lateral slices as described in Section 3 and then selects the indices corresponding to the \(R\) top horizontal and lateral leverage scores. The _tubal leverage score sampling_ method builds the probability distributions (3) and samples the horizontal and lateral slices based on them. The _uniform sampling without replacement_ approach applies uniform sampling for selecting the horizontal and lateral slices. We perform this procedure without replacement, as it is not required to select a horizontal or a lateral slice multiple times. It has been experimentally reported that uniform sampling without replacement works better than the one with replacement. 
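The tubal leverage scores used in Lines 5-7 of Algorithm 6 and in the leverage-score baselines above are simply squared Frobenius norms of the horizontal slices of the basis tensor, \(l_{i}=\|\underline{\mathbf{U}}(i,:,:)\|^{2}\). A NumPy sketch under stated assumptions (the basis is built from slice-wise SVDs in the Fourier domain as a stand-in for the truncated t-SVD; the index set `s` pretends to be a TDEIM output):

```python
import numpy as np

rng = np.random.default_rng(2)
I1, I2, I3, R = 30, 20, 5, 4

# stand-in t-SVD basis: truncated left singular vectors of each Fourier slice,
# with conjugate symmetry enforced so the inverse FFT is real
Xh = np.fft.fft(rng.standard_normal((I1, I2, I3)), axis=2)
Uh = np.empty((I1, R, I3), dtype=complex)
for i in range(I3 // 2 + 1):
    Uh[:, :, i] = np.linalg.svd(Xh[:, :, i])[0][:, :R]
for i in range(I3 // 2 + 1, I3):
    Uh[:, :, i] = np.conj(Uh[:, :, I3 - i])
U = np.real(np.fft.ifft(Uh, axis=2))

# tubal leverage scores of the horizontal slices; they sum to R for a t-SVD basis
lev = np.sum(U ** 2, axis=(1, 2))

# hybrid step of Algorithm 6: drop already-selected indices (here a pretend
# TDEIM output s) and keep the R' - R largest remaining scores
s = [0, 3]
R_extra = 3  # R' - R
order = [i for i in np.argsort(-lev) if i not in s]
s_prime = order[:R_extra]
```

Since each Fourier-domain slice of the basis has orthonormal columns, the scores sum to \(R\), which gives a quick sanity check on any leverage-score implementation.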
We use the "Absolute Frobenius Error" metric, defined as \[\mathrm{Error}=\|\underline{\mathbf{X}}-\underline{\mathbf{C}}*\underline{\mathbf{U}}*\underline{\mathbf{R}}\|_{F},\] to compare the performance of the proposed sampling approach with the three considered baseline methods. **Example 1**.: (**Synthetic and optimization-based tensors**) In this experiment, we use synthetic and optimization-based tensors in our simulations. First, consider the following synthetic data tensor of size \(300\times 400\times 300\) \[\underline{\mathbf{X}}(i,j,k) =\frac{1}{(i^{p}+j^{p}+k^{p})^{1/p}}, \tag{25}\] \[1\leq i,k \leq 300,\,1\leq j\leq 400.\] We mainly use \(p=3\) and \(p=5\) in the simulations and the top singular tensors of tubal rank \(R=15\) for building the tubal leverage scores and also as a basis for the TDEIM algorithm. The tubal leverage scores corresponding to the data tensor (25) for \(p=5\) and \(p=3\) are displayed in Figures 1 and 2, respectively. Here, we also show the indices selected by the TDEIM for horizontal and lateral slice selection. The errors achieved by the proposed method and the three baselines are reported in Figure 3. As can be seen, the proposed algorithm provides lower errors and is also monotonically decreasing while the tubal rank is increasing. This experiment clearly shows that the proposed algorithm is robust and can achieve better results than the baseline sampling algorithms. We see that the TDEIM samples most of the indices with high tubal leverage scores but not all of them, contrary to the top tubal leverage scores. This indicates that some indices with lower tubal leverage scores are also crucial for getting more accurate results. This experiment also shows that the uniform sampling or top tubal leverage scores could be unstable, while the tubal leverage score sampling provided better results. Similar numerical results were reported in [9]. 
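A small-scale version of the synthetic tensor (25) and of the error metric above can be generated as follows (a NumPy sketch with reduced sizes; the paper uses \(300\times 400\times 300\)):

```python
import numpy as np

def synth(I1=30, I2=40, I3=30, p=3):
    # X(i,j,k) = (i^p + j^p + k^p)^(-1/p) with 1-based indices, as in (25)
    i = np.arange(1, I1 + 1)[:, None, None]
    j = np.arange(1, I2 + 1)[None, :, None]
    k = np.arange(1, I3 + 1)[None, None, :]
    return 1.0 / (i ** p + j ** p + k ** p) ** (1.0 / p)

def frob_error(X, X_approx):
    # "Absolute Frobenius Error" used throughout Section 7
    return np.linalg.norm(X - X_approx)

X = synth(p=5)  # entries decay smoothly away from the (1,1,1) corner
```

The smooth decay of the entries is what makes this tensor well approximable at low tubal rank, which is why it is a convenient benchmark for slice sampling.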
The error constants \(\tilde{\eta}_{p}\) and \(\tilde{\eta}_{q}\) for different tubal ranks are displayed in Figure 4. This figure demonstrates that the mentioned error constants increase smoothly, which leads to a lower error bound and better conditioning. In the second set of experiments, we consider the _Exponential, Rastrigin, Booth, Matyas and Easom functions_ as two-dimensional functions defined as follows \[f(\mathbf{x},\mathbf{y}) = -\exp(-0.5(\mathbf{x}^{2}+\mathbf{y}^{2})), \tag{26}\] \[f(\mathbf{x},\mathbf{y}) = 20+\mathbf{x}^{2}+\mathbf{y}^{2}-10(\cos(2\pi\mathbf{x})+\cos(2\pi\mathbf{y})),\] (27) \[f(\mathbf{x},\mathbf{y}) = (\mathbf{x}+2\mathbf{y}-7)^{2}+(2\mathbf{x}+\mathbf{y}-5)^{2},\] (28) \[f(\mathbf{x},\mathbf{y}) = 0.26(\mathbf{x}^{2}+\mathbf{y}^{2})-0.48\mathbf{x}\mathbf{y},\] (29) \[f(\mathbf{x},\mathbf{y}) = -\cos(\mathbf{x})\cos(\mathbf{y})\exp(-((\mathbf{x}-\pi)^{2}+(\mathbf{y}-\pi)^{2})), \tag{30}\] respectively, which are widely used as baselines in optimization. We discretize the mentioned functions over the domain \([0,1000]\times[0,1000]\) to build matrices of size \(1000\times 1000\) and then reshape them to third-order tensors of size \(100\times 100\times 100\). Here, we apply the proposed TDEIM algorithm and the baseline methods to the generated data tensors with the tubal rank \(R=10\). The errors achieved by the algorithms are shown in Table 1. In most of the cases, the proposed TDEIM method provided the best results. Let us now compare the running time of the HTDEIM (Algorithm 6) with the TDEIM (Algorithm 5). As we discussed in Section 6.1, the TDEIM requires the basis tensors \(\underline{\mathbf{U}}\) and \(\underline{\mathbf{V}}\) for a given tubal rank \(R\) to select \(R\) lateral and horizontal slices. For a large \(R\), computing these data tensors could be expensive; to circumvent this issue, we can use a smaller tubal rank \(R^{\prime}<R\) and use the TDEIM to sample \(R^{\prime}\) horizontal and lateral slices. 
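The discretization described above (evaluate a benchmark function on a \(1000\times 1000\) grid over \([0,1000]\times[0,1000]\), then fold the matrix into a \(100\times 100\times 100\) tensor) can be sketched as follows; Booth's function is written in its standard form with both terms squared:

```python
import numpy as np

booth  = lambda x, y: (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2  # min 0 at (1, 3)
matyas = lambda x, y: 0.26 * (x ** 2 + y ** 2) - 0.48 * x * y      # min 0 at (0, 0)

n = 1000
x, y = np.meshgrid(np.linspace(0, 1000, n), np.linspace(0, 1000, n),
                   indexing='ij')
M = booth(x, y)                  # 1000 x 1000 matrix of function values
T = M.reshape(100, 100, 100)     # folded third-order tensor, as in the text
```

The fold is a pure `reshape`, so it preserves every entry of the discretized matrix; only the indexing layout changes.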
The rest of the slices \((R-R^{\prime})\) are selected according to the top tubal leverage scores of the residual tensor (see subsection 6.1). The running times of the HTDEIM and TDEIM algorithms for the data tensor (25) are compared in Table 2. It is seen that the HTDEIM algorithm requires less execution time than the TDEIM method. The errors achieved by the HTDEIM and TDEIM are compared in Table 3. So, the HTDEIM algorithm delivers quicker results that are comparable to those of the TDEIM approach. **Example 2** (**Image approximation)**.: In this experiment, we consider the two color images "Lena" and "peppers", which are of size \(256\times 256\times 3\). We used the top left and right tubal singular tensors of tubal rank \(R=20\) to build the tubal leverage scores and also as a basis in the TDEIM algorithm. The tubal leverage scores and the indices selected by the TDEIM are demonstrated in Figures 5 and 6. The errors of the approximations obtained by the proposed and three baseline algorithms are also reported in Figure 7. Similar to the synthetic case, the better performance of the proposed algorithm is visible. In this experiment, the proposed TDEIM and top tubal leverage scores approaches were monotonically decreasing, while the tubal leverage score sampling and uniform sampling methods were not. Since the tubal leverage scores were almost uniformly distributed, the TDEIM approximately samples indices for lateral and horizontal slices uniformly. Figure 1: **(Left)** The selected horizontal slice indices (**Right**) The selected lateral slice indices for \(p=5\). Top singular tensors of tubal rank \(R=15\) were used for Example 1. Figure 3: Errors of different sampling algorithms, (**Left**) \(p=5\) (**Right**) \(p=3\). Top singular tensors of tubal rank \(R=15\) were used for Example 1. Figure 2: (**Left**) The selected horizontal slice indices (**Right**) The selected lateral slice indices for \(p=3\). 
Top singular tensors of tubal rank \(R=15\) were used for Example 1. \begin{table} \begin{tabular}{||c|c|c|c|c||} \hline Tensors & Top Leverage Scores & Leverage Score Sampling & Uniform Sampling without Replacement & TDEIM \\ \hline \hline Exponential & 2.42e-17 & **1.38e-17** & 2.26e-16 & 2.35e-16 \\ \hline Rastrigin & 4.20e-05 & 1.06e-04 & 7.88e-05 & **4.13e-05** \\ \hline Booth & 1.41e-03 & 2.22e-04 & 1.04e-03 & **2.10e-04** \\ \hline Matyas & 3.70e-06 & 9.47e-07 & 1.10e-06 & **4.09e-07** \\ \hline Easom & 7.67e-16 & 4.96e-16 & 5.70e-16 & **7.16e-17** \\ \hline \end{tabular} \end{table} Table 1: Comparing the approximation errors obtained via the TDEIM (Algorithm 5) and the baselines for a low tubal rank approximation of tubal rank \(R=10\) using optimization-based tensors for Example 1. \begin{table} \begin{tabular}{||c|c c c c||} \hline & \(R=2\) & \(R=5\) & \(R=10\) & \(R=15\) \\ \hline \hline HTDEIM & **12.52** & **14.354** & **17.15** & **20.90** \\ \hline TDEIM & 16.01 & 23.29 & 25.31 & 32.19 \\ \hline \end{tabular} \end{table} Table 2: Comparing the running time (seconds) of the HTDEIM (Algorithm 6) and the TDEIM (Algorithm 5) for the data tensor (25) for Example 1. \begin{table} \begin{tabular}{||c|c c c c||} \hline & \(R=2\) & \(R=5\) & \(R=10\) & \(R=15\) \\ \hline \hline HTDEIM & 0.3102 & 0.0403 & 0.0015 & 0.0014 \\ \hline TDEIM & 0.2902 & 0.0308 & 0.0014 & 0.0012 \\ \hline \end{tabular} \end{table} Table 3: Comparing the errors of the HTDEIM (Algorithm 6) and the TDEIM (Algorithm 5) for Example 1. Figure 5: (**Left**) The selected horizontal slice indices (**Right**) The selected lateral slice indices for the Lena image. Top singular tensors of tubal rank \(R=20\) were used for Example 2. 
Figure 6: (**Left**) The selected horizontal slice indices (**Right**) The selected lateral slice indices for the peppers image. Top singular tensors of tubal rank \(R=20\) were used for Example 2. **Example 3**.: (**Video approximation**) In this experiment, we considered two videos, "Foreman" and "Suzie", from [http://trace.eas.asu.edu/yuv/](http://trace.eas.asu.edu/yuv/). The size of the Foreman video is \(176\times 144\times 300\) and the size of the Suzie video is \(176\times 144\times 150\). For both of them, we used tubal rank \(R=40\) and computed the top singular tensors to build the tubal leverage scores (horizontal and lateral) and also used them as bases in the TDEIM algorithms. The tubal leverage scores of the Foreman and Suzie video datasets and the indices selected by the TDEIM are shown in Figures 8 and 9, respectively. The accuracies of the algorithms are reported in Figure 10. The results verify the superiority of the proposed sampling method over the three widely used baseline sampling algorithms. It is interesting to note that, here again, since we do not have a uniform distribution of tubal leverage scores, the TDEIM selects most of the indices in the region with high tubal leverage scores but not all of them. **Example 4**.: (**Classification problem**) In this experiment, we demonstrate the application of the proposed TDEIM method to the classification task on a subset of the MNIST hand-written images4, which consists of the first \(1000\) images for digits \(1\) and \(7\). Each image of size \(28\times 28\) is transformed by Gabor wavelets [45] to give an order-4 tensor of size \(28\times 28\times 8\left(\mathrm{orientations}\right)\times 4\left(\mathrm{scales}\right)\) or an order-3 tensor of size \(784\times 8\left(\mathrm{orientations}\right)\times 4\left(\mathrm{scales}\right)\). Concatenate order-3 Gabor Figure 7: Errors of different sampling algorithms, **(Left)** Lena (**Right**) peppers. 
Top singular tensors of tubal rank \(R=20\) were used for Example 2. Figure 8: (**Left**) The selected horizontal slice indices. (**Right**) The selected lateral slice indices for the Foreman video. Top singular tensors of tubal rank \(R=40\) were used for Example 3. Figure 9: (**Left**) The selected horizontal slice indices (**Right**) The selected lateral slice indices for the Suzie video. Top singular tensors of tubal rank \(R=40\) were used for Example 3. tensors from all images to give a new tensor \(\underline{\mathbf{Y}}\) of dimensions \(784\times 8\times 4\times N\), where \(N\) is the number of images. We split the data into 10 folds randomly, following 10-fold cross-validation. For each test phase, one fold of the data is used for testing and the rest for training. We train a linear discriminant analysis (LDA) classifier on the mentioned dataset [46; 47]. It is known that although the LDA is a simple method, it can provide decent, interpretable and robust classification results [46; 47]. The weight tensor trained by the LDA is a third-order tensor of size \(32\times 32\times 28\), and to build a light-weight model with a lower number of parameters, we compute a low tubal rank approximation of the weight tensor by sampling some lateral and horizontal slices using the proposed TDEIM method. This leads to a faster inference time for the underlying model. The corresponding accuracy achieved by the low tubal rank approximation was also calculated for different numbers of sampled lateral and horizontal slices. The classification accuracies for the digits \((1,7)\) using different numbers of sampled lateral/horizontal slices are reported in Figure 11 (Upper). As we can see, an accuracy of 0.98 is attained for 15 horizontal and lateral slices, indicating that the classifier is fairly accurate in recognizing the digits 1 and 7. The 10-fold cross-validation results for all combinations of different digits are displayed in Figure 11 (Bottom). 
The results clearly demonstrate that the lightweight model can provide correct classifications for any combination of digits. Figure 10: Errors of different sampling algorithms, (**Left**) Foreman video (**Right**) Suzie video. Top singular tensors of tubal rank \(R=40\) were used for Example 3. Figure 11: **(Upper)** The accuracy yielded by the lightweight classification model of digits \((1,7)\) using a low tubal rank approximation of the weight tensor by the proposed TDEIM approach for different numbers of sampled lateral and horizontal slices. **(Bottom)** The classification accuracy of the lightweight model using a low tubal rank approximation of the weight tensor (with the tubal rank \(R=15\)) using the proposed TDEIM method for different combinations of digits for Example 4. ## 8 Conclusion In this paper, we extended the discrete empirical interpolation method (DEIM) to tensors based on the t-product. The tubal DEIM (TDEIM) is used to select important horizontal and lateral slices of a given third-order tensor. We studied the theoretical aspects of the TDEIM and conducted simulations on synthetic and real-world datasets. The results show the better accuracy of the proposed algorithm compared to other sampling algorithms such as top tubal leverage scores, tubal leverage score sampling and uniform sampling without replacement. ## 9 Conflict of interest The authors declare that they have no conflict of interest.
2304.00460
GitHub OSS Governance File Dataset
Open-source Software (OSS) has become a valuable resource in both industry and academia over the last few decades. Despite the innovative structures they develop to support the projects, OSS projects and their communities have complex needs and face risks such as getting abandoned. To manage the internal social dynamics and community evolution, OSS developer communities have started relying on written governance documents that assign roles and responsibilities to different community actors. To facilitate the study of the impact and effectiveness of formal governance documents on OSS projects and communities, we present a longitudinal dataset of 710 GitHub-hosted OSS projects with \path{GOVERNANCE.MD} governance files. This dataset includes all commits made to the repository, all issues and comments created on GitHub, and all revisions made to the governance file. We hope its availability will foster more research interest in studying how OSS communities govern their projects and the impact of governance files on communities.
Yibo Yan, Seth Frey, Amy Zhang, Vladimir Filkov, Likang Yin
2023-04-02T06:07:00Z
http://arxiv.org/abs/2304.00460v1
# GitHub OSS Governance File Dataset ###### Abstract Open-source Software (OSS) has become a valuable resource in both industry and academia over the last few decades. Despite the innovative structures they develop to support the projects, OSS projects and their communities have complex needs and face risks such as getting abandoned. To manage the internal social dynamics and community evolution, OSS developer communities have started relying on written governance documents that assign roles and responsibilities to different community actors. To facilitate the study of the impact and effectiveness of formal governance documents on OSS projects and communities, we present a longitudinal dataset of 710 GitHub-hosted OSS projects with GOVERNANCE.md governance files. This dataset includes all commits made to the repository, all issues and comments created on GitHub, and all revisions made to the governance file. We hope its availability will foster more research interest in studying how OSS communities govern their projects and the impact of governance files on communities. ## I Introduction As one of the largest online open code hosting platforms, GitHub provides a widely accessible and easy-to-use platform for many Open Source Software (OSS) projects and communities. With the free availability of OSS digital traces, studying OSS at scale has become feasible, and academic studies of OSS repositories have proliferated. Research into OSS communities spans topics including code quality [1, 2], sustainability [3, 4, 5], and more recently governance [6, 7]. GitHub governance is different than the governance in other OSS hosting organizations. For instance, Apache Software Foundation (ASF), a well-known not-for-profit organization, provides an incubator for OSS projects. Projects in the ASF Incubator (ASFI) receive mentorship, and their efforts are guided and governed by ASFI's committee. 
Most OSS projects and communities on GitHub do not have such a committee to support OSS activities. As a solution, many OSS communities draft their own governance files and utilize GitHub's "issue" feature to govern and coordinate OSS activities. Based on the existing convention of using text-based documentation to govern collective activities (e.g., Contributor Covenant), open-source communities have started to adopt the approach of hosting a text-based file named GOVERNANCE.md, making an effort to foster a healthier community and convey labor division and participation expectations more clearly. As a project evolves, the governance file often evolves through periodic revisions, as community members introduce, modify, or delete governance rules. Studying governance files can shed light on how OSS projects coordinate work and potentially reveal aspects of the socio-technical dynamics within OSS projects. Understanding how OSS projects on GitHub govern their collaborative work and how socio-technical dynamics change within projects can further facilitate the study of sustainability in OSS projects to pinpoint crucial factors in the successful governance of OSS projects and communities [8, 9]. We present the GitHub Open-Source Software governance documentation dataset. It includes governance files, projects' commit history, and issues for 710 OSS projects hosted on GitHub. To facilitate longitudinal studies, we provide separate tables, capturing the commit history of GOVERNANCE.md files and detailed changes made in each commit on the GOVERNANCE.md file at line-level granularity. To the best of our knowledge, this is the first time such a governance-documentation-oriented GitHub-hosted OSS project dataset has been presented to the empirical software engineering community. Next, we present the details of our dataset, scraping methodology, and storage, followed by two preliminary examples of research studies that can benefit from this data. 
Our dataset, along with the scripts we used to scrape it, is available at Zenodo: [https://doi.org/10.5281/zenodo.7530768](https://doi.org/10.5281/zenodo.7530768). ## II Related Work Analyzing the community organization, structure, and governance of OSS projects from a socio-technical perspective enables the understanding of dynamics [10], project quality over time [11], and project coordination effectiveness [12] within OSS projects [13, 14]. Besides GitHub, many large organizations and foundations like OSGeo [15], Apache Software Foundation [16, 17], and Linux Foundation [18] also support a multitude of OSS projects. Various tools have been produced to mine these projects, e.g., Perceval provides a unified entry-point to gather the data of software repositories from various backends [19]; GHTorrent provides a streamlined approach to gather mirrored data from GitHub [20]; Project CHAOSS delivers metrics, model and software to understand OSS community health [21]. Our governance documentation dataset is orthogonal, and together with other sources can enable holistic longitudinal analyses of OSS project evolution. ## III OSS Governance Documentation Dataset The dataset comprises a longitudinal record of the entire revision history of the GOVERNANCE.md file for each collected project, at varying levels of granularity. Besides the detailed information on each commit that developers made to the governance file, we also extracted the line change information for each commit made on the governance file. #### Iii-B1 Data Access GitHub maintains a set of interfaces through which developers can fetch repository-related data. GitHub supports both REST [22] and GraphQL [23] APIs for data retrieval, and we take advantage of both to scrape the data. Since GitHub provides first-hand data access, the data gathered through GitHub's official APIs is at least as reliable and up-to-date as the data from other social coding sites. 
Although GHTorrent serves the purpose of mining GitHub, it might not be useful for particular research directions, as it produces excessive data by exhaustively mirroring GitHub data but misses detailed data on the governance file's per-line revisions. #### Iii-B2 Repositories Based on GitHub search results, there are more than 1.6M projects with a GOVERNANCE.MD file. However, not all of them signal meaningful information, as these projects often do not have a GOVERNANCE.MD file for their own projects but include other packages or dependencies that have GOVERNANCE.MD files. 1 Therefore, to reduce noise in the data set, we used the REST API's search code endpoint 2 to fetch only the repositories that contain a file with a GOVERNANCE.MD (case-insensitive and non-exact match) in the root directory. 3 The initial data set consists of 1,899 unique projects that have a GOVERNANCE.MD file in their root directory. To ensure the projects are meaningful, we only keep projects that have at least 1 commit/issue. This results in the final data set of 710 projects. We gathered a list of basic metadata for each repository, such as the repository name and the filename of the governance file. Footnote 1: This is because npm installs packages/dependencies in the project directory, and project developers do not “gitignore” the dependency folder. Footnote 2: [https://docs.github.com/en/rest/search?apiVersion=2022-11-28#search-code](https://docs.github.com/en/rest/search?apiVersion=2022-11-28#search-code) Footnote 3: Therefore, results like ‘Governance-Committee_Charter.md’, ‘IPEP-29:-Project-Governance.md’, and ‘GovernancePolicy.md’, etc., exist. Due to the non-deterministic nature of GitHub's search API, we kept calling the search API repetitively until 99 out of the 100 last searched repositories were already encountered. We used _gql4_ to fetch commits, issues, and comments in each issue for each repository. 
Additionally, we used _PyDriller_[24] to locally locate all commits that involve a change on the governance file. With the help of _PyDriller_, we obtained detailed information on each commit made on the governance file at the granularity of a line by processing through the parsed diff output, offering a view of which line along with the edited content is added or deleted in each commit. Footnote 4: [https://github.com/graphql-python/gql](https://github.com/graphql-python/gql) #### Iii-B3 Commits, Issues, and Comments We used the GraphQL endpoint, [https://docs.github.com/en/graphql/guides/forming-calls-with-graphql#the-graphql-endpoint](https://docs.github.com/en/graphql/guides/forming-calls-with-graphql#the-graphql-endpoint), to collect commits, issues, and comments inside each issue. We sent sequential GraphQL queries to iterate through the search space by adjusting the cursor correspondingly. Preliminary project-scoped aggregations were done on the collected data to generate basic metrics for each repository and were added to the repository list afterward. As a result, the repository list contains information regarding the number of stars, forks, commits, committers, issues, and comments; the number of submitters of issues and comments; and the number of commits and committers on the governance file. #### Iii-B4 Commit History of the Governance File As GitHub lacks support for retrieving commit history of a specific file, we used PyDriller [24] to examine commits that involve a change (i.e., add, delete, or modify) to the governance file. We extracted basic metadata, e.g., _author_, _committer_, _fileLOC_, along with the content and diff output before and after the commit. We also extracted information on how each line got modified (i.e., added or deleted) in each commit on the governance file. 
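PyDriller exposes this parsed diff as a dictionary of added and deleted `(line_number, content)` pairs per modified file, which is the granularity stored for the governance file. The same information can be recovered from a raw unified diff with a minimal, self-contained parser (an illustration of the idea, not PyDriller's internal implementation):

```python
def parse_diff(diff):
    """Split a unified diff into (line_number, content) pairs of added and
    deleted lines -- the line-level change records described above."""
    added, deleted = [], []
    old_ln = new_ln = 0
    for line in diff.splitlines():
        if line.startswith('@@'):
            # hunk header: @@ -old_start,old_count +new_start,new_count @@
            old, new = line.split()[1:3]
            old_ln = int(old.split(',')[0].lstrip('-'))
            new_ln = int(new.split(',')[0].lstrip('+'))
        elif line.startswith('+') and not line.startswith('+++'):
            added.append((new_ln, line[1:]))
            new_ln += 1
        elif line.startswith('-') and not line.startswith('---'):
            deleted.append((old_ln, line[1:]))
            old_ln += 1
        else:
            old_ln += 1
            new_ln += 1
    return {'added': added, 'deleted': deleted}

# a hypothetical one-line revision to a GOVERNANCE.md file
diff = """@@ -1,3 +1,3 @@
 # Governance
-Decisions are made by the maintainer.
+Decisions are made by lazy consensus.
"""
changes = parse_diff(diff)
```

Each record pairs the affected line number with its content, so a revision history of the governance file reduces to a sequence of such dictionaries.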
We used scripts on the collected list of commits on the governance file and obtained: a) a list of commits per section of the governance file;5 and b) a list of the latest governance file of every repository, by pulling out the content after the last collected commit on the governance file. All data (commits, issues, and comments) were collected from the initial creation date of the corresponding project on GitHub to June 2022, the last day updated by the script.

Footnote 5: A section is defined as the content between two heading elements in the Markdown file. A heading element is written as '#', '##', etc., followed by the heading content.

#### III-B5 Database

Most data were stored directly in a MongoDB database, a type of NoSQL database, and later were exported to different formats, i.e., CSV, SQL, and MongoDB archive dumps. A small portion of the data, such as the repository list, was initially processed by _pandas_ in memory and written to the MongoDB database afterward by scripts. All data were re-exported in the end. We used PyMongo6 to interact with the MongoDB database via Python scripts. The associated data schema is shown in Figure 1. Each table in the schema corresponds to one CSV table, SQL table, or MongoDB collection in the dataset.

Footnote 6: [https://github.com/mongodb/mongo-python-driver](https://github.com/mongodb/mongo-python-driver)

Table _repo-list_ contains a list of metadata of GitHub-hosted projects. Each row represents a project. There is no explicit foreign key across tables; instead, _'repo_name'_ and _'filename'_ serve as two major keys to locate commits, issues, and comments of the corresponding repository. Table _commit-list_ contains the commits from all projects in _repo-list_; each row represents a commit. Table _governance-change-commit_ contains the commits for the governance file of every project. Each row represents a commit.
As this list was collected by locally analyzing git repositories, no GitHub-related information is presented. Table _governance-change-content_ contains the list of line-change information of each commit made on the governance file over all collected projects. Each row represents one line-change record of a specific commit. The combination of _oid_, _repository.nameWithOwner_, and _filename_ should be used as the key to locate all line changes of one specific commit. Table _governance-change-commit-by-section_ contains the same information as _governance-change-commit_, broken down by section. For each row in the _governance-change-commit_ table, the _'sourceFile'_ field was used to extract section information, and each section's information was stored as a row in the _governance-change-commit-by-section_ table. Most fields are identical to the fields in the _governance-change-commit_ table. Table _latest-governance-file-content_ contains the latest version of the governance file of each collected project.

#### III-B6 Metrics

We compute a set of standard software engineering metrics for OSS project activity from our governance documentation dataset. These include: the number of stars (_star_count_), forks (_fork_count_), commits (_num_commits_), committers (_num_committers_), issues (_num_issues_), issue submitters (_num_issues_submitters_), and issue comments (_num_issues_comments_); the number of people who commented on issues (_num_issues_commenters_); the number of commits to the governance file (_num_gov_commits_); and the number of people who committed to it (_num_gov_committers_). The descriptive statistics for these metrics over the 710 GitHub-hosted projects are given in Table I.

## IV Potential Use Cases

This section presents three potential use cases that can be studied based on the presented dataset.
### _Case I: Studying Governance in Digital Commons_

Scholars of self-organized self-governance in the online context have long drawn on the theories of common pool resource scholars such as Elinor Ostrom [25, 26]. One particularly prominent contribution from this literature is the Design Principles for Sustainable Common Pool Resource Management [27]. These were developed for studying institutions for managing natural resources such as fisheries, forests, and water systems, but have been extended to OSS and other digital resources [28]. Yet, there remains a critical need to test the generality of the design principles in the OSS context, which would require a comprehensive comparative dataset of formal OSS governance records.

Fig. 1: Data Schema of the GitHub OSS Governance Documentation Dataset. Note: the actual data type can vary based on different sources for the archives; e.g., for a MongoDB dump, all date fields like 'createdAt' will be a native _Date_ type in the MongoDB context.

For example, with a large collection of formal governance documents on GitHub, a scholar could test "Principle 1: Clearly defined boundaries" by extracting references to GitHub platform position constructs like users, committers, and contributors. For "Principle 5: Graduated sanctions for appropriators who do not respect community rules," a scholar might search GOVERNANCE.md files for whether the rules contain a variety of sanctions (both warnings and bans, not just one or the other). With this dataset, we can apply these principles to any open-source project hosted on GitHub and see how it works and whether the best practices align with these principles. We can also study the collaboration mechanisms, the rules and norms set in the community, and the role of institutions, such as maintainers, collaborators, and contributors, in the governance of the project.
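A keyword-based probe of the kind sketched for Principles 1 and 5 could look like the following (the term lists and helper names are illustrative assumptions, not part of the dataset):

```python
ROLE_TERMS = ("user", "committer", "contributor", "maintainer", "collaborator")
SANCTION_TERMS = ("warning", "ban", "suspend", "remove")


def role_mentions(governance_text):
    """Count mentions of GitHub platform position constructs (a Principle 1 proxy)."""
    text = governance_text.lower()
    return {term: text.count(term) for term in ROLE_TERMS}


def has_graduated_sanctions(governance_text):
    """Principle 5 proxy: the rules mention more than one kind of sanction."""
    text = governance_text.lower()
    return sum(term in text for term in SANCTION_TERMS) >= 2
```

Applied over the _latest-governance-file-content_ table, such helpers would give a first-cut comparative view before any deeper NLP analysis.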
### _Case II: Semantic Search in Institutional Analysis_

Institutional analysis using natural language processing (NLP) methods on GitHub can be used to study the organizational structure, rules, and norms of online communities, and how they relate to the governance and performance of open-source software projects. With the classified sentences, we can conduct institutional analysis to understand the governance mechanisms. In what way do people invoke the rules of an open-source project in regular discourse? What is the overlap between the rules people discuss and those they use? How do rules change over time, and in what direction? How many people tend to contribute to rules, and what is the typical number of discrete roles they define as a project matures? These questions all inform the practice of OSS governance, and effective peer production generally. The dataset we introduce thus makes it possible to advance the entire front of online governance research.

### _Case III: Development of Governance Design Patterns_

By examining a repository of enacted governance across a range of communities, governance practitioners and tool developers can gain an understanding of what governance components are common. Broadly common components could be surfaced as governance design patterns to then be integrated into guides and other materials to help communities who are deciding on what kind of governance to enact [29]. These patterns could also be translated into programs within existing software toolkits for programmatically enacting governance [30]. Finally, developers of governance-related features on GitHub or third-party tools could build in customized support for common governance components; for instance, implementing certain roles, tiered permissions, or voting mechanisms that were often expressed.

## V Limitations and Conclusion

**Limitations** We note that this dataset has two major limitations regarding **completeness** and potential **bias**.
**Completeness** 1) Because GitHub's search API doesn't return a deterministic and full set of results, the dataset is not a complete set of all GitHub-hosted repositories with a governance file. 2) As we only collected projects that contain the GOVERNANCE.MD file in the root directory, some GitHub-hosted projects are missing from our dataset, as they might organize and store their governance files differently. For example, some projects put their governance content directly in the _readme_ file. 3) The commit history collected in our dataset does not necessarily represent the full history of all commits that have been made; certain Git operations, such as _squash_ or _rebase_, may lose commit history.

**Bias** 1) Some projects may use the same governance file. Under some circumstances, multiple projects may put the same redirection link in their project-level governance file, referring to external governance documentation. In this case, all governance files in these projects will contain the exact same, but not meaningful, governance content. 2) Some projects didn't start at GitHub in the first place; instead, the projects and corresponding governance files were hosted on other platforms and were moved to GitHub at a later development stage. In this case, the commit history of the governance file might be incomplete, causing bias in analytic studies.

To mitigate the completeness and bias issues mentioned above, we can 1) integrate data from other sources, such as GHTorrent's mirrored data, to work around the GitHub APIs' non-deterministic behavior; and 2) use the available governance file dataset to train a classifier that determines whether a text file contains governance content, avoiding searching for the governance file in a specific pattern and expanding the search space to include more repositories.
**Conclusion** This work presents the development of a longitudinal dataset of 710 Open Source Software (OSS) projects hosted on GitHub that includes information about governance files. OSS projects and communities still fail frequently, despite the popularity of hosting platforms like GitHub, and governance files are sometimes drafted and revised to serve the community's needs. This dataset aims to help researchers and developers understand best practices and common patterns in OSS governance documentation and to identify projects with poor governance, which could lead to better maintainability and sustainability in the long run. We present how the data was collected and what specific criteria were used to select the projects, as well as a description of the data, its structure, and the challenges encountered during the data collection process. Additionally, we discuss the potential applications and the value of the dataset and encourage researchers to use it to study the state of open-source software governance across different projects and communities.

## Acknowledgement

We thank the anonymous reviewers for their constructive comments. This material is based upon work supported by the National Science Foundation under GCR #2020751/2020900 "Jumpstarting Successful OSS Projects With Evidence-Based Rules and Structures", and DASS #2217652/2217653 "Transitioning OSS projects to accountable community governance".
2310.15694
COPR: Continual Learning Human Preference through Optimal Policy Regularization
The technique of Reinforcement Learning from Human Feedback (RLHF) is a commonly employed method to improve pre-trained Language Models (LM), enhancing their ability to conform to human preferences. Nevertheless, the current RLHF-based LMs necessitate full retraining each time novel queries or feedback are introduced, which becomes a challenging task because human preferences can vary between different domains or tasks. Retraining LMs poses practical difficulties in many real-world situations due to the significant time and computational resources required, along with concerns related to data privacy. To address this limitation, we propose a new method called Continual Optimal Policy Regularization (COPR), in which we compute the distribution of optimal policy bypassing the partition function and then regularize the current policy based on the historically optimal distribution to mitigate Catastrophic Forgetting (CF). COPR involves a single learning phase and doesn't necessitate complex reinforcement learning. Importantly, it shares the capability with RLHF to learn from unlabeled data by maintaining a scoring module, similar to reward model, making it flexible for continually learning without human feedback. Our experimental results show that COPR outperforms strong Continuous Learning (CL) baselines when it comes to consistently aligning with human preferences on incremental tasks and domains.
Han Zhang, Lin Gui, Yuanzhao Zhai, Hui Wang, Yu Lei, Ruifeng Xu
2023-10-24T10:05:32Z
http://arxiv.org/abs/2310.15694v5
# COPF: Continual Learning Human Preference through Optimal Policy Fitting

###### Abstract

The technique of Reinforcement Learning from Human Feedback (RLHF) is a commonly employed method to improve pre-trained Language Models (LM), enhancing their ability to conform to human preferences. Nevertheless, current RLHF-based LMs necessitate full retraining each time novel queries or feedback are introduced, which becomes a challenging task because human preferences can vary between different domains or tasks. Retraining LMs poses practical difficulties in many real-world situations due to the significant time and computational resources required, along with concerns related to data privacy. To address this limitation, we propose a new method called Continual Optimal Policy Fitting (COPF), in which we estimate a series of optimal policies using the Monte Carlo method, and then continually fit the policy sequence with function regularization. COPF involves a single learning phase and doesn't necessitate complex reinforcement learning. Importantly, it shares the capability with RLHF to learn from unlabeled data, making it flexible for continual preference learning. Our experimental results show that COPF outperforms strong continual learning (CL) baselines when it comes to consistently aligning with human preferences on different tasks and domains.

## 1 Introduction

In the realm of natural language processing (NLP), large language models (LLMs) are vital tools with the potential to bridge human language and machine understanding. Learning human preferences is a crucial step towards ensuring that language models not only generate responses that are useful to users but also adhere to ethical and societal norms, namely helpful and harmless responses [1]. However, they face a fundamental challenge in aligning with human preferences and values, hindering their full potential.
Traditional alignment methods, namely Reinforcement Learning from Human Feedback (RLHF) [2, 3], involve supervised fine-tuning (SFT), reward model (RM) training, and policy model training. This complex pipeline lacks flexibility for continual learning (CL) of human preferences, hence existing work [1] often necessitates retraining models to adapt to dynamic preferences. Hence, there is a pressing need for research into continual alignment methods to address this limitation, enabling LLMs to better adhere to evolving human preferences and values while generating helpful responses.

In this paper, we propose an innovative approach to address these challenges by enhancing the utility of the Direct Preference Optimization (DPO) [4] algorithm, a non-reinforcement-learning and non-continual-learning method. DPO, rooted in rigorous reinforcement learning theory, offers promising advantages but suffers from three critical limitations:

1. DPO does not support evolving human preferences, which are common in real-world applications.
2. Instability during the initial training phase, characterized by a substantial gap between the implicit reward estimate (Section 3.1) and the ground-truth reward function.
3. A tendency to over-optimize [5] by increasing the generation probability of positive samples while decreasing that of negative samples, leading to _text degeneration_ [6] in responses.

To overcome these limitations, we introduce improvements to DPO's theoretical optimization objectives. Our approach, as illustrated in Figure 1(a), involves employing Monte Carlo estimation to derive a sequence of optimal policies (\(\pi_{1}^{*}\rightarrow\pi_{2}^{*}\rightarrow\pi_{3}^{*}\)) in tasks with continuously changing human preferences. This estimation is then incorporated into the probability distribution for positive and negative examples of the optimal policies based on real data.
We directly fit this probability distribution, and retain a small amount of old-task data while recording its distribution under the optimal policy. Meanwhile, when learning new tasks, we employ a function regularization [7] strategy to maintain performance on previous tasks. To the best of our knowledge, we are the first to study the CL of alignment methods. For fair evaluation, we construct the first benchmark for continual learning of human preferences based on different human preference data (Section 4.3), including the Helpful and Harmless preference (HH) [1] data, the Stanford Human Preferences (SHP) [8] data, the Reddit TL;DR summary [2] human preference data provided by CarperAI2, and the _enhanced_ IMDB [9] Sentiment Text Generation benchmark released by RL4LMs [10].

Footnote 2: For each Reddit post in the dataset, multiple summaries are generated using various models. These models include pre-trained ones used as zero-shot summary generators, as well as supervised fine-tuned models (12B, 6B, and 1.3B) specifically trained on the Reddit TL;DR dataset. Additionally, the human-written TL;DR (reference) is considered as a sample for comparison. **URL**: [https://huggingface.co/datasets/CarperAI/openai_summarize_comparisons](https://huggingface.co/datasets/CarperAI/openai_summarize_comparisons)

In summary, our work presents a novel approach, termed "COPF," which leverages improvements to DPO's theoretical foundations. By doing so, we tackle the challenges associated with learning human preferences in a continual learning scenario, ultimately achieving SOTA performance.

## 2 Preliminaries

### Static Alignment

**Reinforcement Learning from Human Feedback**. The recent RLHF pipeline consists of three phases: 1) supervised fine-tuning (SFT); 2) preference sampling and reward learning; and 3) reinforcement-learning optimization. In the SFT phase, the language model is fine-tuned with supervised learning (maximum likelihood) on the downstream tasks.
In the reward learning phase, human annotators rank multiple answers {\(y_{1}\), \(y_{2}\),..., \(y_{n}\)} for a prompt \(x\) based on human preferences, generating human feedback data. Then, this feedback data is used to train a reward model, which assigns higher scores to pairs consisting of prompts and answers that are preferred by humans.

Figure 1: **(a)** The framework of COPF. The optimal policy \(\pi_{t}^{*}\) (\(t=1,2,3\)) is derived from the policy \(\pi_{t-1}\). The optimal policy \(\pi_{t}^{*}\) is utilized as the fitting objective of \(\pi_{t}\) and the regularization term of \(\pi_{t+1}\). **(b)** A state-of-the-art and elaborated taxonomy [7] of representative continual learning methods. Bold indicates the category to which our method belongs.

In the RL fine-tuning phase, the mainstream methods maximize a KL-constrained reward objective:

\[\max_{\pi_{\theta}}\mathbb{E}_{x\sim\mathcal{D},y\sim\pi_{\theta}(y|x)}\big{[}r_{\phi}(x,y)\big{]}-\beta\mathbb{D}_{\text{KL}}\big{[}\pi_{\theta}(y\mid x)\mid\mid\pi_{ref}(y\mid x)\big{]} \tag{1}\]

where \(\beta\) is a parameter controlling the deviation from the base reference policy \(\pi_{ref}\), namely the initial SFT model \(\pi_{sft}\). Because language generation operates discretely, this objective lacks differentiability and is generally optimized using reinforcement learning techniques. The recent approaches [1, 2, 3] reconstruct the reward function \(r(x,y)=r_{\phi}(x,y)-\beta(\log\pi_{\theta}(y\mid x)-\log\pi_{ref}(y\mid x))\) and maximize it using PPO [11].

**Direct Preference Optimization**. Previous work DPO [4] proposes a direct optimization objective, which requires no reward learning and no reinforcement-learning optimization.
The objective of DPO is based on the optimal solution to the KL-constrained reward maximization objective:

\[\pi_{r}(y\mid x)=\frac{1}{Z(x)}\pi_{ref}(y\mid x)\exp\bigg{(}\frac{1}{\beta}r(x,y)\bigg{)} \tag{2}\]

where \(Z(x)=\sum_{y}\pi_{ref}(y\mid x)\exp\Big{(}\frac{1}{\beta}r(x,y)\Big{)}\) is the partition function. In the RLHF scenario, \(x\) represents the prompt, and \(y\) represents a potential response. Specifically, we first take the logarithm of both sides of Eq. 2, and then with some algebra we obtain:

\[r(x,y)=\beta\log\frac{\pi_{r}(y\mid x)}{\pi_{ref}(y\mid x)}+\beta\log Z(x). \tag{3}\]

When collecting human feedback data, it is common to generate two or multiple answers for a single prompt (which can come from different models or even humans) and then rank the answers based on human preferences. Based on Eq. 2 and the Bradley-Terry (BT) model, DPO proposes a direct policy objective:

\[\mathcal{L}_{\text{DPO}}(\pi_{\theta};\pi_{ref})=-\mathbb{E}_{(x,y_{w},y_{l})\sim\mathcal{D}}\bigg{[}\log\sigma\bigg{(}\beta\log\frac{\pi_{\theta}(y_{w}\mid x)}{\pi_{ref}(y_{w}\mid x)}-\beta\log\frac{\pi_{\theta}(y_{l}\mid x)}{\pi_{ref}(y_{l}\mid x)}\bigg{)}\bigg{]} \tag{4}\]

### Continual Alignment

In the static alignment setting, the dataset typically consists of a fixed, static set of examples that are collected and labeled for a specific task; the dataset remains constant throughout the training process. In the continual alignment scenario, the human preference dataset evolves over time, often consisting of a sequence of tasks or domains. Each task or domain may have its own set of data, and these tasks are presented to the model sequentially. The order in which tasks are presented can vary, and the model needs to adapt to new tasks without forgetting previously learned ones.
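Per example, the DPO objective of Eq. 4 reduces to a simple function of four log-probabilities. A plain-Python sketch (the helper name and the sample values are assumptions, not the authors' implementation):

```python
import math


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-example DPO loss (Eq. 4): -log sigma of the scaled implicit-reward margin."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(sigmoid(margin))
```

When the policy equals the reference model the margin is zero and the loss is log 2; the loss falls as the policy widens the implicit-reward margin between the chosen and rejected responses, which is exactly the over-optimization pressure discussed in Section 3.1.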
In this paper, we consider that there is a sequence of tasks \(\mathbb{T}=\{\mathcal{T}_{1},\mathcal{T}_{2},...\}\) to learn, and a sequence of corresponding human preference datasets \(\mathbb{D}=\{\mathcal{D}_{1},\mathcal{D}_{2},...\}\). The initial policy is the SFT model, namely, \(\pi_{0}=\pi_{SFT}\). For each task \(\mathcal{T}_{t}\) (\(t=1,2,...\)), the policy \(\pi_{t}\) is initialized by \(\pi_{t-1}\), and there is a latent scoring function (i.e., the reward model) \(r_{t}(x,y)\) that can be learned from \(\mathcal{D}_{t}\). A naive choice is \(r_{t}(x,y_{w})=1\) and \(r_{t}(x,y_{l})=0\). To mitigate forgetting, we maintain a replay memory buffer \(\mathbb{R}=\{\mathcal{R}_{1},\mathcal{R}_{2},...\}\), where \(\mathcal{R}_{i}\subset\mathcal{D}_{i}\) (\(i=1,2,...,t\)) stores training data from 1% of historical tasks. When learning new tasks, the data in the replay memory is merged with the training data of the new task.

## 3 Method

### Motivation of the Method

The DPO method derives its maximum likelihood optimization solution from the theory of reinforcement learning, and its theoretical foundation is rigorous. However, the optimization process of DPO has two flaws.

* **Suboptimal learning results**: The gradient of the loss function \(\mathcal{L}_{\text{DPO}}\) with respect to the parameters \(\theta\) can be written as: \[\nabla_{\theta}\mathcal{L}_{\text{DPO}}(\pi_{\theta};\pi_{ref})=\\ -\beta\mathbb{E}_{(x,y_{w},y_{l})\sim\mathcal{D}}\bigg{[}\underbrace{\sigma(\hat{r}_{\theta}(x,y_{l})-\hat{r}_{\theta}(x,y_{w}))}_{\text{higher weight when reward estimate is wrong}}\bigg{[}\underbrace{\nabla_{\theta}\log\pi(y_{w}\mid x)}_{\text{increase likelihood of }y_{w}}-\underbrace{\nabla_{\theta}\log\pi(y_{l}\mid x)}_{\text{decrease likelihood of }y_{l}}\bigg{]}\bigg{]},\] where \(\hat{r}_{\theta}(x,y)=\beta\log\frac{\pi_{\theta}(y|x)}{\pi_{ref}(y|x)}\) is the reward implicitly defined by the language model \(\pi_{\theta}\) and reference model \(\pi_{ref}\).
In the DPO paper, the authors claim that the weight term \(\sigma(\hat{r}_{\theta}(x,y_{l})-\hat{r}_{\theta}(x,y_{w}))\) represents how much higher the implicit reward model rates the dispreferred completions. Comparing \(\hat{r}_{\theta}(x,y)\) with the \(r(x,y)\) in Eq. 3, the term \(\pi_{\theta}(y\mid x)\) has a significant gap from the \(\pi_{r}(y\mid x)\) in the true reward function at the beginning of optimization. Hence, the implicit reward model still has a gap with the true reward model during the learning process, which may lead to suboptimal learning results.

* **The risk of over-optimization**: In the learning process of DPO, \(\log\pi_{\theta}(y_{w}|x)\) is increased and \(\log\pi_{\theta}(y_{l}|x)\) is decreased. Hence, the margin term \(\sigma(\hat{r}_{\theta}(x,y_{w})-\hat{r}_{\theta}(x,y_{l}))=\sigma(\beta\log\pi_{\theta}(y_{w}|x)-\beta\log\pi_{\theta}(y_{l}|x)+\beta\log\pi_{ref}(y_{l}|x)-\beta\log\pi_{ref}(y_{w}|x))\) is also increasing, which exacerbates the widening gap between \(\log\pi_{\theta}(y_{w}|x)\) and \(\log\pi_{\theta}(y_{l}|x)\) and leads to _over-optimization_ [5] and _text degeneration_ [6]. Although it is possible to control the increase in the gap by the _max-margin strategy_ [12], it introduces a new hyper-parameter to determine the maximal margin.

### Continual Optimal Policy Fitting

Previous works [4, 13] prove that the optimal solution to the KL-constrained reward maximization objective takes the form of Eq. 2. Based on this, we conclude that the optimal policy of task \(\mathcal{T}_{t}\) is

\[\pi_{t}^{*}(y|x)=\frac{1}{Z_{t}(x)}\pi_{t-1}(y|x)exp(\frac{1}{\beta}r_{t}(x,y)) \tag{5}\]

where \(Z_{t}(x)=\Sigma_{y}\pi_{t-1}(y|x)exp(\frac{1}{\beta}r_{t}(x,y))\) is the partition function, \(x\in\mathcal{D}_{t}\) denotes the prompt, and \(y\in\mathcal{Y}\) denotes a possible response.
For the estimation of \(\pi_{t}^{*}(y|x)\), we suppose that there are \(J_{x}\) responses for each prompt \(x\), and the partial order annotated by humans is \(y_{1}^{x}\prec y_{2}^{x}\prec...\prec y_{J_{x}}^{x}\).

**Step-1: Construct the reward function**. We introduce two methods to determine the reward function.

**Linear reward**: We simulate training the reward model using a pairwise loss function \(\mathcal{L}_{ranking}=-\log(\sigma(r_{\theta}(x,y_{w})-r_{\theta}(x,y_{l})))\), where \(y_{w}\) and \(y_{l}\) represent the human-chosen and human-rejected response, respectively. We found that the reward scores are approximately linearly related to human preferences.3 We provide theoretical derivations in the Appendix, Section A.1. Previous works [14, 15, 16, 17] present the reward as a linear combination of pre-trained features or hand-crafted features. Recent work [18] achieves enhanced performance by modeling a linear reward function according to _regret_, which is the negated sum of an optimal policy's advantage in the segment. Inspired by this, we propose a linear reward according to human preference

\[r_{t}(x,y_{j}^{x})=Adv(x,y_{j}^{x})+\delta(x) \tag{6}\]

for approximating the well-trained reward function, where \(j=1,2,...,J_{x}\) represents the degree of human preference, the function \(\delta(x)\) depends solely on the prompt \(x\), and \(Adv(x,y_{j}^{x})\triangleq\frac{2j-J_{x}-1}{J_{x}}\in(-1,1)\) denotes the advantage score.

Footnote 3: As the training steps increase, the scale of the reward rises and the score function tends toward a non-linear style, because the sigmoid function has an approximately linear region. When the reward values are within this region, the gradients also increase approximately linearly with the degree of preference. When the reward values reach the saturation region of the sigmoid, the gradients no longer increase approximately linearly with the degree of preference.
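For instance, with \(J_{x}=4\) ranked responses, the linear advantage term of Eq. 6 assigns evenly spaced, zero-mean scores (a quick numeric check; the helper is illustrative, not from the paper):

```python
def linear_advantage(J):
    """Adv(x, y_j) = (2j - J - 1) / J for j = 1..J (the advantage term of Eq. 6)."""
    return [(2 * j - J - 1) / J for j in range(1, J + 1)]


print(linear_advantage(4))  # [-0.75, -0.25, 0.25, 0.75]: symmetric around 0, inside (-1, 1)
```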
For a given prompt \(x\), if the question is easy to answer, the average reward score should be positive (\(\delta(x)>0\)); if the question is hard, then the average reward score should be negative (\(\delta(x)<0\)).

**Gaussian reward**: The Gaussian distribution is also observed in well-trained reward scores [19]. Inspired by this observation, we attempt to model the advantage score, i.e., the extra reward one response can obtain compared with the expected reward, by a normal distribution, regulating the advantage score distribution dynamically during training so that the variances and means are maintained within a reasonable range. We split the reward \(r(x,y)\) into the advantage score \(Adv(x,y)\) and the expected reward \(E_{y\sim\pi(\cdot\mid x)}r(x,y)\), where \(Adv(x,y)\sim N(0,\sigma)\) and \(\sigma\) is a hyperparameter. Given a prompt \(x\) and a partially-ordered set of responses \(\mathcal{Y}^{x}=\{y_{1}^{x}\prec y_{2}^{x}\prec...\prec y_{J_{x}}^{x}\}\), we sample \(J_{x}\) values \(\{R_{1}<R_{2}<...<R_{J_{x}}\}\) from the distribution \(N(0,\sigma)\) and define \(Adv(x,y_{j}^{x})\triangleq R_{j}\). Overall, the reward function can be written as

\[r_{t}(x,y_{j}^{x})=Adv(x,y_{j}^{x})+\delta(x) \tag{7}\]

for approximating the well-trained reward function, where \(j=1,2,...,J_{x}\) represents the degree of human preference, and the expectation \(\delta(x)=E_{y\sim\pi(\cdot|x)}r(x,y)\) depends solely on the prompt \(x\). In Step-2, we theoretically prove that there is no need to calculate specific values for \(\delta(x)\) and the partition function \(Z(x)\).

Figure 2: The score distribution of pairwise reward learning.

**Step-2: Calculate the distribution of the sampling space \(P_{y\in\mathcal{Y}^{x},t}^{*}(y|x)\)**.
We sample \(y\) multiple times to obtain the partially-ordered set \(\mathcal{Y}^{x}=\{y_{1}^{x}\prec y_{2}^{x}\prec...\prec y_{J_{x}}^{x}\}\). Since \(\pi_{t}^{*}(y|x)\) depends on the partition function \(Z_{t}(x)\), which is hard to estimate, we calculate the re-normalized distribution \(P_{y\in\mathcal{Y}^{x},t}^{*}(y|x)\):

\[\begin{split} P_{y\in\mathcal{Y}^{x},t}^{*}(y|x)&\triangleq\frac{\pi_{t}^{*}(y|x)}{\Sigma_{y^{{}^{\prime}}\in\mathcal{Y}^{x}}\pi_{t}^{*}(y^{{}^{\prime}}|x)}\\ &=\frac{\frac{1}{Z_{t}(x)}\cdot\pi_{t-1}(y|x)\cdot exp(\frac{1}{\beta}Adv(x,y))\cdot exp(\frac{1}{\beta}\delta(x))}{\Sigma_{y^{{}^{\prime}}\in\mathcal{Y}^{x}}\frac{1}{Z_{t}(x)}\cdot\pi_{t-1}(y^{{}^{\prime}}|x)\cdot exp(\frac{1}{\beta}Adv(x,y^{{}^{\prime}}))\cdot exp(\frac{1}{\beta}\delta(x))}\\ &=\frac{\pi_{t-1}(y|x)exp(\frac{1}{\beta}Adv(x,y))}{\Sigma_{y^{{}^{\prime}}\in\mathcal{Y}^{x}}\pi_{t-1}(y^{{}^{\prime}}|x)exp(\frac{1}{\beta}Adv(x,y^{{}^{\prime}}))}\end{split} \tag{8}\]

since both \(\frac{1}{Z_{t}(x)}\) and \(exp(\frac{1}{\beta}\delta(x))\) cancel between the numerator and the denominator.

**Step-3: Fit the distribution \(P_{y\in\mathcal{Y}^{x},t}^{*}(y|x)\)**.
Next, we directly fit the re-normalized probability \(P_{y\in\mathcal{Y}^{x},t}^{*}(y|x)\) by minimizing the KL loss of task \(\mathcal{T}_{t}\):

\[L_{t}^{fit}(\theta_{t})=\mathbb{E}_{x\sim\mathcal{D}_{t}}\Sigma_{y\in\mathcal{Y}^{x}}D_{KL}(P_{y\in\mathcal{Y}^{x},t}(y|x,\theta_{t}),P_{y\in\mathcal{Y}^{x},t}^{*}(y|x)) \tag{9}\]

where \(\theta_{t}\) denotes the parameters of the policy model \(\pi_{t}(y|x)\) at task \(\mathcal{T}_{t}\) (\(t=1,2,...\)), and \(P_{y\in\mathcal{Y}^{x},t}(y|x,\theta_{t})\) denotes the re-normalized probability of the current model:

\[P_{y\in\mathcal{Y}^{x},t}(y|x,\theta_{t})\triangleq\frac{\pi_{t}(y|x)}{\Sigma_{y^{{}^{\prime}}\in\mathcal{Y}^{x}}\pi_{t}(y^{{}^{\prime}}|x)} \tag{10}\]

Steps 2-3 can be implemented in a few lines of code in the PyTorch environment.

**Step-4: Function Regularization of \(\pi_{t}\)**. To preserve the old knowledge, we calculate a regularization loss to ensure that the new and old optimal policies do not differ too significantly in terms of the distribution of old human preferences. In detail, for each replay sample \(x\in\mathcal{R}_{i}\) (\(i=1,2,...,t-1\)), the new policy \(\pi_{t}\) is regularized to not differ significantly from the optimal policy \(\pi_{i}^{*}\). Hence, the regularization loss is

\[L_{t}^{reg}(\theta_{t})=\mathbb{E}_{x\sim\cup_{i=1}^{i=t-1}\mathcal{R}_{i}}\Sigma_{i=1}^{t-1}\mathbf{I_{\mathcal{R}_{i}}}(x)\cdot\Sigma_{y\in\mathcal{Y}^{x}}D_{KL}(P_{y\in\mathcal{Y}^{x},t}(y|x,\theta_{t}),P_{y\in\mathcal{Y}^{x},i}^{*}(y|x)) \tag{11}\]

where \(\{\mathbf{I_{\mathcal{R}_{i}}}(x)\}_{i=1}^{i=t-1}\) is the set of indicator functions, namely, the task identifier. The final training loss of task \(\mathcal{T}_{t}\) is:

\[L_{t}^{train}(\theta_{t})=\left\{\begin{array}{ll}L_{t}^{fit}(\theta_{t})&x\in\mathcal{D}_{t}\\ L_{t}^{reg}(\theta_{t})&x\in\cup_{i=1}^{i=t-1}\mathcal{R}_{i}\end{array}\right.
\tag{12}\] ### Continual Learning on Unlabeled Data In the above steps of COPF, we learn a policy based on the human preference dataset, where the reward score is determined by the preference order annotated by humans. We name this mode the _hard reward score_ mode. Inspired by the learned reward model in the RLHF pipeline, we introduce a reward value head, named the _linear reward score_ mode, to learn human preference scoring on the labeled data while fitting the optimal policy. We find that the reward head does not need to be an independently trained model, as in the RLHF pipeline. In detail, we introduce a value head \(\mathcal{V}\) and utilize the pairwise ranking loss \(\mathcal{L}_{ranking}=-\log(\sigma(r_{\theta}(x,y_{w})-r_{\theta}(x,y_{l})))\) to learn an RM score, where \(y_{w}\) and \(y_{l}\) represent the human-chosen and human-rejected responses, respectively. The final training loss is the sum of \(L_{t}^{train}(\theta_{t})\) and \(\mathcal{L}_{ranking}\). After learning on the labeled data, the reward value head can be used to score the unlabeled data and to rank unlabeled responses (Section 4.5). Based on this ranking, we can apply the hard mode of the COPF method. ### Comparison with other methods We compare COPF with current alignment methods in Table 1. **Comparison with DPO:** 1. DPO uses the log ratio as a reward value, while COPF uses a custom reward (linear or Gaussian). 2. DPO employs pairwise training, while COPF uses listwise training. 3. DPO maximizes the gap between wins and losses, while COPF fits the distribution of the optimal policy on the sampled dataset. **Comparison with PRO:** 1. PRO enhances the probability of top-ranked samples occupying all sampled instances, while COPF fits the optimal policy distribution. 2. 
PRO is optimized based on intuitive reasoning and does not use a reference model, while COPF derives its optimization objective from reinforcement-learning theory and requires a reference model. ## 4 Experiments ### Datasets **Stanford Human Preferences (SHP) Dataset**[8]: SHP is a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice. The preferences are meant to reflect the helpfulness of one response over another and are intended to be used for training RLHF reward models and NLG evaluation models (e.g., SteamSHP). **Helpful and Harmless (HH) [1]**: The HH-RLHF dataset is collected via two separate datasets using slightly different versions of the user interface. The helpfulness dataset is collected by asking crowdworkers to have open-ended conversations with models, asking for help or advice, or for the model to accomplish a task, and to choose the more helpful model response. The harmlessness or red-teaming dataset is collected by asking crowdworkers to attempt to elicit harmful responses from the models and to choose the more harmful response offered by the models. **Reddit TL;DR**: For each Reddit post in the Reddit TL;DR [25] dataset, multiple summaries are generated using various models. These models include pre-trained ones used as zero-shot summary generators, as well as supervised fine-tuned models (12B, 6B, and 1.3B) specifically trained on the Reddit TL;DR dataset. Additionally, the human-written TL;DR (reference) is considered as a sample for comparison. 
\begin{table} \begin{tabular}{l l l l l l l l l l l} \hline \hline \multirow{2}{*}{**Method**} & **RL or** & **Online or** & **Pairwise or** & **Token-wise or** & **Invariance** & **Reward** & **Reference** & **Critic** & **Num of** & **Continual** \\ & **Non-RL** & **Offline** & **Listwise** & **Trajectory-wise** & **[20]** & **Model** & **Model** & **Model** & **Models** & **Learning** \\ \hline **PPO**[11] & RL & online & pairwise & token-wise & no & yes & yes & yes & 4 & no \\ **NLPO**[10] & RL & online & pairwise & token-wise & no & yes & yes & yes & 4 & no \\ **P3O**[20] & RL & online & pairwise & trajectory-wise & yes & yes & yes & no & 3 & no \\ **PRO**[21] & non-RL & offline & listwise & trajectory-wise & - & no & no & no & 1 & no \\ **DPO**[4] & non-RL & offline & pairwise & trajectory-wise & - & no & yes & no & 2 & no \\ **RAFT**[22] & non-RL & both & listwise & trajectory-wise & yes & yes & no & no & 2 & no \\ **RRHF**[23] & non-RL & offline & listwise & trajectory-wise & yes & yes & no & no & 2 & no \\ **CoH**[24] & non-RL & offline & pairwise & trajectory-wise & - & no & no & no & 1 & no \\ \hline **COPF** & non-RL & both & listwise & trajectory-wise & - & no & yes & no & 2 & yes \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of COPF with other alignment methods. **IMDB**: We consider the IMDB dataset for the task of generating text with positive sentiment. The IMDB text continuation task aims to positively complete the movie review when given a partial review as a prompt. In this task, a trained sentiment classifier DistilBERT [26] is provided as a reward function to train the RL agents and evaluate their task performance. The naturalness of the trained model is evaluated with a perplexity score. The dataset consists of 25k training, 5k validation, and 5k test examples of movie review text with sentiment labels of positive and negative. 
The input to the model is a partial movie review text (up to 64 tokens) that needs to be completed (generating 48 tokens) by the model with a positive sentiment while retaining fluency. For RL methods, we use a sentiment classifier that is trained on pairs of text and labels as a reward model which provides sentiment scores indicating how positive a given piece of text is. ### Baselines **Supervised fine-tuning (SFT)** directly learns the human-labeled summary through the cross-entropy loss. We combine SFT with classic continual learning methods. * **SFT-Online L2Reg** penalizes the updating of model parameters through an L2 loss \(L_{2}^{t}(\theta)=\sum_{i}(\theta_{t}^{i}-\theta_{t-1}^{i})^{2}\). This regularization term mitigates the forgetting issue by applying a penalty for every parameter change. * **SFT-EWC**[27] uses fisher information to measure the parameter importance to old tasks, then slows down the update of the important parameters by L2 regularization. * **SFT-MAS**[28] computes the importance of the parameters of a neural network in an unsupervised and online manner to restrict the updating of parameters in the next task. * **SFT-AGM**[29] is an improved version of GEM [30], which enjoys better performance than GEM, while being almost as computationally and memory efficient as EWC and other regularization based methods. * **SFT-LwF**[31] is a knowledge-distillation-based method, which computes a smoothed version of the current responses for the new examples at the beginning of each task, minimizing their drift during training. * **SFT-TFCL**[32] proposes to timely update the importance weights of the parameter regularization by detecting plateaus in the loss surface. * **SFT-DER++**[33] addresses the General Continual Learning (GCL) problem by mixing rehearsal with knowledge distillation and regularization, in which the logits and ground truth labels of part of old data are saved into the memory buffer for replaying. 
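As a minimal, framework-agnostic sketch (function and variable names are ours, not from the paper), the online L2 penalty \(L_{2}^{t}(\theta)=\sum_{i}(\theta_{t}^{i}-\theta_{t-1}^{i})^{2}\) used by SFT-Online L2Reg can be written as:

```python
def online_l2_reg(theta_t, theta_prev):
    """L2 penalty on parameter drift between consecutive tasks:
    sum_i (theta_t[i] - theta_prev[i])**2."""
    return sum((a - b) ** 2 for a, b in zip(theta_t, theta_prev))

# Unchanged parameters incur no penalty; each changed parameter
# contributes its squared displacement from the previous task.
penalty = online_l2_reg([0.5, 2.0, -1.0], [0.5, 1.0, -1.0])  # (2.0 - 1.0)**2 = 1.0
```

In practice this term is added to the task loss with a trade-off weight; EWC replaces the uniform penalty with per-parameter Fisher-information weights.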
Recent alignment methods do not support continual learning; we augment them with continual-learning techniques. **Ranking-based Approaches**[20, 21, 22, 23] rank human preferences over a set of responses and directly incorporate the ranking information into the LLMs fine-tuning stage. * **DPO\({}^{C}\)[4]** is an offline approach that can directly align LM with human preference data, drawing from the closed-form solution of the Contextual Bandit with KL control problem. * **PRO\({}^{C}\)[21]** learns preference ranking data by initiating with the first preferred response, deems subsequent responses as negatives, then dismisses the current response in favor of the next. * **RRHF\({}^{C}\)[23]** aligns with human preference by a list rank loss and finds that the SFT training objective is more effective and efficient than KL-divergence in preventing LLMs from over-fitting. **Language-based Approach** directly uses natural language to inject human preference via SFT. * **CoH\({}^{C}\)[24]** directly incorporates human preference as a pair of parallel responses discriminated as low-quality or high-quality using natural language prefixes. CoH only applies the fine-tuning loss to the actual model outputs, rather than the human feedback sequence and the instructions. During inference, CoH directly puts positive feedback (e.g., good) after the input instructions to encourage the LLMs to produce high-quality outputs. ### Tasks and Evaluation Metrics **Task Incremental Learning (TIL)** setting: The policy is required to learn continuously across three distinct tasks: the QA task on the HH-RLHF dataset, the summary task on the Reddit TL;DR dataset, and the positive film review generation task on the IMDB dataset. **Evaluation metrics for TIL**: As shown in Table 2, we utilize 3 preference metrics and 3 naturalness metrics to evaluate the performance of the model. 
In detail, we employ the _SteamSHP-flan-t5-xl model_[8], developed by Stanford, as the preference model (PM) for assessing responses to HH-RLHF prompts. Additionally, we utilize the 6.7B _gpt-j_ reward model 4, released by Carper-AI, to evaluate summaries of Reddit posts. Furthermore, we gauge the positivity of generated film reviews by assessing them using the _distilbert-imdb_ model [26]. Footnote 4: URL: [https://huggingface.co/CarperAI/openai_summarize_tldr_rm_checkpoint](https://huggingface.co/CarperAI/openai_summarize_tldr_rm_checkpoint) **Domain Incremental Learning (DIL)** setting: The policy is required to continuously learn from three segments of the **SHP** dataset. The SHP dataset comprises 18 domains, which we have divided into three parts. To elaborate, we have trained 18 preference models (PM), each corresponding to one of the 18 domains. Subsequently, we assess the performance of each PM across all 18 domains. We record the _performance decline_ observed when a PM trained on one domain is evaluated on the others. Finally, we partition the 18 domains into three groups based on the highest observed performance decline. Further details can be found in the Appendix Section B. **Evaluation metrics for DIL**: We employ the _SteamSHP-flan-t5-xl model_[8], developed by Stanford, as the preference model (PM) for assessing responses to SHP prompts. The _SteamSHP-flan-t5-xl model_ is trained on the combination of the SHP (all 18 domains) and the HH-RLHF human preference data. **Evaluation Metric for Continual Learning** **Overall performance** is typically evaluated by _average accuracy_ (AA) [30, 34] and _average incremental accuracy_ (AIA) [35, 36]. In our evaluation setting, _the accuracy is replaced by the Preference Metric_ (0-1). Let \(a_{k,j}\in[0,1]\) denote the Preference Score evaluated on the test set of the \(j\)-th task after incremental learning of the \(k\)-th task (\(j\leq k\)). 
The two metrics at the \(k\)-th task are then defined as \[\mathrm{AA}_{k}=\frac{1}{k}\sum_{j=1}^{k}a_{k,j}, \tag{13}\] \[\mathrm{AIA}_{k}=\frac{1}{k}\sum_{i=1}^{k}\mathrm{AA}_{i}, \tag{14}\] where AA represents the overall performance at the current moment and AIA further reflects the historical variation. **Memory stability** can be evaluated by _forgetting measure_ (FM) [34] and _backward transfer_ (BWT) [30]. As for the former, the forgetting of a task is calculated by the difference between its maximum performance obtained in the past and its current performance: \[f_{j,k}=\max_{i\in\{1,\dots,k-1\}}(a_{i,j}-a_{k,j}),\forall j<k. \tag{15}\] FM at the \(k\)-th task is the average forgetting of all old tasks: \[\mathrm{FM}_{k}=\frac{1}{k-1}\sum_{j=1}^{k-1}f_{j,k}. \tag{16}\] As for the latter, BWT evaluates the average influence of learning the \(k\)-th task on all old tasks: \[\mathrm{BWT}_{k}=\frac{1}{k-1}\sum_{j=1}^{k-1}(a_{k,j}-a_{j,j}), \tag{17}\] where the forgetting is usually reflected as a negative BWT. \begin{table} \begin{tabular}{l l l l l l} \hline \hline **Dataset** & **Task** & **Input** & **Output** & **Preference** & **Naturalness** \\ & & & & **Metric** & **Metric(s)** \\ \hline **IMDB [9]** & Text Continuation & \begin{tabular}{l} Partial Movie \\ Review \\ \end{tabular} & \begin{tabular}{l} A positive completion \\ of the movie review. \\ \end{tabular} & \begin{tabular}{l} 70M sentiment \\ classifier **DistilBERT** \\ \end{tabular} & \begin{tabular}{l} RougeL, \\ BLEU-4, \\ METEOR \\ \end{tabular} \\ \hline **Reddit TL;DR [2]** & Summarization & Reddit POST & Summarized POST & \begin{tabular}{l} 6.7B **GPT-J** \\ model by Carper-AI \\ \end{tabular} & \begin{tabular}{l} RougeL, \\ BLEU-4, \\ METEOR \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 2: Various tasks, input and output types, and the metrics used in the TIL settings.
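As an illustrative sketch (the helper functions and the toy score matrix are ours, with scores stored in a dict keyed by \((k,j)\)), the four continual-learning metrics above can be computed as:

```python
def aa(a, k):
    # Eq. (13): average preference score over tasks 1..k after learning task k
    return sum(a[k, j] for j in range(1, k + 1)) / k

def aia(a, k):
    # Eq. (14): running mean of AA over the first k tasks
    return sum(aa(a, i) for i in range(1, k + 1)) / k

def fm(a, k):
    # Eqs. (15)-(16): average drop from each old task's best past score
    # (a[i, j] only exists for i >= j, so the max runs over i = j..k-1)
    return sum(max(a[i, j] for i in range(j, k)) - a[k, j]
               for j in range(1, k)) / (k - 1)

def bwt(a, k):
    # Eq. (17): average change on old tasks relative to when first learned
    return sum(a[k, j] - a[j, j] for j in range(1, k)) / (k - 1)

# a[k, j]: preference score on task j after learning task k (j <= k)
a = {(1, 1): 0.80,
     (2, 1): 0.70, (2, 2): 0.90,
     (3, 1): 0.60, (3, 2): 0.85, (3, 3): 0.90}
```

With these toy scores, \(\mathrm{FM}_{3}=((0.80-0.60)+(0.90-0.85))/2=0.125\) and \(\mathrm{BWT}_{3}=-0.125\), illustrating how forgetting shows up as a negative BWT.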
2301.12862
Soft Cap for Eversion Robots
Growing robots based on the eversion principle are known for their ability to extend rapidly, from within, along their longitudinal axis, and, in doing so, reach deep into hitherto inaccessible, remote spaces. Despite many advantages, eversion robots also present significant challenges, one of which is maintaining sensory payload at the tip without restricting the eversion process. A variety of tip mechanisms has been proposed by the robotics community, among them rounded caps of relatively complex construction that are not always compatible with functional hardware, such as sensors or navigation pouches, integrated with the main eversion structure. Moreover, many tip designs incorporate rigid materials, reducing the robot's flexibility and consequent ability to navigate through narrow openings. Here, we address these shortcomings and propose a design to overcome them: a soft, entirely fabric based, cylindrical cap that can easily be slipped onto the tip of eversion robots. Having created a series of caps of different sizes and materials, an experimental study was conducted to evaluate our new design in terms of four key aspects: eversion robot made from multiple layers of everting material, solid objects protruding from the eversion robot, squeezability, and navigability. In all scenarios, we can show that our soft, flexible cap is robust in its ability to maintain its position and is capable of transporting payloads such as a camera across long distances.
Cem Suulker, Sophie Skach, Danyaal Kaleel, Taqi Abrar, Zain Murtaza, Dilara Suulker, Kaspar Althoefer
2023-01-30T13:16:07Z
http://arxiv.org/abs/2301.12862v2
# Soft Cap for Eversion Robots ###### Abstract Growing robots based on the eversion principle are known for their ability to extend rapidly, from within, along their longitudinal axis, and, in doing so, reach deep into hitherto inaccessible, remote spaces. Despite many advantages, eversion robots also present significant challenges, one of which is maintaining sensory payload at the tip without restricting the eversion process. A variety of tip mechanisms has been proposed by the robotics community, among them rounded caps of relatively complex construction that are not always compatible with functional hardware, such as sensors or navigation pouches, integrated with the main eversion structure. Moreover, many tip designs incorporate rigid materials, reducing the robot's flexibility and consequent ability to navigate through narrow openings. Here, we address these shortcomings and propose a design to overcome them: a soft, entirely fabric based, cylindrical cap that can easily be slipped onto the tip of eversion robots. Having created a series of caps of different sizes and materials, an experimental study was conducted to evaluate our new design in terms of four key aspects: eversion robot made from multiple layers of everting material, solid objects protruding from the eversion robot, squeezability, and navigability. In all scenarios, we can show that our soft, flexible cap is robust in its ability to maintain its position and is capable of transporting payloads such as a camera across long distances. We also demonstrate that the robot's ability to move through restricted aperture openings and indeed its overall flexibility is virtually unhindered by the addition of our cap. The paper discusses the advantages of this design and gives further recommendations in relation to aspects of its engineering. 
## I Introduction There is growing interest from a range of industries (prime examples being nuclear, construction, telecommunications, search and rescue, archaeology, and medicine) in robots that are capable of penetrating hard-to-access spaces, and conducting remote inspection, maintenance and repair tasks. The term 'hard-to-access spaces' includes those that are physically constrained as well as those that present danger to humans, such as excessive nuclear radiation or the possibility of collapsing infrastructure. One of the key requirements for such tasks is the capability to travel some distance, along restricted channels, while carrying the tools with which to effect whatever tasks may be required. Small mobile robots have featured strongly in this context [1], though many have proven ineffective as they can get lost, are difficult to recover if broken down, cannot easily overcome obstacles and, when exposed to potentially hostile environments, can end up with damaged electronics and locomotion mechanisms [2]. An alternative route to overcoming the hard-to-access issue is provided by continuum robots. These snake-like robots, with a high length-to-diameter-ratio, can easily pass through small apertures and extend into the space behind [3]. Instilled with considerable navigational capabilities, they can penetrate complex three-dimensional environments all the while dealing with interfering obstacles [4]. The development of the eversion robot - a new kind of continuum robot also known as a vine robot - is significant [2, 5]. In contrast to earlier continuum robots, eversion robots grow from the tip. Likened to the way plants grow, the cylindrical-shaped eversion robot made from an airtight fabric or polyethylene skin ejects its inner structure at the tip using pneumatic pressure - this is best imagined by considering a jacket's sleeve, detached from the jacket itself, continuously unfolding, the inside lining becoming the outer skin [6, 7]. 
The achieved motion is frictionless longitudinal growth (see Figure 2) - a clear advantage in situations where the path is long or tortuous or the environment should be disturbed as little as possible [8]. They can extend their length hundreds of times with regards to their folded state without putting pressure onto their surrounding environment and can bend passively in accordance with the environment, conforming to the shape of their surroundings [9, 10, 11]. Fig. 1: Left, an inflated eversion robot. Right, novel soft cap slipped over tip of eversion robot. A tool, here a camera, is attached to the cap. Attaching a tool or sensor to the tip of an eversion robot is challenging, and has baffled scientists working in the area - how can one attach a payload to a robot whose tip is constantly evolving [15]? With this in mind, various cap designs have been suggested, though all suffer from severe shortcomings when compared to our design. Firstly, their constituent materials are rigid, limiting their ability to squeeze through narrow openings and their capacity to bend their structure. Secondly, the interlocking mechanisms linking the cap to the robot tip - whether achieved by rollers or magnets - are mechanically complex, prone to failure and incapable of handling skins with embedded pouches, electronics or other protruding elements. In this paper, we present a soft cap made from textile material that is able to carry a payload at the tip of an eversion robot. Our soft cap sits on the robot's tip like a beanie and remains there during eversion. Our research shows that a soft cap that is well integrated with the tip of the eversion robot, satisfying essential parameters such as the cap's diameter, conformability and material friction properties, will remain held in position by the sliding motion of the outer skin everting from the tip towards the robot's base. 
For example, by attaching a wireless camera to the proposed cap atop an eversion robot, it is possible to inspect environments that would otherwise be unreachable. It is noted that due to the nature of the cap, it does not compromise any of the desired qualities of an eversion robot, such as squeezability and navigability. ## II Current Cap Designs There are three main functionalities that are important in designing a cap mechanism for an eversion robot: they need to enable rather than hinder movement; they need to be able to remain fixed at the tip and they need to be able to carry a payload such as a sensor or tool. Some cap designs have mechanisms to control the length and rate of extension of the robot, while others simply move as the robot inflates. Cap mechanisms seen in the literature that hold the cap at the tip include rollers, zippers, leads, and magnets. Despite being a drawback for reasons that will become apparent in this section, all eversion robot caps presented in literature at the time of writing this paper are constructed from rigid materials. In this section we will discuss the different designs and outline their operational drawbacks. Figure 2 provides diagrammatic representations of four common types of caps and outlines their limitations in certain operating scenarios. The plastic outer cap [12], shown in Fig. 2 a), is attached to the robot's body by exploiting the friction between the body and the cap. This friction is usually unwanted as it can inhibit eversion motion, but in this case, is crucial, indeed the key to holding the cap in place. However, the rigid nature of the cap does hinder manoeuvrability of the tip, and the cap also has limited tolerance of increased friction given its non-elastic properties. This can happen due to changes in robot diameter - itself a consequence of influences from the immediate environment such as the impact of protruding objects - ultimately reducing the application areas of the cap. 
The outer cap with motorised lead attachment from cap to base [2], shown in Fig. 2 b), is kept at the tip of the robot through precise control of the length of the lead, which supplies power to the cap. This regulation of lead length requires a complex control mechanism. As in 2 a), it also uses a plastic cap, bringing with it the same inherent problems. When, for example, the robot is long, the cap mechanism can run out of storage space for the lead, and/or become too heavy, reducing the robot's ability to freely extend and manoeuvre. The magnetic cap with roller magnets [13, 14], shown in Fig. 2 c), consists of inner and outer parts that are magnetically attracted to each other by way of magnetic rollers that roll over the robot's body material and enable extension. Any imperfections on the robot's body, such as dirt from the environment or manufacturing imperfections, could cause the magnets to become misaligned or weaken the attraction between the inner and outer caps sections, potentially causing the cap to fall off. Any areas of thickness in the robot body, caused by additional layers (e.g., integrated navigation pouches) or attached objects (e.g., sensors), would also compromise magnetic attraction. Conversely, if the attraction force between the two magnets is too strong, the robot's body material may become trapped on account of the frictional force, rendering the robot immobile. The caps with roller mechanisms [15] (passive and active rollers), and [16] (passive rollers), shown in Fig. 2 d), are designs in which the robot body material is fed through rollers, and in which multiple internal and external parts need to work together. The rollers themselves can be passive, reacting purely to the motion of the robot, or actively controlled by a motor. The latter version is the only cap with moving parts that contribute to the process of moving the robot body material. 
One disadvantage of this system is that if any of the rollers become jammed or the motorised rollers stop working, the entire system stops working. Small changes in robot diameter, or the presence of protruding objects are among the potential causes of this kind of breakdown. Another is that the attachment of a sensor or other additional element that changes body thickness [17] is not possible. There is another issue here in relation to the motorised rollers. Although they assist in precise control, they can also create vibrations in the system, rendering this option unsuitable for use in fragile or delicate environments. Overall, the caps presented in the literature are well designed for specific operational situations and tasks and are usually used in controlled settings. The fact, however, that they are made from non-compliant, hard or more rigid materials, means that they compromise the performance of eversion robots, which need to be entirely soft to achieve compliance throughout the robot's structure. A hard cap effectively prevents the robot from being able to move through paths narrower than the cap itself. A cap design based on soft materials, and the consequential compliance it would offer eversion robots, would therefore represent a significant breakthrough, enabling these devices to be properly utilised in a range of environments, a wider variety of operational situations and for a larger number of specific tasks. ## III Soft Cap Design Here, we take on the challenge to develop such a soft cap and design around the shortcomings of its rigid counterparts, taking advantage of the given structural properties of an increasingly popular material for soft robotics: textiles. ### _Friction Design Objectives_ One of the biggest challenges in eversion robot cap design is securing the cap at the tip of the robot. Whether working with hard or soft caps, mounting something onto a robot that is constantly growing from its tip creates friction between the everting layers. 
The design we introduce here exploits frictional force between the cap and the eversion robot body as a route to keeping the cap in place without restricting its movement. With the help of elastic textiles [18, 19], the cap is slipped onto the tip of the everting robot, able to slightly adapt its diameter through its stretch character, while still encapsulating the body in a firm, yet flexible way - a frictional force enabled by inherent textile properties that we intend to exploit. More specifically, the typically non-elastic, sturdy and robust material forming the robot's body and everting from the tip of the eversion robot moves, at all times, to the robot's base pulling the sides of the cap with it and ensuring that the cap sits tightly on the tip during eversion (Fig. 3). This frictional force between the elastic fabric and the outer skin of the eversion robot body is essential for the cap to be held in place (Fig. 3). This frictional force is applied in an equally distributed, symmetrical way around the tip, helping to hold the cap's orientation. As this frictional force poses an advantage for a stable positioning of a cap, it also tends to inhibit speed of eversion - indeed if the force is too great (the cap too tight around the tip), it prevents motion entirely. A balance must therefore be maintained that allows eversion while ensuring the cap remains in place. This begs a number of questions such as how narrow the cap can be (with respect to the eversion robot's diameter), or whether a single cap can be used for different diameters of eversion robots? In this paper, we examine these issues. There are numerous parameters that can be changed to optimise the effectiveness of the cap - these are illustrated in Fig. 3 in which \(L\) is the length of effective contact surface - the region where the friction is created. In this region the cap and the eversion robot are in contact, creating friction that holds the cap in place. 
\(D\) and \(d\) are the diameters of the eversion robot and the (unmounted) cap respectively. \(\Delta D\) is the difference between the two diameters (Equation 1). The key parameter relating to the difference in diameter is %\(D\), which can be calculated using Equation 2 and indicates the percentage difference between the two diameters - i.e. %_D_= 10 means the diameter of the cap is 10% smaller than that of the eversion robot body. \[\Delta D=D-d \tag{1}\] \[\%D=\frac{\Delta D}{D}\times 100 \tag{2}\] Fig. 3: Illustration of exploiting friction between soft cap and downward moving eversion material; (left) an everting eversion robot with a cap mounted; (right) a soft cap in its non-mounted, non-stretched state. The friction force \(f\) between the cap and the eversion robot body holds the cap in place at the tip. \(L\) is the length of the effective friction section. \(D\) is the diameter of the eversion robot as well as the mounted, stretched cap. \(d\) is the diameter of the unmounted, unstretched cap. Fig. 2: Eversion robot cap designs in literature and their respective weaknesses in certain operational conditions. a) Plastic outer cap with zipper attachment [12] b) Plastic outer cap with motorised lead attachment from cap to base [2] c) Magnetic cap with roller magnets [13, 14] d) Cap with roller mechanisms [15, 16]. Existing eversion robots are diverse, though following the same design principle. Parameters that easily change are its diameter, material layers (e.g. through pouches), surface smoothness or unevenness (e.g. through protruding objects), and payloads (e.g. cameras, other sensors), all while retaining the option of passing through the narrowest openings to access target sites. Therefore, a key design objective for engineering a soft cap is to account for these parameters. Under these premises, we have undertaken an iterative, exploratory design journey guided by in-depth knowledge of textile technology and pattern construction. 
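As a quick numeric illustration of Equations 1 and 2 (the helper function and names are ours):

```python
def percent_d(robot_diameter, cap_diameter):
    """Eq. (2): percentage by which the unstretched cap is narrower
    than the eversion robot body, %D = (Delta D / D) * 100."""
    delta_d = robot_diameter - cap_diameter  # Eq. (1): Delta D = D - d
    return delta_d * 100.0 / robot_diameter

# A cap of unstretched diameter d = 9 cm on a D = 10 cm eversion robot
# gives %D = 10, i.e. the cap is 10% narrower than the robot body.
print(percent_d(10.0, 9.0))  # 10.0
```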
### _Prototype Construction_ In this section, the fabrication of a series of prototypes is explained. Changing the diameter %_D_ and length \(L\) in the designs is also examined. With various cutting and sewing patterns that could be explored, this paper focuses on the following approach. A circle is cut from fabric so that it covers the everting tip of the robot; similarly, a rectangle is cut from fabric to cover the outer sides of the robot (Fig. 4), with the shorter (vertical) edges sewn together to form a cylindrical shape. The two pieces, the circle and the cylinder, are sewn together either by using a conventional sewing machine or an 'overlocking' machine1. In this way, the rounded end of the tube-like cylinder is pinned to the circle and attached section by section, until the rounded shape of a one sided closed cylinder is completed. Footnote 1: producing a more elastic stitch that also cuts and fringe seams the fabric edges. To make these textile end caps, two options are considered. First, stretch fabric is used to create the whole cap (Fig. 4 a). The diameter of the circular piece is constant for the whole body of the cap and is equal to \(d\). The dimensions of the rectangle are \(\pi d\times L\). To ensure secure attachment via a seam, a 1cm seam allowance is added, and later concealed on the inside of the cap. The \(\pi d\) long side is sewn to tangents of the \(d\) diameter circle. The other sides of the rectangle meet after this step, and they are sewn together to create the cylindrical side wall of the cap. The second method of fabrication requires greater expertise on account of the complexity of working with elastic bands. The cap is made from non-stretch fabric but by using elastic bands, the cap's bottom section will squeeze onto the eversion robot body. The use of elastic bands in soft robotics is also studied in [20]. Using this method leads to a smaller effective contact surface \(L\) (the width of a single elastic band). 
The circular pattern should be selected to be bigger than both the effective contact section diameter \(d\) and the robot diameter \(D\). The length of the long side of the rectangular pattern, \((D+j)\pi\) with \(j\) being a small number, matches the circumference of the circular pattern, whose diameter is \(D+j\). The other side is narrowed by use of a long elastic band of length \(\pi d\). The thickness of the elastic band gives the effective contact length \(L\) (Fig. 4 b). After the same sewing actions are done, the final cap pattern emerges, as can be seen in Fig. 4 b. The specifications of the prototypes used in this work are summarised in Table I. ## IV Case Study & Evaluation The prototypes whose specifications are listed in Table I were exposed to a number of key challenges to evaluate their effectiveness in different environments. These challenges were set up to mimic realistic scenarios, with each one tested on a different eversion robot type. ### _Challenge 1: Eversion of many layered bodies_ The manoeuvring mechanisms of eversion robots often require additional layers of materials on the robot body, i.e., navigation pouches [7], layer jamming elements [17]. These Fig. 4: Pattern constructions a) using elastic finely knitted rib fabric material with diameter d, and b) using non-elastic woven fabric and elastic band with diameter D+j, D being the diameter of the robot, and j = 3 cm. extra layers change the wall thickness and the diameter of the robot, creating significant challenges for caps, especially those made from rigid material. Changes in wall thickness can cause jamming in rigid caps that use rollers and contact loss in those that use magnets. This is because such caps are generally custom built for a specific diameter, and therefore not robust enough to compensate for this kind of change. However, the problem is even greater for friction-based systems, as these changes create dramatic variations in the frictional force between the cap and the outer eversion robot skin. 
For this challenge we attached four sets of pouches to our eversion robot body, increasing the thickness of the robot's walls by 2% (0.2 cm) and its diameter by 4% (0.4 cm). To increase the thickness even more, duct tape and loose fabric were attached to the robot body. Overcoming this challenge would prove that the cap could be used in eversion robots that have thick walls and change diameter along their length. ### _Challenge 2: Eversion of protruding solid objects_ This challenge simulates the placement of sensors, electrical components, air tubes and rigid connectors on the body of the eversion robot. As with extra layers, or perhaps even more so, these rigid components can cause jamming in caps with complex mechanisms, entirely blocking movement in the robot. To simulate this, we attach two pipe connectors, one 1 cm wide and 1.3 cm long and one 1.6 cm wide and 1.7 cm long, to the robot body in random places. They are roughly taped to the robot body to heighten the difficulty of the challenge. ### _Challenge 3: Squeezability_ The ability of soft eversion robots to squeeze through narrow openings has been marketed as one of their most alluring properties [5]. However, a rigid cap mounted at the tip significantly compromises this function. With our soft cap approach, we ensure this property is retained, whilst also extending the robot's capability to carry a payload at its tip. In this challenge we built a 9 cm wide gate for a 10 cm diameter eversion robot. The robot with the soft cap is able to pass through the narrow opening and continue its path. For this proof-of-concept study, we opted for a gate 10% smaller than the body of the robot. ### _Challenge 4: Navigability_ On account of the material expansion that comes hand in hand with actuation, any rigid cap, and the confined space within it, creates a problem in terms of tip mobility. A soft cap can tolerate greater expansion without compromising mobility. 
In this challenge we therefore activate our eversion robot with the cap in situ, the challenge being to retain the cap at the tip while allowing the requisite motion. ### _Results_ The fabricated prototypes listed in Table I were subjected to each of the aforementioned challenges. Additionally, a wireless camera was deployed at the centre tip of the caps to assess the payload stability during each test. An overview of the results of all experiments is given in Table II. At first glance at the table, we see that all but one prototype perform reasonably well and master most challenges while maintaining a stable position of the camera. Caps 2, 3, 4, 7 and 8 are able to satisfy every challenge. In Figure 7, prototype 3 can be seen satisfying all challenges in a single run. The prototype that failed all tests (5 on the list) is the shortest, with only 5 cm length, and made from stretch fabric. Looking at the results challenge by challenge, we can further evaluate the other prototypes. When everting multiple layers of the soft robotic body (challenge 1), we find that a too-tight cap (prototype 1) and a too-short cap (prototype 5) are problematic, while there does not seem to be any general advantage of one fabric choice (elastic knit fabric or non-stretch fabric with elastic band) over the other. Even more forgiving with respect to the different design parameters is the challenge of everting protruding objects between the cap and the body surfaces. All soft fabric caps we produced, except Fig. 5: An eversion robot with soft cap (Prot. 4) and a camera attached to it, successfully completing ”Challenge 1”. Additional layers of materials that change the thickness of the body are marked red. Fig. 6: An eversion robot with a soft cap (Prot. 2) and a camera attached to it, successfully completing ”Challenge 2”. 1.7 cm long protruding material is circled. cap 5, are able to pass such obstacles and adapt in form and stretch capacity as needed. 
Things become more tricky with challenge 3, 'squeezing' the robot through narrow paths, where, besides cap 5, prototypes 1 and 6 also failed. These are the prototypes with the longest cap length and the highest stretch factor. It appears that for challenge 4, when actuating joint-like pouches, the larger the friction surface, the worse this form of navigability. Cap 1, with 15 cm of stretch fabric length and 20% %D, hinders performance. We can see, however, that the same types of caps just a few cm shorter (prototype 4) or slightly less tight (prototype 2) succeed easily. What do these findings tell us about the design engineering differences of the 8 presented fabric caps? In summary, we can observe that the caps that sit tightest around the robot are most robust and stable in regards to payload positioning, but can be too tight for other challenges. On the other hand, we can see from prototype 1 that if the diameter difference between the stretch fabric cap and the eversion robot is too large, the design fails to carry out the tasks. This is mainly due to high friction between the body and the cap preventing smooth eversion. Similarly, it is also clear that when a cap is too short and too loose, all potential obstacles push it off the tip of the robot body too easily and no other manoeuvring is possible. In general, caps made from knitted stretch fabric achieve high performance with parameters of about 10% %_D_ and 10-15 cm \(L\). For elastic band designs, 2-10% %_D_ and 0.5 cm \(L\) (the width of a single elastic band) offer better results. ## V Conclusions This paper presents the first soft cap for eversion robots made from elastic fabric parts. Exploiting textile properties, the cap adjusts to the everting robot and sits firmly yet flexibly at the tip of the robot's body throughout the eversion process. 
This novel design preserves the squeezability and navigability of the robot, and the eversion of thick robot wall layers as well as of those with protruding elements is shown to be achievable. The results of our experiments provide design guidance for further developments of soft caps. Depending on application areas, if a payload is needed and environments and pathways are challenging, then the strategy of using a tight but long cap appears most successful. While our design masters key challenges eversion robots are confronted with, one limitation that remains subject to future work is the retrieval of the cap when the robot retracts. Further, the design parameters and challenges tested here cover key aspects of soft eversion robots, but there is more to explore. Experimenting with various cap pattern constructions, variations of stretch, and testing the limits of cap sizes and payloads are future tasks we will develop based on the success of this first design of a soft textile cap for eversion robots. ## Acknowledgment The authors thank Mish Toszeghi, Rodrigo Zenha, Abu Bakar Dawood and Hassan Mirza for their valuable help. Thanks also to the reviewers for their comments. Fig. 8: An eversion robot with soft cap (Prot. 4) and camera attached to it, successfully completing ”Challenge 3”. Diameter of the robot and width of the opening are indicated. Fig. 7: An eversion robot with soft cap (Prot. 3) and a camera achieving all the challenges in a single run. Fig. 9: An eversion robot with soft cap (Prot. 3) and a camera attached to it, successfully completing ”Challenge 4”.
2304.06011
MABL: Bi-Level Latent-Variable World Model for Sample-Efficient Multi-Agent Reinforcement Learning
Multi-agent reinforcement learning (MARL) methods often suffer from high sample complexity, limiting their use in real-world problems where data is sparse or expensive to collect. Although latent-variable world models have been employed to address this issue by generating abundant synthetic data for MARL training, most of these models cannot encode vital global information available during training into their latent states, which hampers learning efficiency. The few exceptions that incorporate global information assume centralized execution of their learned policies, which is impractical in many applications with partial observability. We propose a novel model-based MARL algorithm, MABL (Multi-Agent Bi-Level world model), that learns a bi-level latent-variable world model from high-dimensional inputs. Unlike existing models, MABL is capable of encoding essential global information into the latent states during training while guaranteeing the decentralized execution of learned policies. For each agent, MABL learns a global latent state at the upper level, which is used to inform the learning of an agent latent state at the lower level. During execution, agents exclusively use lower-level latent states and act independently. Crucially, MABL can be combined with any model-free MARL algorithm for policy learning. In our empirical evaluation with complex discrete and continuous multi-agent tasks including SMAC, Flatland, and MAMuJoCo, MABL surpasses SOTA multi-agent latent-variable world models in both sample efficiency and overall performance.
Aravind Venugopal, Stephanie Milani, Fei Fang, Balaraman Ravindran
2023-04-12T17:46:23Z
http://arxiv.org/abs/2304.06011v2
# Bi-Level Latent Variable Model for Sample-Efficient Multi-Agent Reinforcement Learning ###### Abstract Despite their potential in real-world applications, multi-agent reinforcement learning (MARL) algorithms often suffer from high sample complexity. To address this issue, we present a novel model-based MARL algorithm, BiLL (Bi-Level Latent Variable Model-based Learning), that learns a bi-level latent variable model from high-dimensional inputs. At the top level, the model learns latent representations of the _global state_, which encode global information relevant to behavior learning. At the bottom level, it learns latent representations for each agent, given the global latent representations from the top level. The model generates latent trajectories to use for policy learning. We evaluate our algorithm on complex multi-agent tasks in the challenging SMAC and Flatland environments. Our algorithm outperforms state-of-the-art model-free and model-based baselines in sample efficiency, including on two extremely challenging Super Hard SMAC maps. ## 1 Introduction Multi-agent reinforcement learning (MARL) provides a flexible and adaptive learning framework for modeling real-world problems involving coordinating agents (Matignon et al., 2012; Agogino and Tumer, 2012; Xu et al., 2020). These scenarios offer a plethora of challenges: agents must learn to behave from high-dimensional, partially observable inputs while grappling with the issue of non-stationarity induced by other agents simultaneously learning in the environment (Lowe et al., 2017; Papoudakis et al., 2019). Taken together, these challenges mean that learning effective behavior with MARL often requires a large number of environment interactions (Gronauer and Diepold, 2022). In many real-world tasks, collecting such interaction data may be costly or time-consuming (Bagnell and Schneider, 2001), underscoring the importance of _sample efficiency_. 
Toward the goal of sample efficiency, model-based reinforcement learning (MBRL) has emerged as a practically (Corneil et al., 2018; Wang et al., 2019) and theoretically (Sun et al., 2019) sound solution for the single-agent setting. In MBRL, an agent builds a predictive model of the environment dynamics to generate samples for learning or planning. MBRL with latent variable models (Hafner et al., 2019; Lee et al., 2020; Hafner et al., 2020) allows the RL agent to learn behavior from compact latent representations of high-dimensional inputs generated by the model. Latent variable-model-based methods represent the current state-of-the-art in single-agent MBRL (Hafner et al., 2023). Despite these demonstrated benefits in the single-agent regime, only recently have latent variable models been brought to bear in MARL (Wang et al., 2022). However, existing methods (Egorov and Shpilman, 2022; Krupnik et al., 2020) suffer from key limitations. In the case that the model is learned only with local observations, it cannot incorporate any additional information that may be available. This additional information, which we refer to as _global_ information, is commonly assumed to be accessible in prior MARL (Lowe et al., 2017; Samvelyan et al., 2019). Utilizing this information can enhance representation learning, leading to more sample-efficient behavior learning. On the other hand, attempts to centralize model learning by aggregating latent states of local observations fail to ensure that agents only use their private observations during execution. This aggregation violates the assumptions of the widely-used framework in MARL of centralized training with decentralized execution (CTDE), in which agents may access global information -- such as the observations of other agents and/or the global state -- during training but not during execution (Lowe et al., 2017; Foerster et al., 2018; Kim et al., 2019). 
The inclusion of all global information may also adversely affect behavior learning if this information is irrelevant to the task that the agent is performing (Tan, 1993; de Witt et al., 2020). Existing work does not address the challenge of learning to encode relevant global information using latent variable models. To address this gap, we develop a novel model-based MARL method, _Bi-Level Latent Variable Model-based Learning (BiLL)_. BiLL can successfully leverage global information during training without violating decentralized execution. It uses a novel latent variable model that learns a latent space with a bi-level structure. At the top level, the model learns global latent states to encode _relevant_ global state information; at the bottom level it learns agent latent states conditioned on the global latent states to encode agent-specific information. This model can be used to generate trajectories of latent states for training policies using any MARL algorithm. By computing agent latent states in a decentralized manner and feeding them to agents' policy networks, BiLL ensures decentralized execution. Experiments on the challenging StarCraft Multi Agent Challenge (SMAC) and Flatland environments show that our algorithm outperforms state-of-the-art model-free and model-based baselines in sample efficiency, including on two extremely challenging Super Hard SMAC maps. ## 2 Preliminaries ### Multi-Agent Reinforcement Learning We consider MARL in a partially observable Markov game (Monahan, 1982)\(G=\langle N,S,\mathbf{A},P,R^{i},\{\mathcal{O}^{i}\},\{O^{i}\},\gamma\rangle\). \(N=\{1,\ldots,n\}\) is the set of agents, \(S\) the set of states, and \(\mathbf{A}=\prod A^{i}\) the joint action space, where \(A^{i}\) is the action space for agent \(i\). At timestep \(t\), agent \(i\in N\) receives a private observation \(o^{i}_{t}\) governed by the observation function \(O^{i}(s):S\rightarrow\mathcal{O}^{i}\), and chooses an action \(a^{i}_{t}\in A^{i}\). 
Given the current state \(s_{t}\) and the agents' joint action \(\mathbf{a}_{t}=\{a^{i}_{t}\}\), the environment transitions to the next state \(s_{t+1}\) according to the state transition function \(P(s_{t+1}|s_{t},\mathbf{a}_{t}):S\times\mathbf{A}\times S\rightarrow[0,1]\). Each agent then receives a reward \(r^{i}_{t}\) according to its reward function \(R^{i}:S\times A^{i}\rightarrow\mathbb{R}\). Each agent takes actions using its policy \(\pi^{i}(a^{i}_{t}|\tau^{i}_{t})\), which is conditioned on its action-observation history \(\tau^{i}_{t}\). Together, these policies comprise the joint policy \(\mathbf{\pi}\), which induces the action-value function for each agent \(i\), \(Q^{\mathbf{\pi}}_{i}=\mathbb{E}_{\mathbf{\pi}}[\sum_{j=0}^{\infty}\gamma^{j}r^{i}_{t+j}]\), where \(\gamma\in[0,1]\) is the discount factor. In MARL, agents do not know \(P\) or \(R\) and must learn the policies that maximize \(Q\) by interacting with the environment. ### MBRL with Latent Variable Models MBRL algorithms (Sutton, 1991; Janner et al., 2019; Lee et al., 2020; Moerland et al., 2023) employ an explicit model trained to estimate the environment dynamics (i.e., state transition and reward functions) using self-supervised learning. Using this model, they then generate synthetic samples or trajectories of data for training a policy using an RL algorithm. This approach has been shown to improve sample efficiency by reducing the number of environment interactions needed to learn a good policy. Our work focuses on MBRL using latent variable models. Latent variable models are sequential Variational Auto-Encoders (VAEs) (Kingma and Welling, 2013) used to learn environment dynamics, as illustrated in Figure 1. More concretely, consider a Partially Observable Markov Decision Process (POMDP) (Cassandra et al., 1994) described by \(\langle S,A,P,R,\mathcal{O},O,\gamma\rangle\), where the symbols mean the same as in Section 2.1, except with a single agent. 
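The discounted return that the action-value function above takes an expectation over can be computed recursively for one agent's sampled reward sequence; a minimal sketch with an illustrative function name:

```python
def discounted_return(rewards, gamma=0.99):
    """Compute sum_{j>=0} gamma^j * r_{t+j} for one sampled reward sequence,
    accumulating backwards so each reward is discounted exactly once."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# With gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1.75
```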
Given training data consisting of observation and action sequences, we can learn the sequential VAE with latent variables \(z_{t}\) to maximize the probability of the data \(p(o_{1:T}|a_{1:T-1})\). Since maximizing this probability directly is challenging, a typical approach is to consider the Evidence Lower Bound (ELBO) (Kingma and Welling, 2013) for the log-likelihood of the sequence of observations: \[\log p(o_{1:T}|a_{1:T-1})\geq\mathbb{E}_{z_{1:T}\sim q}\sum_{t=1}^{T}\bigg{[} \log p(o_{t}|z_{t})\] \[-D_{KL}(q(z_{t}|o_{t},a_{t-1})\|p(z_{t}|a_{t-1}))\bigg{]}.\] where \(D_{KL}\) refers to the KL divergence. The latent variable model thus consists of a transition model representing the prior distribution \(p(z_{t}|a_{t-1})\), a representation model representing the posterior distribution \(q(z_{t}|o_{t},a_{t-1})\), and an observation decoder \(p(o_{t}|z_{t})\). All components are parameterized by neural networks and trained through amortized variational inference (Kingma and Welling, 2013). Once trained, \(z_{t}\) serves as the compact latent state, and the model can be used to generate synthetic trajectories of latent states for RL training. ## 3 BiLL We present BiLL, a novel model-based MARL algorithm. We first describe the novel bi-level model architecture and explain how it encodes relevant global information. We then detail the training framework for the model and the MARL algorithm, which is trained with the latent trajectories generated Figure 1: A sequential VAE with the transition and representation models. Shaded circled nodes represent inputs and unshaded circled nodes represent random variables. The transition model is shown using black arrows; the representation model is shown using blue arrows. The model uses the agent’s action and observation to learn a latent variable \(z_{t}\). This latent variable is then used as input to the agent’s policy for behavior learning. by the model. We describe the model and MARL algorithm with respect to an agent \(i\). 
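For diagonal-Gaussian latents, the two terms of the ELBO above have closed forms. A minimal pure-Python sketch; the Gaussian parameterization and unit-variance decoder are our assumptions for illustration, not the paper's actual (categorical) choice:

```python
import math

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL(q || p) between diagonal Gaussians, summed over latent dimensions."""
    return sum(
        0.5 * (math.log(vp / vq) + (vq + (mq - mp) ** 2) / vp - 1.0)
        for mq, vq, mp, vp in zip(mu_q, var_q, mu_p, var_p)
    )

def gaussian_log_lik(o, o_hat, var=1.0):
    """log p(o_t | z_t) under an (assumed) unit-variance Gaussian decoder."""
    return sum(
        -0.5 * ((x - y) ** 2 / var + math.log(2 * math.pi * var))
        for x, y in zip(o, o_hat)
    )

def elbo_term(o, o_hat, mu_q, var_q, mu_p, var_p):
    """One timestep's ELBO contribution: reconstruction log-likelihood minus KL(q || p)."""
    return gaussian_log_lik(o, o_hat) - gaussian_kl(mu_q, var_q, mu_p, var_p)
```

When prior and posterior coincide the KL term vanishes and the bound is tight in that term, which is the regime the training objective pushes towards.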
### Bi-Level Latent Variable Model We introduce a novel bi-level latent variable model to learn environment dynamics in multi-agent settings. This model takes as input multi-agent trajectories of length \(T\), represented as \(\{o_{t}^{i},s_{t},\textbf{a}_{t},r_{t}^{i}\}_{t=1}^{T}\). The input trajectories are sampled from a buffer, which we call the model buffer, populated through interactions of the agents with the environment. Our model comprises neural networks that serve two main functions: learning the transition dynamics and supporting trajectory generation for training. We refer to the former as **Transition Dynamics** components and the latter as **Auxiliary** components. All components are parameterized by neural networks with combined weights \(\psi\) and are trained jointly. Transition Dynamics ComponentsFigure 2 illustrates the transition dynamics components. They are the global and agent recurrent models, the transition model, the representation model, and the observation model: Recurrent models: Global embeddings: \[h_{t}^{g,i}=f_{\psi}^{g,i}(h_{t}^{g,i}|h_{t-1}^{g,i},z_{t-1}^{g,i},\textbf{a}_{t-1})\] Agent embeddings: \[h_{t}^{a,i}=f_{\psi}^{a,i}(h_{t}^{a,i}|h_{t-1}^{a,i},z_{t-1}^{a,i},a_{t-1})\] Transition model (prior distribution): Global latent state: \[\hat{z}_{t}^{g,i}\sim p_{\psi}(\hat{z}_{t}^{g,i}|h_{t}^{g,i})\] Agent latent state: \[\hat{z}_{t}^{a,i}\sim p_{\psi}(\hat{z}_{t}^{a,i}|h_{t}^{a,i},\hat{z}_{t}^{g,i})\] Representation model (posterior distribution): Global latent state: \[z_{t}^{g,i}\sim q_{\psi}(z_{t}^{g,i}|s_{t},z_{t}^{a,i},h_{t}^{g,i})\] Agent latent state: \[z_{t}^{a,i}\sim q_{\psi}(z_{t}^{a,i}|o_{t}^{i},h_{t}^{a,i})\] Observation model: \[\hat{o}_{t}^{i}=p_{\psi}(\hat{o}_{t}^{i}|h_{t}^{a,i},z_{t}^{a,i}).\] The recurrent models are implemented as Recurrent Neural Networks (RNNs) (Medsker & Jain, 1999). 
The global recurrent model propagates information about past states and joint actions using the deterministic global embedding \(h_{t}^{g,i}\). The agent recurrent model propagates information about the action-observation history of the agent using the agent embedding \(h_{t}^{a,i}\). Both embeddings are computed as hidden states of the RNNs. The transition model learns the prior distribution over global (\(\hat{z}_{t}^{g,i}\)) and agent (\(\hat{z}_{t}^{a,i}\)) latent states. The representation model learns the posterior distribution over global (\(z_{t}^{g,i}\)) and agent (\(z_{t}^{a,i}\)) latent states. The global posterior latent state \(z_{t}^{g,i}\) encodes relevant global information about \(s_{t}\) while the agent posterior latent state \(z_{t}^{a,i}\) encodes local information about \(o_{t}^{i}\). We implement the transition and representation models as categorical distributions (Hafner et al., 2020). The observation model predicts the current observation \(\hat{o}_{t}^{i}\), given the agent embedding \(h_{t}^{a,i}\) and the agent posterior latent state \(z_{t}^{a,i}\). At each timestep \(t\), the agent prior state \(\hat{z}_{t}^{a,i}\) is conditioned on the global prior state \(\hat{z}_{t}^{g,i}\) in a top-down fashion. At the same time, the global posterior state \(z_{t}^{g,i}\) is conditioned on the agent posterior state \(z_{t}^{a,i}\) in a bottom-up fashion. This conditioning ensures a flow of information between the top and bottom levels of the latent variable model, leading to a structured hierarchy of latent states. We design the representation model to enable inference of the agent posterior latent \(z_{t}^{a,i}\) during execution without computing \(z_{t}^{g,i}\). The latent variable model thus incorporates relevant global information without violating the CTDE paradigm. 
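The top-down and bottom-up conditioning described above can be summarized as a single-timestep skeleton. Everything here is an illustrative sketch: the six callables stand in for the learned networks (for simplicity they return point values rather than distributions), and the signatures are our assumption, not the authors' implementation:

```python
def bilevel_step(h_g, h_a, z_g_prev, z_a_prev, joint_action, action, o, s,
                 f_g, f_a, prior_g, prior_a, post_a, post_g):
    """One timestep of the bi-level model (training-time posterior path).

    f_g, f_a        : recurrent models updating global / agent embeddings
    prior_g, prior_a: transition model (agent prior conditioned on global prior)
    post_a, post_g  : representation model (agent posterior from o only,
                      global posterior from s and the agent posterior)
    """
    h_g = f_g(h_g, z_g_prev, joint_action)   # global embedding from joint action
    h_a = f_a(h_a, z_a_prev, action)         # agent embedding from own action
    z_g_hat = prior_g(h_g)                   # global prior latent
    z_a_hat = prior_a(h_a, z_g_hat)          # agent prior: top-down conditioning
    z_a = post_a(o, h_a)                     # agent posterior: local info only (CTDE)
    z_g = post_g(s, z_a, h_g)                # global posterior: bottom-up conditioning
    return h_g, h_a, z_g_hat, z_a_hat, z_g, z_a
```

During execution only `post_a` is evaluated, which is why the agent can act from its private observation alone.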
Auxiliary ComponentsTo generate trajectories for MARL training, at each timestep \(t\), we also need to predict the reward, whether the current state is terminal, and which actions are available (in some environments, only a subset of the action space is available to each agent at timestep \(t\)). To this end, we train neural networks which take the posterior latent states as input to predict the reward, state termination, and available actions at each timestep \(t\). The termination and available action predictors are implemented as Bernoulli distributions. Though not used to generate trajectories, our model also includes an action decoder neural network to reconstruct each agent's action \(\hat{a}_{t}^{i}\) (Ke et al., 2019; Egorov & Shpilman, 2022), as an auxiliary component. This further encourages the model to encode agent-specific global and local information into the latent states. The auxiliary components are: Figure 2: Transition dynamics components of the bi-level latent variable model. Shaded circled nodes represent inputs, unshaded circled nodes represent random variables and square nodes represent deterministic embeddings. The transition model and recurrent models are shown using black arrows while the representation model is shown using blue arrows. This figure is with respect to an agent \(i\). The model uses the agent’s observations, states, actions and joint actions to learn a bi-level latent space consisting of global and agent latent states. 
\[\text{Reward predictor:}\ \hat{r}_{t}^{i}\sim p_{\psi}(\hat{r}_{t}^{i}|z_{t}^{a,i},h_{t}^{a,i})\] \[\text{Termination predictor:}\ \hat{\gamma}_{t}^{i}\sim p_{\psi}(\hat{\gamma}_{t}^{i}|z_{t}^{a,i},z_{t}^{g,i},h_{t}^{a,i},h_{t}^{g,i})\] \[\text{Action predictor:}\ \hat{A}_{t}^{s,i}\sim p_{\psi}(\hat{A}_{t}^{s,i}|z_{t}^{a,i},z_{t}^{g,i},h_{t}^{a,i},h_{t}^{g,i})\] \[\text{Action decoder:}\ \hat{a}_{t}^{i}\sim p_{\psi}(\hat{a}_{t}^{i}|z_{t}^{a,i},z_{t}^{g,i},h_{t}^{a,i},h_{t}^{g,i}).\] ### Training the Model We train all components of our latent variable model jointly with the loss \(\mathcal{L}(\psi)\). This loss consists of the sum of the following losses and is written as: \[\mathcal{L}(\psi)=\mathcal{L}_{\text{ELBO}}+\mathcal{L}_{\hat{r}_{t}}+\mathcal{L}_{\hat{\gamma}_{t}}+\mathcal{L}_{\hat{A}_{t}}+\mathcal{L}_{\hat{a}_{t}}.\] The first term is the ELBO loss \(\mathcal{L}_{\text{ELBO}}\), which trains the transition dynamics components to maximize the ELBO under the data generating distribution \(p(o_{1:T}^{i}|\textbf{a}_{1:T})\) using amortized variational inference. We provide a detailed derivation of the ELBO in Appendix A. We write it as: \[\mathcal{L}_{\text{ELBO}} =-\sum_{t=1}^{T}\log p_{\psi}(\hat{o}_{t}^{i}|z_{t}^{a,i},h_{t}^{a,i})\] \[+D_{KL}\Big{(}q_{\psi}(z_{t}^{a,i}|o_{t}^{i},h_{t}^{a,i})||p_{\psi}(\hat{z}_{t}^{a,i}|h_{t}^{a,i},\hat{z}_{t}^{g,i})\Big{)}\] \[+D_{KL}\Big{(}q_{\psi}(z_{t}^{g,i}|s_{t},z_{t}^{a,i},h_{t}^{g,i})||p_{\psi}(\hat{z}_{t}^{g,i}|h_{t}^{g,i})\Big{)}.\] The first term in \(\mathcal{L}_{\text{ELBO}}\) corresponds to maximising the log likelihood of the observations, given \(z_{t}^{a,i}\) and \(h_{t}^{a,i}\). The second and third terms minimise the KL divergence between \(p_{\psi}(.)\) and \(q_{\psi}(.)\). We use KL balancing (Hafner et al., 2020) to ensure that the prior approximates the posterior well. 
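KL balancing (Hafner et al., 2020) weights two copies of the same KL term, stopping gradients through one argument at a time so the prior is pulled toward the posterior more strongly than the posterior is regularized toward the prior. A framework-agnostic sketch, where `sg` stands for a stop-gradient (e.g. `detach` in an autodiff framework) and the balancing weight 0.8 is an assumption:

```python
def balanced_kl(kl, sg, q, p, alpha=0.8):
    """KL balancing: alpha * KL(sg(q) || p) trains the prior toward the posterior;
    (1 - alpha) * KL(q || sg(p)) regularizes the posterior toward the prior.
    `kl(q, p)` computes the divergence; `sg` detaches its argument from the graph."""
    return alpha * kl(sg(q), p) + (1.0 - alpha) * kl(q, sg(p))
```

The numerical value equals the plain KL; only the gradient flow differs, which is why `sg` has no effect in a pure forward evaluation.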
The remaining terms in \(\mathcal{L}(\psi)\) train the auxiliary components to maximize the log likelihoods of their corresponding targets, given the latent states and embeddings from the recurrent models. These are: \[\text{Reward:}\ \mathcal{L}_{\hat{r}_{t}} =-\sum_{t=1}^{T}\log p_{\psi}(\hat{r}_{t}^{i}|z_{t}^{a,i},h_{t}^{a,i})\] \[\text{Termination:}\ \mathcal{L}_{\hat{\gamma}_{t}} =-\sum_{t=1}^{T}\log p_{\psi}(\hat{\gamma}_{t}^{i}|z_{t}^{a,i},z_{t}^{g,i},h_{t}^{a,i},h_{t}^{g,i})\] \[\text{Av. action:}\ \mathcal{L}_{\hat{A}_{t}} =-\sum_{t=1}^{T}\log p_{\psi}(\hat{A}_{t}^{s,i}|z_{t}^{a,i},z_{t}^{g,i},h_{t}^{a,i},h_{t}^{g,i})\] \[\text{Action decoder:}\ \mathcal{L}_{\hat{a}_{t}} =-\sum_{t=1}^{T}\log p_{\psi}(\hat{a}_{t}^{i}|z_{t}^{a,i},z_{t}^{g,i},h_{t}^{a,i},h_{t}^{g,i}).\] By sharing the parameters (\(\psi\)) of the latent variable model among all agents, we ensure scalability to multi-agent settings with large numbers of agents. ### Multi-Agent Behavior Learning We learn multi-agent behavior purely within the latent variable model. By this, we mean that the trajectories used for training consist of latent states generated by the model. We use an iterative training procedure (Schmidhuber, 1991; Ha & Schmidhuber, 2018) with the following steps. 1. The MARL agents interact with the environment. Trajectories of transitions are stored in the model buffer. 2. The latent variable model is trained using trajectories sampled from the model buffer. The model learns multi-agent environment dynamics to encode the trajectories to latent states. 3. Initial states, observations and actions are drawn from the model buffer. Corresponding global and agent latent states are then computed by the representation model. At each timestep \(t\), the agents choose a joint action \(\textbf{a}_{t}\) according to their policies. The transition model and recurrent models then predict the next latent states \(\hat{z}_{t+1}^{a,i}\) and \(\hat{z}_{t+1}^{g,i}\). 
This process is repeated to generate trajectories of latent states. The trajectories are used to train an MARL algorithm, following the CTDE paradigm. To learn multi-agent behavior, we can use any MARL algorithm. We choose Multi-Agent PPO (MAPPO) (Yu et al., 2021), which is an on-policy actor-critic algorithm that has demonstrated strong results in a variety of multi-agent tasks. Each agent is equipped with a policy, or actor, \(\pi_{\theta}^{i}\), which is implemented as a neural network with parameters \(\theta\) and trained to maximize the MAPPO objective. At timestep \(t\), each agent receives as input its agent posterior state \(z_{t}^{a,i}\) and outputs an action: \(a_{t}^{i}\sim\pi_{\theta}(a_{t}^{i}|z_{t}^{a,i},h_{t}^{a,i})\). For each agent, BiLL infers \(z_{t}^{a,i}\) solely from its current observation and its agent embedding (\(h_{t}^{a,i}\)), facilitating decentralized policy execution. Each agent is also equipped with a critic \(V_{\phi}^{i}\). Because these critics are _centralized_, they take as input the global latent in addition to the agent latent. We denote by \(z_{t}^{i}\) the concatenation of these inputs. Then, the critic is represented by a neural network with parameters \(\phi\). The critic includes an attention mechanism (Vaswani et al., 2017) to process the inputs. The critic is trained to predict the discounted reward-to-go. The actor and critic network parameters are shared by all agents to facilitate faster training in tasks that involve large numbers of agents (Yu et al., 2021). ## 4 Experimental Evaluation We evaluate BiLL on multiple partially-observable multi-agent tasks from both the SMAC (Samvelyan et al., 2019) and Flatland (Mohanty et al., 2020) environments. Through our empirical analysis, we aim to answer the following questions: **RQ1:**: Does BiLL lead to better sample-efficiency compared to the state-of-the-art? **RQ2:**: Is the bi-level latent variable model responsible for the improved sample-efficiency? 
If so, does it effectively capture relevant global information? **RQ3:**: Can our model accurately reconstruct complex multi-agent trajectories? ### Experimental Setup Because our goal is to investigate improvements in sample efficiency, we adopt the low data regime established in previous work (Kaiser et al., 2019; Egorov and Shpilman, 2022). We describe the evaluation environments, SMAC and Flatland, and the baselines used in our experiments. SmacSMAC is one of the most popular evaluation platforms for MARL, based on the Real Time Strategy game StarCraft II. It consists of many scenarios, or maps, in which ally units must collaborate to varying levels of complexity to defeat heuristic enemy units. Critically, each agent observes only a small area of the global state space within its sight range. The goal is to maximise the win rate, or the proportion of battles won by the ally units. The maps are categorized according to their difficulties: _Easy_, _Hard_, and _Super Hard_ (Samvelyan et al., 2019). State-of-the-art model-free MARL algorithms generally perform poorly on the _Hard_ and _Super Hard_ maps due to the precise control and team coordination required to achieve strong performance. To evaluate our algorithm at varying levels of difficulty, we conduct experiments on two _Easy_ maps (2s vs 1sc and 3s vs 4z), one _Hard_ map (3s vs 5z), and two _Super Hard_ maps (Corridor and 3s5z vs 3s6z). The _Easy_, _Hard_, and _Super Hard_ maps are constrained to 100k, 200k, and 450k environment steps, respectively. FlatlandFlatland is a 2D grid environment that simulates train traffic on a railway network. Each train is controlled by a MARL agent, and the goal is for each agent to reach its destination on time without colliding with other agents. Agents receive individual positive rewards on reaching their destinations and penalties for colliding or being late. 
As in previous work (Egorov and Shpilman, 2022), we use a dense reward variant where each agent also receives a small positive reward for reducing the distance to its destination. We use the joint observation \(\mathbf{o}_{t}\) as the global state \(s_{t}\) while training our model, since Flatland does not provide a separate global state. We conduct experiments with 5 and 10 agents. The 5- and 10-agent environments are constrained to 100k and 450k environment steps, respectively. **Baselines.** We compare BiLL against state-of-the-art model-free and model-based MARL algorithms. We first explain the model-free baselines. For SMAC, we compare BiLL with the multi-agent actor-critic algorithms COMA (Foerster et al., 2018) and MAPPO (Yu et al., 2021), and the value-based methods Q-MIX (Rashid et al., 2018) and Q-TRAN (Son et al., 2019), both of which use factored global critics to incorporate global information. For Flatland, we do not compare against COMA, Q-MIX and Q-TRAN since there is no shared reward or global state. We only compare with MAPPO, which is also the best-performing model-free MARL algorithm. We also compare against MAMBA (Egorov and Shpilman, 2022) in both sets of environments. To the best of our knowledge, MAMBA is the current state-of-the-art model-based MARL algorithm. However, it violates the CTDE paradigm: it permits centralized execution by providing agents with access to the latent states of all other agents. **Experimental Details.** Each algorithm was trained across 3 independent runs with the same number of environment steps. MAMBA and all variants of BiLL use the same MARL algorithm, MAPPO, for behavior learning.

Figure 3: Training curves on SMAC. The Y axis denotes the winrate; the X axis denotes the number of environment steps. The plots were smoothed using an exponential moving average. The error bars show the maximum and minimum winrate. Overall, BiLL outperforms both MAMBA and MAPPO.
These methods also generate the same number of synthetic samples for training. We provide a detailed description of the hyperparameters, neural network architecture, and implementation details in Appendix B. We will make our code publicly available on acceptance. ### Results **Sample Efficiency of BiLL.** We find that BiLL achieves superior or comparable sample efficiency to the state-of-the-art algorithms in model-free and model-based MARL on all tasks. We show the final performance of all algorithms, computed as the average over the last 4000 environment steps, for all SMAC maps (Table 1) and Flatland variants (Table 2). These results show that, with the same number of samples, BiLL achieves the highest win rate on all tasks, except for one easy SMAC map (2s vs 1sc). On that map, BiLL performs similarly to MAMBA but slightly lower than MAPPO. Of the model-free baselines, MAPPO performs the best overall, but all model-free baselines fail to learn successful behavior on any map except for 2s vs 1sc. We now closely examine the performance of BiLL compared to the best performing model-based (MAMBA) and model-free baselines (MAPPO) over the course of training. We present these training curves in Figures 3 and 4. We first compare the SMAC maps. While BiLL and MAMBA both approach a 100% winrate on 2s vs 1sc, on 3s vs 4z BiLL matches the final performance of MAMBA in half the number of steps. On the hard 3s vs 5z map, MAMBA and MAPPO achieve similar performance (\(<1\%\)), while BiLL approaches a winrate of nearly 60% with less than 100k environment steps. Impressively, BiLL outperforms both MAMBA and MAPPO on both _Super Hard_ maps, which are extremely challenging tasks. MAPPO, on the other hand, struggles to achieve a win rate over 1% on all maps except one easy map (2s vs 1sc), on which it is the best performer by a small margin. However, both BiLL and MAMBA initially learn much faster than MAPPO, approaching the 50k step performance of MAPPO within 25k steps.
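For concreteness, the two reductions behind these numbers are straightforward: the tables report the mean score over the last 4000 environment steps, and the training curves are smoothed with an exponential moving average. A minimal pure-Python sketch (the smoothing factor below is our illustrative choice, not a value from the paper):

```python
def final_performance(scores, window=4000):
    """Mean score over the last `window` recorded environment steps."""
    tail = scores[-window:] if len(scores) > window else scores
    return sum(tail) / len(tail)

def ema_smooth(scores, alpha=0.1):
    """Exponential moving average used to smooth training curves for plotting."""
    smoothed, current = [], scores[0]
    for s in scores:
        current = alpha * s + (1 - alpha) * current
        smoothed.append(current)
    return smoothed
```

With `alpha` close to 0 the curve is heavily smoothed; `alpha = 1` reproduces the raw scores.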
On Flatland, BiLL significantly outperformed both MAMBA and MAPPO on the 5 agent task, approaching the final performance of the baselines in less than 50k steps and achieving a final performance almost twice that of both baselines.

\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Map & Steps & BiLL & MAMBA & MAPPO & COMA & QMIX & QTRAN \\ \hline 2s vs 1sc & 100k & 91(4) & 94(5) & **98(2)** & 37(12) & 7(9) & 12(12) \\ \hline 3s vs 4z & 100k & **81(17)** & 21(11) & 0 & 0 & 0 & 0 \\ \hline 3s vs 5z & 200k & **34(26)** & 0 & 0 & 0 & 0 & 0 \\ \hline Corridor & 450k & **51(26)** & 32(26) & 0 & 0 & 0 & 0 \\ \hline 3s5z vs 3s6z & 450k & **20(15)** & **20(28)** & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \end{table} Table 1: Average win rate and standard deviation (%) on SMAC maps. We calculate win rates and standard deviations from the performance over the last 4000 environment steps. BiLL achieves the highest overall performance in the same number of environment interactions for all maps except 2s vs 1sc. We also note that it is joint highest on 3s5z vs 3s6z.

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Task & Steps & BiLL & MAMBA & MAPPO \\ \hline 5 agents & 100k & **45(25)** & 26(25) & 22(22) \\ \hline 10 agents & 450k & **37(17)** & 35(27) & 18(16) \\ \hline \end{tabular} \end{table} Table 2: Average rewards and standard deviation (%) on Flatland. We calculate the numbers from the performance over the last 4000 environment steps. BiLL achieves the highest overall performance in the same number of environment interactions compared to MAMBA and MAPPO.

Figure 4: Training curves on Flatland. The Y axis denotes the average reward while the X axis denotes the number of environment steps. The plots were smoothed using an exponential moving average, and the error bars show the maximum and minimum reward. BiLL emerges as the best-performing algorithm on both Flatland maps, being more sample-efficient than MAMBA and MAPPO.
Although BiLL performed best on the 10 agent task as well, we saw that overall performance decreased as the difficulty increased with the number of agents. To more rigorously evaluate sample efficiency, we adopt the evaluation paradigm from prior work (Hafner, 2021) and examine the average performance over the entire training history. We perform this comparison for all SMAC and Flatland environments, comparing BiLL with MAMBA and MAPPO. For each environment, we conduct a one-way ANOVA (\(\alpha=0.05\)) to compare the performance of the three algorithms averaged over the entire training history.1 The one-way ANOVA tests revealed that there was a statistically significant difference in performance for at least two groups for all tasks. Tukey's HSD test for multiple comparisons (\(\alpha=0.05\)) found that the mean win rate of BiLL was significantly higher than that of MAPPO on all SMAC and Flatland tasks and significantly higher (\(p=0.001\)) than that of MAMBA on all tasks but one: the easy 2s vs 1sc map. On this task, there was no significant difference between the mean win rates of BiLL and MAMBA (\(p=0.29\)). Overall, we also observe that in easy tasks, it may be easier for MARL algorithms to learn from raw inputs rather than latent states generated by the model, which are subject to epistemic uncertainty (Janner et al., 2019). This could explain why we see smaller improvements on 2s vs 1sc and the Flatland tasks. Footnote 1: Further details about the statistical tests are given in Appendix B.6. We emphasize that MAMBA performs _centralized execution_, while BiLL performs _decentralized execution_. In other words, BiLL achieves superior performance both in terms of overall sample efficiency and final performance in the sample-constrained regimes, all despite lacking access to the privileged information enjoyed by MAMBA during execution. We therefore conclude that BiLL is the most sample-efficient algorithm on all environments, answering **RQ1** in the affirmative.
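As a reference for the statistical procedure, a one-way ANOVA reduces to a ratio of between-group to within-group variance. A self-contained sketch of the F-statistic computation (in practice one would use a statistics package such as `scipy.stats.f_oneway`, and a separate library for Tukey's HSD; the inputs here are illustrative, not the paper's data):

```python
def f_oneway(*groups):
    """One-way ANOVA F-statistic for k groups of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom).
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F-statistic (relative to the F distribution at the chosen \(\alpha\)) indicates that at least two group means differ significantly; identical group means give an F of 0.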
**Role of the Bi-level Model.** We now investigate the role of the bi-level latent variable model of BiLL by ablating two important features that we believe are responsible for the gains in sample efficiency: its ability to incorporate global information by learning a bi-level latent space, and its ability to encode the global information relevant to each agent. To assess this, we choose one SMAC map from each level of difficulty: 3s vs 4z (_Easy_); 3s vs 5z (_Hard_); and Corridor (_Super Hard_). We compare BiLL with MAMBA (the best performing baseline) and two BiLL variants. To assess whether the use of global information to inform representation learning through our bi-level model improves sample efficiency, we compare against **BiLL-A**. This variant has a single-level latent variable model that learns only agent latent states using the agent's local observations. We also suspect that equipping each agent with the ability to learn its own latent states using the bi-level model enables each agent to incorporate global information relevant to its behavior learning. To test this hypothesis, we compare against **BiLL-S**, in which all agents use the same shared global latent states at each step. We visualize the training curves in Figure 5. On all maps, BiLL achieves greater sample efficiency than the two variants. Interestingly, BiLL-S outperforms MAMBA on two of three maps, while BiLL-A outperforms MAMBA on one map. Because BiLL-A exhibits the lowest performance, we believe that the bi-level model's ability to inform representation learning by incorporating global information is most responsible for BiLL's gains in sample efficiency. The decreased performance of BiLL-S compared to BiLL supports our hypothesis that sharing the global latent among agents impedes behavior learning. We suspect that this performance decrease is because noisy, irrelevant information is included in the shared global latent.
Taken together, the superior performance of BiLL highlights the crucial role of the bi-level model. In particular, these experiments suggest that the bi-level model effectively incorporates relevant global information and learns a structured hierarchy of latent states, answering **RQ2**.

Figure 5: Training curves of ablation studies. The X axis denotes the number of environment steps. The plots were smoothed using an exponential moving average. The error bars show the maximum and minimum winrate. BiLL outperforms the baseline (MAMBA) and all variants. The superior performance of BiLL highlights the crucial role of the bi-level model in informing representation learning.

**Model Accuracy.** Given the strong performance of BiLL, we investigate the ability of the bi-level model to accurately learn environment dynamics. We qualitatively examine the model's reconstruction of long trajectories of observations with multiple agents. The first row of Figure 6 visualizes the raw observations of a sample trajectory of 18 timesteps from the SMAC Corridor map. In the second row, we compare it against the bi-level model's reconstruction of it using posterior agent latent states, which are computed with access to the raw observation. We see that the model accurately reconstructs the trajectory. For generating latent trajectories, however, the model uses the posterior agent latent states in the first timestep, and predicts forward using the transition model that generates prior agent latent states. Since BiLL performs behavior learning using these latent states, it is important that the trajectory of observations reconstructed using these latent states closely match what the corresponding trajectory of raw observations would have looked like in the environment.
To test this, we use the combination of the posterior agent latent states in the first step and prior agent latent states in successive steps, which we call the conditioned prior (Lee et al., 2020) to reconstruct the sample trajectory of raw observations mentioned above. We observe from the third row of Figure 6 that our model generates accurate reconstructions using the conditioned prior. This suggests that the bi-level model accurately learns environment dynamics in complex multi-agent tasks, providing an answer to **RQ3**. ## 5 Related Work The majority of work in MARL focuses on the model-free setting (Lowe et al., 2017; Foerster et al., 2018; Ndousse et al., 2021). Despite their impressive performance, model-free MARL algorithms often suffer from a high sample complexity. Several approaches have been developed to address this issue. One line of work (Rashid et al., 2018; Son et al., 2019; Sunehag et al., 2017; Yang et al., 2020) uses the insight of value decomposition (Dietterich, 2000): value functions can be decomposed into simpler functions which can be learned more easily and (relatively) independently. They can then be recombined to approximate the original, more complex value function. Another line of work focuses on actor-critic based methods (Lowe et al., 2017; Foerster et al., 2018; Peng et al., 2020; Yu et al., 2021) that learn a centralized critic conditioned on global state and joint action to reduce non-stationarity, thus improving sample efficiency. Because we can use any model-free MARL algorithm for learning multi-agent policies in latent space, these techniques are complementary to our contributions. In MARL, latent variable models have shown promise in learning the reward function in inverse RL (Gruver et al., 2020) and representations of competing agents' strategies (Xie et al., 2020). 
While Krupnik et al. (2020) use a multi-step generative model in 2-player games to predict future joint observations and actions, their approach is only applicable in 2-agent scenarios, does not generate synthetic data, and does not follow the CTDE paradigm. Only very recently have latent variable models been used to learn environment dynamics in Markov games and to improve sample efficiency (Egorov and Shpilman, 2022). These approaches are based on direct extensions of single-agent methods that use latent variable models (Hafner et al., 2020). The dynamics of each agent are learned as though the agent is in a POMDP, and deep learning architectures, such as transformers (Vaswani et al., 2017; Lin et al., 2022), aggregate latent states from all agents to reduce non-stationarity and model errors while predicting forward (Egorov and Shpilman, 2022). However, latent variable models designed this way cannot be used for decentralized execution, as they always require access to all latent states. They are also unable to incorporate any additional global information available during training other than the local observations of agents. In multi-agent tasks, encoding global information, such as the global state in SMAC, is crucial for learning successful behaviors from the latent states, as we show through our empirical analysis. ## 6 Conclusion We presented a novel model-based MARL algorithm, BiLL, that learns behaviors purely using latent trajectories generated by a bi-level latent variable model. Our bi-level latent variable model effectively learns environment dynamics in multi-agent tasks by factorizing the latent space into high-level global latent states and low-level agent latent states that capture relevant global information. BiLL significantly outperforms state-of-the-art MARL methods in sample efficiency on SMAC and Flatland, including on two extremely challenging _Super Hard_ SMAC maps.
While we achieve impressive gains in performance, the learned latent states are not interpretable. To deploy our method in a real-world scenario, future work should involve improving representation learning to achieve interpretability of the latent space. ## 7 Acknowledgements This research was supported in part by NSF IIS-2046640 (CAREER). We thank the Robert Bosch Center for Data Science and AI for supporting author Aravind Venugopal's Post-Baccalaureate Fellowship for part of the duration of this work. We thank Rex Chen for his contributions towards setting up the computational resources for the experiments.

Figure 6: **Top row**: A sample trajectory of ground truth observations from the SMAC Corridor map; **Middle row**: Trajectory reconstructed by the model from posterior agent latents; **Bottom row**: Trajectory reconstructed by the model from conditioned prior agent latents.
2307.10323
IncDSI: Incrementally Updatable Document Retrieval
Differentiable Search Index is a recently proposed paradigm for document retrieval, that encodes information about a corpus of documents within the parameters of a neural network and directly maps queries to corresponding documents. These models have achieved state-of-the-art performances for document retrieval across many benchmarks. These kinds of models have a significant limitation: it is not easy to add new documents after a model is trained. We propose IncDSI, a method to add documents in real time (about 20-50ms per document), without retraining the model on the entire dataset (or even parts thereof). Instead we formulate the addition of documents as a constrained optimization problem that makes minimal changes to the network parameters. Although orders of magnitude faster, our approach is competitive with re-training the model on the whole dataset and enables the development of document retrieval systems that can be updated with new information in real-time. Our code for IncDSI is available at https://github.com/varshakishore/IncDSI.
Varsha Kishore, Chao Wan, Justin Lovelace, Yoav Artzi, Kilian Q. Weinberger
2023-07-19T07:20:30Z
http://arxiv.org/abs/2307.10323v2
# IncDSI: Incrementally Updatable Document Retrieval ###### Abstract Differentiable Search Index is a recently proposed paradigm for document retrieval, that encodes information about a corpus of documents within the parameters of a neural network and directly maps queries to corresponding documents. These models have achieved state-of-the-art performances for document retrieval across many benchmarks. These kinds of models have a significant limitation: it is not easy to add new documents after a model is trained. We propose IncDSI, a method to add documents in real time (about 20-50ms per document), without retraining the model on the entire dataset (or even parts thereof). Instead we formulate the addition of documents as a constrained optimization problem that makes minimal changes to the network parameters. Although orders of magnitude faster, our approach is competitive with retraining the model on the whole dataset and enables the development of document retrieval systems that can be updated with new information in real-time. Our code for IncDSI is available at [https://github.com/varshakishore/IncDSI](https://github.com/varshakishore/IncDSI). ## 1 Introduction Information retrieval (IR) systems map user queries, often expressed in natural language, to relevant documents. They are the core technology underlying search engines, and are only becoming more critical as the information available to users grows in complexity and volume. Current retrieval methods largely align with one of two paradigms. The dual encoder methods train separate encoders for queries and documents that map the two into a shared embedding space. The training loss encourages that queries are closest to their respective target documents (Karpukhin et al., 2020; Xiong et al., 2020) and one can perform retrieval by conducting a nearest neighbor search given the query and document embeddings.
The other paradigm that is gaining significant interest recently is differentiable search indexing (DSI; Tay et al., 2022), in which all information about a collection of documents is encoded in the parameters of a neural network model. Given a query, the model directly returns the ID of the relevant document, either via classification over all IDs or by generating the ID with a decoder. The two paradigms are quite different and have complementary advantages. It is straightforward to add new documents to dual encoder systems by mapping them into the joint space using the trained document encoder and including the resulting embedding vectors in the nearest neighbor search. DSI systems, on the other hand, shine in offering higher flexibility to learn the retrieval encoding of a document. Here, documents are not encoded through a shared encoder, but instead their implicit representation (i.e., within the network parameters) is induced during training. DSI methods are also relatively simple, consisting of a single unified model instead of different encoders and search procedures; DSI models perform retrieval with a single forward pass. However, DSI systems are harder to extend to new documents. Naively training the model with new documents risks catastrophic forgetting of existing documents (McCloskey and Cohen, 1989; Toneva et al., 2018; Mehta et al., 2022), and retraining on old and new data on a regular basis is costly. Most search engines retrieve documents from dynamic corpora that can grow over time. Consider a search engine for arXiv papers or social media, for instance. As new documents are uploaded, they should become available as soon as possible--ideally in real time. Figure 1 illustrates the setting where a document retrieval model is first trained on an initial set of documents, after which new documents arrive and must be incorporated into the document index as soon as possible.
Although a DSI system can be retrained periodically, its extension to the real-time setting has so far remained an open problem. We develop _IncDSI_, an approach that allows for rapidly adding new documents to a trained DSI model, while preserving the unmatched flexibility and performance of such models. Although in DSI the retrieval process happens inside the neural network, it is possible to formulate a constrained optimization problem that allows us to update and extend the number of document classes without retraining. Our approach leverages the fact that DSI networks have two main components: an encoder and a linear classification layer. The encoder embeds the queries and documents in a joint representation space, and the classification layer can be viewed as a matrix where each row corresponds to a _document vector_. Performing classification by finding the document vector that has maximal inner product with an embedded query is effectively a nearest neighbor search of the query and the document vectors. This is akin to the dual encoder setup, the main difference being that the document class embeddings (i.e. the document vectors) are not the output of a document encoder, but are instead _independently_ learned. This independence allows us to formulate adding a new document as a _constrained optimization_ problem that aims to find the optimal document vector for a new document. The independence also guarantees that this process does not modify any other existing document vectors and does not require broader updates to the query encoder. We evaluate our approach by incrementally adding up to 10k documents to a trained retrieval model, evaluating both retrieval performance and the speed of adding documents. Compared to retraining the model with the new documents, IncDSI retains retrieval performance on old documents while simultaneously achieving comparable performance on the new documents. 
It also has a significant advantage: IncDSI is extremely fast, and only requires about 50 milliseconds to add a new document. Our code for IncDSI is available at [https://github.com/varshakishore/IncDSI](https://github.com/varshakishore/IncDSI). ## 2 Related work **Sparse and Dense retrieval methods.** Document retrieval comprises two main tasks: 1) _Indexing_, during which document representations are learned, and 2) _retrieval_, during which the right document is found for a given query. Early approaches made use of sparse document and query representations due to their simplicity and effectiveness (Blanco and Lioma, 2012; Rousseau and Vazirgiannis, 2013; Zheng and Callan, 2015; Guo et al., 2016; Robertson et al., 1995). However, these methods often fail to capture rich semantic connections between documents and queries. Dense retrieval methods leverage the power of neural networks to learn dense representations of documents and queries in a low dimensional space. The most common dense retrieval methods use biencoders to learn to encode documents and queries such that the queries are close to their corresponding documents. During retrieval, for any given query, documents are retrieved by using Approximate Nearest Neighbor (ANN) search (Xiong et al., 2020; Dehghani et al., 2017). Karpukhin et al. (2020) present DPR, which is a BERT-based biencoder, trained using contrastive loss with in-batch negatives. Improving upon this work, many others explore efficient negative sampling strategies to improve the contrastive loss performance (Xiong et al., 2020; Gao et al., 2021). ANCE (Xiong et al., 2020) is trained with two simultaneous processes: the first refreshes the document and query embeddings periodically, and the second uses the latest embeddings to find hard negatives.
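For intuition, the in-batch-negative objective used by DPR-style biencoders treats, for each query, every other document in the batch as a negative. A toy sketch with plain-Python embeddings (the vectors and dimensions are illustrative, not from any of the cited systems):

```python
import math

def contrastive_loss(query_embs, doc_embs):
    """Mean negative log-likelihood of each query matching its own document,
    with all other in-batch documents acting as negatives."""
    total = 0.0
    for i, q in enumerate(query_embs):
        # Dot-product similarity of query i against every document in the batch.
        sims = [sum(qk * dk for qk, dk in zip(q, d)) for d in doc_embs]
        log_norm = math.log(sum(math.exp(s) for s in sims))
        total += log_norm - sims[i]  # -log softmax probability of the true pair
    return total / len(query_embs)
```

The loss is near zero when each query is far more similar to its own document than to the in-batch negatives, and grows as the pairing degrades.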
Cross encoders, another class of dense retrieval methods, encode queries and documents together, in order to better model the interaction between them (Nogueira et al., 2019; Qu et al., 2020; Khattab and Zaharia, 2020; Luan et al., 2021). **End-to-end Retrieval.** In contrast with dense dual-encoder based retrieval approaches, which perform indexing and retrieval in two separate stages, DSI (Tay et al., 2022) aims to combine the two stages in an end-to-end manner. For indexing, a neural network with parameters \(\theta\) is trained to map document text to corresponding document identifiers (docids). For retrieval, the neural network is trained to map user queries to docids. These two tasks are simultaneously learned. Document ids can either be auto-regressively generated (string ids) or produced by a dot product with a classification layer (atomic ids). Unlike parametric dense-retrieval methods, DSI is non-parametric and document-specific parameters are learned. Many other methods build on DSI and use other techniques to further improve the model performance. Wang et al. (2022) use generated queries and a novel auto-regressive decoder architecture to improve the DSI performance. They prepend each digit in the docid with a position number and propose a Prefix-Aware Weight Adaptive decoder. Zhou et al. (2022) use keyword based and semantic based docids to index the documents. **Query Generation in document retrieval.** Recent work has shown that using queries from a query generation model, in addition to the first few tokens of a document, to obtain its representations improves the results for document retrieval. This is because in traditional retrieval there is a mismatch between the two objectives of indexing and retrieval. Zhuang et al. (2022) show that performing indexing with generated queries significantly improves retrieval results.

Figure 1: Overview of our proposed setting. IncDSI can index incoming documents immediately and begin serving them to users.
Similarly, Wang et al. (2022) also show that using generated queries boosts the performance of the neural corpus indexer (NCI), which is the sequence-to-sequence retrieval model explained in the paragraph above. Bonifacio et al. (2022) consider settings where queries are not available for training retrieval models. They show that generated queries are not only useful for indexing but are also useful for retrieval when human queries are unavailable; in these settings they can be used in place of human queries. Bonifacio et al. (2022) prompt a large language model with a few document-query pairs to generate additional synthetic queries, which are then used to train information retrieval systems. **Preventing forgetting.** One of the biggest challenges in continual learning is _catastrophic forgetting_ (Parisi et al., 2019), a phenomenon in which old data is forgotten as a model is trained on new data. Alleviating forgetting is an active research area (Kirkpatrick et al., 2017; Riemer et al., 2018; Lee et al., 2017), in which memory-based approaches are popular (Hayes et al., 2019; Isele and Cosgun, 2018; Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2018; Rolnick et al., 2019; Aljundi et al., 2018). Chaudhry et al. (2019) show that repeating even a small part of old training data while the model is trained on new data can reduce forgetting to some extent. This technique is also applied in Mehta et al. (2022), where they use both generated and natural queries from old documents while training on new documents. Apart from using generated queries, they also apply Sharpness-Aware Minimization (Foret et al., 2020) in their training objective to optimize for a flatter loss basin instead of a minimal but potentially sharp loss. This method is shown to help alleviate forgetting (Foret et al., 2020). ## 3 Problem Setup and Notation We aim to have an up-to-date real-time retrieval model that can be quickly and efficiently updated with information from new documents.
At any given time, queries for both old and new documents must be correctly mapped to their corresponding documents. This streaming setting is pictorially shown in Figure 1. Our method, IncDSI, broadly has two different stages. In the first stage, a document retrieval model \(M^{0}\) is trained on an initial set of documents \(D^{0}=\{d_{1},\cdots,d_{n}\}\). Each of these documents has some number of associated queries that are used in training, and we denote \(\mathbf{q}_{i,j}\) to be the \(i^{th}\) query associated with document \(j\). These queries can either be user queries or queries from a query generation model; both are used in the same manner in our method. In the second stage, additional documents become available in a streaming fashion. As each new document becomes available, the retrieval model is updated to include it. We denote the new documents as \(D^{\prime}=\{d_{n+1},d_{n+2},\cdots\}\) and use \(M^{t}\) to refer to the updated model after \(t\) new documents have been added. Like with the initial documents, we also have some variable number of queries \(\{\mathbf{q}_{1,n+t},\mathbf{q}_{2,n+t},\cdots\}\) corresponding to each new document \(d_{n+t}\). ## 4 IncDSI Before we introduce the constrained optimization problem used to obtain model \(M^{t}\), we first introduce how the initial model \(M^{0}\) is trained on the initial document corpus \(D^{0}\). ### Document Retrieval Model Our initial document retrieval architecture is a modified version of the DSI model (Tay et al., 2022). As introduced in Section 2, DSI is a new end-to-end paradigm for document retrieval in which a single model is trained to directly produce the corresponding document id (docid) for a given query.
DSI makes use of a T5 model backbone that is trained with either a language model head to autoregressively generate docids as strings or a classification layer to output atomic docids (atomic docids are arbitrary unique docids that are assigned to each document). We focus on the setup of a DSI model with a classification layer, as prior work (Mehta et al., 2022) has shown that this approach is less prone to forgetting when compared to autoregressive methods. The atomic DSI network is trained with cross entropy loss to both index the documents and train the retrieval model. For indexing, the model is trained to map the first 32 tokens of a document to its corresponding docid, and for retrieval, it is trained to map user queries to corresponding docids. We make two main changes to DSI, as explained below. For indexing, instead of using the first 32 tokens as the document representation, we use an off-the-shelf query generation model like docTTTTTquery (Nogueira et al., 2019) to generate queries for every document, and train the model to map the generated queries to corresponding docids. Prior work (Wang et al., 2022; Zhuang et al., 2022) has demonstrated that using generated queries to index models yields better performance because it reduces the train-test gap between using extracted document text during training and user queries during test time. Wang et al. (2022) obtain results by using both generated queries and the first few document tokens. Our experiments suggest that using document tokens provides only minor benefits, so for simplicity we only use the generated queries to index the documents. We present these experiments in Appendix B. Since we are not performing autoregressive decoding, we replace the T5 backbone (an encoder-decoder model) in DSI with a BERT backbone (an encoder model).
Additionally, most dual encoder based methods are built using pre-trained BERT models (Karpukhin et al., 2020; Xiong et al., 2020), and thus we can compare apples to apples by using a BERT backbone. That said, our method is agnostic to the choice of model; any other encoder (like an encoder-only T5) can be used as well. To summarize, our document retrieval model \(M^{0}\), which is trained on the initial data \(D^{0}\), consists of a BERT based query encoder and an additional classification layer. Akin to DSI, the model \(M^{0}\) is trained with a cross entropy loss to perform classification and directly predict docids.

### Incremental Addition

The document retrieval model \(M^{0}\), trained on documents \(D^{0}\), has a classification layer \(\mathbf{V}\in\mathbb{R}^{|D^{0}|\times h}\), where \(|D^{0}|\) is the number of already indexed documents and \(h\) is the output dimensionality of the query encoder. Each row in matrix \(\mathbf{V}\) can be interpreted as a document vector (in \(\mathbb{R}^{h}\)) that corresponds to a particular document. We can add a new document class to model \(M^{0}\) by introducing an additional class vector corresponding to the new document to \(\mathbf{V}\). To add the new document, we use the queries associated with that document and attempt to ensure that those queries are correctly mapped to the new document. To reduce the mismatch between train and test time and to obtain a greater diversity of queries, we obtain additional queries with a query generation model as described in the previous section. In settings where natural queries are unavailable, just the generated queries can be used.

Optimization Problem. We formulate the addition of a new document as a constrained optimization problem over the document representation space. We first describe how to add one new document to the model trained on the initial set and then describe how to use a similar procedure to add more documents sequentially.
Let's suppose that the current retrieval model has been trained on \(n\) documents (so the number of rows in \(\mathbf{V}\) is \(n\)). In order to add a new document \(d_{n+1}\) with associated queries \(\{\mathbf{q}_{0,n+1},\cdots,\mathbf{q}_{k,n+1}\}\), we want to find some document representation \(\mathbf{v}_{n+1}\in\mathbb{R}^{h}\) such that when \(\mathbf{v}_{n+1}\) is appended to the existing classification layer \(\mathbf{V}\), the resulting model _both_ correctly classifies queries corresponding to the new document and the documents that were already indexed when \(M^{0}\) was trained.

Figure 2: An illustration of the process of adding a new document (shown in purple) with its associated queries. The queries are embedded using the encoder trained on initial documents. A single document vector is optimized to be closer to the query embeddings (all other document vectors are fixed).

The first constraint we need to satisfy is to correctly classify any query from the new document. Because queries can be noisy, we average over the \(k\) available queries to develop a representative query embedding \(\bar{\mathbf{q}}_{n+1}=\frac{1}{k}\sum_{i=1}^{k}\mathbf{q}_{i,n+1}\) that should retrieve the new document (\(k\) is variable for every document). More formally, the constraint \[\bar{\mathbf{q}}_{n+1}^{T}\mathbf{v}_{n+1}>\max_{1\leq j\leq n}\bar{\mathbf{q}}_{n+1}^{T}\mathbf{v}_{j} \tag{1}\] should hold, where \(\mathbf{v}_{j}\) is the \(j\)-th row of \(\mathbf{V}\), \(\bar{\mathbf{q}}_{n+1}^{T}\mathbf{v}_{n+1}\) is the score for the new document and \(\bar{\mathbf{q}}_{n+1}^{T}\mathbf{v}_{j}\) is the score for the \(j^{\text{th}}\) original document. The inequality in (1) ensures that the new document is scored higher than all the existing documents for the representative query embedding, and thus that the "query" \(\bar{\mathbf{q}}_{n+1}\) retrieves the new document.
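As a quick numeric illustration (toy vectors and a hypothetical helper name, not the authors' code), constraint (1) can be checked directly by comparing the new document's score against the best score over the existing rows of \(\mathbf{V}\):

```python
import numpy as np

def constraint1_holds(V, new_queries, v_new):
    """Check inequality (1): the averaged query for the new document must
    score higher against v_new than against every existing row of V."""
    q_bar = new_queries.mean(axis=0)        # representative query embedding
    return q_bar @ v_new > (V @ q_bar).max()

V = np.array([[1.0, 0.0], [0.0, 1.0]])        # two already-indexed documents
queries = np.array([[0.6, 0.8], [0.8, 0.6]])  # k = 2 noisy queries, mean [0.7, 0.7]
print(constraint1_holds(V, queries, v_new=np.array([0.7, 0.7])))  # True
print(constraint1_holds(V, queries, v_new=np.array([0.1, 0.0])))  # False
```

With the first candidate vector the representative query scores 0.98 for the new document versus 0.7 for the best old one, so the constraint holds; the second candidate fails it.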
Although we want to retrieve the new document when appropriate, we do not want the addition of new documents to degrade retrieval performance for the original documents. So we need to minimize the probability that the queries corresponding to the original documents are also mapped to the new document. To achieve this, we use the set of queries used for indexing the original documents to introduce an additional set of constraints. For some original document \(j\), we denote the cached set of training queries as \(\{\mathbf{z}_{i,j}\}_{i=1}^{k}\); all training queries corresponding to the initial documents are cached after training the initial retrieval model \(M^{0}\) for efficiency. We compute a representative query embedding by averaging over the cached queries \(\bar{\mathbf{z}}_{j}=\frac{1}{k}\sum_{i=1}^{k}\mathbf{z}_{i,j}\). We can then construct a matrix \(\mathbf{Z}\in\mathbb{R}^{|D^{0}|\times h}\) that contains a representative query embedding for each original document. To preserve the performance of our system for the original documents, we find a new class vector \(\mathbf{v}_{n+1}\) that does not interfere with the retrieval of the existing documents. More formally, we enforce the following constraints \[\forall_{j}\ \mathbf{z}_{j}^{T}\mathbf{v}_{n+1}<\mathbf{z}_{j}^{T}\mathbf{v}_{j}, \tag{2}\] where \(\mathbf{z}_{j}\) is the \(j\)-th row of \(\mathbf{Z}\), \(\mathbf{z}_{j}^{T}\mathbf{v}_{n+1}\) is the score for the new document and \(\mathbf{z}_{j}^{T}\mathbf{v}_{j}\) is the score for the \(j^{\text{th}}\) original document. The inequalities in (2) ensure that the queries for each original document will not retrieve the new document. Consequently, we find a \(\mathbf{v}_{n+1}\) that correctly classifies old and new queries with the following optimization problem: \[\min\|\mathbf{v}_{n+1}\|_{2}^{2} \tag{3}\] \[\text{s.t. 
}\bar{\mathbf{q}}_{n+1}^{T}\mathbf{v}_{n+1}> \max_{1\leq j\leq n}\bar{\mathbf{q}}_{n+1}^{T}\mathbf{v}_{j},\] \[\forall_{j}\ \mathbf{z}_{j}^{T}\mathbf{v}_{n+1}<\mathbf{z}_{j}^{T}\mathbf{v}_{j}.\] We rewrite the violation of the first constraint in a form amenable for optimization using the hinge loss \[\ell_{1}(\mathbf{v}_{n+1})=\max\left(0,\max_{j}(\bar{\mathbf{q}}_{n+1}^{T}\mathbf{v}_{j})-\bar{\mathbf{q}}_{n+1}^{T}\mathbf{v}_{n+1}+\gamma_{1}\right)^{2}, \tag{4}\] where \(\gamma_{1}>0\) is some margin. We minimize the squared hinge loss because smooth variants of the hinge loss can be easier to minimize with first-order optimization methods (Zhang and Oles, 2001; Rennie and Srebro, 2005). We found that this accelerated optimization while performing similarly to standard hinge loss. Minimizing Equation 4 satisfies the first constraint when the loss is low, finding some \(\mathbf{v}_{n+1}\) that is retrieved by the new queries. We can also similarly rewrite the second constraint using the hinge loss as \[\ell_{2}(\mathbf{v}_{n+1})=\sum_{j}\max\left(0,\mathbf{z}_{j}^{T}\mathbf{v}_{n+1}-\mathbf{z}_{j}^{T}\mathbf{v}_{j}+\gamma_{2}\right)^{2}, \tag{5}\] where \(\gamma_{2}>0\) is some margin. Minimizing Equation 5 satisfies the second set of constraints when the loss is low and ensures that we find some \(\mathbf{v}_{n+1}\) that does not interfere with the retrieval of the original documents. Our final optimization objective is a convex combination of \(\ell_{1}(\mathbf{v}_{n+1})\), which ensures that we retrieve the new document correctly, and \(\ell_{2}(\mathbf{v}_{n+1})\), which ensures that we maintain performance for the old documents.
Therefore, our final optimization objective becomes \[\mathcal{L}(\mathbf{v}_{n+1})=\lambda_{1}\ell_{1}(\mathbf{v}_{n+1})+(1-\lambda_{1})\ell_{2}(\mathbf{v}_{n+1})+\lambda_{2}\|\mathbf{v}_{n+1}\|_{2}^{2},\] where \(\lambda_{1}\in(0,1)\) balances the objectives for accurately retrieving the new document and preserving the retrieval performance for the old documents, and \(\lambda_{2}\) controls the weight for L2 regularization. To solve the optimization problem, we utilize the L-BFGS optimizer (Fletcher, 2013). For all of our experiments we set the initial learning rate to \(1\) and utilize the strong Wolfe line search method (Nocedal and Wright, 2006) to compute the step sizes during optimization. Both the L-BFGS optimizer and the strong Wolfe line search method are implemented natively within PyTorch. We optimize the weight vector for a maximum of \(30\) iterations and terminate optimization early if the norm of the update is less than \(10^{-3}\).

Algorithm. So far, we have outlined how to add a single new document to a model trained on some initial documents (see Figure 2 for an overview). After finding \(\mathbf{v}_{n+1}\) for the new document, we add it as a new row to matrix \(\mathbf{V}\). We obtain an updated matrix of document representations \(\mathbf{V}^{\prime}=[\mathbf{V};\mathbf{v}_{n+1}]\in\mathbb{R}^{(|D|+1)\times h}\) and correspondingly an updated matrix of representative queries \(\mathbf{Z}^{\prime}=[\mathbf{Z};\bar{\mathbf{q}}]\in\mathbb{R}^{(|D|+1)\times h}\), where \(\bar{\mathbf{q}}=\frac{1}{m}\sum_{i=1}^{m}\mathbf{q}_{i}\) is the average query representation used to index the new document. We can now use the updated matrices \(\mathbf{V}^{\prime},\mathbf{Z}^{\prime}\) and treat the new document as part of the already indexed document set when adding another new document. Therefore, we can repeatedly use the same optimization method described above to continue adding a stream of new documents.
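Concretely, the two squared-hinge penalties and the update for a single new document vector can be sketched as follows (a toy NumPy illustration with hypothetical names; plain gradient descent stands in for the L-BFGS optimizer used in the paper):

```python
import numpy as np

def losses(v, V, Z, q_bar, g1=0.5, g2=0.5):
    """Squared-hinge penalties for constraints (1) and (2)."""
    l1 = max(0.0, (V @ q_bar).max() - q_bar @ v + g1) ** 2
    l2 = float(np.sum(np.maximum(0.0, Z @ v - np.sum(Z * V, axis=1) + g2) ** 2))
    return l1, l2

def add_document(V, Z, q_bar, lam1=0.5, lam2=1e-4, g1=0.5, g2=0.5,
                 lr=0.1, steps=200):
    """Optimize one new document vector, then append it to V and Z."""
    v = q_bar.copy()                                   # warm start at the query
    for _ in range(steps):
        h1 = max(0.0, (V @ q_bar).max() - q_bar @ v + g1)
        h2 = np.maximum(0.0, Z @ v - np.sum(Z * V, axis=1) + g2)
        grad = (lam1 * 2 * h1 * (-q_bar)               # d l1 / d v
                + (1 - lam1) * 2 * (Z.T @ h2)          # d l2 / d v
                + lam2 * 2 * v)                        # d ||v||^2 / d v
        v = v - lr * grad
    return np.vstack([V, v]), np.vstack([Z, q_bar])

# Two already-indexed documents with axis-aligned vectors and queries.
V = np.array([[1.0, 0.0], [0.0, 1.0]])
Z = V.copy()
q_new = np.array([0.7, 0.7])                           # representative new query
V, Z = add_document(V, Z, q_new)
print(int(np.argmax(V @ q_new)))                       # -> 2 (the new docid)
print(int(np.argmax(V @ np.array([1.0, 0.0]))))        # -> 0 (old doc unchanged)
```

In this toy run the optimized vector settles where the new query retrieves docid 2 while the old queries still retrieve their own documents, mirroring the trade-off that \(\lambda_{1}\) controls.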
We outline this procedure for adding a new set of documents in Algorithm 1.

```
Input: query embeddings \(\mathbf{Z}\), classification layer \(\mathbf{V}\), new document set \(D^{\prime}\),
       new queries \(\{q_{i,t}\}_{i=1}^{k}\) for every \(t\)-th new document (\(k\) is variable for each document)
Hyperparameters: margins \(\gamma_{1}\) and \(\gamma_{2}\), loss weighting \(\lambda_{1}\), L2 regularization weight \(\lambda_{2}\)
n = number of initial rows in \(\mathbf{V}\)
for document number \(t\) in \(\{1,2,\cdots,|D^{\prime}|\}\) do
    x = n + t
    Initialize \(\mathbf{v}_{x}\) randomly
    \(\bar{\mathbf{q}}_{x}=\frac{1}{k}\sum_{i=1}^{k}\mathbf{q}_{i,x}\)
    optim \(\leftarrow\) LBFGS(\(\mathbf{v}_{x}\), lr = \(1\), line_search = True)
    repeat
        \(\ell_{1}(\mathbf{v}_{x})=\max(0,\max_{j}(\bar{\mathbf{q}}_{x}^{T}\mathbf{v}_{j})-\bar{\mathbf{q}}_{x}^{T}\mathbf{v}_{x}+\gamma_{1})^{2}\)
        \(\ell_{2}(\mathbf{v}_{x})=\sum_{j}\max(0,\mathbf{z}_{j}^{T}\mathbf{v}_{x}-\mathbf{z}_{j}^{T}\mathbf{v}_{j}+\gamma_{2})^{2}\)
        \(\mathcal{L}(\mathbf{v}_{x})=\lambda_{1}\ell_{1}(\mathbf{v}_{x})+(1-\lambda_{1})\ell_{2}(\mathbf{v}_{x})+\lambda_{2}\|\mathbf{v}_{x}\|_{2}^{2}\)
        step optim(\(\mathcal{L}(\mathbf{v}_{x})\)) to minimize loss
    until 30 iterations or \(\|\Delta\mathbf{v}_{x}\|_{2}^{2}<10^{-3}\)
    \(\mathbf{V}\leftarrow[\mathbf{V};\mathbf{v}_{x}]\)
    \(\mathbf{Z}\leftarrow[\mathbf{Z};\bar{\mathbf{q}}_{x}]\)
end for
```
**Algorithm 1** IncDSI

## 5 Experiments

Datasets. We conduct our experiments on two publicly available datasets--Natural Questions 320K (Kwiatkowski et al., 2019) and MS MARCO Document Ranking (Nguyen et al., 2016). We construct new benchmark datasets from Natural Questions and MS MARCO to facilitate research in building update-able document retrieval models. The NQ320K dataset consists of query-document pairs, where the queries are natural language questions and the documents are Wikipedia articles that contain answers to the queries.
MS MARCO is another popular question answering dataset that contains Bing questions and corresponding web page documents. The original dataset contains 3.2 million documents, but only a subset of these documents have associated queries. In each dataset, we assign a unique docid to each document. The documents in NQ320K and MS MARCO are each split into three sets--the initial document set \(D^{0}\) that is available at the start, the new document set \(D^{\prime}\) that becomes available in a streaming fashion after a model is trained on the initial data, and the tuning document set \(D^{*}\) that is used to tune the parameters for IncDSI. We randomly sample 90% of the documents to form the initial set \(D^{0}\), 9% of the documents to form the new set \(D^{\prime}\) and 1% of the documents to form the tuning set \(D^{*}\). Each dataset also has natural human queries that are associated with the documents. We use the official NQ and MS MARCO train-validation splits to divide the queries into train/val/test splits as follows: the train split is divided into 80% train / 20% validation data and the validation split is used as test data. For each document in the train set, 15 additional queries are generated using docTTTTTquery (Nogueira et al., 2019). Since query generation models sometimes produce the same generic query for multiple documents, we filter out queries that are linked to multiple documents. As a result, a few documents might have fewer than 15 generated queries. The final statistics of the two datasets are shown in Table 5.

Figure 3: Time taken to add documents for different methods. Numbers on the bars are hit@1 for new documents. Lighter shades in stacked bars indicate later checkpoints (epochs 1, 5, 10). DPR, which only requires embedding queries and computing inner products, is not shown because it uses a model trained on just the original data and results in worse performance (when compared to the models here).
Baselines. We compare our method with the following three baselines:

* DPR (Karpukhin et al., 2020): We train a standard dual-encoder DPR model on the initial dataset. The frozen encoder from the trained DPR model is used to obtain representations for documents and queries from the original and new dataset. Nearest neighbor search is then used to classify the queries.
* Continual training with frozen DPR (DSI-DPR): A model consisting of a frozen DPR encoder and a trainable classification layer is continually fine-tuned with cross-entropy loss on natural and generated queries from both the old and new documents.
* Continual training (DSI-Scratch): A DSI model is first trained to map generated and natural queries from the initial documents to their corresponding docids (to make a fair comparison, we use BERT as the backbone for the DSI model). The model is then continually fine-tuned with queries from the old and new documents. This method is similar to DSI++ (Mehta et al., 2022). During continual training, we utilize the same hyperparameters as the model trained on old documents.

Experimental Setting. We use the BERT model (Devlin et al., 2018) and initialize it with publicly available bert-base-uncased weights for all our experiments. The classification layer is randomly initialized. For the DPR baseline, we use the official implementation (Karpukhin et al., 2020). For the continual training baselines, the document retrieval model is trained for 20 epochs on the initial set of documents and for an additional 10 epochs on both the initial and new documents. Learning rates of 1e-5 and 5e-5 and batch sizes of 128 and 1024 are used for NQ320K and MS MARCO, respectively. The results are reported for the epoch with the best validation accuracy. For all our experiments, we use one A6000 GPU.
Metrics. In line with previous work (Tay et al., 2022; Wang et al., 2022), we measure Hits@k, where \(k=\{1,5,10\}\), and Mean Reciprocal Rank@10 (MRR@10) to evaluate our method and the baselines. Hits@k (also denoted as H@k) measures how often the desired document is one of the top-k retrieved documents, and MRR@k calculates the reciprocal of the rank at which the correct document is retrieved (the rank is set to infinity if the desired document is not in the top \(k\)). We measure these metrics on queries belonging to both the initial documents \(D^{0}\) and the newly added documents \(D^{\prime}\). We also measure the amount of time required to add the new documents to an already trained retrieval model.

Hyperparameter tuning. To tune the four hyperparameters for IncDSI (the objective trade-off weight \(\lambda_{1}\), the L2 regularization weight \(\lambda_{2}\), and the margins \(\gamma_{1},\gamma_{2}\)), we utilize the Ax library (Bakshy et al., 2018) to perform Bayesian optimization with our tuning set \(D^{*}\). We run hyperparameter optimization for 50 trials with the default Ax library settings and use the best hyperparameters to add the heldout documents in the new document set \(D^{\prime}\). We optimize the hyperparameters over \(\lambda_{1}\in\text{Uniform}(.05,.95)\), \(\gamma_{1},\gamma_{2}\in\text{Uniform}(0,10)\), and \(\lambda_{2}\in\text{LogUniform}(1e-8,1e-3)\). For Bayesian optimization, we set the target objective to an F-beta score: the weighted harmonic mean of the validation MRR@10 for the original documents and the tuning documents. Formally, given the validation MRR@10 for the original documents, \(y_{\text{orig}}\), and the validation MRR@10 for the tuning documents, \(y_{\text{tune}}\), the target metric is \[y_{\text{target}}=(1+\beta^{2})\frac{y_{\text{tune}}\cdot y_{\text{orig}}}{(\beta^{2}\cdot y_{\text{tune}})+y_{\text{orig}}},\] where setting \(\beta>1\) emphasizes the retrieval performance for the original documents.
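A minimal sketch of these metrics and the tuning target (hypothetical function names; `ranks` holds the 1-indexed position of the correct docid for each query):

```python
import numpy as np

def hits_at_k(ranks, k):
    """Fraction of queries whose correct document appears in the top k."""
    return float(np.mean([r <= k for r in ranks]))

def mrr_at_k(ranks, k=10):
    """Mean reciprocal rank; a rank beyond k contributes 0 (rank -> infinity)."""
    return float(np.mean([1.0 / r if r <= k else 0.0 for r in ranks]))

def target_metric(y_tune, y_orig, beta=5.0):
    """Weighted harmonic mean used as the Bayesian-optimization objective;
    beta > 1 emphasizes retrieval performance on the original documents."""
    return (1 + beta**2) * y_tune * y_orig / (beta**2 * y_tune + y_orig)

ranks = [1, 2, 5, 12]            # rank of the correct docid for four queries
print(hits_at_k(ranks, 1))       # 0.25
print(mrr_at_k(ranks))           # (1 + 0.5 + 0.2 + 0) / 4 = 0.425
print(round(target_metric(0.60, 0.75), 4))   # 0.7429
```

With \(\beta=5\) the target sits much closer to \(y_{\text{orig}}\) than to \(y_{\text{tune}}\), which is exactly the weighting discussed below.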
Because the original document set is generally much larger than the set of new documents, we set \(\beta=5\) to emphasize the preservation of the retrieval performance for the existing documents (this choice is ablated in the next section). The target objective can be modified to include time or hits@k information depending on the specific use case.

## 6 Results and Discussions

Performance. We add \(k\) documents, where \(k\in\{10,100,1000,10000\}\), to a model trained with the initial documents \(D^{0}\) and evaluate the retrieval accuracy and time required to add documents with IncDSI and the baselines introduced previously. We empirically observe that every new document can be added by satisfying all the constraints in Equation 3 for the datasets we use. However, there might exist cases when the optimization problem fails to find a feasible solution. In such a case, the optimization problem can be re-started after altering it by tweaking the initialization/hyperparameters or by removing some queries and re-computing the representative query \(\bar{\mathbf{q}}\) in IncDSI. Figure 3 and Figure 4 show time and accuracy plots for adding different numbers of documents with the NQ320K dataset. The raw numbers are presented in Appendix C and Appendix D. The trends for the MS MARCO dataset are similar and, due to space constraints, the results on the MS MARCO dataset are presented in Appendix D. With IncDSI, we can add previously unseen documents to the index in less than 50 milliseconds. This means that our approach can efficiently index a stream of documents as they become available. We observe that retraining the DSI models takes orders of magnitude longer to achieve comparable performance on the new documents. For example, IncDSI indexes 1000 documents in roughly 16 seconds and achieves an H@1 of 62.0 for those documents. The baseline DSI-Scratch, on the other hand, needs over \(513\times\) longer (2hr17m) to achieve an H@1 of 63.4.
Moreover, on the MS MARCO dataset, IncDSI outperforms the baselines despite requiring orders of magnitude less time; Table 8 shows that H@1 for the new documents is 61.0 with IncDSI and at most 51.8 for the baselines. The learned baselines need to be trained for much longer than 10 epochs (which already takes about 6 hours) to achieve better performance on the new documents because the initial MS MARCO document set is much bigger. As a result, using such methods in a streaming setting is impractical. Our constrained optimization formulation, however, is able to find an effective new document representation in a fraction of a second. Compared to the dual-encoder DPR baseline, which can also encode new documents in a streaming setting in milliseconds, we observe that our method is similarly fast while consistently achieving greater retrieval performance (see Table 7 and Table 8). For all settings with a reasonable sample size (i.e. \(\geq 100\) document additions), IncDSI on NQ320K achieves an H@1 greater than 61.0 on the new documents while DPR never exceeds 48.0. This is due to leveraging the strengths of DSI and decoupling the document representations from a parametric model like a BERT encoder. By directly optimizing over the representation space, our model has much greater capacity to incorporate information from new documents. We present the retrieval performance of our system in a streaming setting where documents are added incrementally to the index in Figure 4. Our approach is capable of indexing thousands of documents effectively with limited interference with the original documents. Notably, the H@10 for the original documents is nearly constant during indexing, although the H@1 for the original documents does degrade slowly over time. These results show that IncDSI is very effective for adding documents in close to real time and yields close to the same performance as retraining.
While IncDSI is not a replacement for the standard paradigm of retraining, particularly in settings where many documents must be added to the index, it offers a solution for indexing documents in real time and can potentially reduce the frequency at which resource-intensive retraining is required.

Using only generated queries. There are scenarios where no natural queries are available for new documents. For instance, when a new paper is uploaded to arXiv, human queries corresponding to that paper might not be available. In this scenario, we can add documents by using only generated queries. In Table 1, we report the performance from using only generated queries to add 1000 new documents and we see that IncDSI achieves comparable performance to the baselines. When these numbers are compared to those obtained from using both natural and generated queries, we observe that significant gains are achieved by using natural queries. This is likely because natural queries are more diverse in structure and content than generated queries from the docTTTTTquery generation model. Using a better query generation model will help improve performance, especially when only using generated queries.

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
 & \multicolumn{5}{c}{NQ320K (Original/New)} \\
\cline{2-6}
 & H@1 & H@5 & H@10 & MRR@10 & Time \\
\hline
IncDSI & \(67.7/53.5\) & \(84.6/77.5\) & \(87.9/81.7\) & \(75.1/63.7\) & 13.85s \\
DSI-DPR & 63.4/53.5 & 83.3/74.6 & 87.3/78.9 & 72.0/61.8 & 45m12s \\
DSI-Scratch & 68.0/53.5 & 84.4/76.1 & 87.7/78.9 & 75.2/62.7 & 215m36s \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Impact of only using generated queries.

Figure 4: We present the retrieval performance for the original documents and new documents as increasing numbers of documents are indexed. The IncDSI performance represents the average over 10 random document orderings.

Ablations. We ablate a number of the design choices made in developing our framework.
For our ablation studies, we report results for adding 1000 new documents from the NQ320K dataset.

The Bayesian optimization target. We ablate the impact of the weighting term, \(\beta\), used during hyperparameter tuning. Increasing \(\beta\) places more emphasis on maintaining retrieval performance for the original documents. We report results over a sweep of different \(\beta\) values in Table 2. As expected, increasing \(\beta\) monotonically improves the performance on the original documents at the expense of performance on the new documents. We selected \(\beta=5\) as it strikes a reasonable balance, but different values may be advisable depending on the application.

Number of generated queries. We ablate the number of generated queries used in Table 3. The results show that IncDSI is not sensitive to the number of generated queries for NQ320K. Other datasets might benefit from a greater number of generated queries due to greater query diversity: many generated queries can yield a more robust representation of the document class and thus better generalization.

Loss function. We report the effect of minimizing the standard hinge loss instead of the squared hinge loss in Table 4. We observe that they achieve similar performance. However, using the squared hinge loss is almost \(3.3\times\) faster and we therefore use the squared hinge loss as the loss function of IncDSI.

## 7 Limitations and Future work

Despite enabling a real-time document retrieval system with good retrieval accuracy, our method has some limitations. As we add an increasing number of new documents, the performance on the original set of documents degrades slightly, and we eventually need to retrain the model (as is standard practice). It is possible that alternative formulations of the optimization objective would be more effective at preserving performance for longer.
To embed new queries, we use a frozen query encoder that is trained on the set of initial documents and thus rely on strong representations from the query encoder to generalize effectively. In the future, we would like to explore pretraining tasks or other methods to improve the generalizability of the query encoder. We can also further improve the performance of IncDSI by training a query generation model on in-domain data, instead of using an off-the-shelf model. In this work we only consider the setting of adding new documents. However, our proposed method can also be used to edit information in existing documents. If we want to edit the information in a document, we can formulate new constraints that encode the information that needs to be associated with the document and optimize its corresponding document vector using IncDSI. We leave the exploration of editing documents to future work.

## 8 Conclusion

We present IncDSI, a novel document retrieval system that can index new documents as soon as they are available, in roughly 50 milliseconds. We accomplish this by formulating the problem of indexing a new document as a constrained optimization problem over the document representation space. By holding the rest of our system fixed and optimizing only the document representation, we can rapidly introduce new documents to our system. IncDSI is orders of magnitude faster when compared to retraining the document retrieval model and yet produces comparable performance.

## Acknowledgements

This research is supported by a gift from the Simons Foundation, and grants from the DARPA AIE program, Geometries of Learning (HR00112290078), the National Science Foundation NSF (IIS-2107161, III-1526012, IIS-1149882, and IIS-1724282), and the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875). We thank Oliver Richardson, Katie Luo and all the reviewers for their feedback.
\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
 & \multicolumn{5}{c}{NQ320K (Original/New)} \\
\cline{2-6}
Loss Function & H@1 & H@5 & H@10 & MRR@10 & Time \\
\hline
Hinge & \(68.1/59.2\) & \(84.8/74.6\) & \(88.0/80.3\) & \(75.4/66.4\) & 53.7s \\
Squared Hinge & \(67.8/62.0\) & \(84.6/76.1\) & \(87.9/81.6\) & \(75.1/68.9\) & 16.1s \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Impact of loss function.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & \multicolumn{4}{c}{NQ320K (Original/New)} \\
\cline{2-5}
 & H@1 & H@5 & H@10 & MRR@10 \\
\hline
\(\beta=1\) & \(65.4/76.1\) & \(83.6/85.9\) & \(87.3/87.3\) & \(73.4/80.3\) \\
\(\beta=3\) & \(67.3/67.6\) & \(84.4/81.7\) & \(87.7/84.5\) & \(74.7/73.7\) \\
\(\beta=5\) & \(67.8/63.4\) & \(84.8/73.2\) & \(88.0/76.1\) & \(75.2/68.1\) \\
\(\beta=10\) & \(68.2/52.1\) & \(84.9/66.2\) & \(88.1/70.4\) & \(75.5/58.5\) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Impact of different values of \(\beta\).

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & \multicolumn{4}{c}{NQ320K (Original/New)} \\
\cline{2-5}
 & H@1 & H@5 & H@10 & MRR@10 \\
\hline
\(5\) & \(68.1/56.3\) & \(84.7/71.8\) & \(87.9/80.3\) & \(75.4/64.3\) \\
\(10\) & \(68.0/60.6\) & \(84.8/77.5\) & \(87.9/78.9\) & \(75.3/66.9\) \\
\(15\) & \(67.8/62.0\) & \(84.6/76.1\) & \(87.9/81.6\) & \(75.1/68.9\) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Impact of using a different number of generated queries.
2305.05631
Seismological Understanding of Accelerogram Amplitude Scaling for Engineers with Implications to Seismic Risk Analysis
Due to the paucity of strong recorded accelerograms, earthquake engineering analysis relies on accelerogram amplitude scaling for structural damage/collapse assessment and target spectrum matching. This paper investigates seismological characteristics of scaled accelerograms so as to inform future ground motion selection and seismic risk assessment methods. If a recorded accelerogram is scaled linearly by multiplying it with a positive factor, it is shown using the Representation theorem and the accelerogram Fourier spectrum that moment magnitude scales logarithmically and static stress drop scales linearly. Other seismic parameters such as the Joyner-Boore distance metric and the effective rupture area are invariant to scaling an accelerogram. This proposed interpretation of scaling is validated in the time as well as the frequency domains using a hybrid method for ground motion simulation. Finally, a discussion is made over the seismological correctness of accelerogram scaling. It is suggested that a suite of scaled accelerograms can be considered as being seismologically correct if this suite's magnitude given rupture area and stress drop distributions are similar to empirical observations.
Somayajulu L. N. Dhulipala
2023-05-09T17:21:37Z
http://arxiv.org/abs/2305.05631v1
Seismological Understanding of Accelerogram Amplitude Scaling for Engineers with Implications to Seismic Risk Analysis

###### Abstract

Due to the paucity of strong recorded accelerograms, earthquake engineering analysis relies on accelerogram amplitude scaling for structural damage/collapse assessment and target spectrum matching. This paper investigates seismological characteristics of scaled accelerograms so as to inform future ground motion selection and seismic risk assessment methods. If a recorded accelerogram is scaled linearly by multiplying it with a positive factor, it is shown using the Representation theorem and the accelerogram Fourier spectrum that moment magnitude scales logarithmically and static stress drop scales linearly. Other seismic parameters such as the Joyner-Boore distance metric and the effective rupture area are invariant to scaling an accelerogram. This proposed interpretation of scaling is validated in the time as well as the frequency domains using a hybrid method for ground motion simulation. Finally, a discussion is made over the seismological correctness of accelerogram scaling. It is suggested that a suite of scaled accelerograms can be considered as being seismologically correct if this suite's magnitude given rupture area and stress drop distributions are similar to empirical observations.

## 1 Introduction

Accelerogram amplitude scaling entails multiplying a recorded accelerogram by a scale factor \(\lambda\in(0,+\infty)\) so as to intensify the amplitudes without changing either the duration or the frequency content of the recording. Figure 1 demonstrates this scaling procedure, from which it is observed that both recorded (i.e., unscaled) and scaled accelerograms have the same start and end times with zero phase shift between their amplitudes. Why are recorded accelerograms scaled?
Researchers and engineers intend to assess infrastructure performance under extreme accelerograms that have the potential to inflict structural damage and to cause structural collapse; however, such extreme recordings are scarce [1; 2; 3]. Therefore, accelerogram scaling is practiced to assess the damage and collapse capacities of structures [4; 5; 6; 7; 8; 9] and to select appropriate ground motions for seismic structural response analysis that match with a target response spectrum [10; 11]. Such ground motion selection is also extremely important for assessing the seismic resilience of structures [12; 13; 14; 15; 16], and particularly, critical infrastructures like nuclear power plants [17; 18; 19]. Research on accelerogram amplitude scaling can be divided into two camps. One camp focuses on procedures to more efficiently scale and select accelerograms that match a target response spectrum (e.g., see [20]). The other camp focuses on the assessment of bias in nonlinear structural seismic responses by comparing results from unscaled and scaled accelerograms (e.g., see [21]). In contrast, this paper takes a step back and focuses on the seismological characteristics of scaled accelerograms. The intent for conducting such an investigation is to map the scale factor \(\lambda\) to seismic variables such as magnitude, distance, rupture area, stress drop, and corner frequency. This would then, in the future, enable researchers to compare these seismic variables of the scaled accelerograms with empirical observations to ascertain the seismological bias in structural responses, and further, to develop algorithms or procedures for scaling accelerograms in a seismologically consistent manner. Scaled accelerograms represent potential earthquake events that are yet to be realized. The ground motions resulting from such earthquake events (or, in general, any earthquake event) are governed by the Representation theorem in Seismology [22].
This theorem combines the source and the path effects and mathematically describes the ground motion at a site given rupture over a fault plane. Complementing the Representation theorem is the Fourier spectrum of an accelerogram, the models for which aim to describe the frequency content and the duration characteristics of this accelerogram using parameters such as stress drop and corner frequency [23]. Because the Representation theorem and the Fourier spectrum provide a complete picture of an accelerogram, these are used as means to understand the seismology of accelerogram scaling. In the next section, the Representation theorem is used to derive the magnitude, distance, and rupture area values for a scaled accelerogram. Following this, the accelerogram Fourier spectrum is used to link the scale factor \(\lambda\) to seismic parameters such as corner frequency and static stress drop. Ground motion simulations are then used to validate the proposed interpretation of scaling. Finally, a discussion is made on the seismological correctness of accelerogram scaling along with its applicability to multiple sites. It should be noted that accelerogram scaling also scales the amplitudes of velocity and displacement records, Fourier spectrum, and response spectrum linearly. This apparent conclusion will be implicitly used in the sections that follow. Additionally, scaling in the context of this paper implies accelerogram amplitude scaling as presented in Figure 1.

## 2 Insights from the Representation Theorem

The Representation theorem connects rupture on the fault plane to displacement at a site. It is the cornerstone of seismology, providing theoretical as well as observational insights into earthquake rupture processes and their effects. Computational techniques to synthesize accelerograms such as the Empirical Green's function method [24] or the UCSB technique [25] are developed on the basis of this theorem.
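The two linearity facts noted above can be sketched numerically. The snippet below is a minimal illustration (the synthetic record, time step, and scale factor are assumptions, not the paper's data): scaling an accelerogram by \(\lambda\) scales its Fourier spectrum uniformly without altering phase, and scales the velocity and displacement records linearly, since integration is a linear operation.

```python
import numpy as np

# Toy accelerogram; dt and lam are illustrative assumptions.
dt, lam = 0.01, 5.0
t = np.arange(0, 20, dt)
rng = np.random.default_rng(0)
acc = rng.standard_normal(t.size) * np.exp(-0.2 * t)
acc_s = lam * acc                                 # amplitude-scaled accelerogram

# (i) The complex Fourier spectra differ only by lam, so amplitudes scale
#     uniformly while phase (and hence frequency content) is unchanged.
F, F_s = np.fft.rfft(acc), np.fft.rfft(acc_s)
assert np.allclose(F_s, lam * F)
assert np.allclose(np.abs(F_s), lam * np.abs(F))

# (ii) Velocity and displacement (cumulative trapezoidal integrals)
#      scale by the same factor lam.
def integrate(x):
    return np.concatenate(([0.0], np.cumsum(0.5 * (x[1:] + x[:-1]) * dt)))

vel, vel_s = integrate(acc), integrate(acc_s)
assert np.allclose(vel_s, lam * vel)              # velocity scales linearly
assert np.allclose(integrate(vel_s), lam * integrate(vel))  # so does displacement
```

Both checks follow directly from the linearity of the Fourier transform and of integration.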
In this section, a theoretical investigation into accelerogram scaling is carried out by revisiting the different terms in the Representation theorem and then understanding how these terms behave upon scaling a recorded ground motion.

Figure 1: Comparison of unscaled and scaled accelerograms from the Northridge earthquake recorded at Saticoy station. While the scaled accelerogram has amplitudes five times those of the unscaled one, both accelerograms are dependent on time in the same manner and have a zero phase shift between them.

Let \(\mathbf{X}\) and \(\boldsymbol{\xi}\) be position vectors describing locations on the ground and the fault plane, respectively, with respect to a common coordinate system (also refer to Figure 2). Let \(t\) and \(\tau\) be variables keeping track of time on the ground and the fault plane, respectively. The Representation theorem for a scaled ground motion in the absence of body forces and traction is [22]: \[U_{i}^{\lambda}(\mathbf{X},t)=\lambda\ U_{i}(\mathbf{X},t)=\lambda\int_{\tau}d \tau\iint_{\Sigma}\left[U_{j}(\boldsymbol{\xi},\tau)\right]\,c_{jkpq}\ \mathbf{G}_{ip,q}(\mathbf{X},t;\boldsymbol{\xi},\tau)\ \nu_{k}\ d\Sigma( \boldsymbol{\xi}) \tag{1}\] where \(U_{i}^{\lambda}(\mathbf{X},t)\) is the scaled ground displacement in direction \(i\), \(\left[U_{j}(\boldsymbol{\xi},\tau)\right]\) is the displacement discontinuity across the fault plane \(\Sigma\), \(c_{jkpq}\) are the elastic moduli which can depend on \(\boldsymbol{\xi}\), \(\mathbf{G}_{ip,q}(\mathbf{X},t;\boldsymbol{\xi},\tau)\) is the Green's function differentiated by \(\xi_{q}\), and \(\nu_{k}\) is the unit normal to \(\Sigma\). It is noted that \(U_{i}(\mathbf{X},t)\) and the terms in its expansion \(\left(\left[U_{j}(\boldsymbol{\xi},\tau)\right],c_{jkpq},\mathbf{G}_{ip,q}( \mathbf{X},t;\boldsymbol{\xi},\tau),\text{and }\nu_{k}\right)\) correspond to ground displacement of an unscaled accelerogram resulting from rupture on a fault plane \(\Sigma\).
The displacement discontinuity \(\left(\left[U_{j}(\boldsymbol{\xi},\tau)\right]\right)\) describes the rate of relative slip between the two faces of a fault plane as a function of space and time. As only a portion of the fault plane might experience rupture during an earthquake, \(\left[U_{j}(\boldsymbol{\xi},\tau)\right]\) is expected to be non-zero only on part of the fault plane, and the area within which \(\left[U_{j}(\boldsymbol{\xi},\tau)\right]\) is non-zero defines the rupture area. \(c_{jkpq}\) represents a collection of 81 elastic coefficients that can be a function of spatial location on the fault plane under non-homogeneity. Due to symmetries of the stress and the strain tensors, only 21 of these 81 coefficients are independent. If the fault plane's material is assumed to be isotropic, the number of independent elastic coefficients further reduces to 2. The Green's function term (\(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\)) describes the displacement in direction \(i\) at a location \(\mathbf{X}\) on the earth's surface in response to a concentrated impulse force in direction \(j\) applied at \(\boldsymbol{\xi}\) on the fault plane (see Figure 2). The fault's time variable \(\tau\) keeps track of when this impulse force is applied while the ground surface time variable \(t\) tracks how the ground displacement varies. This Green's function term: (1) is the earth's response to a unit point force (concentrated in space as well as time) applied on the fault plane; (2) propagates changes in displacement on the fault plane to those at the ground surface; (3) maintains appropriate time delays between displacements at the source and the site.
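The reduction from 81 elastic coefficients to 2 under isotropy can be verified concretely. The sketch below (with assumed Lamé constant values) builds the isotropic stiffness tensor from two constants and Kronecker deltas, and checks that its 81 components satisfy the minor and major symmetries mentioned above.

```python
import numpy as np

# Isotropic elasticity tensor built from two Lame constants (values assumed
# purely for illustration). Despite having 3^4 = 81 components, the tensor
# obeys the stress/strain (minor) and major symmetries discussed in the text.
L1, L2 = 30e9, 25e9              # Lame constants (Pa), assumed
d = np.eye(3)                    # Kronecker delta as a 3x3 identity

# c_{jkpq} = L1 d_jk d_pq + L2 (d_jp d_kq + d_jq d_kp)
c = (L1 * np.einsum('jk,pq->jkpq', d, d)
     + L2 * (np.einsum('jp,kq->jkpq', d, d) + np.einsum('jq,kp->jkpq', d, d)))

assert c.size == 81                              # 81 components in total
assert np.allclose(c, c.transpose(1, 0, 2, 3))   # minor symmetry (stress)
assert np.allclose(c, c.transpose(0, 1, 3, 2))   # minor symmetry (strain)
assert np.allclose(c, c.transpose(2, 3, 0, 1))   # major symmetry
```

The same symmetries are what reduce a general anisotropic solid to 21 independent coefficients; isotropy collapses those further to the two Lamé constants used here.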
The ground displacement of a scaled accelerogram can be further expressed as: \[\begin{split} U_{i}^{\lambda}(\mathbf{X},t)=\int_{\tau}d\tau \iint_{\Sigma}&\lambda_{1}\left[U_{j}(\boldsymbol{\xi},\tau) \right]\,\lambda_{2}c_{jkpq}\ \lambda_{3}\mathbf{G}_{ip,q}(\mathbf{X},t;\boldsymbol{\xi},\tau)\ \lambda_{4}\nu_{k}\ d\Sigma( \boldsymbol{\xi})\\ &\text{and }\prod_{k=1}^{4}\lambda_{k}=\lambda\end{split} \tag{2}\] where the scale factor \(\lambda\) has been split as sub-factors \(\lambda_{1},\lambda_{2},\lambda_{3},\text{and }\lambda_{4}\) which influence the terms \(\left[U_{j}(\boldsymbol{\xi},\tau)\right]\), \(c_{jkpq}\), \(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\)3, and \(\nu_{k}\), respectively, and which are constants with respect to space and time. It is noted that these sub-factors should satisfy the product condition provided in the second line of the above equation. In addition, if a sub-factor \(\lambda_{k}\) does not influence its corresponding term in the representation theorem, its value will be unity. A physical interpretation on the behavior of each of these four sub-factors upon scaling a recorded accelerogram (i.e. when \(\lambda>1\) or \(\lambda<1\)) is next made. In order to initiate this investigation, it is postulated that both the unscaled and the scaled accelerograms originate from the same seismic source; in other words, these accelerograms represent two separate seismic activities at the same fault plane. A scaled accelerogram represents a potential earthquake event that is yet to be realized. The above postulate attributes a seismic source to this unrealized earthquake event and thus provides a starting point for analyzing the seismology of a scaled accelerogram. Footnote 3: Scaling the first derivative of the Green’s function by \(\lambda\) also scales the Green’s function by \(\lambda\). 
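The role of the sub-factors in equation (2) can be illustrated with a toy discretization of the Representation theorem: ground displacement as a superposition, over sub-faults, of slip rate convolved with a Green's function. All waveforms and sizes below are illustrative assumptions; the point is only that scaling the rupture term alone by \(\lambda\) (with \(\lambda_{2}=\lambda_{3}=\lambda_{4}=1\)) scales the ground motion by \(\lambda\).

```python
import numpy as np

# Toy discrete analogue of equation (2); all inputs are assumed/synthetic.
rng = np.random.default_rng(1)
n_sub, nt, lam = 4, 200, 2.5

slip_rate = rng.random((n_sub, nt))          # stands in for [U_j(xi, tau)]
greens = rng.standard_normal((n_sub, nt))    # toy Green's functions per sub-fault

def ground_motion(slip):
    # Superpose sub-fault contributions: convolve over tau, sum over the fault.
    return sum(np.convolve(slip[k], greens[k]) for k in range(n_sub))

u = ground_motion(slip_rate)
u_scaled = ground_motion(lam * slip_rate)    # scale only the rupture term

# With the other sub-factors held at unity, the output scales by lam.
assert np.allclose(u_scaled, lam * u)
```

This mirrors the argument developed in the following subsections: the geometry, elasticity, and Green's function terms are constrained to be invariant, leaving the rupture term to absorb the full scale factor.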
### Fault geometry and fault strength terms are invariant to accelerogram scaling

The terms \(\nu_{k}\) and \(c_{jkpq}\) in equation (2) establish a physical presence of the fault plane by attributing geometry and strength, respectively. The same seismic source postulate will be used to discuss the scaling of these two terms.

The term \(\nu_{k}\) in equation (2), representing the unit normal vector of the fault plane and depicting the fault's geometry, cannot be freely scaled. As seen from Figure 2, for an example fault plane, the components of \(\nu_{k}\) are \(\{0.4,0.25,0.8818\}\)4 satisfying the condition that the vector magnitude \(\sqrt{\nu_{k}\nu_{k}}=1\). These components cannot arbitrarily be multiplied by some sub-factor \(\lambda_{4}\), as such an operation would violate the definition of a unit vector; consequently, \(\lambda_{4}\) has to be unity and \(\nu_{k}\) cannot be scaled. Additionally, the same seismic source postulate supports this argument because \(\nu_{k}\), being a physical property of an existing fault, should be invariant to scaling. Footnote 4: In a right-handed coordinate system. The term \(c_{jkpq}\) in equation (2), depicting the elasticity of the fault, converts displacements into forces. As per the same seismic source postulate, \(c_{jkpq}\), being a physical property of an existing fault, should again be invariant to scaling. Hence, the sub-factor \(\lambda_{2}\) in equation (2) must be unity. Nevertheless, in order to facilitate further discussion on the same seismic source postulate, some mathematical implications concerning the elasticity coefficients and wave propagation velocities when \(\lambda_{2}\neq 1\) are presented.
Figure 2: Schematic of the earthquake process depicting rupture on the fault-plane, propagation of seismic waves, and reception at a recording station.

Assuming isotropy of the fault's material, \(c_{jkpq}\), when multiplied by \(\lambda_{2}\), implies the following relation [22]: \[\lambda_{2}\ c_{jkpq}=(\lambda_{2}L_{1})\delta_{jk}\delta_{pq}+(\lambda_{2}L_ {2})(\delta_{jp}\delta_{kq}+\delta_{jq}\delta_{kp}) \tag{3}\] where \(L_{1}\) and \(L_{2}\) are the Lame constants and \(\delta_{xy}\) is a Kronecker delta which is equal to one when \(x=y\) and zero otherwise. The above equation states that when the elasticity tensor \(c_{jkpq}\) is multiplied by \(\lambda_{2}\), both Lame constants are multiplied by the same sub-factor, as the Kronecker delta is, by definition, either zero or one. As the Young's, Rigidity, and Bulk moduli (\(E,\ G,\ K\), respectively) can be expressed using the two Lame constants, it can be further shown that these elastic moduli are also multiplied by \(\lambda_{2}\), while the Poisson's ratio (\(\nu\)) is held constant. Furthermore, it can be shown that material P-wave and S-wave velocities (\(\alpha,\ \beta\), respectively) are multiplied by \(\sqrt{\lambda_{2}}\) while the material density is held constant. For example, following is the relation between scaled Lame constants, P-wave velocity, and density (\(\rho\)): \[\sqrt{\frac{\lambda_{2}\ L_{1}+2\lambda_{2}\ L_{2}}{\rho}}=\sqrt{\lambda_{2} }\ \alpha \tag{4}\] These mathematical implications suggest that if \(\lambda_{2}\neq 1\), there exists another seismic source which has the same (\(\nu_{k}\), \(\rho\), \(\nu\)) and spatial distribution of the elastic properties as the seismic source that generated the unscaled accelerogram, but \((E,\ G,\ K,\ \alpha,\) and \(\beta)\) are modified by \(\lambda_{2}\).
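Equation (4) is easy to check numerically. The sketch below uses assumed values of the Lamé constants and density and confirms that multiplying both Lamé constants by \(\lambda_{2}\) multiplies the P-wave velocity by \(\sqrt{\lambda_{2}}\).

```python
import math

# Numeric check of equation (4); all parameter values are illustrative.
L1, L2, rho = 30e9, 25e9, 2700.0   # Lame constants (Pa), density (kg/m^3)
lam2 = 4.0                         # sub-factor applied to the elasticity tensor

alpha = math.sqrt((L1 + 2 * L2) / rho)                # unscaled P-wave velocity
alpha_s = math.sqrt((lam2 * L1 + 2 * lam2 * L2) / rho)

# Scaling both Lame constants by lam2 scales alpha by sqrt(lam2).
assert math.isclose(alpha_s, math.sqrt(lam2) * alpha)
```

The same factor-of-\(\sqrt{\lambda_{2}}\) argument applies to the S-wave velocity, since \(\beta=\sqrt{L_{2}/\rho}\).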
Since it is intractable to ascertain the physical existence of this alternative seismic source for every arbitrary value of \(\lambda_{2}\), the same seismic source postulate has been made. This postulate, while constraining \(c_{jkpq}\) to be the same for both unscaled and scaled accelerograms, facilitates an investigation of the scaling of the other terms in the Representation theorem [equation (2)].

### The Green's function term: scaled and unscaled accelerograms are recorded at the same site

The Green's function term \(\big{(}\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\big{)}\) in equation (2) propagates displacements on the fault plane to those at the earth's surface and serves as a link between the source and the receiver. Figure 3a presents a pictorial depiction of \(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\) scaling using Empirical Green's functions6 (EGF; [24]) or approximate Green's functions. It is noted from this figure that both the unscaled and the scaled Green's functions have the same temporal shape in terms of start and end times, and polarities at any given time. The only difference is that the scaled function has amplitudes greater by sub-factor \(\lambda_{3}\) at all times. \(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\) scaling will be investigated using the Uniqueness theorem described below. Footnote 6: Empirical Green's Functions are ground motions generated from small earthquakes whose sources may be characterized as impulsive point sources. An EGF thus approximates a component in the Green's function \(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\). It should be noted that EGFs of different sites used here not only have the fault rupture (i.e., applied force) in the same direction but also have the resulting motions recorded in the same direction. In other words, the same component in \(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\) for different sites will be compared.
Figure 3: (a) Depiction of Green's function scaling using a simulated Empirical Green's function (EGF; [24]). This EGF is generated by a magnitude 3.0 earthquake using the SCEC BBP tool [26]. (b) EGFs for a magnitude 3 earthquake at three different sites. Angle \(\theta\) here is the orientation of the recording station with respect to a horizontal through the epicenter.

**Theorem A (Uniqueness):**_Given an initial force distribution over the fault plane, the displacement at a site \(\mathbf{X}\) is unique. In other words, given some initial conditions on the fault plane and \(\mathbf{X}\), a single solution to the Representation theorem can exist._

**Proof:** Interested readers may refer to Chapter 2, Section 2.3.1 in [22] for a mathematical proof of Theorem A. \(\square\)

**Corollary A1:**\(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\)_in the Representation theorem is unique._

**Proof:** Given that a solution to the Representation theorem is unique, it is sufficient to show that \(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\) is also a solution to the Representation theorem. For an applied force on the fault plane, the Representation theorem takes the form [22]: \[U_{i}(\mathbf{X},t)=\int_{\tau}d\tau\iiint_{V}f_{j}(\boldsymbol{\xi},\tau)\ \mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\ dV(\boldsymbol{\xi}) \tag{5}\] where \(f_{j}(\boldsymbol{\xi},\tau)\) is force per unit volume. By definition, Green's function is the displacement at \(\mathbf{X}\) for a unit impulse force applied at some point \(\boldsymbol{\xi}_{1}\) in space and \(\tau_{1}\) in time. \(f_{j}(\boldsymbol{\xi},\tau)\) therefore is \(\delta(\boldsymbol{\xi}-\boldsymbol{\xi}_{1})\ \delta(\tau-\tau_{1})\) applied in direction \(j\).
Consequently: \[U_{i}(\mathbf{X},t)=\int_{\tau}d\tau\iiint_{V}\delta(\boldsymbol{\xi}- \boldsymbol{\xi}_{1})\ \delta(\tau-\tau_{1})\ \delta_{ij}\ \mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\ dV( \boldsymbol{\xi})=\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi}_{1},\tau_{1}) \tag{6}\] \(\square\)

Two additional corollaries that assign a spatial location to the scaled accelerogram are presented below. To discuss these corollaries, the inequivalence condition needs to be first introduced.

**Inequivalence condition:** Mathematically, this condition is given by \(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\neq\mathbf{G}_{ij}( \mathbf{X}_{1},t;\boldsymbol{\xi},\tau)\)\(\forall\ \mathbf{X}_{1}\ (\mathbf{X}_{1}\neq\mathbf{X})\). The inequivalence condition suggests that given two different sites \(\mathbf{X}\) and \(\mathbf{X}_{1}\), not all components of their respective Green's function tensors \(\left(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\right.\) and \(\mathbf{G}_{ij}(\mathbf{X}_{1},t;\boldsymbol{\xi},\tau))\) can be the same. The reason is that two different sites will have different orientations (direction cosines) and/or distances with respect to the point of application of the unit impulse force \(\boldsymbol{\xi}\). This results in \(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\) and \(\mathbf{G}_{ij}(\mathbf{X}_{1},t;\boldsymbol{\xi},\tau)\) having different polarities and/or durations. Figure 3b demonstrates these differences in Green's functions for three sites with different orientations and distances with respect to \(\boldsymbol{\xi}\). The solid black and dotted blue plots in Figure 3b demonstrate that for the same distance but different orientations of sites with respect to \(\boldsymbol{\xi}\), the corresponding Green's functions have dissimilar polarities.
Alternatively, the solid black and dashed red plots in Figure 3b demonstrate that for the same orientation but different distances of sites with respect to \(\boldsymbol{\xi}\), the corresponding Green's functions have dissimilar start and end times, and hence durations.

**Corollary A2:**_Given the uniqueness theorem and the inequivalence condition, the only value of \(\lambda_{3}\) that is physically admissible in the operation \(\lambda_{3}\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\) is unity._

**Proof:** Let us assume that \(\lambda_{3}\) is anything but unity. Corollary A1 then prohibits attributing this scaled Green's function \(\left(\lambda_{3}\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\right)\) to a site with some value of \(\mathbf{X}_{1}\). This is because, given \(\mathbf{X}_{1}\), its Green's function \(\mathbf{G}_{ij}(\mathbf{X}_{1},t;\boldsymbol{\xi},\tau)\) is unique. Additionally, \(\lambda_{3}\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\) cannot equal \(\mathbf{G}_{ij}(\mathbf{X}_{1},t;\boldsymbol{\xi},\tau)\) because the inequivalence condition suggests that the elements of these Green's functions will have different polarities and/or durations. Therefore, \(\lambda_{3}\) cannot be anything but unity. \(\square\)

**Corollary A3:**_Given the inequivalence condition, both unscaled and scaled accelerograms correspond to the same location \(\mathbf{X}\)._

**Proof:** By constraining the value of \(\lambda_{3}\) to unity in equation (2), Corollary A2 implies that both unscaled and scaled accelerograms have the same Green's functions \(\left(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\right)\). Additionally, since \(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\neq\mathbf{G}_{ij}( \mathbf{X}_{1},t;\boldsymbol{\xi},\tau)\) under \(\mathbf{X}\neq\mathbf{X}_{1}\), the only location \(\mathbf{G}_{ij}(\mathbf{X},t;\boldsymbol{\xi},\tau)\) can correspond to for the scaled accelerogram is \(\mathbf{X}\).
\(\square\)

### The rupture term: magnitude and distance values for a scaled accelerogram

The term \(\left[U_{j}(\boldsymbol{\xi},\tau)\right]\) in equation (2) represents rupture on the fault plane and has a dimension of length. With the other three sub-factors in equation (2) (\(\lambda_{2}\), \(\lambda_{3}\), and \(\lambda_{4}\)) equal to unity, if a recorded motion is to be scaled by \(\lambda\), then the sub-factor corresponding to \(\left[U_{j}(\boldsymbol{\xi},\tau)\right]\) (\(\lambda_{1}\)) should be equal to \(\lambda\). In other words, scaling of the recorded motion must be attributed to scaling of the rupture amplitudes on the fault plane. The degree to which this attribution can be made depends on the dynamics of the earthquake source process. Source dynamics is a complicated topic that requires an understanding of advanced mathematics along with fracture mechanics. So, an empirical approach is used to discuss \(\left[U_{j}(\boldsymbol{\xi},\tau)\right]\) scaling. First, this term's scaling is mapped here to seismic parameters of engineering interest such as rupture extent, distance, and magnitude. Later on in this paper, empirical relationships will be discussed to constrain the scaling of these seismic parameters, which also implicitly constrains \(\left[U_{j}(\boldsymbol{\xi},\tau)\right]\) scaling. From a kinematic viewpoint, multiplying \(\left[U_{j}(\boldsymbol{\xi},\tau)\right]\) by \(\lambda\) increases the amplitudes of the rupture without influencing either the spatial distribution of rupture (rupture extent) or the dependence of rupture on time. Figure 4a presents an illustration of the rupture dependence on time for unscaled and scaled rupture functions using a Gamma-like function. It is observed that both functions depend on time in the same manner. Invariance of the rupture extent to accelerogram scaling has implications for the effective rupture dimensions.
Consider the one-dimensional rupture function \(\left[u(\widetilde{\xi}_{n})\right]\) computed by summing the time-averaged rupture values across the down-dip (or along-strike) direction ([27])7. Linearly scaling \(\big{[}U_{j}(\boldsymbol{\xi},\tau)\big{]}\) also scales \([u(\widetilde{\xi}_{n})]\) linearly because the coordinate transformation from \(\boldsymbol{\xi}\) to \(\widetilde{\boldsymbol{\xi}}\) is linear; in other words, \([u^{\lambda}(\widetilde{\xi}_{n})]=\lambda\ [u(\widetilde{\xi}_{n})]\). The following theorem demonstrates an implication of linearly scaling \([u(\widetilde{\xi}_{n})]\) on the effective rupture dimension (\(\mathcal{D}\)). A corollary then shows the invariance of the Joyner-Boore distance metric to accelerogram scaling. Footnote 7: The notation \(\widetilde{\boldsymbol{\xi}}\) indicates a fault plane coordinate system. In this system, while two directions are along-strike and down-dip of the fault plane, the third direction along which rupture amplitudes do not vary is normal to the fault plane. The subscript \(n\) in \([u(\widetilde{\xi}_{n})]\) therefore denotes either the down-dip or along-strike direction. Additionally, the fault plane system can be different from the general coordinate system \(\boldsymbol{\xi}\) used to represent a point on the fault plane.
**Theorem B:**_Provided that the effective rupture dimension (\(\mathcal{D}\)) is defined as per [27], \(\mathcal{D}\) is invariant to accelerogram scaling._

**Proof:** Given that \([u^{\lambda}(\widetilde{\xi}_{n})]=\lambda\ [u(\widetilde{\xi}_{n})]\), it is straightforward to show that the [27] definition for \(\mathcal{D}\) is invariant to accelerogram scaling since the scale factors in the numerator and the denominator cancel each other out: \[\mathcal{D}=\frac{\int_{\mathcal{L}}\ d\mathcal{L}\int_{\widetilde{\xi}_{n}} \ [u(\widetilde{\xi}_{n})]\ [u(\widetilde{\xi}_{n}-\mathcal{L})]\ d \widetilde{\xi}_{n}}{\int_{\widetilde{\xi}_{n}}\ [u(\widetilde{\xi}_{n})]\ [u( \widetilde{\xi}_{n})]\ d\widetilde{\xi}_{n}}=\frac{\int_{\mathcal{L}}\ d \mathcal{L}\int_{\widetilde{\xi}_{n}}\ [u^{\lambda}(\widetilde{\xi}_{n})]\ [u^{\lambda}( \widetilde{\xi}_{n}-\mathcal{L})]\ d\widetilde{\xi}_{n}}{\int_{\widetilde{\xi}_ {n}}\ [u^{\lambda}(\widetilde{\xi}_{n})]\ [u^{\lambda}(\widetilde{\xi}_{n})]\ d \widetilde{\xi}_{n}} \tag{7}\] where \(\mathcal{L}\) is the lag-length in an autocorrelation function. \(\square\)

**Corollary B:**_The Joyner-Boore distance metric \((R_{JB})\) is invariant to accelerogram scaling._

**Proof:**\(R_{JB}\) is dependent on two quantities: the location of a site \(\mathbf{X}\) and the surface projection of the effective rupture dimension \(\mathcal{D}\). Corollary A3 suggests that both unscaled and scaled accelerograms are recorded at the same site \(\mathbf{X}\). Theorem B suggests that both unscaled and scaled accelerograms have the same effective rupture dimension \(\mathcal{D}\), and therefore its surface projection. As a result of the invariance of the two quantities upon which \(R_{JB}\) depends, \(R_{JB}\) is also invariant to accelerogram scaling. \(\square\)

Figure 4b presents an illustration of the constancy of \(\mathcal{D}\) to accelerogram scaling by assuming a Gaussian function for rupture.
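The cancellation in equation (7) can be checked with a discrete version of the autocorrelation-based definition of \(\mathcal{D}\). The sketch below assumes a Gaussian 1-D rupture function, as in Figure 4b; the grid spacing and scale factor are illustrative.

```python
import numpy as np

# Discrete check of equation (7): D is an autocorrelation-based width, so the
# factor lam^2 appears in both numerator and denominator and cancels.
dx, lam = 0.05, 5.0
x = np.arange(-20, 20, dx)
u = np.exp(-x**2 / 2)                   # toy 1-D rupture function [u(xi_n)]

def effective_dimension(u, dx):
    corr = np.correlate(u, u, mode='full') * dx   # inner integral, all lags L
    return np.sum(corr) * dx / np.sum(u * u * dx) # outer integral / denominator

D = effective_dimension(u, dx)
D_scaled = effective_dimension(lam * u, dx)

# The effective rupture dimension is invariant to scaling the rupture by lam.
assert np.isclose(D, D_scaled)
```

Any positive scale factor cancels identically, so the same result holds for every \(\lambda\in(0,+\infty)\).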
Figure 4: (a) Illustration of the rupture dependence on time for unscaled and scaled rupture functions using a Gamma-like function. It is observed that both functions depend on time in the same manner. (b) Illustration of the rupture dependence on space for unscaled and scaled rupture functions using a Gaussian function for rupture distribution. It is observed that both functions have the same spatial distribution of rupture. As a result, the effective rupture dimension computed using the [27] definition (\(\mathcal{D}=2.506\) km) is the same for both the unscaled and the scaled functions.

Turning to earthquake strength, the following theorem and corollaries are proposed.

**Theorem C:**_The seismic moment tensor \((\mathbf{M}_{pq})\) scales linearly with accelerogram scaling._

**Proof:** The proof follows from the definition of \({\bf M}_{pq}\)[22]8: Footnote 8: By averaging over time, the time dependence of the moment tensor has been excluded. \[{\bf M}_{pq}^{\lambda}=\frac{\int_{\tau}d\tau\iint_{\Sigma}\lambda\big{[}U_{i}( \boldsymbol{\xi},\tau)\big{]}\ \nu_{j}\ c_{ijpq}\ d\Sigma(\boldsymbol{\xi})}{\int_{\tau}d\tau}=\lambda\ {\bf M}_{pq} \tag{8}\] where \({\bf M}_{pq}^{\lambda}\) is the moment tensor for the scaled accelerogram. It is observed from the above equation that linearly scaling the rupture amplitudes scales the moment tensor linearly. \(\Box\)

**Corollary C1:**_Seismic moment \((M_{o})\) scales linearly with accelerogram scaling._

**Proof:** The proof follows from equation (8) and the definition of \(M_{o}\)[28]: \[M_{o}^{\lambda}=\frac{1}{\sqrt{2}}\ \big{(}{\bf M}_{pq}^{\lambda}{\bf M}_{pq}^{ \lambda}\big{)}^{\frac{1}{2}}=\lambda\ M_{o} \tag{9}\] where \(M_{o}^{\lambda}\) is the seismic moment for the scaled accelerogram. It is observed from the above equation that linearly scaling the moment tensor scales the seismic moment linearly.
\(\Box\)

**Corollary C2:**_Moment magnitude \((M_{w})\) scales logarithmically with accelerogram scaling._

**Proof:** The proof follows from equation (9) and the definition of \(M_{w}\)[29]: \[M_{w}^{\lambda}=\frac{2}{3}\ \log_{10}\big{(}M_{o}^{\lambda}\big{)}-6.07=M_{w} +\frac{2}{3}\ \log_{10}(\lambda) \tag{10}\] where \(M_{w}^{\lambda}\) is the moment magnitude for the scaled accelerogram. It is observed from the above equation that linearly scaling the seismic moment scales the moment magnitude logarithmically. \(\Box\)

## 3 Insights from the Fourier Amplitude Spectrum of an Accelerogram

Previously, the Representation theorem was used to gain insights into ground motion scaling in the time domain. Whereas magnitudes of the unscaled and scaled accelerograms were shown to be related by equation (10), it was concluded that the effective rupture dimensions, the location of the recording station, and hence the Joyner-Boore distance metric remain unchanged upon scaling an accelerogram. In this section, accelerogram scaling will be investigated in the frequency domain so as to map such scaling to parameters such as the Brune's static stress drop and the corner frequency. The additional insights gained in this section are consistent with those obtained from the Representation theorem. Figure 5a presents Fourier amplitude spectra of unscaled and scaled accelerograms from the Northridge earthquake. It can be noticed that accelerogram scaling influences neither the shape nor the frequency content of the Fourier amplitude spectrum. The amplitudes corresponding to the different frequencies, however, are scaled uniformly by the scale factor \(\lambda\).
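Corollaries C1 and C2 together say that scaling the seismic moment by \(\lambda\) shifts the moment magnitude by \(\frac{2}{3}\log_{10}\lambda\). A quick numeric check (with an illustrative seismic moment in N·m):

```python
import math

# Numeric check of equation (10): Mo -> lam*Mo shifts Mw by (2/3)*log10(lam).
Mo, lam = 1.3e19, 5.0        # seismic moment (N*m) and scale factor, assumed

def moment_magnitude(Mo):
    # Moment magnitude relation used in equation (10)
    return (2.0 / 3.0) * math.log10(Mo) - 6.07

Mw = moment_magnitude(Mo)
Mw_s = moment_magnitude(lam * Mo)

assert math.isclose(Mw_s, Mw + (2.0 / 3.0) * math.log10(lam))
```

For example, a scale factor of 5 shifts the magnitude by about 0.47 units, regardless of the magnitude of the original event.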
As an aside, an overarching insight provided by the Representation theorem is: ground motion scaling influences the source kinematics by amplifying the rupture amplitudes by \(\lambda\) without changing either the Green's function (the propagator, which accounts for path effects) or the source geometric and materialistic characteristics. Combining this insight with the observations made regarding the shape of the Fourier spectra in Figure 5a, the following expression models the Fourier spectrum of a scaled accelerogram [30]: \[\boldsymbol{\mathcal{F}}^{\lambda}(f,M_{o}^{\lambda},R)=I(f)\ P(R,f)\ E^{ \lambda}(M_{o}^{\lambda},f)\ G(f) \tag{11}\] where \(I(f)\), \(P(R,f)\), \(E^{\lambda}(M_{o}^{\lambda},f)\), and \(G(f)\) are the ground motion type, path, scaled source, and site-response terms, respectively. \(I(f)\) is invariant to accelerogram scaling as it only accounts for whether the resulting ground motion is acceleration, velocity, or displacement. The scaling of the other three terms is discussed subsequently.

### The path term is invariant to accelerogram scaling

The path term \(P(R,f)\) in the model for the Fourier spectrum of an accelerogram [equation (11)] connects seismic activity on the fault plane to displacements at a site. This term's purpose is qualitatively similar to that of the Green's function in the Representation theorem. Since the Green's function was shown to remain invariant to accelerogram scaling, one may speculate that this invariance holds for \(P(R,f)\) as well. As shown below, such a speculation is supported by the manner in which \(P(R,f)\) is usually defined [30]: \[P(R,f)=e^{\frac{-\pi fR}{Q(f)\beta}}\ Z(R) \tag{12}\] where \(R\) is a distance metric between the source and the receiver, \(\beta\) is the shear-wave velocity in the source region, and \(Q(f)\) is the quality factor which accounts for inelastic attenuation of the seismic waves.
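Equation (12) can be made concrete with a small sketch. The parameter values below, and the choice of the simple linear geometric-attenuation form \(Z(R)=1/R\), are illustrative assumptions; \(Q\) is taken as frequency-independent for simplicity. Since \(R\) is fixed under scaling, \(P(R,f)\) is the same function of frequency for both unscaled and scaled accelerograms.

```python
import math

# Sketch of the path term in equation (12) with Z(R) = 1/R and a constant
# quality factor; all parameter values are assumed for illustration only.
beta, Q = 3.5, 200.0                    # shear-wave velocity (km/s), quality factor

def path_term(R, f):
    return math.exp(-math.pi * f * R / (Q * beta)) / R

R = 20.0                                # distance metric (km), unchanged by scaling

# Expected attenuation behavior: higher frequencies and larger distances
# are attenuated more strongly.
assert path_term(R, 10.0) < path_term(R, 1.0)
assert path_term(40.0, 1.0) < path_term(20.0, 1.0)
```

Because the scaled accelerogram shares the same \(R\) (Corollary B) and the same frequency support as the unscaled one, evaluating this function yields an identical path contribution for both.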
\(Z(R)\) in the above equation accounts for geometric attenuation and is proportional to \(R^{-1}\) in the simplest case of linear geometric attenuation. In summary, \(P(R,f)\), apart from frequency, is seen to depend on the distance metric \(R\), which is usually defined using the Joyner-Boore distance metric. Hence, \(P(R,f)\) should be invariant to accelerogram scaling since \(R_{JB}\) was noted to remain unchanged upon scaling an accelerogram.

### The scaled source term: Stress drop scales linearly with accelerogram scaling

The scaled source term in equation (11) can be expanded using a source model. An \(\omega\)-squared source model [23; 30], which characterizes the fault plane as a point source, is adopted here. The scaled source term is: \[E^{\lambda}(M_{o}^{\lambda},f)=\frac{\langle R_{\Theta\Phi}\rangle}{2\sqrt{2} \pi\rho_{s}\beta_{s}^{3}}\ M_{o}^{\lambda}\ \frac{f^{2}}{1+(\frac{f}{f_{c}})^{2}} \tag{13}\] where \(\langle R_{\Theta\Phi}\rangle\) is the average radiation coefficient which depends on the geometry of the source; \(\rho_{s}\) and \(\beta_{s}\) are respectively the density and the shear-wave velocity of the material at the source. Because accelerogram scaling will not alter the source geometric and materialistic properties (the same seismic source postulate), \(\langle R_{\Theta\Phi}\rangle\), \(\rho_{s}\), and \(\beta_{s}\) will not change for unscaled and scaled accelerograms. In equation (13), \(M_{o}^{\lambda}\) is the scaled seismic moment given by equation (9); \(f\) is the frequency of interest and \(f_{c}\) is the corner frequency which distinguishes the low and high frequency parts of the Fourier spectrum. \(f_{c}\) remains unchanged upon scaling an accelerogram. This is because, as per Figure 5a, the shape of the Fourier spectrum should remain the same for both unscaled and scaled accelerograms. Any deviations in the \(f_{c}\) value between unscaled and scaled accelerograms would result in differences in the spectral shapes and hence non-uniform scaling of the spectrum across the frequencies. Figure 5b presents an illustration of the influence of \(f_{c}\) on the Fourier spectrum shape. It is observed from this figure that, only by keeping \(f_{c}\) for the scaled spectrum equal to that of the unscaled spectrum, we obtain a scaled Fourier spectrum that is uniformly scaled at all frequencies (which is in corroboration with Figure 5a).

Figure 5: (a) Fourier amplitude spectra of an unscaled and a scaled accelerogram from the Northridge earthquake. The spectra are observed to have the same shape with a corner frequency (\(f_{c}\)) of 0.16 Hz. (b) Illustration of the influence of \(f_{c}\) on the Fourier spectrum shape. Only by keeping \(f_{c}\) the same for unscaled and scaled spectra do we obtain a scaled Fourier spectrum that is uniformly scaled at all frequencies.

The same \(f_{c}\) for unscaled and scaled accelerograms has two important implications. First, \(f_{c}\) is related to the effective rupture dimensions by some source models. For example, the Brune's and the Haskell's models relate \(f_{c}\) to the radius and the length/width of the rupture, respectively [28]. Constancy of \(f_{c}\) as per these models implies that both unscaled and scaled accelerograms result from the same effective rupture dimensions. Insights from the Representation theorem also led to the same conclusion. Second, interpreting the Fourier spectrum of a scaled accelerogram using the Brune's source model, \(f_{c}\) is related to \(M_{o}^{\lambda}\) as [23; 31]: \[\begin{split} f_{c}\propto\Big{(}\frac{\Delta\sigma^{\lambda}}{ M_{o}^{\lambda}}\Big{)}^{\frac{1}{3}}\\ \text{and, }\Delta\sigma^{\lambda}=\lambda\ \Delta\sigma;\ M_{o}^{ \lambda}=\lambda\ M_{o}\end{split} \tag{14}\] where \(M_{o}^{\lambda}\) is the scaled seismic moment and \(\Delta\sigma^{\lambda}\) is the scaled Brune's stress drop.
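The proportionality in equation (14) can be verified numerically: if the corner frequency is to stay fixed while \(M_{o}\to\lambda M_{o}\), the stress drop must also scale by \(\lambda\). The proportionality constant and parameter values below are illustrative assumptions.

```python
import math

# Numeric check of equation (14): with f_c proportional to (stress/Mo)^(1/3),
# the factor lam cancels when both stress drop and moment are scaled by lam.
C = 1.0e7                               # assumed proportionality constant
Mo, stress_drop, lam = 1.3e19, 5.0e6, 5.0

def corner_frequency(stress_drop, Mo):
    return C * (stress_drop / Mo) ** (1.0 / 3.0)

fc = corner_frequency(stress_drop, Mo)
fc_scaled = corner_frequency(lam * stress_drop, lam * Mo)
assert math.isclose(fc, fc_scaled)      # lam cancels: corner frequency unchanged

# Scaling Mo alone (without scaling the stress drop) would lower f_c,
# changing the spectral shape -- contrary to what Figure 5a shows.
assert corner_frequency(stress_drop, lam * Mo) < fc
```

This is exactly the argument of this subsection: constancy of \(f_{c}\) under a linearly scaled seismic moment forces \(\Delta\sigma^{\lambda}=\lambda\ \Delta\sigma\).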
To ensure constancy of \(f_{c}\) for both unscaled and scaled accelerograms, \(\Delta\sigma^{\lambda}\) must be equal to \(\lambda\ \Delta\sigma\) as scaling an accelerogram scales the seismic moment linearly. That is, Brune's stress drop of the scaled accelerogram (\(\Delta\sigma^{\lambda}\)) is scale factor (\(\lambda\)) times the Brune's stress drop of the unscaled accelerogram (\(\Delta\sigma\)). Synthesis of accelerogram scaling in the Fourier domain, which was made using a point seismic source until now, can also be extended to finite seismic sources. The scaled source term for a finite source can be expressed as [32]9: Footnote 9: It is noted that there is an implicit summation over indices \(ij\) in the expression for \(E^{\lambda}(M_{o}^{\lambda},f)\) in equation (15). \[\begin{split}E^{\lambda}(M_{o}^{\lambda},f)=\frac{\langle R_{\Theta\Phi}\rangle}{2\sqrt{2}\pi\rho_{s}\beta_{s}^{3}}\ M_{oij}^{\lambda}\ \frac{f^{2}}{1+(\frac{f}{f_{cij}})^{2}}\\ \text{and, }f_{cij}\propto\Big{(}\frac{\Delta\sigma^{\lambda}}{\overline{M}_{o}^{\lambda}}\Big{)}^{\frac{1}{3}};\ M_{oij}^{\lambda}=\lambda\ M_{o}\ \mathcal{W}_{ij};\ \overline{M}_{o}^{\lambda}=\lambda\ M_{o}/N\end{split} \tag{15}\] where \(ij\) are the indices of a sub-fault among the \(N\) sub-faults into which the seismic source is discretized and \(\mathcal{W}_{ij}\), the slip weight of the \(ij\) sub-fault, satisfies \(\mathbf{1}_{ij}\mathcal{W}_{ij}=1\). To ensure constancy of the sub-fault corner frequency \(f_{cij}\) when \(M_{o}\) is multiplied by \(\lambda\), it is again seen that \(\Delta\sigma^{\lambda}=\lambda\ \Delta\sigma\). That is, even in the case of a finite seismic source, stress drop scales linearly with accelerogram scaling. ### The site term: Relative nonlinearity of site response must not be large Sites where earthquake ground motion is recorded usually rest on a few tens of meters of soil layers that lie above bedrock (see Figure 2).
Although seismic waves travel a large distance through rock and a relatively small distance through soil, the soil plays an important role in characterizing the surface ground motion [33]. The soil's role in influencing the surface ground motion is termed site response, which is explicitly modeled using the \(G(f)\) term in equation (11). Site response is categorized as nonlinear or linear depending upon whether it is affected by the strength of the bedrock accelerogram or not. With respect to accelerogram scaling, it is important that the relative nonlinearity of site response is not very large. Relative nonlinearity of site response is defined as how much more (or less) nonlinear the site response is for the scaled accelerogram as compared to the unscaled one. A scaled accelerogram can be generated by scaling the seismic parameters \(M_{w}\) and \(\Delta\sigma\) using scale factor \(\lambda\) as discussed previously. However, this scaled accelerogram will not be \(\lambda\) times the unscaled one should the relative nonlinearity of site response be large. For example, if the soil underlying a site is very dense, its site response will be close to linear and accelerograms generated by both unscaled and scaled seismic parameters will have the same linear site response function (the solid plot in Figure 6a). There is no relative nonlinearity of site response in this case. On the other hand, if the soil underlying a site is weak, its site response will be nonlinear and thus will differ for the unscaled and the scaled values of seismic parameters (the dashed and the dotted plots in Figure 6a, respectively). The reason for such difference is, the scaled seismic parameters (assuming scaled upwards) will produce a stronger bedrock motion as compared to the unscaled seismic parameters, and the nonlinear soil response cannot transmit both the intense and the less intense bedrock motions to the surface in the same manner.
Mathematically, relative nonlinearity of site response \(\mathcal{R}_{NL}(f)\) is defined in terms of a percentage error as: \[\mathcal{R}_{NL}(f)\text{ or }\%\text{ Error}=\frac{\mid G(f)-G^{\lambda}(f)\mid}{G(f)}\times 100 \tag{16}\] where \(G(f)\) and \(G^{\lambda}(f)\) are the site response terms for the unscaled and the scaled accelerograms, respectively. These site response terms, in addition to frequency, also depend upon the quality of soil at a site and the strength of the bedrock motion. Selecting the site model \(F_{site}(.)\) of the [34] ground motion prediction model, site response terms for the unscaled and the scaled seismic parameters can be expressed as10: Footnote 10: While the [34] site model strictly applies to response spectra, its use for Fourier spectra is justified because the model varies smoothly with frequency [35]. \[G(f)=\frac{F_{site}(f,Vs30,PGA_{rock})}{F_{site}(f,Vs_{ref},PGA_{rock})};\ \ \ \ G^{ \lambda}(f)=\frac{F_{site}(f,Vs30,PGA_{rock}^{\lambda})}{F_{site}(f,Vs_{ref}, PGA_{rock}^{\lambda})} \tag{17}\] where \(Vs30\) is the shear-wave velocity averaged over the top 30 meters depth, \(Vs_{ref}\) is the reference shear-wave velocity set to 760 m/s, and \(PGA_{rock}\) and \(PGA_{rock}^{\lambda}\) are the Peak Ground Accelerations of the bedrock under unscaled and scaled values of the seismic parameters, respectively. It is noted that while \(Vs30\) serves as a proxy for the quality of soil at a site, \(PGA_{rock}\) and \(PGA_{rock}^{\lambda}\) serve as proxies for describing the strength of the bedrock motion. Furthermore, \(PGA_{rock}^{\lambda}=\lambda\)\(PGA_{rock}\) because the scaled seismic parameters are expected to generate a bedrock accelerogram that is \(\lambda\) times the bedrock accelerogram generated by the unscaled seismic parameters. Figure 6: (a) Schematic of linear and nonlinear site response functions \(G(f)\). 
It is noted that nonlinear site response, due to its dependence on the strength of the bedrock motion, differs for unscaled and scaled bedrock motions. (b) and (c): Relative nonlinearity of site response \(\mathcal{R}_{NL}(f)\) defined in terms of a percentage error; see equation (16). \(\mathcal{R}_{NL}(f)\) values are presented for two sites with \(Vs30\) as (b) \(400\ m/s\) and (c) \(500\ m/s\) considering four levels of bedrock accelerogram scaling. Figures 6b and 6c present relative nonlinearity of site response \(\mathcal{R}_{NL}(f)\) for two sites with \(Vs30\) values as \(400\ m/s\) and \(500\ m/s\), respectively. These values of \(Vs30\) indicate that the former site has weaker soil as compared to the latter site. Bedrock PGA generated by the scaled seismic parameters (\(PGA_{rock}^{\lambda}\)) is set to \(2g\), and four levels of unscaled bedrock PGA (\(PGA_{rock}\)) are considered: \(0.25g\), \(0.5g\), \(0.75g\), and \(1g\). In other words, \(\mathcal{R}_{NL}(f)\) is computed by selecting scale factors (\(\lambda\)) as 8, 4, 2.67, and 2. From both Figures 6b and 6c, it is observed that as the scale factor increases, \(\mathcal{R}_{NL}(f)\) also increases. This is expected because higher scale factors result in more intense bedrock motions which in turn induce more nonlinearity in the soil. The relative nonlinearity of site response is also observed to be more pronounced for the weaker soil (Figure 6b) as compared to the stiffer soil (Figure 6c). Considering this joint role played by \(\lambda\) and \(Vs30\), it can be generally stated that accelerograms recorded on stiff soil (or better) sites facilitate accelerogram scaling to a greater degree. The reason is, such sites have a better capacity to linearly transmit intense bedrock motions than weak soil sites.
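The qualitative trends just noted, more relative nonlinearity for larger scale factors and for weaker soil, can be reproduced from equation (16) with a toy site-response function. The function below is purely illustrative (a hypothetical saturation model, not the [34] site model used in the paper):

```python
def site_response(vs30, pga_rock):
    # toy nonlinear site response (hypothetical): amplification drops as the
    # bedrock shaking grows, and drops faster for weaker soil (lower vs30)
    coeff = (400.0 / vs30) ** 3
    return 1.0 / (1.0 + coeff * pga_rock)

def relative_nonlinearity(vs30, pga_rock, lam):
    # equation (16), in percent; the scaled bedrock PGA is lam * PGA_rock
    g = site_response(vs30, pga_rock)
    g_scaled = site_response(vs30, lam * pga_rock)
    return abs(g - g_scaled) / g * 100.0

# scaled bedrock PGA fixed at 2g as in the paper, so PGA_rock = 2/lam
for lam in (2.0, 2.67, 4.0, 8.0):
    print(lam,
          round(relative_nonlinearity(400.0, 2.0 / lam, lam), 1),  # weaker soil
          round(relative_nonlinearity(500.0, 2.0 / lam, lam), 1))  # stiffer soil
```

With this toy model the percentage error grows monotonically with \(\lambda\) and is always larger for the \(Vs30=400\ m/s\) site, matching the figure qualitatively.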
## 4 Validating the seismological interpretation of accelerogram scaling Accelerogram scaling can be described in terms of similarities and differences between certain seismological variables. Both unscaled and scaled accelerograms will have the same: spatial and temporal variation of rupture at the same seismic source, effective rupture dimensions (and hence the rupture area), Joyner-Boore distance metric, and corner frequency. Brune's stress drops and magnitudes, however, will be different. Stress drop of the scaled accelerogram is \(\lambda\) times stress drop of the unscaled one, and magnitude of the scaled accelerogram is: \(M_{w}^{\lambda}=M_{w}+2/3\ log_{10}(\lambda)\). If two accelerograms satisfy these seismological conditions under similar wave attenuation and site response effects, then in principle, one accelerogram should be \(\lambda\) times the other. ### Use of ground motion simulations for validation In order to validate the proposed interpretation of scaling, we need to compare two accelerograms that satisfy the above-mentioned conditions. In practice, since it is difficult to find two recorded accelerograms that satisfy these conditions, reliance will be made upon ground motion simulations. First, an accelerogram corresponding to a particular seismic source and distance will be simulated with appropriate input values for \(M_{w}\), \(f_{c}\), and rupture dimensions. This will be treated as an unscaled accelerogram. Next, a second accelerogram with magnitude given by equation (10) and all other inputs same as the previous run will be simulated. This will be treated as a scaled simulated accelerogram. Finally, a comparison will be made between these two accelerograms after explicitly scaling the first accelerogram by factor \(\lambda\) (termed as explicitly scaled). The Southern California Earthquake Center Broadband Platform (SCEC BBP; [26]) is used for simulating the accelerograms. 
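The magnitude condition \(M_{w}^{\lambda}=M_{w}+2/3\ log_{10}(\lambda)\) can be verified with the standard Hanks-Kanamori moment-magnitude relation (a sketch; the magnitude 6.73 is the Northridge scenario value, and \(M_{o}\) is in dyne-cm):

```python
import math

def moment_magnitude(M_o):
    # Hanks-Kanamori relation, M_o in dyne*cm
    return (2.0 / 3.0) * math.log10(M_o) - 10.7

M_w = 6.73                          # unscaled magnitude
M_o = 10.0 ** (1.5 * (M_w + 10.7))  # invert the relation for the seismic moment

# scaling the moment by lambda shifts the magnitude by (2/3) log10(lambda)
for lam in (2.5, 5.0, 10.0):
    direct = moment_magnitude(lam * M_o)
    predicted = M_w + (2.0 / 3.0) * math.log10(lam)
    print(lam, round(direct, 3), round(predicted, 3))
```

The two columns agree by construction, which is exactly why a scaled accelerogram corresponds to a larger magnitude but an unchanged rupture geometry.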
Of the available simulation methods, the UCSB method [25] is used as it allows \(f_{c}\) to be specified explicitly. It is important to set \(f_{c}\) to the same value for both unscaled and scaled-simulated motions. Only then, as per the Brune's source model, stress drop will scale linearly with seismic moment scaling; see equation (14) or (15). Other available simulation methods in the SCEC BBP such as [35] and EXSIM [36] pre-specify a constant value for stress drop which might depend upon the tectonic region. As per equation (14) or (15) then, \(f_{c}\) decreases as the seismic moment scales up linearly. While a decreasing \(f_{c}\) with increasing seismic moment is consistent with the self-similarity assumption or the constant stress drop assumption [30], it will violate the seismological conditions necessary for simulating a scaled accelerogram that is \(\lambda\) times an unscaled one11. Studies have observed variability in earthquake stress drop values (e.g., see [37] or [38]), and later on in this paper, this stress drop variability will be utilized to discuss the seismological correctness of accelerogram scaling. ### Comparing explicitly scaled and scaled simulated accelerograms Figure 7 presents a comparison of the explicitly scaled and the scaled simulated ground motions in the time domain considering the three earthquake scenarios: Northridge, Loma Prieta, and South San Andreas. The magnitude, distance, and scale factor combinations used for these three scenarios are (6.73, 15Km, 10), (6.94, 15Km, 5), and (7.9, 30Km, 2.5). Acceleration and velocity recordings are presented on the left and right panes, respectively, in Figure 7. While it is noted that there are some differences in the comparisons provided due to random noise, the explicitly scaled and the scaled simulated motions are seen to be in good general agreement and look nearly identical.
In order to facilitate a stricter comparison between the explicitly scaled and the scaled simulated motions, a similarity metric based on the Normalized Cross-Correlation between two time series is used [39]: \[\mathcal{S}=\max_{\mathcal{L}}\ \frac{\int u^{\lambda}(t)\ \widetilde{u}^{ \lambda}(t-\mathcal{L})\ dt}{\sqrt{\big{[}\int u^{\lambda}(t)\ u^{\lambda}(t)\ dt\big{]}\big{[}\int\widetilde{u}^{\lambda}(t)\ \widetilde{u}^{\lambda}(t)\ dt\big{]}}} \tag{18}\] where \(u^{\lambda}(t)\) and \(\widetilde{u}^{\lambda}(t)\) are treated as explicitly scaled and the scaled simulated motions, respectively, and \(\mathcal{L}\) is the lag-length. The notation \(\mathcal{S}\) is used for indicating that the above metric is a "strict" similarity measure and it compares two time series at every discrete time step. It is noted that the value of \(\mathcal{S}\) lies between \(-1\) and \(+1\), and a value equal to \(+1\) implies that two time series are the same. Values of \(\mathcal{S}\) presented in Figure 7 are generally seen to be large with the exception of Figure 7c where the differences in random noise are noted to be large. Overall, the average value of \(\mathcal{S}\) across the six sets of results is 0.78, and excluding Figure 7c, the average is 0.84. Figure 8 presents a comparison of the explicitly scaled and the scaled simulated ground motions in the frequency domain for the same three earthquake scenarios and the same magnitude, distance, and scale factor combinations as before. Fourier and Response spectra are presented on the left and right panes, respectively, in Figure 8. The results are again observed to be in good general agreement, which may be expected because the time domain validations were quite successful. Figure 7: Time domain validation of the seismological interpretation of ground motion scaling considering three earthquake scenarios: Northridge, Loma Prieta, and South San Andreas. Validation is made using simulated ground motions. 
The black plots are ground motions scaled by explicitly multiplying a scale factor. The green plots are scaled motions implicitly generated using the seismological interpretation developed in this study. The magnitudes of the unscaled motions are presented above each sub-figure. The scale factor (\(\lambda\)) and the magnitude of the scaled motion are presented within each sub-figure. The strict similarity measure (\(\mathcal{S}\)) values are also shown within each sub-figure. Figure 8: Frequency domain validation of the seismological interpretation of ground motion scaling considering three earthquake scenarios: Northridge, Loma Prieta, and South San Andreas. Validation is made using simulated ground motions. The black plots are ground motions scaled by explicitly multiplying a scale factor. The green plots are scaled motions implicitly generated using the seismological interpretation developed in this study. The magnitudes of the unscaled motions are presented above each sub-figure. The scale factor (\(\lambda\)) and the magnitude of the scaled motion are presented within each sub-figure. ## 5 Discussion Two aspects of the proposed seismological interpretation of accelerogram scaling are additionally discussed. The first one is its applicability to multiple sites under the same set of scaled seismic parameters. And the second one is a discussion over whether linear scaling of stress drop and logarithmic scaling of magnitude under a fixed rupture area is seismologically correct or not. ### Applicability of the proposed interpretation of scaling to multiple sites The proposed interpretation of accelerogram scaling modifies two seismic source parameters \(M_{w}\) and \(\Delta\sigma\) by scale factor \(\lambda\). All other parameters such as rupture area and corner frequency, and conditions such as seismic wave attenuation and site response effects are set to be the same for both the unscaled and the scaled accelerograms.
Due to modifying source parameters only, the proposed interpretation of accelerogram scaling is applicable to multiple sites. That is, if an accelerogram at site \(\mathbf{X}\) is scaled by \(\lambda\), the proposed interpretation suggests that an accelerogram recorded at any other site \(\mathbf{X}_{1}\) (\(\mathbf{X}_{1}\neq\mathbf{X}\)) is also scaled by \(\lambda\). This physically seems plausible because when an accelerogram at \(\mathbf{X}\) is scaled, some aspects of the seismic source are modified and these modifications also influence the accelerograms recorded at some other site \(\mathbf{X}_{1}\). Figure 9 presents an example demonstrating the applicability of the proposed interpretation of scaling to multiple sites. In this example, all the conditions to simulate the unscaled and the scaled Northridge accelerograms are fixed to be the same as previously (see Figure 7a), except that these accelerograms are recorded at 5Km distance instead of 15Km. Figures 9a and 9b, presenting the accelerograms and spectral accelerations, respectively, suggest that the proposed interpretation of scaling is valid even at this new site.

Figure 9: Applicability of the proposed interpretation of accelerogram scaling to multiple sites. Explicitly scaled and scaled simulated (a) accelerograms and (b) spectral accelerations from the Northridge earthquake recorded at a distance of 5Km (as opposed to the 15Km used in Figures 7a and 8b).

### Seismological correctness of accelerogram amplitude scaling A question that has interested both seismologists and earthquake engineers alike concerns the seismological correctness of accelerogram scaling. In this paper, it was proposed that linearly scaling an accelerogram scales \(\Delta\sigma\) linearly and \(M_{w}\) logarithmically. Among the other seismic parameters, it was noted that the effective rupture area \(\mathcal{A}\) is invariant to scaling an accelerogram. It is further suggested that the physical admissibility of modifying \(\Delta\sigma\) and \(M_{w}\) in this manner may not be treated as a binary question. The reason is, given \(\mathcal{A}\), \(M_{w}\) and \(\Delta\sigma\) are empirically observed to be uncertain. As presented below, probability theory can be used to quantify this uncertainty: \[\begin{split} p(M_{res},\Delta\sigma|\mathcal{A})&\approx p(M_{res}|\mathcal{A})\ p(\Delta\sigma|\mathcal{A})\\ &\approx p(M_{res}|\mathcal{A})\ p(\Delta\sigma)\end{split} \tag{19}\] where \(p(.|.)\) denotes a conditional probability distribution and \(M_{res}\) is the magnitude residual obtained by taking a difference between the predicted and the observed magnitudes given \(\mathcal{A}\). It is noted in the above equation that the independence between \(M_{w}\) and \(\Delta\sigma\) [37, 38] is used to split the joint conditional probability \(p(M_{res},\Delta\sigma|\mathcal{A})\) into marginals \(p(M_{res}|\mathcal{A})\) and \(p(\Delta\sigma|\mathcal{A})\). Furthermore, in line with the general notion in seismology, \(\Delta\sigma\) is assumed to be independent of \(\mathcal{A}\). Through the independence of \(M_{res}|\mathcal{A}\) and \(\Delta\sigma\) distributions demonstrated by equation (19), it is suggested that one way to scale accelerograms in a seismologically correct manner is to ensure that a suite of scaled accelerograms have the same \(p(M_{res}|\mathcal{A})\) and \(p(\Delta\sigma)\) distributions as empirical observations. These distributions are straightforward to compute for the suite of scaled accelerograms because \(\Delta\sigma^{\lambda}=\lambda\ \Delta\sigma\) and \(M_{res}=M_{w}^{\lambda}-M_{w}\); rupture area for a scaled accelerogram does not change from what it was for the unscaled one.
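That computation can be stated directly. For a hypothetical suite of scale factors (values illustrative only), the samples to be compared against the empirical distributions are:

```python
import math

lams = [2.0, 4.0, 6.0, 8.0]   # hypothetical scale factors, all > 1

# rupture area is unchanged, so the magnitude residual and the stress drop
# ratio follow immediately from the scaling relations
m_res_suite = [(2.0 / 3.0) * math.log10(lam) for lam in lams]  # M_res per accelerogram
stress_ratios = lams[:]        # delta_sigma^lam / delta_sigma equals lam itself

for lam, m_res in zip(lams, m_res_suite):
    print(lam, round(m_res, 3))
```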
The empirical \(p(M_{res}|\mathcal{A})\) distribution can be computed using the [40] magnitude-rupture area relationship: \[p(M_{res}|\mathcal{A})=erf\Big{(}\frac{M_{res}}{\sigma\ \sqrt{2}}\Big{)} \tag{20}\] where \(M_{res}\) is the difference between predicted and observed magnitudes, \(\sigma\) is the standard deviation of the empirical relation, and \(erf\) is an error function. It is noticed that the above equation represents the cumulative distribution function of a Half-Normal distribution and is valid when accelerograms are scaled upwards (i.e., \(\lambda>1\) and \(M_{res}>0\)). The empirical \(p(\Delta\sigma)\) distribution can be obtained from [37]. Figure 10 presents an illustration of seismological correctness of accelerogram scaling from which it is noted that while a suite of scaled accelerograms (represented by solid green plots) obeys the empirically observed \(M_{res}|\mathcal{A}\) and \(\Delta\sigma\) distributions, another scaled suite (represented by dashed green plots) does not. Figure 10: Depiction of seismological correctness of accelerogram scaling through matching (a) \(p(M_{res}|\mathcal{A})\) and (b) \(\Delta\sigma\) distributions of the scaled accelerograms with empirical observations. It is noted that while a suite of scaled accelerograms (represented by solid green plots) obeys the empirically observed \(M_{res}|\mathcal{A}\) and \(\Delta\sigma\) distributions, another scaled suite (represented by dashed green plots) does not. Accelerograms here are assumed to be scaled upwards (i.e., \(\lambda>1\)). ## 6 Summary and Conclusions Due to the paucity of recorded accelerograms that are intense enough to cause damage/collapse of structural models, accelerogram amplitude scaling is employed in earthquake engineering analysis.
If scaled accelerograms are being used for target spectrum matching, seismic response analysis, and seismic risk assessment, then these accelerograms should represent potential earthquake events that are yet to be realized. What are the magnitudes of such earthquake events that would result in scaled accelerograms? At what distances from the recording sites do such earthquake events happen? And what about parameters such as rupture area and stress drop? These are the types of questions this paper has attempted to answer by conducting a theoretical investigation into the seismology of accelerogram scaling. The Representation theorem and the accelerogram Fourier spectrum were used to investigate the seismology of scaled accelerograms. The following deductions were made: * Unscaled and scaled accelerograms have the same spatial as well as temporal distribution of rupture over the fault plane. The rupture amplitudes (given a position or a time instant at the fault) of the scaled accelerogram, however, are scaled by factor \(\lambda\). Consequently, the magnitude of a scaled accelerogram becomes: \(M_{w}^{\lambda}=M_{w}+2/3\ log_{10}(\lambda)\) * The effective rupture dimensions are the same for unscaled and scaled accelerograms. These accelerograms are further recorded at the same site, leading the Joyner-Boore distance metric to be invariant to accelerogram scaling. * Unscaled and scaled accelerograms have similar Fourier spectrum shape. This necessitates the corner frequency of these accelerograms to be the same, leading the static stress drop to scale linearly with accelerogram scaling (\(\Delta\sigma^{\lambda}=\lambda\ \Delta\sigma\)). Additionally, it should be noted that the soil underlying a site should transmit the bedrock motion to the surface in the same manner for both unscaled and scaled accelerograms; that is, relative nonlinearity of site response between these accelerograms must not be large.
The proposed seismological interpretation of accelerogram amplitude scaling was validated using the UCSB hybrid method for ground motion simulation by comparing the explicitly scaled and the scaled (implicitly) simulated motions. Three earthquake scenarios were considered: Northridge, Loma Prieta, and South San Andreas. In the time domain, these ground motions are compared using a similarity measure \(\mathcal{S}\in[-1,\ +1]\) (higher value desirable). Across six sets of acceleration and velocity recordings, the explicitly scaled and the scaled simulated motions were found to be in good agreement, and the average value of \(\mathcal{S}\) was found to be 0.78. These motions were also found to compare well in the frequency domain, particularly under representations such as the Fourier amplitude spectrum and the response spectrum. A key feature of the proposed interpretation of scaling is its recommendation that if an accelerogram at a site is scaled by \(\lambda\), accelerograms recorded at other sites are also scaled by \(\lambda\). This is because accelerogram scaling modifies the source parameters, thereby influencing the realized ground motions at all the sites surrounding a seismic source. Finally, a discussion was made on the seismological correctness of scaling. Ensuring that a suite of scaled accelerograms has magnitude-given-rupture-area and stress drop distributions similar to empirical observations is suggested as a way to scale accelerograms in a seismologically consistent manner.
2308.05815
On exponentiation, $p$-automata and HNN extensions of free abelian groups
For every prime $p$ it is shown that a wide class of HNN extensions of free abelian groups admit faithful representation by finite $p$-automata.
Andriy Oliynyk, Veronika Prokhorchuk
2023-08-10T18:31:19Z
http://arxiv.org/abs/2308.05815v1
# On exponentiation, \(p\)-automata and HNN extensions of free abelian groups ###### Abstract For every prime \(p\) it is shown that a wide class of HNN extensions of free abelian groups admit faithful representation by finite \(p\)-automata. ## 1 Introduction The natural action of the wreath product of permutation groups \((G,\mathsf{X})\) and \((H,\mathsf{Y})\) on the Cartesian product \(\mathsf{X}\times\mathsf{Y}\) is widely used. The other action of this wreath product, on the set \(\mathsf{Y}^{\mathsf{X}}\) of all functions from \(\mathsf{X}\) to \(\mathsf{Y}\), is called exponentiation. This action was defined in [4] as a formalization of group actions used in [12] to enumerate types of Boolean functions (cf. [11, 14]). Its basic properties as a permutation group were obtained in [8]. Exponentiation was applied to construct and study new strongly regular graphs ([16, 15]), to study the automorphism group of the \(n\)-dimensional cube ([5, 6]), to construct new finite Gelfand pairs ([2]) and to construct new finitely generated profinite groups ([17]). We observe that exponentiation can be applied to construct groups defined by finite automata. Finite automata and groups defined by automata form a valuable direction in modern mathematics (see e.g. [3, 9]). Given a finite (permutational) automaton over a finite alphabet \(\mathsf{X}\), one obtains a naturally defined permutation group, the group of this automaton, which acts by automorphisms on the free monoid \(\mathsf{X}^{*}\) viewed as a rooted tree. The other way to define a group by an automaton is to generate it by a subset of the generating set of the group of this automaton. Given a group defined by an automaton over some alphabet, it is natural to ask whether the size of the alphabet can be decreased so that the group can still be defined by an automaton over the smaller alphabet. Moreover, additional restrictions on the permutations defined at the states of automata can be applied ([10]).
In the present paper we consider this problem for HNN extensions of free abelian groups, extending results of [13]. We use automata constructed in [1] whose groups are the required HNN extensions. As the main result, for any prime \(p\) we give sufficient conditions on an ascending HNN extension of a free abelian group of finite rank under which this HNN extension can be defined by a finite automaton over an alphabet of cardinality \(p\) such that all permutations on the alphabet at the states are powers of a certain cycle of length \(p\). The paper is organized as follows. In Section 2 we recall the required definitions regarding wreath products and exponentiation. We observe in Theorem 1 a sufficient condition to represent an exponentiation as a permutation group in terms of a permutational wreath product acting on a finite rooted tree. Here we also give a constructive example of such a representation. In Section 3 we briefly recall the required notions on finite automata and groups defined by them. In Theorem 2 we give a sufficient condition under which the size of the alphabet of a group defined by an automaton can be decreased so that the group can be defined by an automaton over the minimized alphabet. In Section 4 we use these statements to prove the main result of the paper. All the notions and properties about trees, automata and groups used here are standard and can be found e.g. in [3, 9]. ## 2 Wreath products and their actions ### Wreath products Let \((G,\mathsf{X})\) and \((H,\mathsf{Y})\) be permutation groups. The wreath product of \((G,\mathsf{X})\) with \(H\) is defined as the semidirect product \[G\ltimes H^{\mathsf{X}}\] where the action of \(G\) on the set of functions \(H^{\mathsf{X}}\) is induced by its action on \(\mathsf{X}\). It is denoted by \(G\wr H\) and consists of the pairs \([g,h(x)]\), where \(g\in G\), \(h(x):\mathsf{X}\to H\).
Such a pair acts on the Cartesian product \(\mathsf{X}\times\mathsf{Y}\) by the rule \[(x,y)^{[g,h(x)]}=(x^{g},y^{h(x)}),\quad x\in\mathsf{X},y\in\mathsf{Y}.\] The permutation group \((G\wr H,\mathsf{X}\times\mathsf{Y})\) is called the permutational wreath product of \((G,\mathsf{X})\) and \((H,\mathsf{Y})\). Being an associative operation on permutation groups, the wreath product can be defined for an arbitrary finite sequence \[(G_{1},\mathsf{X}_{1}),(G_{2},\mathsf{X}_{2}),\ldots,(G_{n},\mathsf{X}_{n})\] of permutation groups. In this case it is denoted by \(\wr_{i=1}^{n}G_{i}\) and consists of tuples \[[g_{1},g_{2}(x_{1}),\ldots,g_{n}(x_{1},\ldots,x_{n-1})],\] where \[g_{1}\in G_{1},\quad g_{2}(x_{1}):\mathsf{X}_{1}\to G_{2},\ldots,g_{n}(x_{1},\ldots,x_{n-1}):\mathsf{X}_{1}\times\ldots\times\mathsf{X}_{n-1}\to G_{n}.\] It acts on the sets \(\varnothing\), \(\mathsf{X}_{1}\), \(\mathsf{X}_{1}\times\mathsf{X}_{2}\), \(\ldots,\mathsf{X}_{1}\times\ldots\times\mathsf{X}_{n}\) preserving the natural structure of a rooted tree on their union. Hence, the permutational wreath product \(\wr_{i=1}^{n}G_{i}\) can be viewed as an automorphism group of a homogeneous rooted tree acting on the set of its leaves. ### Exponentiation For permutation groups \((G,\mathsf{X})\) and \((H,\mathsf{Y})\) the action of the wreath product of \((G,\mathsf{X})\) with \(H\) on the set \(\mathsf{Y}^{\mathsf{X}}\) can be defined as follows. The exponentiation of \((H,\mathsf{Y})\) by \((G,\mathsf{X})\) is the permutation group \[H\uparrow G=(G\wr H,\mathsf{Y}^{\mathsf{X}})\] such that every \([g,h(x)]\in G\wr H\) acts on \(f(t)\in\mathsf{Y}^{\mathsf{X}}\) by the rule \[f(t)^{[g,h(x)]}=(f(t^{g}))^{h(t)}.\] Since for finite \(\mathsf{X}\) and \(\mathsf{Y}\) the degrees of the permutation groups \((G\wr H,\mathsf{X}\times\mathsf{Y})\) and \((G\wr H,\mathsf{Y}^{\mathsf{X}})\) are in general not equal, these groups are not isomorphic as permutation groups.
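These definitions are easy to check by brute force for the smallest interesting case. The sketch below (not part of the paper) enumerates \(\mathbb{Z}_{2}\uparrow\mathbb{Z}_{3}\): the exponentiation acts on \(|\mathsf{Y}^{\mathsf{X}}|=2^{3}=8\) points, while the product action has only \(3\cdot 2=6\) points:

```python
from itertools import product

# H = Z_2 acting regularly on Y = {0, 1}; G = Z_3 acting regularly on X = {0, 1, 2}
X, Y = (0, 1, 2), (0, 1)

points = list(product(Y, repeat=len(X)))   # Y^X: all functions f: X -> Y
group = [(g, h) for g in X for h in product(Y, repeat=len(X))]   # pairs [g, h(x)]

def act(elem, f):
    # f^{[g,h]}(t) = (f(t^g))^{h(t)}: shift the argument by g, then add h(t) mod 2
    g, h = elem
    return tuple((f[(t + g) % 3] + h[t]) % 2 for t in X)

def perm_order(elem):
    # order of the permutation of Y^X induced by elem
    k, cur = 1, {f: act(elem, f) for f in points}
    while any(cur[f] != f for f in points):
        cur = {f: act(elem, cur[f]) for f in points}
        k += 1
    return k

print(len(points), len(group))        # 8 points, 24 group elements
print(perm_order((1, (0, 0, 0))))     # an element of order 3
```

The element \([1,(0,0,0)]\) cyclically shifts the arguments of every function and therefore has order \(3\), which is exactly the obstruction to a tree representation discussed next.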
Moreover, exponentiation is not an associative operation on permutation groups. However, it is natural to ask about the existence of an isomorphism between the permutation group \(H\uparrow G\) and an automorphism group of a homogeneous rooted tree acting on the set of its leaves. In general such an isomorphism does not exist. For instance, consider the exponentiation \(\mathbb{Z}_{2}\uparrow\mathbb{Z}_{3}\) of the regular cyclic group of order \(2\) by the regular cyclic group of order \(3\). It is a permutation group of degree \(8\) and contains elements of order \(3\). On the other hand, the automorphism group of the \(2\)-regular rooted tree of depth \(3\) acts on its \(8\) leaves and has no elements of order \(3\). We give a sufficient condition under which a required isomorphism exists. **Theorem 1**.: _Let \(p\) be a prime, \(G\) and \(H\) be finite \(p\)-groups faithfully acting on sets \(\mathsf{X}\) and \(\mathsf{Y}\) of cardinalities \(p^{n}\) and \(p^{m}\) correspondingly, \(n,m\geq 0\). Then the exponentiation \(H\uparrow G\) is isomorphic as a permutation group to a subgroup of the wreath product of \(m\cdot p^{n}\) copies of the regular cyclic group of order \(p\) acting by automorphisms on the set of leaves of the \(p\)-regular rooted tree of depth \(m\cdot p^{n}\)._ Proof.: Under the conditions of the theorem the exponentiation \(H\uparrow G\) is a permutation group of order \(p^{n+m\cdot p^{n}}\) and degree \(p^{m\cdot p^{n}}\). Then it can be considered as a \(p\)-subgroup of the symmetric group of degree \(p^{m\cdot p^{n}}\). Hence, by Sylow's Theorem it is contained in a Sylow \(p\)-subgroup of this symmetric group. A Sylow \(p\)-subgroup of the symmetric group of degree \(p^{m\cdot p^{n}}\) is isomorphic to the wreath product of \(m\cdot p^{n}\) copies of the regular cyclic group of order \(p\) ([7]). The statement immediately follows. ### Example Let \(p=3\). Consider the additive group \(\mathbb{Z}_{3}=\{0,1,2\}\) of residues modulo \(3\) regularly acting on itself.
Theorem 1 implies that the exponentiation \(\mathbb{Z}_{3}\uparrow\mathbb{Z}_{3}\) as a permutation group is isomorphic to a subgroup of the permutation group \[(\mathbb{Z}_{3}\wr\mathbb{Z}_{3}\wr\mathbb{Z}_{3},\mathbb{Z}_{3}^{3}).\] An example of an isomorphism can be constructed as follows. Let \[a=[1,(0,0,0)],\quad b=[0,(1,0,0)]\] be elements of the wreath product \(\mathbb{Z}_{3}\wr\mathbb{Z}_{3}\). They form a generating set of this group. Denote by \(\sigma_{a}\) and \(\sigma_{b}\) permutations on \(\mathbb{Z}_{3}^{3}\) defined by \(a\) and \(b\) correspondingly. Consider elements \[c=[c_{1},c_{2}(x_{1}),c_{3}(x_{1},x_{2})],\quad d=[d_{1},d_{2}(x_{1}),d_{3}(x_{1},x_{2})]\in\mathbb{Z}_{3}\wr\mathbb{Z}_{3}\wr\mathbb{Z}_{3}\] such that \[c_{1}=0,\quad c_{2}(x_{1})=\begin{cases}2,&\text{ if }x_{1}=0\\ 1,&\text{ if }x_{1}=1\\ 0,&\text{ if }x_{1}=2\end{cases},\quad c_{3}(x_{1},x_{2})=\begin{cases}2,&\text{ if }x_{1}=2,x_{2}=0\\ 1,&\text{ if }x_{1}=2,x_{2}=1\\ 0,&\text{ otherwise}\end{cases}\] \[d_{1}=1,\quad d_{2}(x_{1})=0,x_{1}\in\mathbb{Z}_{3},\quad d_{3}(x_{1},x_{2})=\begin{cases}1,&\text{ if }x_{1}=0,x_{2}\neq 2\\ 0,&\text{ otherwise}\end{cases}.\] Denote by \(\sigma_{c}\) and \(\sigma_{d}\) permutations on \(\mathbb{Z}_{3}^{3}\) defined by \(c\) and \(d\) correspondingly. 
Define a bijection \(\pi:\mathbb{Z}_{3}^{3}\to\mathbb{Z}_{3}^{3}\) as follows: \[(0,0,0)\mapsto(0,0,1), (0,0,1)\mapsto(1,1,2), (0,0,2)\mapsto(2,2,0),\] \[(0,1,0)\mapsto(0,1,0), (0,1,1)\mapsto(1,2,1), (0,1,2)\mapsto(2,0,2),\] \[(0,2,0)\mapsto(1,0,0), (0,2,1)\mapsto(2,1,1), (0,2,2)\mapsto(0,2,2),\] \[(1,0,0)\mapsto(0,2,0), (1,0,1)\mapsto(1,0,1), (1,0,2)\mapsto(2,1,2),\] \[(1,1,0)\mapsto(0,0,2), (1,1,1)\mapsto(1,1,0), (1,1,2)\mapsto(2,2,1),\] \[(1,2,0)\mapsto(2,0,0), (1,2,1)\mapsto(0,1,1), (1,2,2)\mapsto(1,2,2),\] \[(2,0,0)\mapsto(1,2,0), (2,0,1)\mapsto(2,0,1), (2,0,2)\mapsto(0,1,2),\] \[(2,1,0)\mapsto(1,0,2), (2,1,1)\mapsto(2,1,0), (2,1,2)\mapsto(0,2,1),\] \[(2,2,0)\mapsto(0,0,0), (2,2,1)\mapsto(1,1,1), (2,2,2)\mapsto(2,2,2).\] Then for arbitrary \(\alpha\in\mathbb{Z}_{3}^{3}\) the following equalities hold: \[\sigma_{a}(\alpha)=\pi(\sigma_{c}(\pi^{-1}(\alpha))),\quad\sigma_{b}(\alpha)=\pi(\sigma_{d}(\pi^{-1}(\alpha))).\] The required isomorphism now directly follows. ## 3 Automata and groups defined by automata ### Words and automata Let \(\mathsf{X}\) be a finite set, \(|\mathsf{X}|>1\). The set \(\mathsf{X}^{*}=\cup_{i=0}^{\infty}\mathsf{X}^{i}\) of all finite words over \(\mathsf{X}\), including the empty word \(\Lambda\), forms a free monoid with basis \(\mathsf{X}\) under concatenation. The length of a word \(w\in\mathsf{X}^{*}\) is denoted by \(|w|\), i.e. \(w\in\mathsf{X}^{|w|}\). The right Cayley graph of \(\mathsf{X}^{*}\) with respect to the basis \(\mathsf{X}\) defines on \(\mathsf{X}^{*}\), taken as the vertex set, the structure of a regular rooted tree. Two words \(u,v\in\mathsf{X}^{*}\) are joined by an edge if and only if \(u=vx\) or \(v=ux\) for some \(x\in\mathsf{X}\). For every \(n\geq 0\) the set \(\mathsf{X}^{n}\) forms the \(n\)th level of this tree and the union \(\cup_{i=0}^{n}\mathsf{X}^{i}\) is the vertex set of a regular rooted subtree of depth \(n\). 
A finite automaton over an alphabet \(\mathsf{X}\) is a triple \(\mathcal{A}=(Q,\lambda,\mu)\) such that \(Q\) is a finite non-empty set, the set of states, \(\lambda:Q\times\mathsf{X}\to Q\) is the transition function, and \(\mu:Q\times\mathsf{X}\to\mathsf{X}\) is the output function. The automaton \(\mathcal{A}\) is called permutational if for every \(q\in Q\) the restriction \(\mu_{q}:\mathsf{X}\to\mathsf{X}\) of the output function at state \(q\) is a permutation on \(\mathsf{X}\). We will consider finite permutational automata only. In the case when \(|\mathsf{X}|=p\) for some prime \(p\) and all permutations \(\mu_{q}\), \(q\in Q\), are powers of a fixed cycle of length \(p\) on \(\mathsf{X}\), the automaton \(\mathcal{A}\) is called a \(p\)-automaton. ### Groups defined by automata Let \(\mathcal{A}=(Q,\lambda,\mu)\) be a finite permutational automaton over \(\mathsf{X}\). The set of permutations \(\{\mu_{q},q\in Q\}\) generates a subgroup in the symmetric group on \(\mathsf{X}\). We call this group the permutation group defined at states of \(\mathcal{A}\). For a \(p\)-automaton the permutation group defined at its states is the regular cyclic group of order \(p\). For every \(q\in Q\) the permutation \(\mu_{q}\) can be recursively extended to a permutation on the set \(\mathsf{X}^{*}\) as follows: \[\mu_{q}(\Lambda)=\Lambda,\quad\mu_{q}(xw)=\mu_{q}(x)\mu_{\lambda(q,x)}(w),\quad x\in\mathsf{X},w\in\mathsf{X}^{*}.\] The permutation \(\mu_{q}\) obtained in this way is length preserving, i.e. \(|\mu_{q}(w)|=|w|\), \(w\in\mathsf{X}^{*}\), and prefix preserving, i.e. if for \(w,w_{1}\in\mathsf{X}^{*}\) and some \(x\in\mathsf{X}\) the equality \(w=w_{1}x\) holds then for some \(x_{1}\in\mathsf{X}\) the equality \(\mu_{q}(w)=\mu_{q}(w_{1})x_{1}\) holds. Hence, \(\mu_{q}\) preserves the structure of a rooted tree on \(\mathsf{X}^{*}\). It is called an automaton permutation defined by \(\mathcal{A}\) at state \(q\). 
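As an illustration of this recursion, the sketch below (our code, not the paper's) implements the classical binary "adding machine", a standard \(2\)-automaton with a carry state and an identity state, and checks that the automaton permutation \(\mu_{q}\) defined by the recursion is length and prefix preserving, acting as addition of \(1\) on least-significant-bit-first binary words.

```python
# the binary adding machine: carry state 'a', identity state 'e'
delta = {'a': {0: 'e', 1: 'a'}, 'e': {0: 'e', 1: 'e'}}   # transition function
out   = {'a': {0: 1, 1: 0},     'e': {0: 0, 1: 1}}       # output function

def mu(q, word):
    # mu_q(Lambda) = Lambda,  mu_q(x w) = mu_q(x) mu_{lambda(q,x)}(w)
    if not word:
        return ()
    x = word[0]
    return (out[q][x],) + mu(delta[q][x], word[1:])

def to_bits(n, k):
    # least-significant-bit-first binary word of length k
    return tuple((n >> i) & 1 for i in range(k))

# mu_a adds 1 modulo 2^k on LSB-first binary words of length k
adds_one = all(mu('a', to_bits(n, 3)) == to_bits((n + 1) % 8, 3)
               for n in range(8))
# length and prefix preservation of the extended permutation
prefix_ok = all(mu('a', to_bits(n, 3))[:j] == mu('a', to_bits(n, 3)[:j])
                for n in range(8) for j in range(4))
assert adds_one and prefix_ok
```

The group of this automaton is the infinite cyclic group generated by the adding-machine transformation, a first taste of how small automata define interesting groups.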
The permutation group generated by the set \(\{\mu_{q},q\in Q\}\) is called the group of the automaton \(\mathcal{A}\) and is denoted by \(G(\mathcal{A})\). The restriction of its action on \(\mathsf{X}\) is the permutation group defined at states of \(\mathcal{A}\). More generally, a group \(G\) is called a group generated by a finite automaton over \(\mathsf{X}\) if there exists a finite permutational automaton \(\mathcal{A}\) over \(\mathsf{X}\) such that \(G\) is isomorphic to the group generated by a subset of automaton permutations defined at states of \(\mathcal{A}\). In this case \(G\) is isomorphic to a subgroup of the group \(G(\mathcal{A})\) generated by a subset of its generating set. In terms of [10] a group generated by a finite automaton over \(\mathsf{X}\) is a finitely generated subgroup of the finite state wreath power of the permutation group at states of an automaton over \(\mathsf{X}\). In some cases the order and the degree of such a permutation group can be minimized. **Theorem 2**.: _Let \((G,\mathsf{X})\) and \((H,\mathsf{Y})\) be finite permutation groups such that \((G,\mathsf{X})\) is isomorphic as a permutation group to a subgroup of the wreath product of finitely many copies of \((H,\mathsf{Y})\). Then every group generated by a finite automaton such that the permutation group defined at its states is \((G,\mathsf{X})\) can be generated by a finite automaton such that the permutation group defined at its states is \((H,\mathsf{Y})\)._ Proof.: Assume that \((G,\mathsf{X})\) is isomorphic as a permutation group to a subgroup of the wreath product of \(r\) copies of \((H,\mathsf{Y})\) for some \(r\geq 1\). Denote by \(\varphi\) an isomorphic embedding of \(G\) into \(\wr_{i=1}^{r}H^{(i)}\), \(H^{(i)}\simeq H\), \(1\leq i\leq r\), and by \(\psi\) an injection from \(\mathsf{X}\) to \(\mathsf{Y}^{r}\) such that \[\psi(x^{g})=(\psi(x))^{\varphi(g)},\quad x\in\mathsf{X},g\in G.\] Denote by \(\mathsf{Y}_{1}\) the image of \(\mathsf{X}\) under \(\psi\). 
Then \((G,\mathsf{X})\) is isomorphic as a permutation group to \((\varphi(G),\mathsf{Y}_{1})\). Recall that the wreath product \(\wr_{i=1}^{r}H^{(i)}\) acts on the union \(\cup_{i=0}^{r}\mathsf{Y}^{i}\). This action is length preserving and prefix preserving. Hence, \(\varphi(G)\) acts on the set \(\mathsf{Y}_{2}\) that consists of all prefixes of all words from \(\mathsf{Y}_{1}\). Let \(\mathcal{A}=(Q,\lambda,\mu)\) be a finite permutational automaton over \(\mathsf{X}\) such that the permutation group defined at its states is \((G,\mathsf{X})\). It is sufficient to show that the group \(G(\mathcal{A})\) of this automaton can be generated by a finite automaton over \(\mathsf{Y}\) such that the permutation group defined at its states is \((H,\mathsf{Y})\). We define a corresponding automaton \(\mathcal{B}=(Q_{1},\lambda_{1},\mu_{1})\). The set of states \(Q_{1}\) of \(\mathcal{B}\) is the set of all possible pairs of the form \((q,w)\), where \(q\) is a state of \(\mathcal{A}\) and \(w\) is a word over \(\mathsf{Y}\) of length not greater than \(r-1\). In other words, it is defined as the Cartesian product \[Q_{1}=Q\times\left(\cup_{i=0}^{r-1}\mathsf{Y}^{i}\right).\] The transition function \(\lambda_{1}\) is defined by the equality \[\lambda_{1}((q,w),y)=\begin{cases}(q,wy),&\text{ if }|w|<r-1\text{ and }wy\in\mathsf{Y}_{2}\\ (\lambda(q,\psi^{-1}(wy)),\Lambda),&\text{ if }|w|=r-1\text{ and }wy\in\mathsf{Y}_{1}\\ (q,w),&\text{ otherwise}\end{cases}.\] Since \(\psi\) is an injection the definition is correct. The output function \(\mu_{1}\) is defined by the equality \[\mu_{1}((q,w),y)=\begin{cases}y_{1},&\text{ if }wy\in\mathsf{Y}_{2}\text{ and }(wy)^{\varphi(\mu_{q})}=(w)^{\varphi(\mu_{q})}y_{1}\\ y,&\text{ otherwise}\end{cases}.\] Since \(\varphi(\mu_{q})\), \(q\in Q\), is length preserving and prefix preserving on \(\mathsf{Y}_{2}\) the definition is correct. 
It is required to find a subset \(S\subset Q_{1}\) such that the group \(G(\mathcal{A})\) is isomorphic to the group \(G_{S}\) generated by the set \(\{\mu_{1_{s}},s\in S\}\). We will show that one can take the subset \(S=\{(q,\Lambda),q\in Q\}\). It is enough to prove that the mapping \(\mu_{q}\mapsto\mu_{1_{(q,\Lambda)}}\), \(q\in Q\), defines a required isomorphism between \(G(\mathcal{A})\) and \(G_{S}\). Consider the monoid monomorphism \(\Phi:\mathsf{X}^{*}\to\mathsf{Y}^{*}\) that extends the injection \(\psi\). Then the image \(\Phi(\mathsf{X}^{*})\) is a free monoid with basis \(\mathsf{Y}_{1}\), i.e. \(\Phi(\mathsf{X}^{*})=\mathsf{Y}_{1}^{*}\). For arbitrary \(q\in Q\), \(x\in\mathsf{X}\) and \(w\in\mathsf{X}^{*}\) we have the equalities \[\Phi(xw)^{\mu_{1_{(q,\Lambda)}}}=(\psi(x)\Phi(w))^{\mu_{1_{(q,\Lambda)}}}=(\psi(x))^{\mu_{1_{(q,\Lambda)}}}\Phi(w)^{\mu_{1_{(\lambda(q,x),\Lambda)}}}=\psi(x^{\mu_{q}})\Phi(w)^{\mu_{1_{(\lambda(q,x),\Lambda)}}}.\] Since \(\psi(x)\in\mathsf{Y}_{1}\) the last equality follows from the definition of the output function \(\mu_{1}\). Therefore, the permutation groups \((G(\mathcal{A}),\mathsf{X}^{*})\) and \((G_{S},\mathsf{Y}_{1}^{*})\) are isomorphic as permutation groups. Consider arbitrary \(w\in\mathsf{Y}^{*}\). Then there exist unique \(w_{1}\in\mathsf{Y}_{1}^{*}\), \(w_{2}\in\mathsf{Y}_{2}\setminus\mathsf{Y}_{1}\), \(w_{3}\in\mathsf{Y}^{*}\) such that \(w=w_{1}w_{2}w_{3}\) and the word \(w_{2}\) is the longest prefix of \(w_{2}w_{3}\) from \(\mathsf{Y}_{2}\). Let \(q\in Q\). Then \[w^{\mu_{1_{(q,\Lambda)}}}=w_{1}^{\mu_{1_{(q,\Lambda)}}}w_{4}w_{3}\] for some \(w_{4}\in\mathsf{Y}_{2}\setminus\mathsf{Y}_{1}\) such that for arbitrary \(w_{1}w_{2}u\in\mathsf{Y}_{1}^{*}\), \(u\in\mathsf{Y}^{*}\), the word \(w_{1}^{\mu_{1_{(q,\Lambda)}}}w_{4}\) is a prefix of \((w_{1}w_{2}u)^{\mu_{1_{(q,\Lambda)}}}\). 
Hence, the action of the automaton permutation \(\mu_{1_{(q,\Lambda)}}\) on \(\mathsf{Y}^{*}\) is completely defined by its action on \(\mathsf{Y}_{1}^{*}\). The proof is complete. ## 4 HNN extensions and \(p\)-automata ### HNN extensions of free abelian groups Let \(A_{r}=\langle a_{1},\ldots,a_{r}\mid a_{i}a_{j}=a_{j}a_{i},1\leq i<j\leq r\rangle\) be a free abelian group of rank \(r\geq 1\). For a non-degenerate integer matrix \(M=(m_{ij})_{i,j=1}^{r}\) consider the group \[\mathbb{G}_{M}=\langle A_{r},t\mid a_{i}^{t}=a_{1}^{m_{i1}}\ldots a_{r}^{m_{ir}},1\leq i\leq r\rangle.\] Then \(\mathbb{G}_{M}\) is an ascending HNN extension of \(A_{r}\). **Proposition 3** ([1]).: _Let the order of \(M\) be infinite and for a positive integer \(n\geq 2\) let the determinant of \(M\) be relatively prime to \(n\). Then there exists a finite permutational automaton \(\mathcal{A}_{M}\) over \(\mathsf{X}=\{0,\ldots,n-1\}^{r}\) such that the group of \(\mathcal{A}_{M}\) is isomorphic to \(\mathbb{G}_{M}\)._ Let us recall the construction of the automaton \(\mathcal{A}_{M}=(Q,\lambda,\mu)\) from [1]. Denote by \(n(M)\) the max norm of the matrix \(M\), i.e. \[n(M)=\max_{i}\sum_{j}|m_{ij}|.\] Then the set of states \(Q\) is defined as \[Q=\{(v_{1},\ldots,v_{r})^{\top}\in\mathbb{Z}^{r}:-n(M)\leq v_{i}\leq n(M)-1,1\leq i\leq r\}\] and for \(v=(v_{1},\ldots,v_{r})^{\top}\in Q\), \(x=(x_{1},\ldots,x_{r})^{\top}\in\mathsf{X}\) we have \[\lambda(v,x)=\operatorname{Div}_{n}(v+Mx),\quad\mu(v,x)=\operatorname{Mod}_{n}(v+Mx),\] where \(\operatorname{Div}_{n}\) and \(\operatorname{Mod}_{n}\) denote the operations of taking coordinate-wise quotients and remainders from division by \(n\). 
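To make the construction concrete, the following sketch (our code; the matrix is just an illustrative choice) builds the transition and output functions of \(\mathcal{A}_{M}\) for \(n=2\), \(r=2\) and \(M=\begin{pmatrix}1&1\\0&1\end{pmatrix}\), and checks that every state defines a permutation of \(\mathsf{X}\) and that the transitions stay inside the state set \(Q\). Here \(\operatorname{Div}_{n}\) and \(\operatorname{Mod}_{n}\) are taken as coordinate-wise floor division and remainder, so that outputs lie in \(\{0,\ldots,n-1\}\); this is an assumption about the sign convention, which [1] must fix one way or another.

```python
import itertools

n, r = 2, 2
M = [[1, 1], [0, 1]]       # det M = 1 (prime to n), infinite multiplicative order

def mat_vec(x):
    return [sum(M[i][j] * x[j] for j in range(r)) for i in range(r)]

def transition(v, x):      # lambda(v, x) = Div_n(v + Mx), coordinate-wise
    return tuple((v[i] + mat_vec(x)[i]) // n for i in range(r))

def output(v, x):          # mu(v, x) = Mod_n(v + Mx), coordinate-wise
    return tuple((v[i] + mat_vec(x)[i]) % n for i in range(r))

nM = max(sum(abs(m) for m in row) for row in M)        # n(M) = 2
Q = list(itertools.product(range(-nM, nM), repeat=r))  # -n(M) <= v_i <= n(M)-1
X = list(itertools.product(range(n), repeat=r))

# every state defines a permutation of X (the automaton is permutational),
# because x -> Mx mod n is invertible when gcd(det M, n) = 1
perm_ok = all(len({output(v, x) for x in X}) == len(X) for v in Q)
# the transition function never leaves the state set Q
closed = all(transition(v, x) in Q for v in Q for x in X)
assert perm_ok and closed
```

The same two checks are exactly what the bounds \(-n(M)\leq v_{i}\leq n(M)-1\) are designed to guarantee in general.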
### \(p\)-automata defining HNN extensions The main result of this section is the following **Theorem 4**.: _Let \(p\) be a prime, \(r=p^{k}\) for some \(k\geq 1\) and let an integer \(r\times r\) matrix \(M\) have the form_ \[M=pN+C\] _for some \(r\times r\) integer matrix \(N\) and a permutation matrix \(C\) that corresponds to a permutation of order \(p^{m}\) for some \(m\geq 0\). Then_ 1. _the group_ \(\mathbb{G}_{M}\) _is generated by a finite_ \(p\)_-automaton;_ 2. _the group_ \(\mathbb{G}_{M}\) _is residually_ \(p\)_-finite._ Proof.: Since \(\det M\equiv\det C\equiv\pm 1\pmod{p}\), the determinant of \(M\) is relatively prime to \(p\), and the matrix \(M\) satisfies the conditions of Proposition 3. Then the group of the automaton \(\mathcal{A}_{M}\) is isomorphic to \(\mathbb{G}_{M}\). Let us describe the permutation group \((G,\mathsf{X})\) defined at states of this automaton. Denote by \(\mathsf{X}_{1}\) the set \(\{1,\ldots,r\}\) and by \(\mathsf{X}_{2}\) the set \(\{0,\ldots,p-1\}\). Then the alphabet \(\mathsf{X}\) can be identified with the set \(\mathsf{X}_{2}^{\mathsf{X}_{1}}\) of all functions from \(\mathsf{X}_{1}\) to \(\mathsf{X}_{2}\). Denote by \(\sigma\) the permutation on \(\mathsf{X}_{1}\) such that the matrix \(C\) corresponds to \(\sigma\). Since \(\sigma\) has order \(p^{m}\) the lengths of its independent cycles are powers of \(p\) not greater than \(p^{m}\) and at least one of them is \(p^{m}\). Denote by \(G_{1}\) the cyclic group generated by \(\sigma\). Then \(|G_{1}|=p^{m}\). Let \(G_{2}\) be a cyclic group of order \(p\). It acts on \(\mathsf{X}_{2}\) by additions modulo \(p\). Consider an arbitrary state \(v=(v_{1},\ldots,v_{r})^{\top}\) of \(\mathcal{A}_{M}\). Then the vector \(\mathrm{Mod}_{p}(v)\) can be regarded as a function from \(\mathsf{X}_{1}\) to \(G_{2}\). 
For an arbitrary vector \(x=(x_{1},\ldots,x_{r})^{\top}\in\mathsf{X}\) we have \[\mu_{v}(x)=\mathrm{Mod}_{p}(v+Mx)=\mathrm{Mod}_{p}(v+(pN+C)x)=\mathrm{Mod}_{p}(v+Cx)\] and coordinate-wise \[\mu_{v}(x)=((x_{\sigma(i)}+v_{i})\mod p,1\leq i\leq r).\] It means that the permutation \(\mu_{v}\) acts on \(\mathsf{X}_{2}^{\mathsf{X}_{1}}\) by a rule that defines a permutation from the exponentiation \(G_{2}\uparrow G_{1}\), i.e. the permutation \(\mu_{v}\) is defined by the element \([\sigma,\mathrm{Mod}_{p}(v)]\) of the wreath product \(G_{1}\wr G_{2}\), the wreath product of cyclic groups of orders \(p^{m}\) and \(p\). Since \(n(M)\geq 1\) all vectors \[u_{1}=(-1,0,\ldots,0),\ldots,u_{r}=(0,\ldots,0,-1)\] belong to the set \(Q\) of states of \(\mathcal{A}_{M}\). Hence, the set \[\{\mu_{u_{1}},\ldots,\mu_{u_{r}}\}\] generates the wreath product \(G_{1}\wr G_{2}\). Therefore, the permutation group \((G,\mathsf{X})\) is the exponentiation \(G_{2}\uparrow G_{1}\). Theorem 1 now implies that \((G,\mathsf{X})\) is isomorphic as a permutation group to a subgroup of the wreath product of \(p^{k}\) copies of the regular cyclic group of order \(p\) acting by automorphisms on the set of leaves of the \(p\)-regular rooted tree of depth \(p^{k}\). Then Theorem 2 implies that the group \(\mathbb{G}_{M}\) can be generated by a finite automaton with the regular cyclic group of order \(p\) as the permutation group defined at its states, i.e. by a \(p\)-automaton. This completes the proof of the first statement of the theorem. Since all groups defined by finite \(p\)-automata are residually \(p\)-finite (see e.g. [13]) the second statement now immediately follows. ## Acknowledgements The research presented in the paper was done during the fellowship of the second author at the Institute of Mathematics of the Polish Academy of Sciences supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 
677120-INDEX) and Grant Norweski UMO-2022/01/4/ST1/00026.
2306.13503
Two derivations of Principal Component Analysis on datasets of distributions
In this brief note, we formulate Principal Component Analysis (PCA) over datasets consisting not of points but of distributions, characterized by their location and covariance. Just like the usual PCA on points can be equivalently derived via a variance-maximization principle and via a minimization of reconstruction error, we derive a closed-form solution for distributional PCA from both of these perspectives.
Vlad Niculae
2023-06-23T14:00:14Z
http://arxiv.org/abs/2306.13503v1
# Two derivations of Principal Component Analysis on datasets of distributions ###### Abstract In this brief note, we formulate Principal Component Analysis (PCA) over datasets consisting not of points but of distributions, characterized by their location and covariance. Just like the usual PCA on points can be equivalently derived via a variance-maximization principle and via a minimization of reconstruction error, we derive a closed-form solution for distributional PCA from both of these perspectives. ## 1 Introduction Most commonly in data science we are concerned with datasets that consist of points \(\{x_{1},\ldots,x_{n}\}\subset\mathbb{R}^{d}\). In this note we focus on datasets of _random variables_ \(\{\mathsf{x}_{1},\ldots,\mathsf{x}_{n}\}\) where each \(\mathsf{x}_{i}\) has a probability distribution summarized by its mean and variance \[\mathbb{E}[\mathsf{x}_{i}]=\mu_{i},\qquad\mathbb{V}[\mathsf{x}_{i}]=\Sigma_{i}.\] This scenario fully subsumes the standard point dataset in the limit of \(\Sigma_{i}\to 0\), but allows us to further model situations such as: * Uncertainty or measurement noise (_i.e._, inherent variability of the \(\mathsf{x}_{i}\)s), * Hierarchical data (_e.g._, psychometric data where each \(\mathsf{x}_{i}\) is a study participant for whom several measurements are taken). In this work we extend the usual pointwise PCA to distributional data. We first recap the definition and derivation of PCA. Then, we show two different derivations of its distributional counterpart. ## 2 Background PCA (Murphy, 2022, section 20.1) is a workhorse of statistical analysis, data science, and visualization. It is a dimensionality reduction technique that summarizes a (point) dataset by linearly transforming it to the most important dimensions of variability. There are two typical ways to define _most important_, and it turns out they both lead to the same result. 
For this section we assume a centered point dataset \(\{x_{1},\ldots,x_{n}\}\), i.e., \(\sum_{i}x_{i}=0\).1 Footnote 1: PCA is typically defined after centering, but in some scenarios (_e.g._, high-dimensional sparse data) centering is sometimes skipped. While centering is important for some statistical interpretation of the method, it makes no difference for our derivation. **Directions of maximal variance.** One road toward PCA starts with the question: what is the direction \(u\) that maximizes the variance of the projected dataset? In other words, we seek: \[\operatorname{argmax}\left\{\sum_{i}(u^{\top}x_{i})^{2}\,:u\in\mathbb{R}^{d},\|u\|=1\right\}. \tag{1}\] This is because \(z_{i}=u^{\top}x_{i}\) is the projection of \(x_{i}\) along the direction of \(u\), and since we assume the \(x_{i}\)s are centered then so are the \(z_{i}\)s, and so the objective of eq. (1) is the empirical variance of the \(z_{i}\)s. We may rewrite the objective of eq. (1) as \(u^{\top}Su\) where \(S=\sum_{i}x_{i}x_{i}^{\top}\), and therefore we recognize that the solution of eq. (1) is the eigenvector of \(S\) corresponding to the largest eigenvalue. This view readily extends to seeking the top-k principal components \(u_{1},\ldots,u_{k}\) by requiring additional orthogonal constraints, _i.e._, \(U^{\top}U=I\), and the solution is likewise given by the top-k eigenvectors of \(S\). **Minimizing the reconstruction error.** If we view \(z_{i}=u^{\top}x_{i}\) as a 1-d encoded representation of \(x_{i}\), we can map \(z_{i}\) back into \(\mathbb{R}^{d}\) as the vector \(z_{i}u\) in the span of \(u\). This process will lose information. We may then ask the question: which direction \(u\) minimizes the reconstruction error of this encoding-decoding process? Or, \[\arg\min\left\{\sum_{i}\|x_{i}-uu^{\top}x_{i}\|^{2}:u\in\mathbb{R}^{d},\|u\|=1\right\}. \tag{2}\] Denote \(Q=uu^{\top}\) and remark \(Q\) is a projection matrix, thus idempotent and so is \(I-Q\). 
Then \[\begin{split}\sum_{i}\|x_{i}-Qx_{i}\|^{2}&=\sum_{i}\|(I-Q)x_{i}\|^{2}\\ &=\sum_{i}x_{i}^{\top}(I-Q)x_{i}\\ &=-\sum_{i}x_{i}^{\top}(uu^{\top})x_{i}+\text{const}\\ &=-u^{\top}Su+\text{const},\end{split} \tag{3}\] where the last step uses the same rearranging of the dot product as in the paragraph above. So, minimizing the reconstruction error, or maximizing projected variance, are equivalent views that lead to the same principal component solution. ## 3 Deriving distributional PCA We propose the following formulation for PCA over a dataset of random variables \(\{\mathsf{x}_{1},\ldots,\mathsf{x}_{n}\}\): **Definition 1** (Distributional PCA).: _Given a dataset of random variables, denoted \(\{\mathsf{x}_{1},\ldots,\mathsf{x}_{n}\}\), with means \(\mu_{i}\) and covariance matrices \(\Sigma_{i}\), the principal components of this dataset are the leading eigenvectors of the matrix:_ \[\sum_{i}\mu_{i}\mu_{i}^{\top}+\Sigma_{i}.\] We shall give two justifications of this definition. **Proposition 1**.: _Distributional PCA, as in definition 1, maximizes the expected projected variance:_ \[\arg\max\left\{\mathbb{E}_{\mathsf{x}_{1},\ldots,\mathsf{x}_{n}}\left[\sum_{i}(u^{\top}\mathsf{x}_{i})^{2}\right]:u\in\mathbb{R}^{d},\|u\|=1\right\}. \tag{4}\] Proof.: Rearranging and using linearity, we may rewrite the objective as \[\begin{split}\mathbb{E}\left[\sum_{i}(u^{\top}\mathsf{x}_{i})^{2}\right]&=\mathbb{E}\left[\sum_{i}u^{\top}(\mathsf{x}_{i}\mathsf{x}_{i}^{\top})u\right]\\ &=\sum_{i}u^{\top}\left(\mathbb{E}[\mathsf{x}_{i}\mathsf{x}_{i}^{\top}]\right)u\\ &=u^{\top}\left(\sum_{i}\mu_{i}\mu_{i}^{\top}+\Sigma_{i}\right)u.\end{split}\] **Proposition 2**.: _Distributional PCA, as in definition 1, minimizes the total squared \(2\)-Wasserstein reconstruction error under the linear projection:_ \[\operatorname{argmin}\left\{\sum_{i}W_{2}^{2}(\mathsf{x}_{i},uu^{\top}\mathsf{x}_{i}):u\in\mathbb{R}^{d},\|u\|=1\right\}. 
\tag{5}\] To prove this result, we need the following lemma: **Lemma 1** (Masarotto).: _Let \(\kappa\) be a random variable with mean \(\mu\) and variance \(\Sigma\), and \(Q\) be a projection matrix. Then_ \[W_{2}^{2}(\kappa,Q\kappa)=\|\mu-Q\mu\|^{2}+\operatorname{tr}((I-Q)\Sigma).\] Proof.: (of the lemma). This is a slight extension of the unnumbered result given by Masarotto et al. (2019) in their section 5. First, we use the translation property of \(W_{2}\) (Peyre and Cuturi, 2019, Remark 2.19) to reduce the problem to a distance between zero-mean measures: \[W_{2}^{2}(\kappa,Q\kappa)=\|\mu-Q\mu\|^{2}+W_{2}^{2}(\bar{\kappa},Q\bar{\kappa})\] where \(\bar{\kappa}=\kappa-\mu\). If \(\alpha\) is the probability measure associated with \(\bar{\kappa}\), then \(\beta=Q_{\sharp}\alpha\) is the probability measure of the pushforward \(Q\bar{\kappa}\). Since \(Q\) is a projection matrix, it is symmetric positive semidefinite and therefore it is the gradient of the convex mapping \(x\mapsto\frac{1}{2}x^{\top}Qx\); by Brenier's theorem (Peyre and Cuturi, 2019, Remark 2.24), \(Q\) is the optimal transport map between \(\alpha\) and \(\beta\). This implies \[W_{2}^{2}(\bar{\kappa},Q\bar{\kappa}) =\int_{x}d\alpha\ \|x-Qx\|^{2}\] \[=\int_{x}d\alpha\ x^{\top}(I-Q)x\] \[=\int_{x}d\alpha\ \operatorname{tr}\left((I-Q)xx^{\top}\right)\] \[=\operatorname{tr}((I-Q)\Sigma).\] Proof.: (of the proposition). Let \(Q=uu^{\top}\) denote the projection operator onto the span of \(u\). Applying the lemma, \[\sum_{i}W_{2}^{2}(\mathsf{x}_{i},Q\mathsf{x}_{i}) =\left(\sum_{i}\mu_{i}^{\top}(I-Q)\mu_{i}+\operatorname{tr}((I-Q)\Sigma_{i})\right) \tag{6}\] \[=-\sum_{i}\operatorname{tr}\left(Q(\mu_{i}\mu_{i}^{\top}+\Sigma_{i})\right)+\operatorname{const}\] \[=-u^{\top}\left(\sum_{i}\mu_{i}\mu_{i}^{\top}+\Sigma_{i}\right)u+\operatorname{const}.\] We have thus shown that distributional PCA can also be viewed equivalently from a variance-maximization and error-minimization angle, just like usual pointwise PCA. 
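A direct implementation of Definition 1 is a small eigendecomposition. The NumPy sketch below (our code; the four means and the shared covariance match the visualization example in section 4) computes the distributional principal component and sanity-checks that setting all \(\Sigma_{i}=0\) reduces the definition to uncentered pointwise PCA on the means.

```python
import numpy as np

def distributional_pca(mus, sigmas, k=1):
    # leading eigenvectors of sum_i (mu_i mu_i^T + Sigma_i), per Definition 1
    S = sum(np.outer(m, m) + C for m, C in zip(mus, sigmas))
    vals, vecs = np.linalg.eigh(S)        # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]           # top-k columns, largest first

# means and shared covariance from the visualization example in section 4
mus = [np.array([-0.5, -2.0]), np.array([0.5, -1.0]),
       np.array([-0.5, 0.0]), np.array([-0.5, 1.0])]
Sigma = np.diag([1.0, 0.5])

u_dist = distributional_pca(mus, [Sigma] * 4)[:, 0]

# with Sigma_i = 0 the definition reduces to uncentered pointwise PCA:
# the top eigenvector of sum_i mu_i mu_i^T, i.e. the top right singular
# vector of the stacked mean matrix
u_point = distributional_pca(mus, [np.zeros((2, 2))] * 4)[:, 0]
M = np.stack(mus)
_, _, Vt = np.linalg.svd(M)
assert np.isclose(abs(u_point @ Vt[0]), 1.0)
```

The closed form makes the method as cheap as ordinary PCA: the covariances enter only through their sum.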
In addition, in the limit of all \(\Sigma_{i}\to 0\), we recover usual PCA. Finally we remark that while we use a single principal component in the above derivations, everything holds for \(k\) orthogonal principal components as well. ## 4 Discussion **Visualization.** To demonstrate how distributional PCA works, we construct a dataset with four Gaussian random variables. Their locations are \(\mu_{1}=(-0.5,-2),\mu_{2}=(0.5,-1),\mu_{3}=(-0.5,0),\mu_{4}=(-0.5,1)\), and their covariances are all equal to \(\Sigma=\text{diag}(1,0.5)\). Figure 1 shows the principal component direction obtained by performing the usual PCA on the four means, performing distributional PCA, and performing PCA on a dataset obtained by drawing 1000 samples from each of the four distributions. Our proposed formula indeed characterizes the limit case of sampling from the distributional dataset. **Related work.** Masarotto et al. (2022) recently proposed a transportation-based PCA between covariance matrices. Their formulation applies PCA in the tangent space of a manifold of covariance operators and therefore leads to a different algorithm, somewhat more computationally intensive as it requires calculating a Fréchet mean. While our formulation only depends on covariance matrices through their sum, their formulation seems more suited for capturing differences between individual covariances. On the other hand, transportation PCA does not take into account means, just covariances. We shall explore the relationship and tradeoffs between the two formulations in the future. **Acknowledgements.** This work is partly supported by NWO VI.Veni.212.228 and the European Union's Horizon Europe research and innovation programme via UTTER 101070631.
2304.05518
Understanding Creep in Vitrimers: Insights from Molecular Dynamics Simulations
Vitrimers offer a promising sustainable alternative to conventional epoxies due to their recyclability. Vitrimers are covalent adaptive networks where some bonds can break and reform above the vitrimer transition temperature. While this can lead to desirable behavior such as malleability, this also leads to undesirable rheological behavior such as low-temperature creep. In this work, we investigate the molecular mechanisms of the creep of vitrimers using molecular dynamics simulations. The interplay between dynamic bonding with mechanical loading is modeled using a topology-based reaction scheme. The creep behavior is compared against cross-linked epoxies with dynamic reactions to understand the unique aspects related to dynamic bonding. It is found that the free volume that arises from tensile loads is reduced in vitrimers through dynamic bond rearrangement. An important feature that explains the difference in secondary creep behavior between conventional epoxies and vitrimers is the orientation of the dynamic bonds during loading. In vitrimers, the dynamic bonds preferentially align orthogonal to the loading axis, decreasing the axial stiffness during secondary creep, resulting in larger creep strain compared to epoxies. Over longer timescales, such increased strain leads to void growth, resulting in tertiary creep. Thus, chemistry changes or additives that can prevent the initial realignment of dynamic bonds, and therefore subsequent void growth, can be an effective strategy to mitigate creep in vitrimers.
Gurmeet Singh, Vikas Varshney, Veera Sundararaghavan
2023-04-11T21:52:05Z
http://arxiv.org/abs/2304.05518v1
# Understanding Creep in Vitrimers: Insights from Molecular Dynamics Simulations ###### Abstract Vitrimers offer a promising sustainable alternative to conventional epoxies due to their recyclability. Vitrimers are covalent adaptive networks where some bonds can break and reform above the vitrimer transition temperature. While this can lead to desirable behavior such as malleability, this also leads to undesirable rheological behavior such as low-temperature creep. In this work, we investigate the molecular mechanisms of the creep of vitrimers using molecular dynamics simulations. The interplay between dynamic bonding with mechanical loading is modeled using a topology-based reaction scheme. The creep behavior is compared against cross-linked epoxies with dynamic reactions to understand the unique aspects related to dynamic bonding. It is found that the free volume that arises from tensile loads is reduced in vitrimers through dynamic bond rearrangement. An important feature that explains the difference in secondary creep behavior between conventional epoxies and vitrimers is the orientation of the dynamic bonds during loading. In vitrimers, the dynamic bonds preferentially align orthogonal to the loading axis, decreasing the axial stiffness during secondary creep, resulting in larger creep strain compared to epoxies. Over longer timescales, such increased strain leads to void growth, resulting in tertiary creep. Thus, chemistry changes or additives that can prevent the initial realignment of dynamic bonds, and therefore subsequent void growth, can be an effective strategy to mitigate creep in vitrimers. **Keywords:** Creep; disulfide bond exchange reactions; Molecular dynamics simulations; Vitrimers; Deformation mechanisms ## 1 Introduction Epoxy is a thermoset polymer that is widely used in automobile, aerospace, robotics, and wind energy industries[1, 2]. 
Due to their thermal and chemical stability, they have played an important role in the emergence of advanced high-performance composites. However, the thermoset nature of epoxies has limited their life cycle due to the lack of damage mitigation or recycling capabilities[3, 4]. Another class of polymers, thermoplastics, are easy to recycle but are limited by their thermomechanical performance in critical structural applications[5, 6]. Vitrimers are a new class of polymers that offer the best of both thermosets and thermoplastics via dynamic cross-link reactions in the polymer network. They behave like a cross-linked thermoset at room temperatures and demonstrate malleable properties of a thermoplastic when heated beyond a temperature where dynamic bonds become active[7, 8, 9]. This ability of vitrimers makes them a promising candidate towards damage mitigation during operation and recyclability afterwards[10, 11]. A major challenge associated with the usage of vitrimers is associated with low-temperature creep[12, 13, 14]. Creep is the deformation of the material with time under the application of constant stress. Creep strain in polymers is simplified to Findley's power law, \(\epsilon=\epsilon_{0}+\epsilon^{+}t^{n}\), where \(t\) is time and \(\epsilon_{0}\), \(\epsilon^{+}\) and \(n\) are constants for a given stress level[15]. Polymers, whose glass transition (glassy to rubbery transition) temperatures are relatively closer to room temperature, are prone to experiencing creep at room temperature. Often, creep can lead to an undesirable amount of deformations that compromise the integrity and function of a structure[16, 17, 18]. Therefore, it is important to understand the creep behavior and underlying mechanisms for improved molecular design of vitrimers. Creep in polymers and fiber composites is a well-studied phenomenon both experimentally[16, 17, 18, 19] and computationally[20, 21, 22, 23]. Creep under uniaxial tension is found to follow three stages. 
In primary creep, the strain increases at a rapid rate during initial loading but continues to slow down over time. The second stage, termed secondary creep, is characterized as a region of uniform strain rate. Tertiary creep is the final stage of creep where the material strain accelerates and leads to a rupture. Molecular scale experiments by Lee et al. on crosslinked poly(methyl methacrylate) reinforce the notion that stress-induced chain mobility allows polymer glasses to flow during creep[24]. In addition, Bradley et al. found that the creep in vinylester resins reduces with the duration of resin curing due to higher cross-linking[19]. While adding reinforcing fibers was found to reduce creep, the exponent \(n\) was found to be largely unchanged. Unlike conventional epoxies, understanding of the creep in vitrimers is rather limited. It is difficult to probe and investigate the underlying dynamic bonding mechanisms in vitrimers experimentally and this is where molecular simulations provide an exciting alternative to further such understandings[25, 26]. A recent experimental study by Hubbard et al. sheds light on the possible molecular mechanisms in the stages of vitrimer creep, where it is postulated that secondary creep in vitrimers is associated with network rearrangement due to dynamic reactions[27]. It was also noted that at low temperatures and catalyst concentrations, vitrimers simply behave as a traditional epoxy material. Dynamic cross-linking reactions tend to accelerate beyond the topology freezing temperature (\(T_{V}\)), however, a small extent of these reactions at lower temperatures can influence creep behavior[14, 28]. In this work, we employ all-atom molecular dynamics (MD) simulations to study the creep behavior in vitrimers, which is carried out for the first time to the best of the authors' knowledge. 
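Findley's power law from the introduction, \(\epsilon=\epsilon_{0}+\epsilon^{+}t^{n}\), becomes linear in log-log coordinates once \(\epsilon_{0}\) is subtracted, which gives a simple way to recover the exponent \(n\) from creep data. The NumPy sketch below (ours; the constants are illustrative, not fitted to any experiment in the references) demonstrates this on synthetic data.

```python
import numpy as np

def findley(t, eps0, eps_plus, n):
    # Findley's power law: eps(t) = eps0 + eps^+ * t^n
    return eps0 + eps_plus * t ** n

t = np.logspace(0, 4, 50)                    # creep time (arbitrary units)
eps0, eps_plus, n_true = 0.010, 0.002, 0.35  # illustrative constants
eps = findley(t, eps0, eps_plus, n_true)

# with eps0 known, the law is linear in log-log coordinates:
# log(eps - eps0) = log(eps^+) + n * log(t)
n_fit, logA = np.polyfit(np.log(t), np.log(eps - eps0), 1)
assert np.isclose(n_fit, n_true)
assert np.isclose(np.exp(logA), eps_plus)
```

In practice \(\epsilon_{0}\) would itself be a fit parameter, but the log-linear structure is why the exponent \(n\) is the standard quantity compared across resins and reinforcements.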
The MD framework has been widely utilized to predict the properties of metals[29, 30] and polymers including mechanical[31], thermal expansion[32], thermal conductivity[33, 34], heat capacity[34], and glass transition[31, 35] properties. In particular, MD simulations have also been utilized to model the creep behavior of metals[29, 36, 37, 38] as well as polymers[20, 22, 39, 40]. Several of these molecular models, specifically polymers, simplify atomic interactions using coarse-grained/bead-spring models to model chain dynamics. While such models capture conformational changes, they poorly describe chain-to-chain interactions that determine the free volume evolution. A recent all-atom MD study of creep shows that secondary creep is mechanistically related to void nucleation, while tertiary creep is related to void growth and coalescence[41]. Furthermore, MD simulations by Li et al. describe creep in epoxies in the context of the free-volume change theory of Fox and Flory. With increasing stress and temperature, the creep correlated with increases in the free volume in the simulation cell[42]. It is to be noted that the time scale of creep at the macro scale is in the range of hours or even days. However, for the purpose of understanding the underlying deformation mechanism of creep, MD simulations of creep need to be performed using elevated stress and/or temperature, and at high strain rates. The primary challenge for the MD approach is the modeling of the temperature-dependent reversible cross-link reactions. Exchange reactions have been modeled in the past via methods such as embedding Monte Carlo (MC) moves into molecular dynamics, fully MD (using specialized reactive potentials), or fully MC simulations to simulate bond swaps[43, 44, 45, 46, 47]. These simulations have typically employed coarse-grained (bead-spring) models that provide high computational efficiency while approximating the mechanical response. 
For more quantitative modeling, all-atom MD methods are attractive[25, 48]. Previously, bond exchange reactions in all-atom MD have been implemented using distance-based reaction schemes based on pre- and post-reaction templates[35, 49]. The approach accelerates the slow reaction dynamics and allows the modeling of mechanical property changes in vitrimers during thermal cycling. Typically, creep in epoxies is simulated by applying stress and letting the system evolve over time under NPT dynamics[42]. However, in a vitrimer system with dynamic bond exchange reactions, NPT dynamics can become unstable due to energy fluctuations caused by local topology modification whenever dynamic cross-linking reactions occur. In this work, we devise a new strategy to simulate the creep behavior of vitrimers with dynamic reactions by alternating loading and equilibration steps. In parallel to the experimental work by Hubbard et al., the approach is used to differentiate the creep response of vitrimers and traditional epoxies to identify mechanisms fundamentally derived from their dynamic bond exchange reactions[7, 8, 27, 50].

## 2 Methods

We employ classical MD simulations to simulate the creep behavior of a vitrimer system composed of diglycidyl ether of bisphenol-A (DGEBA) cross-linked with 4-aminophenyl disulfide (AFD); the chemical structures are shown in Figure 1(a). The large-scale atomic/molecular massively parallel simulator (LAMMPS)[51, 52] is used to carry out all of the simulations in this study. The consistent valence force field (CVFF) is assigned to all the atoms, with energy contributions from the pair, bond, angle, dihedral, and improper interaction terms[53]. Additionally, the energy contributions from the non-bonded interactions are modeled using Lennard-Jones (LJ) and Coulombic pairwise interactions with a cutoff of 12 Å. A time step of 1 fs is used for all of the MD simulations in this work.
### Polymer system preparation

The polymer system for MD simulations is prepared using polymerization reactions (primary and secondary amine reactions) for curing the monomer mixture. In this approach, the _fix bond/react_ feature in LAMMPS is utilized, which enables the modeling of reactions by changing the local topology[49, 54]. First, a mixture of monomers with two DGEBA units and one AFD unit is constructed. The typical synthetic epoxy-to-hardener stoichiometric ratio of 2:1 is employed[4, 55]. The repeating unit that contains the monomer mixture is shown in Figure S.1 of the supporting information. We replicate this unit \(8\times 8\times 8\) times to obtain a simulation box with 1,024 DGEBA and 512 AFD monomer units, for a total of 68,608 atoms. Periodic boundary conditions are applied in all three directions. Then, the constructed monomer mixture is equilibrated using the NVT ensemble (constant number of particles, volume, and temperature). Subsequently, the mixture is equilibrated to a density of \(1.0\) gcm\({}^{-3}\) using an NPT ensemble (constant pressure and temperature). A Nose-Hoover thermostat and barostat are used to maintain the temperature and pressure in the simulation box, respectively. The pre- and post-reaction templates are prepared for both primary and secondary amine reactions and can be found in our previous work[35]. The cutoffs between N and C atoms are set to 3.5 Å and 5.0 Å for initiating the primary and secondary amine reactions, respectively. The mixture is then allowed to undergo primary and secondary amine reactions under the NVT ensemble. After the system is cured to \(95\%\), internal stresses remain in the box. To relieve these stresses, the system is annealed by subjecting the box to cooling and heating cycles of 1 K and 600 K, respectively, with a simulation time of 50 ps at each of these temperatures under NPT (pressure of 1 bar), as also reported previously[35].
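The replication arithmetic above can be checked directly. A minimal sketch follows; note that the 134 atoms per repeating unit is derived here from the reported totals, not stated in the text.

```python
# 8 x 8 x 8 replication of a repeating unit containing 2 DGEBA + 1 AFD
nx = ny = nz = 8
units = nx * ny * nz          # number of repeating units in the box
n_dgeba = 2 * units           # DGEBA monomers (2:1 epoxy-to-hardener ratio)
n_afd = 1 * units             # AFD monomers
total_atoms = 68_608          # total atom count reported in the text

assert total_atoms % units == 0
atoms_per_unit = total_atoms // units   # atoms per repeating unit (incl. H)
```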
This process releases the stresses built up in the box and brings the system to an equilibrated density of \(\rho=1.184\) gcm\({}^{-3}\) at 1 K, and a converged density at 300 K of \(\rho=1.159\) gcm\({}^{-3}\), which is in agreement with values for a typical epoxy system from the literature[56]. The resulting MD simulation box is shown in Figure 2(a), where the inset shows a section of the chain segment highlighting the presence of singly and doubly reacted nitrogen atoms as a result of curing. The inset also shows a potential site for a disulfide bond exchange reaction, where two S\(-\)S bond chain segments come in close vicinity to result in an exchange of chains, as illustrated in Figure 2(b). These are the characteristic reactions of this vitrimer system, which accelerate above its topology transition temperature and result in the rearrangement of the network. The disulfide reactions are modeled using the topology-based update, and the pre- and post-reaction templates for these vitrimer reactions are shown in Figure S.2 of the supplementary information. The reaction occurs when any two sulfur atoms from two disulfide pairs come within a cutoff of 4.12 Å. The reaction then proceeds with an assigned probability, where a probability of 0.0 implies no S\(-\)S reactions while 1.0 means all such eligible disulfide pairs can have a bond exchange reaction.

Figure 1: (a) Monomer structures, and (b) methodology for simulating creep with dynamic S\(-\)S reactions

### Simulating creep

Computational experiments of creep are simulated under constant stress applied along the loading axis while the transverse directions are kept stress-free[38, 57, 58]. In this work, this is achieved using a Nose-Hoover anisotropic barostat and thermostat under NPT conditions on the box. An anisotropic barostat allows individual stress components to be prescribed on the box.
For simulating uniaxial creep, the prescribed state of true stress is \(\sigma_{yy}=\sigma_{o},\ \sigma_{xx}=\sigma_{zz}=0\), where the constant true stress \(\sigma_{o}\) is maintained along the \(y\)-axis. All three shear stress components are left unspecified, which implies the shear strains on the box are kept at zero; therefore, the box stays orthogonal during the creep simulations. For an epoxy system, creep could be simulated by applying a constant value of stress and letting the system evolve over time under NPT dynamics. However, in a vitrimer system with dynamic bond exchange reactions, NPT dynamics can become unstable under applied pressure due to local fluctuations in the temperature and pressure tensor caused by local molecular topology changes during reaction events[49, 54]. Furthermore, under stress-controlled loading (such as in creep), the box dimensions can change suddenly, and the dynamic cross-linking reactions can result in the loss of bonds or atom images across box boundaries, which can lead to simulation failure. In the case of strain-controlled loading, provided it is small, the dynamic bond reactions can take place while deforming the box slowly, as reported in our previous work[35]. In the present study, to simulate the creep behavior of vitrimers under stress-controlled loading, we have devised a new strategy as shown in Figure 1(b). Here, we break the application of stress into two parts: first, an NPT run for 5 ps, followed by an NVE run for 4 ps during which the dynamic S\(-\)S reactions happen. The S\(-\)S reactions are allowed to happen only once in this reaction step, which is controlled by specifying the reaction frequency in the _fix bond/react_ card in LAMMPS. The NVE step allows the system to have reactions when no deformation is happening (constant volume) and gives adequate time to relax the system after the reactions occur. The NPT step is then invoked, during which the system deforms under creep, and these steps are repeated.
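The alternating load/react loop and the probability-gated S\(-\)S trigger can be sketched together as follows. This is a simplified Python stand-in for the logic, not the actual LAMMPS _fix bond/react_ input; the 1 fs timestep, 5 ps NPT window, 4 ps NVE window, and 4.12 Å cutoff are taken from the text, and the helper names are our own. Periodic images are ignored here for brevity.

```python
import math
import random

TIMESTEP_FS = 1.0   # fs
NPT_PS = 5.0        # creep deformation under constant stress
NVE_PS = 4.0        # constant-volume window for the reaction attempt
CUTOFF = 4.12       # Angstrom: S-S pair separation for an eligible exchange

def steps(ps):
    """Convert a phase duration in ps to MD steps at the chosen timestep."""
    return int(ps * 1000.0 / TIMESTEP_FS)

def bond_exchange_fires(s1, s2, probability, rng=random):
    """True if two sulfurs (from different disulfide pairs) are within CUTOFF
    and the exchange then proceeds with the assigned probability."""
    if math.dist(s1, s2) >= CUTOFF:
        return False
    return rng.random() < probability   # p=0.0 -> never, p=1.0 -> always

def creep_schedule(n_cycles):
    """Yield (phase, n_steps); reactions are attempted once per NVE window."""
    for _ in range(n_cycles):
        yield ("npt_load", steps(NPT_PS))
        yield ("nve_react", steps(NVE_PS))

schedule = list(creep_schedule(10))
total_ps = sum(n for _, n in schedule) * TIMESTEP_FS / 1000.0
```

Each 9 ps cycle thus contributes 5,000 NPT steps of deformation and 4,000 NVE steps during which at most one round of exchange reactions fires.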
For comparison with the case of epoxy, the effect of the additional NVE step on the creep response is discussed in Figure S.3 of the supporting information. To accelerate the creep phenomenon in MD simulations, we apply a high value of uniaxial stress at high temperatures. The value of the applied stress dictates the resulting creep response of vitrimers, the role of which is also discussed in the later sections. The resulting response is characterized by the stretch ratio (\(\lambda\)) in the loading direction, which is defined as: \[\lambda=\frac{l}{l_{o}} \tag{1}\] where \(l\) is the current length and \(l_{o}\) is the initial or undeformed length of the simulation box along the loading direction. In this work, the following studies are conducted: (a) the comparison of the creep response of a vitrimer and an epoxy; (b) the influence of applied stress or loading on vitrimer creep; and (c) the effect of S\(-\)S reaction probability on the creep response of vitrimers.

### Free volume of voids

The free volume or void volume is the region of space where atoms are not present; it is computed using an _alpha-shape_ method[59] implemented in the OVITO software[60]. All the simulations are visualized using OVITO by performing surface mesh construction analysis. The probing sphere has a finite radius, taken as 3.5 Å in the present study; details can be found in the OVITO documentation[60]. This approach helps us identify the actual material volume as well as the void volume at any time instance during the simulation of creep. The volume fraction of the void is computed using a Python script for OVITO. The percent volume fraction of the void (\(V_{f}^{void}\)) is defined as: \[V_{f}^{void}=\frac{V_{void}}{V_{cell}}\times 100 \tag{2}\] where \(V_{void}\) and \(V_{cell}\) are the volume of the empty region and the total volume of the MD simulation cell at a given time instance, respectively.
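Equations (1) and (2) are straightforward to apply once the box length and alpha-shape void volume are available; a minimal sketch with made-up numerical values (the inputs below are illustrative, not simulation output):

```python
def stretch_ratio(l, l0):
    """Eq. (1): current over undeformed box length along the loading axis."""
    return l / l0

def void_fraction_percent(v_void, v_cell):
    """Eq. (2): empty volume as a percentage of the total cell volume."""
    return 100.0 * v_void / v_cell

lam = stretch_ratio(55.0, 50.0)          # box stretched from 50 to 55 Angstrom
vf = void_fraction_percent(12.5, 250.0)  # hypothetical alpha-shape output
```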
### Bond orientation

The vitrimer undergoes large deformation under creep in these simulations; therefore, it is important to probe the molecular mechanisms contributing to different aspects of the creep deformation and system evolution. MD simulations offer the capability to examine various quantities during the system evolution. One such quantity of interest is the orientation of the disulfide bonds as they undergo dynamic cross-linking reactions during deformation. Orientations of the S\(-\)S bonds are computed for the entire simulation box and used to analyze the results of the vitrimer both with dynamic reactions and with reactions turned off as a baseline. The bond vector is computed using the coordinates of the two sulfur atoms: \[\mathbf{v}^{\text{S-S}}=\mathbf{x}_{1}-\mathbf{x}_{2} \tag{3}\] Since the simulation box is periodic in all dimensions, the coordinates of the two sulfur atoms are corrected based on the image flags of the atoms. The bond vector projection of an S\(-\)S bond pair on the loading axis (the \(y\)-axis in this case) is computed as: \[v_{y}^{\text{S-S}}=\left|\mathbf{v}^{\text{S-S}}\cdot\mathbf{y}\right| \tag{4}\] The average value of the bond vector projection on the loading axis is evaluated as: \[\overline{v}_{y}^{\text{S-S}}=\frac{1}{N}\sum_{i=1}^{N}\left|\mathbf{v}_{i}^{\text {S-S}}\cdot\mathbf{y}\right| \tag{5}\] where \(\cdot\) denotes the dot product of two vectors, \(\mathbf{y}=[0\;1\;0]^{T}\) is the loading axis vector, and \(N\) is the total number of S\(-\)S bonds in the simulation box. This computation is implemented in OVITO using the Python scripting interface. First, all the S\(-\)S (disulfide) bonds are selected and then their connecting atoms are identified. The coordinates of the connecting sulfur atoms are used to evaluate the S\(-\)S bond orientation vector.

## 3 Results and Discussion

Creep is an inherently slow phenomenon, beyond the time scales of MD simulations when simulated at ambient conditions.
Simulations of creep using atomistic methods make an inherent assumption that the mechanisms that drive creep under ambient conditions also exist at the extreme conditions at which accelerated creep occurs. Accelerated creep occurs at extreme conditions such as high tensile loads close to or exceeding the yield stress and at higher temperatures where vitrimer reactions occur rapidly. In order to test all regimes of vitrimer creep, we employ extreme conditions in our simulations such that creep can occur rapidly and the mechanisms can be systematically analyzed. Another important assumption is that bond rupture of cross-linked epoxy bonds is not simulated. While such bond rupture occurs at high strains during extreme loading, it is necessary to suppress this mechanism of failure to study the actual mechanisms that might occur in ambient conditions where creep processes are much slower. In later subsections, we systematically reduce the extremity of loading to study the trends in the creep behavior as we move toward ambient conditions.

### Creep in vitrimer vs. epoxy

Figure 2(c) shows the time-dependent deformation response of the vitrimer with and without the dynamic bond exchange reactions under a uniaxial stress of 500 MPa at 600 K. This is an extreme case where the temperature is well above the topological transition temperature; hence, we specify the probability of the S\(-\)S bond reactions to be \(p=1.0\). When the dynamic reactions are switched off, the material behaves like a conventional epoxy. Henceforth, the case with no reactions is referred to as the 'epoxy' and the case with reactions is referred to as the 'vitrimer'. With the application of tensile stress, both vitrimer and epoxy show an immediate increase in the strain, referred to as the primary creep response. The vitrimer is slightly more compliant than the epoxy in this regime.
The interesting difference arises after the primary creep, wherein the epoxy does not show a significant increase in the creep strain. However, the vitrimer shows a marked increase in the strain after the primary regime. We refer to this regime as the secondary creep, which occurs in vitrimers due to dynamic reactions. A third regime called 'tertiary creep', indicated as '_f_' in Figure 2(c), is also observed, at which void growth occurs (shown and discussed in detail later in the manuscript). These three regimes of creep are shown using different colors with a gradual transition between them. For now, we focus on the mechanistic aspects of the secondary creep effect due to dynamic reactions in a vitrimer. Here, the types of chain arrangements are idealized into three configurations based on the orientation of the S\(-\)S bond pairs in the vicinity (\(<\) 4.12 Å) of each other, where the exchange between these configurations can lead to different outcomes in the bulk behavior of the vitrimer. To simplify the chain arrangements, we assume that the S\(-\)S bonds in the two chains are aligned in parallel, perpendicular, or in a crossed formation with respect to the loading axis. Figure 3 shows the possible chain rearrangements due to S\(-\)S reactions between these three configurations. The dynamic exchange between configurations 1 and 2 leads to neither void healing nor growth, since the axial loading can be accommodated by the chains in both of these configurations. On the other hand, in the case of a bond exchange from configuration 1 to 3 or 2 to 3, the chains can no longer bear the axial loads, and this exchange will lead to a growth of the voids in configuration 3. In the bulk, such reactions will lead to a reduction in the load-bearing ability and a stretch along the loading axis. The reverse of this exchange will heal the gap between the two S\(-\)S chains.
This configuration switch (from 3 back to 1 or 2), however, can become increasingly less likely as void growth progresses under the application of stress. Based on the alignment of the two S\(-\)S bonds with respect to the loading axis, configuration 3 will result in the incremental stretching of the box, which is the factor responsible for the high secondary creep in vitrimers. To evaluate the possibilities of these transformations quantitatively, we look into the values of the S\(-\)S bond orientation with respect to the loading axis and their evolution over time during creep. Note that the bonds are aligned normal to the loading axis in the void growth configuration 3, while in all other configurations there is a significant component of the bond vector oriented along the loading axis. The average value of the bond vector projected onto the loading axis is used to quantify the differences in these configurations. A high value indicates that most of the chains are aligned with the loading axis, and a relatively lower value indicates the prevalence of the type 3 configuration, which leads to a decrease in stiffness along the loading axis and hence an increase in creep strain.

Figure 2: (a) Cured and annealed periodic polymer system shown with H atoms hidden, (b) schematic of a disulfide bond exchange reaction in the vitrimer system, and (c) comparison of stretch ratio (\(\lambda\)) response vs. time for vitrimer and epoxy under constant uniaxial stress of 500 MPa at 600 K; the points from \(a(a^{\prime})\) to \(f(f^{\prime})\) refer to key chosen snapshots during creep deformation as discussed in the text. Three creep regimes of a vitrimer are marked with different colors.

Figure 3: Possibilities of reaction pathways of the idealized chain openings due to S\(-\)S reactions under creep in vitrimers: 1–parallel, 2–crossed, and 3–perpendicular to the loading axis

Figure 4(a) shows the time variation of the
mean value of the S\(-\)S bond projection on the loading axis (\(\overline{v}_{y}^{\text{S-S}}\), from Eqn. 5). In the case of epoxy, these reactions do not occur and any realignment is due only to chain mobility. This differentiates the projected bond length magnitude arising from chain mobility alone from that occurring due to dynamic reactions. In Figure 4(a), the mean bond orientations for vitrimers with and without dynamic bond reactions increase together in the initial phases of the primary creep regime (up to time \(b(b^{\prime})\)). The vitrimer follows the epoxy initially because the total number of dynamic reactions occurring during the initial window is limited due to the small time window and the rapid stretch under high applied stress. The time period from \(b(b^{\prime})\) to \(c(c^{\prime})\) shows a clear difference between epoxies and vitrimers, at which point the effect of dynamic bonds shows up. In epoxies, this region has a relatively stable state of the mean bond orientation with time, concomitant with a decrease in free volume occurring due to chain mobility and accommodation. However, in the case of vitrimers, the mean bond projection decreases as the dynamic bond reactions result in the S\(-\)S bonds aligning toward a plane transverse to the loading axis. Figure 4(b) shows the void fraction (in percentage) as a function of time for both the epoxy and the vitrimer cases. Initially, both epoxy and vitrimer show a rapid rise in void fraction (\(a(a^{\prime})\) to \(b(b^{\prime})\)), which is attributed to the observation that the simulation box is not able to instantaneously relax its lateral dimensions (perpendicular to the loading direction) in response to the high applied stress. In the region \(b(b^{\prime})\) to \(c(c^{\prime})\), the vitrimer case shows a steeper drop in the void fraction and a lower void fraction, indicating void healing due to dynamic bonding. In vitrimers, the realignment of bonds reaches its peak at time \(c\) (Figure 4(b)).
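The quantity plotted in Figure 4(a) is the average of Eq. (5); it can be reproduced with a short pure-Python sketch. For brevity, a minimum-image wrap is used here in place of the image-flag correction described in the Methods, and the coordinates are illustrative, not simulation data.

```python
def min_image(d, box_len):
    """Wrap one displacement component into [-box_len/2, box_len/2]."""
    return d - box_len * round(d / box_len)

def mean_projection_y(ss_pairs, box):
    """Eq. (5): average |v_SS . y| over all S-S bonds, with y = [0 1 0]^T.
    ss_pairs is a list of ((x1, y1, z1), (x2, y2, z2)) sulfur coordinate
    pairs; only the y-component survives the projection of Eq. (4)."""
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in ss_pairs:
        vy = min_image(y1 - y2, box[1])  # Eq. (3), wrapped, projected on y
        total += abs(vy)
    return total / len(ss_pairs)

pairs = [((0.0, 0.0, 0.0), (0.0, 2.0, 0.0)),   # bond along the loading axis
         ((0.0, 0.0, 0.0), (0.0, 9.0, 0.0))]   # wraps across the periodic box
```

The second pair shows why the periodic correction matters: without it, the apparent bond projection would be 9 Å rather than the physical 1 Å.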
The time period between \(c\) and \(e\) is categorized as the secondary creep stage in the vitrimer. In the epoxy, the void distribution achieves a stable state around time \(c^{\prime}\), and slow changes in free volume beyond \(c^{\prime}\) (Figure 4(c), bottom) are driven by limited chain mobility. In vitrimers, this regime is related to the formation of smaller voids throughout the volume, as shown in Figure 4(c) at times \(d^{\prime}\) and \(e^{\prime}\). The mean orientation increases in this regime as compared to time \(c\), indicating that the chains prefer to align along the loading axis, while the void fraction remains somewhat constant (similar to the epoxy). In spite of this, large increases in strain are seen in the secondary creep stage in vitrimers. This can be explained using two processes acting in concert. First, there is elongation due to chain rearrangement (similar to epoxy), and second, there is a sudden burst in dynamic bond reactions driving the realignment of S\(-\)S bonds orthogonal to the loading direction (as seen in the rise and drop in projected bond length between \(c\) and \(d\) in Figure 4(a)), which decreases the stiffness along the loading axis. This, in turn, further increases the creep strain. In Figure 4(b), the points for vitrimers are colored according to the computed number of reactions occurring at each time step. The reactions are seen to accelerate in the secondary creep regime from \(c\) to \(d\), which leads to a significant increase in the creep strain as seen in Figure 2(c).

Figure 4: Comparison of creep in vitrimer and epoxy: (a) mean value of bond vector projection on loading axis, (b) evolution of free volume fraction and number of S\(-\)S reactions in vitrimer, and (c) snapshots of the simulation box with free volume (green region) at different time instances of creep
The stability of the void fraction from \(c\) to \(e\) in vitrimers follows from the fact that the voids created during loading are balanced by the healing of voids via dynamic bonding and subsequent chain rearrangement into new configurations. The effect of dynamic bonding can be seen more clearly from a probability distribution function of the bond projection at each time step. While the plot in Figure 4(a) only contained the mean at each time step, Figures 5(a) and (b) show the time-varying probability density of the bond projection (\(v_{y}^{S-S}\)) for epoxy and vitrimer, respectively, showing the complete distribution at each time step. We can see that for epoxy, more bonds are clustered toward a bond value of 2.0 Å, hence showing a higher mean value with a smaller standard deviation over the \(c^{\prime}\)-\(e^{\prime}\) regime. On the other hand, for the vitrimer (Figure 5(b)), the probability is lower than for epoxy near \(v_{y}^{S-S}=2.0\) Å, which is the equilibrium bond length of the S\(-\)S bonds. It also shows a larger spread in the probability throughout the duration of creep. Therefore, the bond exchange reactions lead to more transversely oriented S\(-\)S bond chains and can result in configurations 2 and 3. The former will contribute toward the healing of voids whenever they appear, and the latter will result in a global strain increment as well as void growth. In vitrimers, the region between \(e\) and \(f\) is characterized by an increase in the volume fraction of voids. Such behavior is typically associated with tertiary creep. The increase in void fraction in the vitrimer is related to one large void, shown in Figure 4(c), that begins to grow and cannot be healed anymore. The number of dynamic bond exchange reactions also declines in this regime, and the dynamic bond alignment becomes stable. However, the large void grows, resulting in an increase in the creep strain.
Figure 6 plots the volume of the largest void in the unit cell normalized to the cross-sectional area. The \(y\)-axis corresponds to the effective height of the void if the largest void were to span the entire cross-section. An increase in the largest (non-healable) void is seen from \(e\) to \(f\) in the case of the vitrimer, which is shown in the inset and is indicative of tertiary creep in the vitrimer system. The values of stress and reaction probability dictate the extent and the rate of creep in materials. In the previous section, accelerated creep was studied under extreme conditions. In the following section, we will look into the phenomena under smaller stresses and dynamic bond reaction rates to understand how the creep rates scale toward typical laboratory conditions.

Figure 5: Probability distribution function of S\(-\)S bond vector projection on loading axis (\(v_{y}^{S-S}\)) at each time step for (a) epoxy and (b) vitrimer. The vertical lines mark the key time stamps pertaining to the creep response discussed in the text

Figure 6: Largest void volume (normalized with the cross-sectional area) at various time stamps during creep of a vitrimer and epoxy. The plotted points correspond to time instances from \(a\) to \(f\).

### Influence of loading

Figure 7(a) shows the creep stretch response over time under the application of different values of applied uniaxial tensile stress. It is evident that the primary creep strain increases with the magnitude of applied stress. The void fraction shown in Figure 7(b) remains stable for the lower stress cases over time due to healing processes. This indicates that the increase in creep strain during secondary creep is primarily a result of dynamic bonding driving the realignment of bonds orthogonal to the loading direction, leading to decreased axial stiffness.
The void fraction in the extreme case of 500 MPa is higher initially due to the higher chain mobility; however, healing processes eventually lower the volume fraction of voids to around 10%, beyond which tertiary creep processes take over and increase the void volume fraction again. Tertiary creep behavior is not seen at lower stress levels due to the lower chain mobility slowing down the progression of creep, as well as a greater probability of healing, as evident from the larger number of dynamic reactions at lower applied stress shown in Figure 7(c). At higher stresses, the creep strains are higher, which increases the total volume of the system and thereby reduces the total number of dynamic S\(-\)S bonds available to interact per unit volume. This reduction in the number of reactions is seen over time in all the systems as the creep strain increases and is shown in Figure 7(d). The changes in the number of chemical reactions as a function of time shown in this figure also depict intermittent increases in dynamic reactions accompanying an increase in creep strain. This is followed by intermittent drops in the number of reactions that stabilize the creep strain but at the same time increase the void volume fraction. The creep strain occurs in a step-like manner for higher loads due to increases in the types of bond-exchange reactions that lead to void growth and stretch (transformations to configuration 3) followed by void healing processes (transformations to configuration 2).

Figure 7: Vitrimer creep under various levels of applied stress: (a) stretch ratio (\(\lambda\)) vs. time, (b) void volume fraction, (c) the average number of reactions per loading step, and (d) number of reactions (N\({}_{\text{ran}}\)) at each time step shown with data points and a moving average shown with a line plot
While we observe these step-like increments in creep strain due to the nanoscopic nature of MD simulations (limited number of atoms), a more continuous response is expected at the macroscopic level, where multiple such step-like increments may happen at random locations at different times, resulting in a continuous macroscopic strain response. For a lower stress value, the stress is not enough to cause such drastic 'sudden' increases in creep strain, as the chains are unable to rapidly overcome inter-chain interactions (due to lower mobility).

### Influence of reaction probability

Interestingly, the evolution of strain in secondary creep in Figure 7(a) shows that the strain rate remains stable with an increase in applied stress from 150 to 300 MPa. In all these cases, the reaction probability is taken as one, indicating the complete conversion of dynamic bonds when the S\(-\)S atoms interact during the simulation. The strain rate is constant for these stress levels, indicating that the dynamic bond reaction rates are the controlling factor for differences in the strain rate in secondary creep. A recent experimental study by Hubbard et al. noted that at low temperatures and at low catalyst concentrations, vitrimers simply behave as a traditional epoxy material, and the secondary creep rates increase with the amount of catalyst[27]. This effect of the extent of reactions can be simulated by controlling the probability of the disulfide reactions in our model. Figure 8 shows the influence of the probability of the dynamic cross-linking reactions on the creep behavior of the vitrimer at an applied stress of 500 MPa. A small increase in primary creep with an increase in the reaction probability is observed. The primary creep strain is relatively unaffected by the reaction probability, as the initial strain happens rapidly enough that the number of dynamic reactions during this stage is low.
However, the reaction probability (as a proxy for the amount of catalyst and temperature) has a strong effect on the strain rate during secondary creep, as observed in experiments[27]. At a very low value of reaction probability (1%), the secondary creep behavior mirrors that of epoxy (Figure 8(a)). However, increasing the reaction probability to \(10\%\) causes a significant increase in the creep strain, as the finite number of reactions drives more compliant behavior. Reaction probability also has a significant effect on the void fraction evolution, as shown in Figure 8(b). Due to void healing in the presence of dynamic reactions, the general trend is that the void fraction is lower with an increase in reaction probability. In the extreme case of a reaction probability of 1.0, the tertiary creep behavior emerges, resulting in an increase in the void fraction at later times. Experimental data also show that the tertiary behavior at later times is specific to cases with high amounts of catalyst[27]. The inset of Figure 8(b) shows that the rapid increase in void fraction during primary creep is almost identical for all reaction probabilities initially due to the rapidity of loading. Beyond the peak in void fraction, all cases show a drop in the void fraction as chain mobility leads to a decrease in free volume. However, the general trend of lower void fraction with increased reaction probability is seen during secondary creep once sufficient time is available for the healing of voids via dynamic reactions. Figure 8(c) shows the trends in the creep strain versus time behavior under loading at a relatively lower stress level of 150 MPa. The primary creep regime is more gradual for the lower stress case for all probability cases, indicating a closer representation of the laboratory setting of creep experiments (even though the temperature is still elevated).
Figure 8(d) shows that the increase in the number of reactions per deformation step is consistent with the increase in reaction probability. In Figure 8(c), the trend of an increasing secondary creep rate with increasing reaction probability can be seen. However, the creep strains at 150 MPa in Figure 8(c) are significantly lower than those at 500 MPa in Figure 8(a), and the transition from primary creep to secondary creep is smoother due to lower chain mobility at lower applied stresses.

Figure 8: Vitrimer under different S\(-\)S reaction probabilities at 600 K: (a) stretch ratio vs. time at 500 MPa, (b) void evolution at 500 MPa, (c) stretch ratio vs. time for a lower stress level \(\sigma=150\) MPa, (d) average number of reactions per deformation step at 150 MPa

## 4 Conclusions

Vitrimers represent the next generation of epoxies, bringing forth the benefits of processability and recyclability. However, the added benefit of dynamic bonds that allow processability also leads to undesired creep behavior. This paper employed MD simulations to understand the nature of creep mechanisms in polymers with dynamic bonding as compared to cases where there is no dynamic bonding. A model disulfide bond exchange-based vitrimer system was investigated using all-atom large-scale molecular dynamics simulations. A novel approach was developed to simulate creep in vitrimers using topology-based bonding and a combination of NVE and NPT simulations to provide stable simulations in the presence of chemical changes. The following conclusions can be drawn from this study:

* Vitrimer without reactions (equivalent to epoxy) shows a primary creep response driven by chain rearrangement and a very slow secondary creep response due to chain mobility around free volume.
* Vitrimer with dynamic reactions shows either two or three creep stages depending upon the probability of chemical reactions (equivalent to catalyst concentration and/or temperature).
* In all cases, the first stage is primary creep, showing an initial increase in voids due to chain rearrangement as soon as the load is applied, followed by a rapid decrease and stabilization of void volume. This behavior is equivalent to epoxy, although the void volume is lower in vitrimers due to some healing.
* The second stage is secondary creep, where two processes act in concert: elongation due to chain rearrangement (as in primary creep), and sudden bursts of chemical reactions driving realignment of bonds orthogonal to the loading direction, thus decreasing the stiffness along the loading axis and increasing the creep strain. Very little void growth is seen, as the voids created during loading are balanced by healing via dynamic bonding.
* Tertiary creep is seen in high reaction probability and high applied stress cases. In addition to the secondary creep mechanisms of void healing, the growth of an isolated large void (or voids) occurs in this regime and cannot be healed anymore.

Since the slow secondary creep stage is the rate-determining step in the eventual failure of the material, methodologies to mitigate it should be prioritized in vitrimers. As observed in the simulations, the difference between the secondary creep phenomena in epoxies and vitrimers is the ability of dynamic bonding to reorient the bonds with respect to the loading direction. Thus, chemistry changes or additives that can prevent the initial realignment of dynamic bonds can be an effective strategy to mitigate creep in vitrimers. Indeed, a recent work shows that the addition of metal complexes that decrease chain-to-chain interactions can significantly reduce creep in vitrimers[61].
## Acknowledgement

This research was supported by a Seeding To Accelerate Research Themes (START) award titled 'Modeling and Characterizing Vitrimer and Vitrimer Composites for Structural Applications' at the University of Michigan. The authors would also like to acknowledge the computational resources and services provided by Advanced Research Computing at the University of Michigan, Ann Arbor.

## Supplementary Information

Additional data referred to in this paper are included in a supplementary file.

_Data Availability Statement:_ The data that support the findings of this study are available from the corresponding author upon reasonable request.

## References

* [1] Omid Zabihi, Mojtaba Ahmadi, Saeid Nikafshar, Karthik Chandrakumar Preyesswary, and Minoo Naebe. A technical review on epoxy-clay nanocomposites: Structure, properties, and their applications in fiber reinforced composites. _Composites Part B: Engineering_, 135:1-24, 2018.
* [2] Naheed Saba, Mohammad Jawaid, Othman Y Alothman, MT Paridah, and Azman Hassan. Recent advances in epoxy resin, natural fiber-reinforced epoxy composites and their applications. _Journal of Reinforced Plastics and Composites_, 35(6):447-470, 2016.
* [3] Maria L. Arias, Patricia M. Frontini, and Roberto J.J. Williams. Analysis of the damage zone around the crack tip for two rubber-modified epoxy matrices exhibiting different toughenability. _Polymer_, 44:1537-1546, 2003.
* [4] Gurmeet Singh, David Kumar, and PM Mohite. Damage modelling of epoxy material under uniaxial tension based on micromechanics and experimental analysis. _Archive of Applied Mechanics_, 87(4):721-736, 2017.
* [5] Pavan Kumar Penumakala, Jose Santo, and Alen Thomas. A critical review on the fused deposition modeling of thermoplastic polymer composites. _Composites Part B: Engineering_, 201:108336, 2020.
* [6] Dhaiwat N Trivedi and Nikunj V Rachchh. Graphene and its application in thermoplastic polymers as nano-filler - a review. _Polymer_, 240:124486, 2022.
* [7] Damien Montarnal, Mathieu Capelot, Francois Tournilhac, and Ludwik Leibler. Silica-like malleable materials from permanent organic networks. _Science_, 334(6058):965-968, 2011.
* [8] Mathieu Capelot, Miriam M Unterlass, Francois Tournilhac, and Ludwik Leibler. Catalytic control of the vitrimer glass transition. _ACS Macro Letters_, 1(7):789-792, 2012.
* [9] Jie Zheng, Zhuang Mao Png, Shi Hoe Ng, Guo Xiong Tham, Enyi Ye, Shermin S. Goh, Xian Jun Loh, and Zibiao Li. Vitrimers: Current research trends and their emerging applications. _Materials Today_, 51:586-625, 2021.
* [10] Pengfei Zhang and Guoqiang Li. Advances in healing-on-demand polymers and polymer composites. _Progress in Polymer Science_, 57:32-63, 2016.
* [11] Shafiqul Islam and Gajanan Bhat. Progress and challenges in self-healing composite materials. _Materials Advances_, 2(6):1896-1926, 2021.
* [12] Seppe Terryn, Jakob Langenbach, Ellen Roels, Joost Brancart, Camille Bakkali-Hassani, Quentin-Arthur Poutrel, Antonia Georgopoulou, Thomas George Thuruthel, Ali Safaei, Pasquale Ferrentino, et al. A review on self-healing polymers for soft robotics. _Materials Today_, 47:187-205, 2021.
* [13] Xiaoguang Li, Siwu Wu, Shuangjian Yu, Chong Xiao, Zhenghai Tang, and Baochun Guo. A facile one-pot route to elastomeric vitrimers with tunable mechanical performance and superior creep resistance. _Polymer_, 238:124379, 2022.
* [14] Amber M Hubbard, Yixin Ren, Dominik Konkolewicz, Alireza Sarvestani, Catalin R Picu, Gary S Kedziora, Ajit Roy, Vikas Varshney, and Dhriti Nepal. Vitrimer transition temperature identification: Coupling various thermomechanical methodologies. _ACS Applied Polymer Materials_, 3(4):1756-1766, 2021.
* [15] William N Findley and Francis A Davis. _Creep and relaxation of nonlinear viscoelastic materials_. Courier Corporation, 2013.
* [16] W Bradley, WJ Cantwell, and Hans Henning Kausch. Viscoelastic creep crack growth: a review of fracture mechanical analyses.
_Mechanics of Time-Dependent Materials_, 1:241-268, 1997.
* [17] Mario F Sa, Augusto M Gomes, Joao R Correia, and Nuno Silvestre. Creep behavior of pultruded GFRP elements - part 1: Literature review and experimental study. _Composite Structures_, 93(10):2450-2459, 2011.
* [18] Hal F Brinson, L Catherine Brinson, et al. _Polymer engineering science and viscoelasticity: An introduction_. Springer, 2008.
* [19] SW Bradley, PM Puckett, WL Bradley, and HJ Sue. Viscoelastic creep characteristics of neat thermosets and thermosets reinforced with e-glass. _Journal of Composites, Technology and Research_, 20(1):51-58, 1998.
* [20] Robert A. Riggleman, Kenneth S. Schweizer, and Juan J. De Pablo. Nonlinear creep in a polymer glass. _Macromolecules_, 41:4969-4977, 2008.
* [21] Wei Jian and Denvid Lau. Creep performance of CNT-based nanocomposites: A parametric study. _Carbon_, 153:745-756, 2019.
* [22] Zhicheng Chang, Yafei Wang, Zhiyu Zhang, Ke Gao, Guanyi Hou, Jianxiang Shen, Liqun Zhang, and Jun Liu. Creep behavior of polymer nanocomposites: Insights from molecular dynamics simulation. _Polymer_, 228:123895, 2021.
* [23] A Plaseied and A Fatemi. Tensile creep and deformation modeling of vinyl ester polymer and its nanocomposite. _Journal of Reinforced Plastics and Composites_, 28(14):1775-1788, 2009.
* [24] Hau-Nan Lee, Keewook Paeng, Stephen F Swallen, and MD Ediger. Direct measurement of molecular mobility in actively deformed polymer glasses. _Science_, 323(5911):231-234, 2009.
* [25] Yaguang Sun, Hua Yang, Wenjie Xia, and Yafang Guo. Molecular dynamics simulations of surface welding in crosslinked networks with thermally reversible linkages. _Applied Surface Science_, 527:146947, 2020.
* [26] Chanwook Park, Geonwoo Kim, Jiwon Jung, Balaji Krishnakumar, Sravendra Rana, and Gun Jin Yun. Enhanced self-healing performance of graphene oxide/vitrimer nanocomposites: A molecular dynamics simulations study. _Polymer_, 206:122862, 2020.
* [27] Amber M Hubbard, Yixin Ren, Catalin R Picu, Alireza Sarvestani, Dominik Konkolewicz, Ajit K Roy, Vikas Varshney, and Dhriti Nepal. Creep mechanics of epoxy vitrimer materials. _ACS Applied Polymer Materials_, 2022.
* [28] Wenhao Liu, Daniel F Schmidt, and Emmanuelle Reynaud. Catalyst selection, creep, and stress relaxation in high-performance epoxy vitrimers. _Industrial & Engineering Chemistry Research_, 56(10):2667-2672, 2017.
* [29] Shuyin Jiao and Yashashree Kulkarni. Molecular dynamics study of creep mechanisms in nanotwinned metals. _Computational Materials Science_, 110:254-260, 2015.
* [30] Gurmeet Singh, Anthony M Waas, and Veera Sundararaghavan. Understanding defect structures in nanoscale metal additive manufacturing via molecular dynamics. _Computational Materials Science_, 200:110807, 2021.
* 2452, 2011.
* [32] Nicholas Fasanella and Veera Sundararaghavan. Atomistic modeling of thermomechanical properties of SWNT/epoxy nanocomposites. _Modelling and Simulation in Materials Science and Engineering_, 23(6):065003, 2015.
* [33] A Kumar, V Sundararaghavan, and AR Browning. Study of temperature dependence of thermal conductivity in cross-linked epoxies using molecular dynamics simulations with long range interactions. _Modelling and Simulation in Materials Science and Engineering_, 22(2):025013, 2014.
* 3385, 2009.
* [35] Gurmeet Singh and Veera Sundararaghavan. Modeling self-healing behavior of vitrimers using molecular dynamics with dynamic cross-linking capability. _Chemical Physics Letters_, 760:137966, 2020.
* [36] P. Keblinski, D. Wolf, and H. Gleiter. Molecular-dynamics simulation of grain-boundary diffusion creep. _Interface Science_, 6(3):205-212, 1998.
* [37] V. Yamakov, D. Wolf, S. R. Phillpot, and H. Gleiter. Grain-boundary diffusion creep in nanocrystalline palladium by molecular-dynamics simulation. _Acta Materialia_, 50:61-73, 2002.
* [38] Ricardo Simoes, Antonio M Cunha, and Witold Brostow.
Molecular dynamics simulations of polymer viscoelasticity: effect of the loading conditions and creep behaviour. _Modelling and Simulation in Materials Science and Engineering_, 14(2):157, 2006.
* [39] Robert A Riggleman, Hau-Nan Lee, Mark D Ediger, and Juan J De Pablo. Free volume and finite-size effects in a polymer glass under stress. _Physical Review Letters_, 99(21):215501, 2007.
* [40] Iwan H Sahputra and Andreas T Echtermeyer. Creep-fatigue relationship in polymer: Molecular dynamics simulations approach. _Macromolecular Theory and Simulations_, 24(1):65-73, 2015.
* [41] A. L. Bowman, S. Mun, S. Nouranian, B. D. Huddleston, S. R. Gwaltney, M. I. Baskes, and M. F. Horstemeyer. Free volume and internal structural evolution during creep in model amorphous polyethylene by molecular dynamics simulations. _Polymer_, 170:85-100, 2019.
* [42] Xueliang Li, Xiaoyu Zhang, Jianzhong Chen, Li Huang, and Yong Lv. Uniaxial tensile creep behavior of epoxy-based polymer using molecular simulation. _Polymers_, 13(2):261, 2021.
* [43] Simone Ciarella, Francesco Sciortino, and Wouter G Ellenbroek. Dynamics of vitrimers: Defects as a highway to stress relaxation. _Physical Review Letters_, 121(5):058003, 2018.
* [44] Bernardo Oyarzun and Bortolo Matteo Mognetti. Efficient sampling of reversible cross-linking polymers: Self-assembly of single-chain polymeric nanoparticles. _Journal of Chemical Physics_, 148(11), 2018.
* [45] Jian-Bo Wu, Shu-Jia Li, Hong Liu, Hu-Jun Qian, and Zhong-Yuan Lu. Dynamics and reaction kinetics of coarse-grained bulk vitrimers: a molecular dynamics study. _Physical Chemistry Chemical Physics_, 21(24):13258-13267, 2019.
* [46] Frank Smallenburg, Ludwik Leibler, and Francesco Sciortino. Patchy particle model for vitrimers. _Physical Review Letters_, 111(18), 2013.
* [47] Francesco Sciortino. Three-body potential for simulating bond swaps in molecular dynamics. _European Physical Journal E_, 40(1), 2017.
* [48] Hua Yang, Kai Yu, Xiaoming Mu, Yujie Wei, Yafang Guo, and H. Jerry Qi. Molecular dynamics studying on welding behavior in thermosetting polymers due to bond exchange reactions. _RSC Advances_, 6(27):22476-22487, 2016.
* [49] Jacob R Gissinger, Benjamin D Jensen, and Kristopher E Wise. Modeling chemical reactions in classical molecular dynamics simulations. _Polymer_, 128:211-217, 2017.
* [50] Wim Denissen, Johan M Winne, and Filip E Du Prez. Vitrimers: permanent organic networks with glass-like fluidity. _Chemical Science_, 7(1):30-38, 2016.
* [51] Steve Plimpton. Fast parallel algorithms for short-range molecular dynamics. _Journal of Computational Physics_, 117(1):1-19, 1995.
* [52] Aidan P Thompson, H Metin Aktulga, Richard Berger, Dan S Bolintineanu, W Michael Brown, Paul S Crozier, Pieter J in 't Veld, Axel Kohlmeyer, Stan G Moore, Trung Dac Nguyen, et al. LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. _Computer Physics Communications_, 271:108171, 2022.
* [53] Pnina Dauber-Osguthorpe, Victoria A Roberts, David J Osguthorpe, Jon Wolff, Monique Genest, and Arnold T Hagler. Structure and energetics of ligand binding to proteins: Escherichia coli dihydrofolate reductase-trimethoprim, a drug-receptor system. _Proteins: Structure, Function, and Bioinformatics_, 4(1):31-47, 1988.
* [54] Jacob R Gissinger, Benjamin D Jensen, and Kristopher E Wise. REACTER: a heuristic method for reactive molecular dynamics. _Macromolecules_, 53(22):9953-9961, 2020.
* [55] Vikas Varshney, Soumya S Patnaik, Ajit K Roy, and Barry L Farmer. A molecular dynamics study of epoxy-based networks: cross-linking procedure and prediction of molecular and material properties. _Macromolecules_, 41(18):6837-6842, 2008.
* [56] Mohammad Atif Faiz Afzal, Andrea R. Browning, Alexander Goldberg, Mathew D. Halls, Jacob L. Gavartin, Tsuguo Morisato, Thomas F. Hughes, David J. Giesen, and Joseph E. Goose.
High-throughput molecular dynamics simulations and validation of thermophysical properties of polymers for various applications. _ACS Applied Polymer Materials_, 3:630, 2021.
* [57] Xueliang Li, Xiaoyu Zhang, Jianzhong Chen, Li Huang, and Yong Lv. The mechanical properties and creep behavior of epoxy polymer under the marine environment: A molecular dynamics investigation. _Materials Today Communications_, 28:102737, 2021.
* [58] Lik-ho Tam, Jinqiao Jiang, Zechuan Yu, John Orr, and Chao Wu. Molecular dynamics investigation on the interfacial shear creep between carbon fiber and epoxy matrix. _Applied Surface Science_, 537:148013, 2021.
* [59] Herbert Edelsbrunner and Ernst P Mucke. Three-dimensional alpha shapes. _ACM Transactions on Graphics (TOG)_, 13(1):43-72, 1994.
* [60] Alexander Stukowski. Computational analysis methods in atomistic modeling of crystals. _JOM_, 66(3):399-407, 2014.
* [61] Sheng Wang, Songqi Ma, Qiong Li, Xiwei Xu, Binbo Wang, Kaifeng Huang, Yanlin Liu, and Jin Zhu. Facile preparation of polyimine vitrimers with enhanced creep resistance and thermal and mechanical properties via metal coordination. _Macromolecules_, 53(8):2919-2931, 2020.

Supplementary Information: Understanding Creep in Vitrimers: Insights from Molecular Dynamics Simulations

**Gurmeet Singh, Veera Sundararaghavan1**

Footnote 1: Corresponding author: Prof. Sundararaghavan, Email: [email protected], Tel: 734-615-7242

**Vikas Varshney**

Materials and Manufacturing Directorate, Air Force Research Laboratory, Wright-Patterson Air Force Base, OH, USA. [email protected]

#### Preparation of polymer system for MD simulations

Initially, the individual structures of 4-aminophenyl disulfide (AFD) and diglycidyl ether of bisphenol A (DGEBA) are created in Materials Studio (shown in Figure S.1). Note that we have created the DGEBA structure with the epoxide ring open to allow a simpler one-step curing reaction.
The epoxy bond in the DGEBA unit is considered to be open as this reduces the steps in the curing amine reactions. The templates for both primary and secondary amines are created and supplied to the _fix bond/react_ command in LAMMPS to enable the curing reactions. Our previous work [3] has detailed information about the curing pre- and post-reaction templates and about the curing process, followed by annealing of the obtained structure.

Figure S.1: (a) Monomer structures (left) and the repeat unit box that contains DGEBA:AFD in 2:1 ratio

## Simulating creep in vitrimers

The dynamic bond exchange reaction (DBER) of this vitrimer system involves the rearrangement of the chains due to the disulfide bond, which is simulated by topology change-based reactions[1, 2]. The pre- and post-reaction templates for this DBER are shown in Figure S.2. The reaction is initiated based on the cutoff distance between the initiator atoms (either one of the green or yellow sulfur atoms). To simulate creep, a stress-controlled loading must be prescribed, and as a result, there can be sudden large dimension changes in the simulation box. The presence of dynamic bond exchange reactions (local topology changes) along with the sudden large change in the box dimensions leads to missing image flags of bonds and atoms across the periodic boundary. Therefore, we use a step-wise alternating loading with NPT (constant number of particles, pressure tensor, and temperature) and reactions under NVE (constant number of particles, volume, and energy) conditions. Figure S.3 shows the effect of the addition of the NVE step along with NPT for an 'epoxy' (vitrimer system with no reactions). This shows that the addition of an NVE step does not introduce any alteration to the chain mobility.

Figure S.2: Reaction template for S-S bond exchange reaction of the vitrimer with pre (on left) and post (on right) reaction templates (each contains 44 atoms).
One green and one yellow colored sulfur are the initiator or bonding atoms, and the C atoms shown here denote the edge atoms in the templates.

Figure S.3: Comparison of running NPT only and NPT+NVE for the case of vitrimer without reaction
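The alternating scheme described above can be summarized with a small scheduling sketch (purely illustrative; the function name and phase lengths are placeholders, and the actual runs issue LAMMPS commands rather than Python tuples):

```python
def creep_cycle_plan(n_cycles, npt_steps, nve_steps):
    # Build the phase schedule used for creep loading: each cycle first runs
    # stress-controlled loading under NPT, then attempts the topology-changing
    # bond-exchange reactions under NVE, so that large box-dimension changes
    # and bond updates never occur within the same integration phase.
    plan = []
    for _ in range(n_cycles):
        plan.append(("NPT", npt_steps))   # apply stress; box dimensions relax
        plan.append(("NVE", nve_steps))   # attempt S-S exchange reactions
    return plan
```

Separating the two phases this way is what keeps image flags of bonds and atoms consistent across the periodic boundary while reactions rewire the topology.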
2307.04356
InfLoR-SNN: Reducing Information Loss for Spiking Neural Networks
The Spiking Neural Network (SNN) has attracted more and more attention recently. It adopts binary spike signals to transmit information. Benefitting from the information passing paradigm of SNNs, the multiplications of activations and weights can be replaced by additions, which are more energy-efficient. However, its "Hard Reset" mechanism for the firing activity would ignore the difference among membrane potentials when the membrane potential is above the firing threshold, causing information loss. Meanwhile, quantifying the membrane potential to 0/1 spikes at the firing instants will inevitably introduce the quantization error thus bringing about information loss too. To address these problems, we propose to use the "Soft Reset" mechanism for the supervised training-based SNNs, which will drive the membrane potential to a dynamic reset potential according to its magnitude, and Membrane Potential Rectifier (MPR) to reduce the quantization error via redistributing the membrane potential to a range close to the spikes. Results show that the SNNs with the "Soft Reset" mechanism and MPR outperform their vanilla counterparts on both static and dynamic datasets.
Yufei Guo, Yuanpei Chen, Liwen Zhang, Xiaode Liu, Xinyi Tong, Yuanyuan Ou, Xuhui Huang, Zhe Ma
2023-07-10T05:49:20Z
http://arxiv.org/abs/2307.04356v2
# InfLoR-SNN: Reducing Information Loss for Spiking Neural Networks

###### Abstract

The Spiking Neural Network (SNN) has attracted more and more attention recently. It adopts binary spike signals to transmit information. Benefiting from the information passing paradigm of SNNs, the multiplications of activations and weights can be replaced by additions, which are more energy-efficient. However, its "Hard Reset" mechanism for the firing activity would ignore the difference among membrane potentials when the membrane potential is above the firing threshold, causing information loss. Meanwhile, quantifying the membrane potential to 0/1 spikes at the firing instants will inevitably introduce quantization error, thus bringing about information loss too. To address these problems, we propose to use the "Soft Reset" mechanism for supervised training-based SNNs, which will drive the membrane potential to a dynamic reset potential according to its magnitude, and the Membrane Potential Rectifier (MPR) to reduce the quantization error via redistributing the membrane potential to a range close to the spikes. Results show that SNNs with the "Soft Reset" mechanism and MPR outperform their vanilla counterparts on both static and dynamic datasets.

Keywords: Spiking Neural Network; Information Loss; Soft Reset; Quantization Error; Membrane Potential Rectifier.

## 1 Introduction

Deep Neural Networks (DNNs) have greatly improved many applications in computer vision, _e.g._, object detection and recognition [20], object segmentation [44], object tracking [2], etc. In pursuit of models with better performance, more and more complex networks are proposed. However, the increasing complexity poses a new challenge to model deployment on power-constrained devices, thus becoming an impediment to the applications of these advanced complex models.
There have been several approaches to address this problem, such as quantization [12, 31, 32], pruning [21], knowledge distillation [41], spiking neural networks (SNNs) [11, 47, 30, 33, 16, 14, 17, 13, 15], and so on. Among these approaches, the biology-inspired method, SNNs, provides a unique way to reduce energy consumption by mimicking the spiking nature of brain neurons. A spiking neuron integrates the inputs over time and fires a spike output whenever the membrane potential exceeds the firing threshold. Using 0/1 spikes to transmit information makes SNNs enjoy the advantage of multiplication-free inference by converting multiplications into additions. Furthermore, SNNs are energy-efficient on neuromorphic hardware, such as SpiNNaker [22], TrueNorth [1], Darwin [36], Tianjic [40], and Loihi [5]. Despite these attractive benefits, there is still a huge performance gap between existing SNN models and their DNN counterparts. We argue that the reason for the low accuracy is that there exists information loss in SNNs. First, the information processing of neurons in supervised training-based SNNs generally follows the rules of the Integrate-and-Fire (IF) model or Leaky IF (LIF) model, where once a membrane potential exceeds the firing threshold, a "Hard Reset" operation forces the "residual" potential to be set to 0, _i.e._, once fired, all the information is taken away. Obviously, this reset mode, which ignores the "residual" membrane potential, fails to preserve the diversity of the membrane potentials. Hence the information encoding capacity of the network is compromised, and the risk of information loss increases accordingly. Second, although the 0/1 spike information processing paradigm enables SNNs to enjoy the advantage of high efficiency, quantifying the real-valued membrane potential to 0/1 spikes will inevitably introduce quantization error, which also brings about information loss.
To address the information loss problem, we propose a "Soft Reset"-based IF (SRIF) neuron model that retains the "residual" membrane potential by subtracting the spike value at the firing instants. Hence the diversity of the membrane potentials that exceed the firing threshold will be preserved. Though "Soft Reset" is commonly used in ANN-to-SNN conversion (ANN2SNN) methods [19, 18, 30, 24], it is rarely applied in supervised SNNs [27] and has not been discussed for SNN enhancement from the perspective of reducing information loss. In addition, to alleviate the quantization error, the Membrane Potential Rectifier (MPR) is proposed, which is performed before the firing activity to adjust the membrane potentials towards the spike values (_i.e._, 0/1). With MPR, the membrane potential will be decoupled into an original one and a modulated one. The original one keeps the mechanism of a neuron, and the modulated one enjoys less quantization error than the original one without suffering from any negative effects. The difference between our neuron and the vanilla neuron is illustrated in Fig. 1.

Figure 1: The difference between our "Soft Reset"-based neuron and the vanilla "Hard Reset"-based neuron. The membrane potential will be redistributed to reduce the quantization error in our neuron with MPR, while not in the vanilla neuron.

Our main contributions are as follows:

* We propose using the SRIF model for supervised training-based SNNs. By retaining the "residual" membrane potential, SRIF enables the networks to distinguish the differences among those membrane potentials that exceed the firing threshold via subtracting their spike values, thus enhancing the information encoding capacity of supervised training-based SNNs.
* We present MPR to mitigate the quantization error.
By utilizing a non-linear function to modulate the membrane potential close to 0/1 before the firing activity triggers, the gap between the potential and its corresponding 0/1 spike value is minified while maintaining the sparse spike activation mechanism of SNNs. To our best knowledge, few works have noticed the quantization error in SNNs, and a simple but effective method for addressing this problem is presented.
* Extensive experiments on both static and dynamic datasets were conducted to verify our method. Results show that the SNN trained with the proposed method is highly effective and efficient compared with other state-of-the-art SNN models, _e.g._, 96.49% top-1 accuracy and 79.41% top-1 accuracy are achieved on CIFAR-10 and CIFAR-100, respectively. These results even surprisingly outperform their DNN counterparts, and it is very rare that SNNs have a chance to surpass their DNN counterparts.

## 2 Related Work

### Learning Methods of Spiking Neural Networks

The training methods of SNNs can be divided into two categories. The first one is ANN2SNN [19, 18, 30, 24]. ANN2SNN yields the same input-output mapping for the ANN-SNN pair by approximating the continuous activation values of a ReLU-based ANN with the average firing rate of an SNN under the rate-coding scheme. Since ANNs have achieved great success in many fields, ANN2SNN can maintain the smallest gap with ANNs in terms of performance and can be generalized to large-scale structures. However, being restricted to rate-coding, ANN2SNN usually requires dozens or even hundreds of timesteps to obtain well-performed networks. Although much effort has been devoted to reducing the long inference time, such as weight normalization [9], threshold rescaling [45], soft reset [19], threshold shift [30], and the quantization clip-floor-shift activation function [3], it is still hard to obtain high-performance SNNs with ultra-low latency. The second one is supervised learning-based SNNs.
SNNs quantize the real-valued membrane potentials into 0/1 spikes via the firing activity. Since the gradient of the firing activity function is zero almost everywhere, gradient descent-based optimizers cannot be directly used for the training of SNNs. To alleviate the optimization difficulty, the approximate gradient-based strategy is commonly used, and some related approaches have been proposed to achieve trainable SNNs with high performance. For example, by regarding the SNN as a special RNN, a training method of back-propagation through time with different kinds of surrogate gradients was proposed [37]. The spatio-temporal back-propagation (STBP) [46] method enables SNNs to be trained on ANN programming platforms, which also significantly promotes the direct training research of SNNs. The Differentiable Spike method, which can match the finite difference gradient of SNNs well, was proposed in [33]. The temporal efficient training (TET) [7] method, with a novel loss and a gradient descent regime that succeeds in obtaining more generalized SNNs, has also attracted much attention. In RecDis-SNN [16], a new perspective to understand the difficulty of training SNNs by analyzing undesired membrane potential shifts is presented, and the MPD-Loss to penalize the undesired shifts is proposed. Numerous works verify that supervised learning can greatly reduce the number of timesteps and handle dynamic datasets. It has increasingly aroused researchers' interest in recent years. In this work, we focus on improving the performance of supervised learning-based SNNs by repressing information loss, which is rarely mentioned in other works.

### Threshold-dependent Batch Normalization

Batch Normalization (BN) is one of the most widely used normalization technologies, which was initially designed for very deep Convolutional Neural Networks (CNNs).
As it only focuses on normalizing the spatial feature maps, directly applying BN to SNNs would damage the temporal characteristic of SNNs, which operate on spatio-temporal feature maps, leading to low accuracy. To address this issue, some specially-designed normalization methods for SNNs were proposed recently. Typically, to simultaneously balance neural selectivity and normalize the neuron activity, NeuNorm [46] was proposed. Then, a more effective normalization technique that can take good care of the firing threshold, named threshold-dependent Batch Normalization (tdBN), was further proposed in [49]. It can normalize the feature maps of SNNs in both spatial and temporal domains [49]. Specifically, let \(\mathbf{X}_{t}\in\mathbb{R}^{B\times C\times H\times W}\) represent the input maps at each timestep, where \(t=1,\ldots,T\) (\(B\): batch size; \(C\): channel; \((H,W)\): spatial domain). Then for each channel \(c\), the spatio-temporal sequence \(\mathbf{X}^{(c)}=\{\mathbf{X}_{1}^{(c)},\cdots,\mathbf{X}_{T}^{(c)}\}\) is normalized by tdBN as follows,

\[\tilde{\mathbf{X}}^{(c)}=\lambda\cdot\frac{\alpha V_{th}(\mathbf{X}^{(c)}-\bar{x}^{(c)})}{\sqrt{\text{mean}((\mathbf{X}^{(c)}-\bar{x}^{(c)})^{2})+\epsilon}}+\beta, \tag{1}\]

where \(V_{th}\) is the firing threshold, \(\alpha\) is a network-structure-dependent hyperparameter, \(\epsilon\) is a tiny constant, \(\lambda\) and \(\beta\) are two learnable parameters, \(\bar{x}^{(c)}\) is the mean value of \(\mathbf{X}^{(c)}\), and \(\tilde{\mathbf{X}}^{(c)}\) is the normalized maps. In this paper, tdBN is also adopted considering its spatio-temporal normalization mechanism.

## 3 Preliminary and Methodology

To avoid the information loss in supervised training-based SNNs, we propose the "Soft Reset" IF (SRIF) model and the Membrane Potential Rectifier (MPR).
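Before moving on, the tdBN normalization of Eq. (1) above can be sketched in NumPy (a minimal reading of the equation, without the learnable-parameter training machinery; the `(T, B, C, H, W)` tensor layout and the default values are assumptions of this sketch):

```python
import numpy as np

def tdbn(x, v_th=0.5, alpha=1.0, lam=1.0, beta=0.0, eps=1e-5):
    # x: spatio-temporal maps stacked as (T, B, C, H, W).
    # Each channel c is normalized jointly over all timesteps, batch entries,
    # and spatial positions, then rescaled to alpha * V_th, as in Eq. (1).
    out = np.empty_like(x)
    for c in range(x.shape[2]):
        xc = x[:, :, c]                      # the sequence {X_1^(c), ..., X_T^(c)}
        mean = xc.mean()                     # \bar{x}^(c)
        var = ((xc - mean) ** 2).mean()      # mean((X^(c) - \bar{x}^(c))^2)
        out[:, :, c] = lam * alpha * v_th * (xc - mean) / np.sqrt(var + eps) + beta
    return out
```

With \(\lambda=1\) and \(\beta=0\), each normalized channel has zero mean and a standard deviation close to \(\alpha V_{th}\), so pre-activations are scaled commensurately with the firing threshold.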
### "Soft Reset" IF Model An SNN adopts a biology-inspired spiking neuron that accumulates inputs along the time dimension as its membrane potential and fires a spike when the potential exceeds the firing threshold. This mechanism makes it much different from its DNN counterpart. To better introduce the proposed SRIF neuron, we use the unified form defined in a recent work [11] to describe the dynamics of all kinds of spiking neurons as follows, \[H[t]=f(U[t-1],X[t]), \tag{2}\] \[O[t]=\Theta(H[t]-V_{th}), \tag{3}\] \[U[t]=H[t](1-O[t])+V_{reset}O[t], \tag{4}\] where \(X[t]\), \(H[t]\), \(U[t]\), and \(O[t]\) are the input, the membrane potentials before and after the trigger of a spike, and the output spike at timestep \(t\), respectively. \(V_{th}\) is the firing threshold and is usually set to 0.5. \(\Theta(\cdot)\) is the step function defined by \(\Theta(x)=1\) for \(x\geq 0\) and \(\Theta(x)=0\) for \(x<0\). \(V_{reset}\) denotes the reset potential, which is set as 0. The function \(f(\cdot)\) describes the neuronal dynamics of spiking neuron models; for the commonly used IF neuron and LIF neuron, \(f(\cdot)\) can be respectively defined as follows, \[H[t]=U[t-1]+X[t], \tag{5}\] \[H[t]=\tau U[t-1]+X[t], \tag{6}\] where \(\tau\) denotes the membrane time constant. Both LIF and IF neurons have their own advantages: with the decay characteristic introduced by the membrane time constant, the LIF neuron behaves more biologically than the IF neuron, while the IF neuron is more efficient due to its addition-only processing manner. In terms of accuracy, neither of them shows an overwhelming advantage; more detailed experimental results for these two neurons are provided in Section 4. Considering the subtle gap in performance, from the perspective of brain science research we would prefer the LIF model for its neurodynamic characteristics.
Conversely, from the perspective of computer science research, we recommend the IF model, since it is more hardware-friendly. However, both the IF model and the LIF model incur a greater or lesser risk of information loss through the "Hard Reset" mechanism, _i.e._, when the membrane potential exceeds the firing threshold, the neuron forces the membrane potential back to a fixed value. Such a mechanism ignores the "residual" parts of the fired membrane potentials. These "residual" parts carry the diversity of the input potentials, and we argue that a neuron model which can preserve the diversity, or the differences, of the membrane potentials that cause the firing is more suitable. To this end, along with the consideration of efficiency, we propose using a "Soft Reset" mechanism-based IF neuron, SRIF, which keeps the diversity of the membrane potentials by subtracting the firing spike values from them at the timesteps where the threshold is exceeded. Though a similar "Soft Reset" mechanism has been widely used in ANN2SNN [19, 18, 30, 24], few works adopt it in supervised learning-based SNNs [27]; we found its value in this field from a new perspective, namely reducing information loss. In the SRIF neuron, Eq. (4) is updated as \[U[t]=H[t](1-O[t])+(H[t]-O[t])O[t]. \tag{7}\] It can be further simplified as \[U[t]=H[t]-O[t]. \tag{8}\] It can be seen that, similar to the IF neuron, SRIF is also an addition-only model, thus enjoying computational efficiency when implemented on hardware. Fig. 2 compares the IF neuron and the SRIF neuron in an intuitive way. Suppose that both models receive a weighted input sequence of \(1.5V_{th}\), \(1.2V_{th}\), \(1.5V_{th}\), \(0.9V_{th}\), and \(1.4V_{th}\) across 5 consecutive timesteps. As depicted in Fig. 2, our SRIF neuron will produce three spikes by retaining the residual potentials at the firing instants, whereas the IF neuron will produce four spikes.
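The five-timestep example above is easy to replay in code. The sketch below (ours, with \(V_{th}=0.5\) and \(V_{reset}=0\) as used later in Section 4) contrasts the two reset rules of Eqs. (4) and (8):

```python
def simulate(inputs, v_th=0.5, soft_reset=True):
    """Run one neuron over the timesteps and return its 0/1 spike train."""
    u, spikes = 0.0, []
    for x in inputs:
        h = u + x                       # Eq. (5): charge
        o = 1 if h >= v_th else 0       # Eq. (3): fire
        if soft_reset:
            u = h - o                   # Eq. (8): keep the residual potential
        else:
            u = 0.0 if o else h         # Eq. (4): "Hard Reset", V_reset = 0
        spikes.append(o)
    return spikes

v_th = 0.5
x = [1.5 * v_th, 1.2 * v_th, 1.5 * v_th, 0.9 * v_th, 1.4 * v_th]
print(simulate(x, soft_reset=False))    # IF:   [1, 1, 1, 0, 1] -- four spikes
print(simulate(x, soft_reset=True))     # SRIF: [1, 0, 1, 1, 0] -- three spikes
```

Retaining the residuals changes the subsequent firing pattern; that difference is precisely the input diversity which the "Hard Reset" discards.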
### Membrane Potential Rectificater To further mitigate the information loss, we present a non-linear function, called MPR, that reduces the quantization error. MPR redistributes the membrane potential before it is passed to the step function. It only modulates the copy of the membrane potential that is presented to the step function; it does not modify the membrane potential itself, which receives and accumulates spikes from other neurons. Specifically, we distinguish between the original membrane potential, \(H\), as in Eq. (2), and the modulated one, \(\hat{H}\), which is the membrane potential that will be presented to the step function. In all previous works, \(H\) and \(\hat{H}\) are treated as the same. In this paper, by contrast, we provide a new perspective: using a decoupling function to separate \(H\) and \(\hat{H}\) can be helpful. Specifically, \(H\) follows the original update rule as in other works, while \(\hat{H}\) is derived from \(H\) through a non-linear function, \(\varphi(\cdot)\), and is fed into the step function in a modulated form that shrinks the quantization error. With this decoupling mechanism, a neuron model can not only keep the membrane potential updating rule but also enjoy less quantization error. Before giving the full details of the MPR, we formulate the quantization error first. Clearly, the quantization errors corresponding to different membrane potentials are different: a value closer to its quantization spike, \(o\), enjoys less quantization error. Specifically, the firing threshold divides the membrane potentials into two parts: the part with smaller values is assigned to the "0" spike, and the part with larger values is assigned to the "1" spike. Then the quantization error depends on the margin between the membrane potential and its corresponding spike.
Therefore, the quantization error can be defined as the square of the difference between the membrane potential and its corresponding quantization spike value as follows: \[\mathcal{L}_{q}=(u-o)^{2}, \tag{9}\] where \(u\) is the membrane potential and \(o\in\{0,1\}\): when \(u\) is below the firing threshold, \(o\) is 0; otherwise, 1. Hence, the design of MPR should obey the following two principles:

* **Spike-approaching**: the modulated membrane potential, \(\hat{H}\), should be closer to the 0/1 spikes than the original membrane potential, \(H\). This principle ensures quantization error reduction.
* **Firing-invariance**: for \(H\) less than \(V_{th}\), the MPR should not produce an \(\hat{H}\) greater than \(V_{th}\), and vice versa. This principle ensures that the neuron output is consistent with or without using MPR.

Figure 2: The difference between the “Hard Reset” IF neuron and the “Soft Reset” IF (SRIF) neuron.

Based on the above two principles, we define the MPR as the following symmetrical function: \[\varphi(u)=\left\{\begin{array}{ll}-(1-u)^{1/3}+1,&u{<}0,\\ \frac{1}{2tanh(3/2)}tanh(3(u-1/2))+1/2,&0\leq u\leq 1,\\ (u)^{1/3},&u{>}1.\end{array}\right. \tag{10}\] Fig. 3 shows the response curve of the designed MPR function following the principles of spike-approaching and firing-invariance. According to [49], the membrane potential follows a Gaussian distribution, \(\mathcal{N}(\mu;\sigma)\). Hence, to visualize the effect of the MPR, we sample 100,000 values from the Gaussian distribution \(\mathcal{N}(1/2;1)\) and present them to the MPR. The distribution of these 100,000 MPR outputs is drawn in Fig. 4. It can be seen that the unimodal distribution \(\mathcal{N}(1/2;1)\) is adjusted to a bimodal distribution with less quantization error, since it naturally gathers the membrane potentials near "0" and "1".
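A direct implementation of Eq. (10) (our sketch, written for \(V_{th}=0.5\)) makes both design principles checkable:

```python
import math

def mpr(u):
    """The MPR function of Eq. (10)."""
    if u < 0:
        return -((1 - u) ** (1 / 3)) + 1
    if u <= 1:
        return math.tanh(3 * (u - 0.5)) / (2 * math.tanh(1.5)) + 0.5
    return u ** (1 / 3)

# Firing-invariance: phi crosses V_th = 0.5 exactly where u does.
assert mpr(0.5) == 0.5
assert all(mpr(u) < 0.5 for u in (-0.8, 0.1, 0.49))
assert all(mpr(u) > 0.5 for u in (0.51, 0.9, 2.0))

# Spike-approaching: the quantization error (u - o)^2 of Eq. (9) shrinks.
for u, o in ((0.2, 0), (0.8, 1), (-0.4, 0), (1.6, 1)):
    assert (mpr(u) - o) ** 2 <= (u - o) ** 2
```

The three branches join continuously at \(u=0\) and \(u=1\), and the middle branch passes exactly through \((1/2,1/2)\), which is what guarantees firing-invariance.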
Moreover, it is worth noting that the redistributed membrane potential, \(\hat{H}\), produced by MPR is only used for narrowing the gap between the true membrane potential, \(H\), and its quantization spike; it will not replace the original \(H\) in our SRIF neuron model.

Figure 3: The MPR function.

Figure 4: The effect of the MPR. The original membrane potential distribution (left). The redistributed membrane potential distribution by MPR (right).

Then the complete new dynamics of the SRIF model can be described as follows, \[H[t]=U[t-1]+X[t], \tag{11}\] \[\hat{H}[t]=\varphi(H[t]), \tag{12}\] \[O[t]=\Theta(\hat{H}[t]-V_{th}), \tag{13}\] \[U[t]=H[t]-O[t]. \tag{14}\] The detailed feed-forward procedure for the SRIF neuron with MPR is given in Algo. 1. ## 4 Experiment The proposed methods were evaluated on various static datasets (CIFAR-10 [25], CIFAR-100 [25], ImageNet [6]) and one neuromorphic dataset (CIFAR10-DVS [29]) with widely used spiking architectures including ResNet20 [42, 45], VGG16 [42], ResNet18 [10], ResNet19 [49], and ResNet34 [10]. ### Datasets and Settings **Datasets.** The CIFAR-10(100) dataset consists of 60,000 images in 10(100) classes with \(32\times 32\) pixels. The number of training images is 50,000, and that of test images is 10,000. The CIFAR10-DVS dataset is the neuromorphic version of the CIFAR-10 dataset. It is composed of 10,000 images in 10 classes, with 1000 images per class. The ImageNet dataset has more than 1,250,000 training images and 50,000 test images. **Preprocessing.** Data normalization is applied on all static datasets to ensure that input images have zero mean and unit variance. Besides, random horizontal flipping and cropping were conducted on these datasets to avoid overfitting. For CIFAR-10, AutoAugment [4] and Cutout [8] were used for data augmentation.
For the neuromorphic dataset, since the CIFAR10-DVS dataset does not separate data into training and testing sets, we split it into 9000 training images and 1000 test images, similarly to [47]. For data preprocessing and augmentation, we resized the training image frames to \(48\times 48\) as in [49] and adopted random horizontal flipping and random roll within 5 pixels. The test images were simply resized to \(48\times 48\) without any additional processing. **Training setup.** For all the datasets, the firing threshold \(V_{th}\) was set as 0.5 and \(V_{reset}\) as 0. For static image datasets, the images were encoded into binary spikes using the first layer of the SNN, as in recent works [42, 11, 10]; this is similar to rate-coding. For the neuromorphic image dataset, we used the 0/1 spike format directly. The neuron models in the output layer accumulated the incoming inputs without generating any spikes as the output, as in [42]. For the CIFAR-10(100) and CIFAR10-DVS datasets, we used the SGD optimizer with a momentum of 0.9 and a learning rate of 0.01, cosine-decayed [34] to 0. All models were trained within 400 epochs with the same batch size of 128. For the ImageNet dataset, we used the SGD optimizer with a momentum of 0.9 and a learning rate of 0.1, cosine-decayed [34] to 0. All models were trained within 320 epochs as in [10], with a batch size of 64. ### Ablation Study for Different Neuron Models We first conducted a set of ablation experiments to verify the effectiveness of the proposed SRIF model on CIFAR-10(100), using ResNet20 as the backbone under various timesteps without MPR. The results are shown in Tab. 1. It can be seen that, whether on CIFAR-10 or CIFAR-100, the SRIF neuron always obtains the best result from 2 timesteps to 8 timesteps. This indicates the superiority of the SRIF neuron.
On the other hand, the LIF neuron performs better than the "Hard Reset" IF neuron on CIFAR-10, while the IF neuron performs better on CIFAR-100, even though the LIF neuron is more like a biological neuron. This comparison also shows that, although SNNs are proposed to imitate biological neural networks, the implementation of large-scale networks still needs to rely on computer hardware. Hence, the characteristics of computational science should also be considered. In this respect, the SRIF neuron is more suitable, owing to its low power consumption and its capacity to reduce information loss. ### Addition of MPR Then, a set of ablation experiments for the MPR was conducted on CIFAR-10(100), using ResNet20 and ResNet19 as backbones within 4 timesteps. Results in Tab. 2 show that the MPR can greatly improve the performance, especially on CIFAR-100, where ResNet20 with MPR increases the accuracy by 2.73%. These results verify the effectiveness of MPR in terms of performance improvement. We also computed the average quantization error of the first layer of the second block in ResNet20/19 before and after MPR on the test sets of CIFAR-10(100), respectively. Results in Tab. 3 show that the quantization error is obviously reduced by the MPR. The overall original membrane potential distribution and the modulated membrane potential distribution by MPR of the first layer of the second block in ResNet20 on the CIFAR-10 and CIFAR-100 test sets are shown in Fig. 5. It shows that the MPR adjusts the membrane potential distribution towards "0" and "1", i.e., closer to its quantization spikes. Taken together, these results quantitatively support the effectiveness of MPR in reducing quantization error. ### Comparisons with Other Methods Our method was further compared with other state-of-the-art SNNs on static and neuromorphic datasets. Results are shown in Tab. 4, where for each run the mean accuracy and standard deviation of 3 trials are listed.
For simplification, **InfLoR** (_i.e._, short for **Inf**ormation **Loss** **R**educing) is used to denote the combination of SRIF and MPR. \begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & Neuron model & Timestep & Accuracy \\ \hline \multirow{10}{*}{CIFAR-10} & “Hard Reset” LIF & 2 & 90.36\% \\ & “Hard Reset” IF & 2 & 90.07\% \\ & “Soft Reset” IF (SRIF) & 2 & **90.38\%** \\ \cline{2-4} & “Hard Reset” LIF & 4 & 92.22\% \\ & “Soft Reset” IF (SRIF) & 4 & **92.46\%** \\ \cline{2-4} & “Hard Reset” LIF & 6 & 92.66\% \\ & “Soft Reset” IF (SRIF) & 6 & **93.40\%** \\ \cline{2-4} & “Hard Reset” LIF & 8 & 92.90\% \\ & “Hard Reset” IF & 8 & 92.86\% \\ & “Soft Reset” IF (SRIF) & 8 & **94.09\%** \\ \hline \multirow{12}{*}{CIFAR-100} & “Hard Reset” LIF & 2 & 62.67\% \\ & “Hard Reset” IF & 2 & 63.43\% \\ & “Soft Reset” IF (SRIF) & 2 & **63.85\%** \\ \cline{2-4} & “Hard Reset” LIF & 4 & 66.00\% \\ & “Hard Reset” IF & 4 & 66.95\% \\ & “Soft Reset” IF (SRIF) & 4 & **67.90\%** \\ \cline{2-4} & “Hard Reset” LIF & 6 & 67.44\% \\ & “Hard Reset” IF & 6 & 68.31\% \\ & “Soft Reset” IF (SRIF) & 6 & **69.59\%** \\ \cline{2-4} & “Hard Reset” LIF & 8 & 67.85\% \\ & “Hard Reset” IF & 8 & 69.14\% \\ & “Soft Reset” IF (SRIF) & 8 & **69.90\%** \\ \hline \hline \end{tabular} \end{table} Table 1: Ablation study for different neuron models without MPR. **CIFAR-10(100).** For CIFAR-10, our method improves the network performance across all commonly used backbones in SNNs. The ResNet19-based InfLoR-SNN achieved 96.49% top-1 accuracy with 6 timesteps, outperforming its STBP-tdBN counterpart by 3.33% and even its ANN counterpart by 0.20%. The ResNet20-based InfLoR-SNN reaches 93.65%, compared with only 92.54% in [42]. Our VGG16-based network also shows higher accuracy than other methods with fewer timesteps. On CIFAR-100, InfLoR-SNN also performs better and achieves a 1.89% increment on VGG16.
Notably, InfLoR-SNN significantly surpasses Diet-SNN [42] by 7.12% in accuracy, which is not easy to achieve in the SNN field. Again, our ResNet19 also outperforms its ANN counterpart. To the best of our knowledge, it is the first time that an SNN has outperformed its ANN counterpart. **ImageNet.** For the ImageNet dataset, ResNet18 and ResNet34 were used as the backbones. Results show that our ResNet18 achieves a 1.60% increment over SEW ResNet18 and a 2.46% increment over Spiking ResNet18. The accuracy of our ResNet34 does not exceed that of SEW ResNet34. However, SEW ResNet34 [10] transmits information with integers, which is not a typical SNN. For a fair comparison, we also report the result of Spiking ResNet34 in [10], which is worse than our method. Moreover, our InfLoR-based ResNet34 with 4 timesteps still obviously outperforms the STBP-tdBN-based ResNet34 with 6 timesteps. \begin{table} \begin{tabular}{l l l c c} \hline \hline Dataset & Architecture & Method & Timestep & Accuracy \\ \hline \multirow{4}{*}{CIFAR-10} & ResNet20 & SRIF w/o MPR & 4 & 92.46\% \\ & & SRIF w/ MPR & 4 & **92.94\%** \\ \cline{2-5} & ResNet19 & SRIF w/o MPR & 4 & 95.44\% \\ & & SRIF w/ MPR & 4 & **96.27\%** \\ \hline \multirow{4}{*}{CIFAR-100} & ResNet20 & SRIF w/o MPR & 4 & 67.90\% \\ & & SRIF w/ MPR & 4 & **70.63\%** \\ \cline{2-5} & ResNet19 & SRIF w/o MPR & 4 & 77.85\% \\ & & SRIF w/ MPR & 4 & **78.42\%** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study for MPR. \begin{table} \begin{tabular}{l l l l c} \hline \hline Dataset & Architecture & Method & Timestep & Avg.
error \\ \hline \multirow{4}{*}{CIFAR-10} & ResNet20 & Before MPR & 4 & 0.28 \\ & & After MPR & 4 & **0.04** \\ \cline{2-5} & ResNet19 & Before MPR & 4 & 0.20 \\ & & After MPR & 4 & **0.03** \\ \hline \multirow{4}{*}{CIFAR-100} & ResNet20 & Before MPR & 4 & 0.38 \\ & & After MPR & 4 & **0.05** \\ \cline{2-5} & ResNet19 & Before MPR & 4 & 0.32 \\ & & After MPR & 4 & **0.04** \\ \hline \hline \end{tabular} \end{table} Table 3: Quantization error. \begin{table} \begin{tabular}{l l l l l l} \hline \hline Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline \multirow{20}{*}{CIFAR-10} & SpikeNorm [45] & ANN2SNN & VGG16 & 2500 & 91.55\% \\ & Hybrid-Train [43] & Hybrid & VGG16 & 200 & 92.02\% \\ & Spike-basedBP [28] & SNN training & ResNet11 & 100 & 90.95\% \\ & STBP [47] & SNN training & CIFARNet & 12 & 90.53\% \\ & TSSL-BP [48] & SNN training & CIFARNet & 5 & 91.41\% \\ & PLIF [11] & SNN training & PLIFNet & 8 & 93.50\% \\ \cline{2-6} & \multirow{4}{*}{Diet-SNN [42]} & \multirow{4}{*}{SNN training} & VGG16 & 5 & 92.70\% \\ & & & & 10 & 93.44\% \\ \cline{4-6} & & & ResNet20 & 5 & 91.78\% \\ & & & & 10 & 92.54\% \\ \cline{2-6} & \multirow{3}{*}{STBP-tdBN [49]} & \multirow{3}{*}{SNN training} & \multirow{3}{*}{ResNet19} & 2 & 92.34\% \\ & & & & 4 & 92.92\% \\ & & & & 6 & 93.16\% \\ \cline{2-6} & ANN* & ANN & ResNet19 & 1 & 96.29\% \\ \cline{2-6} & \multirow{6}{*}{**InfLoR-SNN**} & \multirow{6}{*}{SNN training} & \multirow{3}{*}{ResNet19} & 2 & **94.44\%**\(\pm\)0.08 \\ & & & & 4 & **96.27\%**\(\pm\)0.07 \\ & & & & 6 & **96.49\%**\(\pm\)0.08 \\ \cline{4-6} & & & ResNet20 & 5 & **93.01\%**\(\pm\)0.06 \\ & & & & 10 & **93.65\%**\(\pm\)0.04 \\ \cline{4-6} & & & VGG16 & 5 & **94.06\%**\(\pm\)0.08 \\ \hline \multirow{13}{*}{CIFAR-100} & BinarySNN [35] & ANN2SNN & VGG15 & 62 & 63.20\% \\ & Hybrid-Train [43] & Hybrid & VGG11 & 125 & 67.90\% \\ & T2FSNN [39] & ANN2SNN & VGG16 & 680 & 68.80\% \\ & Burst-coding [38] & ANN2SNN & VGG16 & 3100 & 68.77\% \\ & Phase-coding [23] & ANN2SNN & VGG16 & 8950 & 68.60\% \\ \cline{2-6} & \multirow{2}{*}{Diet-SNN [42]} & \multirow{2}{*}{SNN training} & ResNet20 & 5 & 64.07\% \\ & & & VGG16 & 5 & 69.67\%
\\ \cline{2-6} & ANN* & ANN & ResNet19 & 1 & 78.61\% \\ \cline{2-6} & \multirow{5}{*}{**InfLoR-SNN**} & \multirow{5}{*}{SNN training} & ResNet20 & 5 & **71.19\%**\(\pm\)0.09 \\ & & & & 10 & **73.17\%**\(\pm\)0.08 \\ \cline{4-6} & & & \multirow{3}{*}{ResNet19} & 2 & **75.56\%**\(\pm\)0.11 \\ & & & & 4 & **78.42\%**\(\pm\)0.09 \\ & & & & 6 & **79.51\%**\(\pm\)0.06 \\ \hline \multirow{8}{*}{ImageNet} & Hybrid-Train [43] & Hybrid & ResNet34 & 250 & 61.48\% \\ & SpikeNorm [45] & ANN2SNN & ResNet34 & 2500 & 69.96\% \\ \cline{2-6} & STBP-tdBN [49] & SNN training & ResNet34 & 6 & 63.72\% \\ \cline{2-6} & \multirow{2}{*}{SEW ResNet [10]} & \multirow{2}{*}{SNN training} & ResNet18 & 4 & 63.18\% \\ & & & ResNet34 & 4 & 67.04\% \\ \cline{2-6} & \multirow{2}{*}{Spiking ResNet [10]} & \multirow{2}{*}{SNN training} & ResNet18 & 4 & 62.32\% \\ & & & ResNet34 & 4 & 61.86\% \\ \cline{2-6} & \multirow{2}{*}{**InfLoR-SNN**} & \multirow{2}{*}{SNN training} & ResNet18 & 4 & **64.78\%**\(\pm\)0.07 \\ & & & ResNet34 & 4 & 65.54\%\(\pm\)0.08 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison with SoTA methods. * denotes self-implementation results. **CIFAR10-DVS.** For the neuromorphic dataset, CIFAR10-DVS, InfLoR-SNN achieves the best performance, with 75.50% and 75.10% top-1 accuracy in 10 timesteps with ResNet19 and ResNet20 as backbones, and obtains a 7.80% improvement over STBP-tdBN for ResNet19. It is worth noting that, as a more complex model, ResNet19 only performs a little better than ResNet20 on CIFAR10-DVS. The reason might be that this neuromorphic dataset suffers much more noise than static ones, so a more complex model is easier to overfit. ## 5 Conclusions This work aims at addressing the information loss problem caused by the "Hard Reset" mechanism of neurons and the 0/1 spike quantization. To this end, we proposed the SRIF model, which drives the membrane potential to a dynamic reset potential, and the MPR, which adjusts the membrane potential to a new value closer to the quantization spikes than itself.
A detailed analysis of why the SRIF and MPR can reduce the information loss is provided. Furthermore, abundant ablation studies of the proposed methods are given. Combining these two methods, our SNNs outperform other state-of-the-art methods. Figure 5: The effect of MPR. The overall original membrane potential distribution (left) and the redistributed membrane potential distribution by MPR (right) of the first layer of the second block in ResNet20 on CIFAR-10 and CIFAR-100 test sets. \begin{table} \begin{tabular}{l l l l l l} \hline \hline Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline \multirow{4}{*}{CIFAR10-DVS} & Rollout [26] & Rollout & DenseNet & 10 & 66.80\% \\ & STBP-tdBN [49] & SNN training & ResNet19 & 10 & 67.80\% \\ \cline{2-6} & **InfLoR** & SNN training & ResNet19 & 10 & **75.50\%**\(\pm\)0.12 \\ \cline{1-1} \cline{2-6} & & ResNet20 & 10 & **75.10\%**\(\pm\)0.09 \\ \hline \hline \end{tabular} \end{table} Table 5: Training Spiking Neural Networks on CIFAR10-DVS.
2307.03439
Zig-zag-matrix algebras and solvable quasi-Hermitian quantum models
It is well known that the unitary evolution of a closed $M-$level quantum system can be generated by a non-Hermitian Hamiltonian $H$ with real spectrum. Its Hermiticity can be restored via an amended inner-product metric $\Theta$. In Hermitian cases the evaluation of the spectrum (i.e., of the bound-state energies) is usually achieved by the diagonalization of the Hamiltonian. In the non-Hermitian (or, more precisely, in the $\Theta-$quasi-Hermitian) quantum mechanics we conjecture that the role of the diagonalized-matrix solution of the quantum bound-state problem could be transferred to a maximally sparse ``zig-zag-matrix'' representation of the Hamiltonians.
Miloslav Znojil
2023-07-07T07:51:47Z
http://arxiv.org/abs/2307.03439v1
**Zig-zag-matrix algebras and solvable quasi-Hermitian quantum models** ## Abstract In quantum mechanics of unitary systems using non-Hermitian (or, more precisely, \(\Theta-\)quasi-Hermitian) Hamiltonians \(H\) such that \(H^{\dagger}\Theta=\Theta\,H\), the exactly solvable \(M-\)level bound-state models with arbitrary \(M\leq\infty\) are rare. A new class of such models is proposed here, therefore. Its exact algebraic solvability (involving not only the closed formulae for wave functions but also the explicit description of all of the eligible metrics \(\Theta\)) was achieved due to an extremely sparse (viz., just \((2M-1)-\)parametric) but still nontrivial "zig-zag-matrix" choice of the form of \(H\). ## Keywords non-Hermitian quantum mechanics of unitary systems; a zig-zag-matrix class of \(N-\)state solvable models; closed formulae for wave functions; closed formula for general physical inner-product metric ## 1 Introduction One of the key obstacles encountered during the transition from classical to quantum mechanics is that the corresponding evolution equations become operator equations. For this reason the evaluation of experimentally testable quantum-theoretical predictions becomes, in general, incomparably more difficult. As a consequence, our understanding of quantum dynamics only too often depends on an analysis mediated by thoroughly simplified models of physical reality in which, typically, a given self-adjoint Hamiltonian can be easily diagonalized, \(\mathfrak{h}\to\mathfrak{h}_{diagonal}\). In 1956, Freeman Dyson [1] had to deal with a fairly complicated multifermionic Hamiltonian \(\mathfrak{h}\) for which the convergence of the conventional numerical diagonalization algorithms happened to be prohibitively slow. Still, he managed to find a way out of the difficulty. His construction of the bound states became nicely convergent when he preconditioned his Hamiltonian, \[\mathfrak{h}\ \to\ H=\Omega^{-1}\,\mathfrak{h}\,\Omega\,.
\tag{1}\] The essence of his convergence-acceleration recipe lay in a judicious guess of a sufficiently effective preconditioning (1) mediated by a suitable invertible mapping \(\Omega\). In the language of physics, this choice just reflected the role of the correlations in the many-body system in question. In this sense, Dyson's simplification-oriented model-building strategy found a number of applications, first of all in nuclear physics, where the role of the short-range correlations is fairly well understood as well as sufficiently easily simulated [2]. The originality of Dyson's innovation was that his mappings \(\Omega\) were allowed to be non-unitary, \(\Omega^{\dagger}\,\Omega\neq I\). The simplification (1) has been achieved, paradoxically, at the expense of the loss of the Hermiticity of the Hamiltonian. In the language of mathematics, this can be perceived as an unusual, non-unitary transition from a conventional Hilbert space (say, \(\mathcal{L}\)) to another, auxiliary but user-friendlier Hilbert space (say, \(\mathcal{H}_{math}\)). In the language of operators one moves from the conventional textbook representation of a realistic Hamiltonian which is self-adjoint in \(\mathcal{L}\), \(\mathfrak{h}=\mathfrak{h}^{\dagger}\), to its isospectral (and, presumably, significantly simpler) manifestly non-Hermitian avatar \(H\neq H^{\dagger}\) in \(\mathcal{H}_{math}\). In 1992, Scholtz et al [3] proposed a different, albeit closely related model-building strategy. These authors assumed that we are given a non-Hermitian operator \(H\) (or rather a set of such operators) in advance. Under this assumption they described how this operator or operators could "constitute a consistent quantum mechanical system". Thus, in our present notation they just considered an inverted correspondence (1), \[H\ \to\ \mathfrak{h}=\Omega\,H\,\Omega^{-1}\,.
\tag{2}\] In such a deeply innovative approach one _preselects_ a suitable tentative non-Hermitian candidate for the Hamiltonian \(H\neq H^{\dagger}\) from the very beginning. Although the approach has recently been enriched by the development of mathematical techniques in which the feasibility of practical calculations has been enhanced (see, e.g., the more recent review [4]), its mathematical aspects are still full of open questions (see, e.g., monograph [5]). In applications, naturally, an internal consistency of the theory based on reconstruction (2) must be guaranteed. Thus, the spectrum of \(H\) must be real: In this respect it often helps when \(H\) is chosen parity-time symmetric [6, 7]. Secondly, many rather unpleasant emerging mathematical obstacles (see, e.g., their descriptions in [8, 9, 10, 11]) may be circumvented when the states of the system in question are represented in an \(M-\)dimensional Hilbert space \({\cal H}^{(M)}_{math}\) where \(M\) is arbitrarily large but finite [1, 3]. Under these conditions (see also [4] for more details) the implicit, hidden Hermiticity (or, in mathematics, quasi-Hermiticity [12]) of the operator \(H\) representing an input information about dynamics has to be made explicit. Once we abbreviate \(\Omega^{\dagger}\,\Omega=\Theta\) (calling this product a "physical Hilbert-space inner-product metric"), the standard and conventional textbook self-adjointness requirement \(\mathfrak{h}=\mathfrak{h}^{\dagger}\) becomes formally equivalent to the quasi-Hermiticity of \(H\) in \({\cal H}^{(M)}_{math}\), \[H^{\dagger}\,\Theta=\Theta\,H\,. \tag{3}\] This makes the reconstruction (2) of \(\mathfrak{h}\) redundant. In the words of review [3] one manages to find a physical inner-product metric \(\Theta\) compatible with Eq. (3) "if it exists" (i.e., just in certain parameter regimes). For practical purposes the use of the quasi-Hermitian formulation of quantum mechanics makes sense only if Eq. 
(3) as well as the related bound-state Schrodinger equation \[H\left|\psi_{n}\right\rangle=E_{n}\left|\psi_{n}\right\rangle,\ \ \ \ \ n=1,2,\ldots,M \tag{4}\] remain sufficiently user-friendly and solvable. In fact, there exist not too many solvable models of such a type. One category of technical obstacles emerges when \(H\) is a differential operator. Indeed, since such operators are typically unbounded, the abstract quantum theory of Ref. [3] (where all of the operators of observables have been assumed bounded) cannot be applied. Even when both of our above-mentioned Hilbert spaces \({\cal L}\) and \({\cal H}^{(M)}_{math}\) are kept finite-dimensional, \(M<\infty\), the literature offers just a few toy-matrix models \(H\) which remain exactly solvable, at an arbitrary number of states \(M<\infty\), in the manner which combines the availability of a closed form of all of the solutions \(\left|\psi_{n}\right\rangle\) and \(E_{n}\) of Schrodinger Eq. (4) with the equally important availability of a closed form of at least one of the solutions \(\Theta=\Theta(H)\) of Eq. (3). In these models (see, e.g., [13, 14] or [15], with further references) one still has to work with the tridiagonal forms of the Hamiltonians. In what follows we intend to propose a class of solvable models in which the Hamiltonians form an even sparser subset of the similar tridiagonal models. They will form a new exactly solvable family of unitary quasi-Hermitian quantum models. We will see that these models can be perceived as an illustration of the situation in which a unitary quantum model based on a manifestly non-Hermitian Hamiltonian \(H\neq H^{\dagger}\) appears preferable and, not quite expectedly, technically simpler than its isospectral Hermitian-matrix alternative of conventional textbooks.
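Before turning to the zig-zag class, note that the dictionary (2)-(3) is easy to verify numerically. The following NumPy sketch (ours; random matrices of an arbitrary finite dimension) confirms that \(H=\Omega^{-1}\,\mathfrak{h}\,\Omega\) with \(\Theta=\Omega^{\dagger}\,\Omega\) is manifestly non-Hermitian, satisfies Eq. (3), and inherits the real spectrum of \(\mathfrak{h}\):

```python
import numpy as np

rng = np.random.default_rng(42)
M = 6

# A Hermitian "textbook" Hamiltonian h and a random non-unitary Dyson map Omega.
A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
h = A + A.conj().T
Omega = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))

H = np.linalg.inv(Omega) @ h @ Omega     # the inverse of the map (2)
Theta = Omega.conj().T @ Omega           # the metric Theta = Omega^dagger Omega

# Quasi-Hermiticity, Eq. (3): H^dagger Theta = Theta H.
assert np.allclose(H.conj().T @ Theta, Theta @ H)

# H is manifestly non-Hermitian, yet its spectrum is real (it equals that of h).
assert not np.allclose(H, H.conj().T)
E = np.linalg.eigvals(H)
assert np.allclose(E.imag, 0, atol=1e-8)
assert np.allclose(np.sort(E.real), np.linalg.eigvalsh(h))
```

The check mirrors the algebra: \(H^{\dagger}\Theta=\Omega^{\dagger}\mathfrak{h}\,\Omega=\Theta H\) whenever \(\mathfrak{h}=\mathfrak{h}^{\dagger}\).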
## 2 Exact solution of Schrodinger equation It is not too surprising that in the majority of the realistic applications of the bound-state Schrodinger equations using a self-adjoint phenomenological Hamiltonian \(\mathfrak{h}\) people recall the variational argument and approximations and keep the dimension \(M\) of the conventional textbook Hilbert space \(\mathcal{L}\) finite [1, 2, 3]. Then, there are also no conceptual problems with the linear-algebraic correspondence between \(\mathcal{L}\) and \(\mathcal{H}^{(M)}_{math}\) and/or between \(\mathfrak{h}\) and \(H\) (cf. Eq. (1)). The situation is different when the Hamiltonians \(\mathfrak{h}\) and/or \(H\) are differential operators with \(M=\infty\). On the positive side, the standard "kinetic plus potential energy" structure of such a class of operators makes them intuitively acceptable on physical grounds: Typically, this renders them eligible in the role of prototype models in quantum field theory [7]. For this reason, even on the level of quantum mechanics the dedicated literature abounds with the exactly solvable models [16] as well as with the quasi-exactly solvable models [17] - [20] of such a type. On the negative side, the recent progress in the analysis of the \(\mathfrak{h}\ \leftrightarrow\ H\) correspondence led to several disappointing disproofs of its existence [5]. _Pars pro toto_ it is sufficient to mention papers [9, 10] containing the mathematically rigorous disproofs of the existence of _any_ self-adjoint partner \(\mathfrak{h}\) for the most popular imaginary cubic oscillator Hamiltonian \(H\) of Ref. [6]. After all, very explicit words of warning were already written in the older review [3]. The authors required there that _any_ eligible non-Hermitian operator representing an observable should be bounded. In other words, under the warmly recommended auxiliary assumption \(M<\infty\) the mathematics becomes perceptibly simpler. The problems which remain to be resolved are purely technical, emerging usually just at sufficiently large matrix dimensions \(M\gg 1\) and requiring only sufficiently reliable numerical software. The prevailing nature of the results is then purely numerical, and the exactly solvable bound-state models are rare. Even the diagonalization of a next-to-diagonal (i.e., tridiagonal) matrix form of \(\mathfrak{h}\) may be ill-conditioned and just badly convergent [21].
The problems which remain to be resolved are purely technical, usually emerging just at sufficiently large matrix dimensions \(M\gg 1\) and requiring only sufficiently reliable numerical software. The prevailing nature of results is then purely numerical. The exactly solvable bound-state models are rare. Even the diagonalization of a next-to-diagonal (i.e., tridiagonal) matrix form of \(\mathfrak{h}\) may be ill-conditioned and just badly convergent [21]. In this context the guiding mathematical idea of our present project was that one of the rarely emphasized consequences of the choice of a non-Hermitian model \(H\) with real spectrum is that its nontrivial (i.e., non-diagonal) matrix representation can be "sparse tridiagonal". The formal appeal of the latter idea was accompanied by the emerging possibility of its transfer to the phenomenology and physics of various lattice models [22]. Both of these observations led us directly to the introduction and study of the \(M\) by \(M\) "zig-zag-matrix" (ZZM) Hamiltonians \[H=H^{(ZZM)}(\vec{a},\vec{c})=\left[\begin{array}{cccccc}a_{1}&0&0&0&\ldots&\\ c_{1}&a_{2}&c_{2}&0&\ldots&\\ 0&0&a_{3}&0&0&\ldots\\ 0&0&c_{3}&a_{4}&c_{4}&\ddots\\ \vdots&\vdots&0&0&a_{5}&\ddots\\ &&\vdots&\ddots&\ddots&\ddots\end{array}\right] \tag{5}\] in which just \(2M-1\) real parameters do not vanish. A compact outline of some of the purely mathematical properties of matrices (5) is postponed to Appendix A below. The bound-state spectrum of these matrices (i.e., of the Hamiltonians of our present interest) coincides with the set of parameters \(a_{1},a_{2},\ldots,a_{M}\) occupying the main diagonal (see Lemma 5 in the Appendix). This means that the unitarity of the evolution is guaranteed by the reality of the spectrum of energies, i.e., by the reality of these dynamical-input parameters.
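For a quick numerical sanity check of this diagonal-spectrum property, the following NumPy sketch assembles a small zig-zag matrix and diagonalizes it (the helper `zzm` and the sample parameter values are our own illustrative choices, not taken from the paper):

```python
import numpy as np

def zzm(a, c):
    """Assemble the M x M zig-zag matrix H^(ZZM)(a, c) of Eq. (5):
    odd rows (1-based) contain only the diagonal entry a_j, while
    even rows additionally carry c_{j-1} and c_j next to a_j."""
    M = len(a)
    H = np.diag(np.asarray(a, dtype=float))
    for j in range(1, M, 2):          # 0-based indices of the even rows
        H[j, j - 1] = c[j - 1]        # sub-diagonal element c_{j-1}
        if j + 1 < M:
            H[j, j + 1] = c[j]        # super-diagonal element c_j
    return H

a = [1.0, 2.0, 3.0, 4.0, 5.0]
c = [0.7, -1.3, 0.4, 2.1]
H = zzm(a, c)
# the spectrum coincides with the main diagonal (Lemma 5 of Appendix A)
assert np.allclose(np.sort(np.linalg.eigvals(H).real), np.sort(a))
```

The off-diagonal parameters \(c_j\) thus drop out of the spectrum entirely, which is exactly why the reality of the diagonal alone controls the unitarity of the evolution.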
For the purposes of applications we are just left with the necessity of the construction of the wave functions i.e., in the conventional Dirac's notation, of the column-vector solutions \(|\psi_{1}\rangle\), \(|\psi_{2}\rangle\), \(\ldots\) of our Schrodinger Eq. (4) corresponding to the respective bound-state energies \(E=a_{n}\) with \(n=1,2,\ldots\). Surprisingly enough, these ket-vectors can be obtained in closed form. Incidentally, the construction is most straightforward when \(M=\infty\) because in such a case we do not need to separate the description of the solutions at even and odd \(M<\infty\). Another useful trick used during the explicit systematic construction of the solutions of our Schrodinger Eq. (4) is that at any \(M\leq\infty\) we can concatenate our ket-vector columns into a single \(M\) by \(M\) matrix, say \[\{|\psi_{1}\rangle,|\psi_{2}\rangle,\ldots,|\psi_{M}\rangle\}=Q_{solution}\,. \tag{6}\] Indeed, precisely the study of this matrix-of-solutions leads to the following important result. **Lemma 1**: _In Schrodinger Eq. (4) with \(M=\infty\) and with the ZZM Hamiltonian \(H=H^{(ZZM)}(\vec{a},\vec{c})\), the column-vector eigenstates \(|\psi_{n}\rangle\) corresponding to the energies \(E=a_{n}\) with \(n=1,2,\ldots\) and arranged in matrix (6) acquire precisely the ZZM-matrix form defined in terms of suitable vectors of parameters \(\vec{x}=\{x_{1},x_{2},\ldots\}\) and \(\vec{y}=\{y_{1},y_{2},\ldots\}\),_ \[Q_{solution}=H^{(ZZM)}(\vec{x},\vec{y})\,. \tag{7}\] _Under the auxiliary ad hoc assumption that \(c_{j}\neq 0\) at all odd \(j\) we may accept, say, the following normalization of the separate ket-vector columns of \(Q_{solution}\),_ \[x_{2}=x_{4}=\ldots=1\,,\ \ \ \ y_{1}=y_{3}=\ldots=1\,. 
\tag{8}\] _Then, the closed-form solution of our infinite-dimensional matrix Schrodinger equation is given by formulae_ \[x_{j}=(a_{j}-a_{j+1})/c_{j}\,,\ \ \ \ \ j=\mathrm{odd} \tag{9}\] _and_ \[y_{k}=-\frac{(a_{k+1}-a_{k+2})c_{k}}{(a_{k}-a_{k+1})c_{k+1}}\,,\ \ \ \ \ k= \mathrm{even}\,. \tag{10}\] **Proof.** Proof is based on the auxiliary lemmas of Appendix A reflecting the remarkable properties of the algebra of zig-zag matrices. The formulae themselves follow directly from the insertion of the solution in Schrodinger equation. \(\Box\) In this Lemma our assumption \(M=\infty\) enabled us to avoid the discussion of the role of the truncation of the matrix at \(M<\infty\). In the latter case, fortunately, it proves sufficient to set, formally, \(a_{M+1}=a_{M+2}=\ldots=0\) and \(c_{M}=c_{M+1}=\ldots=0\). Also the apparent \(c_{j}\to 0\) singularities at \(j<M\) are just an artifact of our normalization (8). Whenever needed, these singularities may be removed easily because our choice of the normalization has been dictated by the simplicity of the proof rather than by the simplicity or optimality of the formulae (7) and (8). The amendment is offered by the following re-normalized and more compact result. **Lemma 2**: _In Schrodinger Eq. (4) with the ZZM Hamiltonian \(H=H^{(ZZM)}(\vec{a},\vec{c})\), the column-vector eigenstates corresponding to the bound-state energies \(E=a_{n}\) with \(n=1,2,\ldots,M\) can be given a differently normalized "tilded" form_ \[\{|\widetilde{\psi_{1}}\rangle,|\widetilde{\psi_{2}}\rangle,\ldots,| \widetilde{\psi_{M}}\rangle\}=H^{(ZZM)}(\vec{p},\vec{q}) \tag{11}\] _where we employ a different, unit-diagonal normalization \(p_{j}=1\) at all \(j\), and where we obtain the more compact formula for the off-diagonal parameters forming the vector \(\vec{q}\),_ \[q_{k}=-c_{k}/(a_{k}-a_{k+1})\,,\ \ \ \ \ k=1,2,\ldots,M-1\,. 
\tag{12}\] **Proof.** As long as we just changed the normalization convention, there exists a diagonal matrix (say, \(\varrho\)) such that \(H^{(ZZM)}(\vec{x},\vec{y})\,\varrho=H^{(ZZM)}(\vec{p},\vec{q})\). \(\Box\)

## 3 Closed-form construction of all of the eligible metrics

It is well known [23] that whenever we replace the manifestly non-Hermitian Hamiltonian \(H\) in Schrodinger Eq. (4) by its conjugate \(H^{\dagger}\), the knowledge of the "ketket" solutions of the associated Schrodinger equation \[H^{\dagger}\left|\psi_{n}\right\rangle\!\rangle=E_{n}\left|\psi_{n}\right\rangle\!\rangle\,,\ \ \ \ n=1,2,\ldots,M \tag{13}\] enables us to define all of the admissible metrics \(\Theta=\Theta(H)\) (i.e., all of the admissible solutions of Eq. (3)) by formula \[\Theta=\Theta(\kappa_{1}^{2}\,,\kappa_{2}^{2}\,,\ldots,\kappa_{M}^{2})=\sum_{n=1}^{M}\,\left|\psi_{n}\right\rangle\!\rangle\,\kappa_{n}^{2}\left\langle\!\langle\psi_{n}\right|. \tag{14}\] This is _not_ a spectral representation of \(\Theta\) because in general (i.e., due to the non-Hermiticity of \(H\)) the overlaps \(\left\langle\!\langle\psi_{m}|\psi_{n}\rangle\!\rangle\right.\) need not vanish even when \(m\neq n\). Still, this formula shows that the general metric can vary with as many as \(M\) freely variable real and positive parameters \(\kappa_{n}^{2}\). In comparison with Eq. (4), the most important comment concerning Eq. (13) is that as long as our toy-model ZZM Hamiltonians \(H\) are real, we now have to deal with the transposed matrices, \[H^{\dagger}=H^{T}=H^{(TZZM)}(\vec{a},\vec{c})=\left[\begin{array}{cccccc}a_{1}&c_{1}&0&0&\ldots&\\ 0&a_{2}&0&0&\ldots&\\ 0&c_{2}&a_{3}&c_{3}&0&\ldots\\ 0&0&0&a_{4}&0&\ddots\\ \vdots&\vdots&0&c_{4}&a_{5}&\ddots\\ &&\vdots&\ddots&\ddots&\ddots\end{array}\right]\,. \tag{15}\] The crucial consequence is that it is sufficient to replace the ZZM theory of Appendix A by its transposed-matrix TZZM alternative.
**Lemma 3**: _In Schrodinger Eq. (13) with the transposed Hamiltonian \(H^{T}=H^{(TZZM)}(\vec{a},\vec{c})\), the collection of the column-vector eigenstates \(\left|\psi_{n}\right\rangle\!\rangle\) corresponding to the bound-state energies \(E=a_{n}\) with \(n=1,2,\ldots,M\) can be given the TZZM form,_ \[\{\left|\psi_{1}\right\rangle\!\rangle,\left|\psi_{2}\right\rangle\!\rangle,\ldots,\left|\psi_{M}\right\rangle\!\rangle\}=H^{(TZZM)}(\vec{p},\vec{q})\,. \tag{16}\] _The normalization \(p_{j}=1\) (at all \(j\)) leads to the closed-form result_ \[q_{k}=-c_{k}/(a_{k}-a_{k+1})\,,\ \ \ \ \ k=1,2,\ldots,M-1\,. \tag{17}\] **Proof.** The proof is a TZZM analogue of the ZZM proof of Lemma 2. \(\Box\) Formula (16) containing the \(M-1\) characteristics (17) of the Hamiltonian may be inserted into the definition of all of the eligible metrics (14). The resulting \(M\) by \(M\) matrices \(\Theta\) are, by construction, invertible, Hermitian and positive definite. Due to the reality and tridiagonality of the factor (16) and of its transposition, all of the metrics will have a real and symmetric pentadiagonal-matrix form. The explicit evaluation of their matrix elements is straightforward and constitutes our present main mathematical result. **Theorem 4**: _Every metric \(\Theta\) guaranteeing the quasi-Hermiticity (3) of our \((2M-1)-\)parametric ZZM Hamiltonian (5) can be given the three-component form_ \[\Theta=\Theta^{(diag)}+\Theta^{(tridiag)}+\Theta^{(pentadiag)}\,. \tag{18}\] _Its first component is just the invertible, \(q_{j}-\)independent and positive-definite diagonal matrix,_ \[\Theta^{(diag)}=H^{(TZZM)}(\vec{\kappa^{2}},\vec{0})\,. \tag{19}\] _The second component has the sparse tridiagonal-matrix form with vanishing main diagonal,_ \[\Theta^{(tridiag)}_{nn}=0\,,\;\;\;\;\;n=1,2,\ldots,M\,.
\tag{20}\] _Its off-diagonal elements_ \[\Theta^{(tridiag)}_{m,m+1}=\Theta^{(tridiag)}_{m+1,m}=q_{m}\kappa_{m+1}^{2}\,,\;\;\;\;m=1,3,\ldots\,\,(\leq M-1) \tag{21}\] _and_ \[\Theta^{(tridiag)}_{n,n+1}=\Theta^{(tridiag)}_{n+1,n}=q_{n}\kappa_{n}^{2}\,,\;\;\;\;\;n=2,4,\ldots\,\,(\leq M-1) \tag{22}\] _are all linear in \(q_{j}\)s. The remaining, third component of the metric has the pentadiagonal sparse-matrix form_ \[\Theta^{(pentadiag)}=\left[\begin{array}{cccccc}{q_{1}}^{2}\kappa_{2}^{2}&0&{q_{1}}\,\kappa_{2}^{2}\,{q_{2}}&0&\ldots\\ 0&0&0&0&0&\ldots\\ {q_{1}}\,\kappa_{2}^{2}\,{q_{2}}&0&{q_{2}}^{2}\kappa_{2}^{2}+{q_{3}}^{2}\kappa_{4}^{2}&0&{q_{3}}\,\kappa_{4}^{2}\,{q_{4}}&0&\ldots\\ 0&0&0&0&0&0&\ldots\\ \vdots&0&{q_{3}}\,\kappa_{4}^{2}\,{q_{4}}&0&{q_{4}}^{2}\kappa_{4}^{2}+{q_{5}}^{2}\kappa_{6}^{2}&0&{q_{5}}\,\kappa_{6}^{2}\,{q_{6}}&\ddots\\ &\vdots&0&0&0&0&0&\ddots\\ &&\vdots&0&{q_{5}}\,\kappa_{6}^{2}\,{q_{6}}&0&{q_{6}}^{2}\kappa_{6}^{2}+{q_{7}}^{2}\kappa_{8}^{2}&\ddots\\ &&\vdots&\ddots&\ddots&\ddots&\ddots\end{array}\right] \tag{23}\] _with elements which are all quadratic in \(q_{j}\)s._ **Proof.** The result follows directly from formulae (14) and (16). \(\square\) In the context of physics the latter result is truly remarkable because it implies that it really does make sense to work with the present non-Hermitian ZZM or TZZM representations \(H\) of the Hamiltonians. For the simplest dynamical scenarios with the finite and not too large numbers of the bound-state levels \(M<\infty\) at least, one could feel tempted to employ the standard algorithms of linear algebra and to factorize the pentadiagonal-matrix metric \(\Theta=\Omega^{\dagger}\Omega\) of Theorem 4. In principle, this would yield the explicit Dyson map \(\Omega\) and, finally, enable us to return to the quantum mechanics of textbooks in which the conventional self-adjoint representation \(\mathfrak{h}\) of the Hamiltonian would be "easily" reconstructed via Eq. (2).
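Theorem 4 can also be checked numerically without the closed formulae: building the ketket vectors of Eq. (13) with a generic eigensolver and assembling Eq. (14) reproduces the quasi-Hermiticity (3), the positivity, and the pentadiagonal band structure. A NumPy sketch (the builder `zzm`, the parameter values and the weights \(\kappa_n^2\) are our own illustrative choices):

```python
import numpy as np

def zzm(a, c):
    """Zig-zag matrix of Eq. (5)."""
    M = len(a)
    H = np.diag(np.asarray(a, dtype=float))
    for j in range(1, M, 2):
        H[j, j - 1] = c[j - 1]
        if j + 1 < M:
            H[j, j + 1] = c[j]
    return H

a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
c = [0.5, -0.8, 1.2, 0.3, -0.6]
H = zzm(a, c)

# ketket vectors of Eq. (13): eigenvectors of H^T, column n <-> energy a_n
vals, K = np.linalg.eig(H.T)
K = K.real[:, np.argsort(vals.real)]

kappa2 = np.array([1.0, 0.7, 1.3, 0.9, 1.1, 0.8])   # free positive parameters
Theta = K @ np.diag(kappa2) @ K.T                   # Eq. (14)

assert np.allclose(H.T @ Theta, Theta @ H)          # quasi-Hermiticity, Eq. (3)
assert np.all(np.linalg.eigvalsh(Theta) > 0)        # metric is positive definite
assert np.allclose(np.triu(Theta, 3), 0.0)          # pentadiagonal band structure
```

Since the eigensolver fixes the ketket normalizations on its own, the weights here parametrize the same \(M\)-parametric family of metrics in a slightly different way, but the three asserted properties are normalization-independent.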
Nevertheless, a _feasible_ realization of such an alternative, more traditional version of the present models would require the invention of new methods. Indeed, the first mathematical obstacle would emerge when we recall that the metrics \(\Theta\) of Eq. (14) and of Theorem 4 are ambiguous, i.e., \(M-\)parametric [3]. Secondly, we would have to deduce, from the factorization \(\Theta=\Omega^{\dagger}\,\Omega\), a suitable sample of the Dyson map \(\Omega\). Then, indeed, a new set of free parameters forming a unitary matrix \(\mathcal{U}\) would have to be introduced and considered here due to the ambiguity of the factorization of the metric itself, \(\Theta=\Omega^{\dagger}\,\Omega=\Omega^{\dagger}\,\mathcal{U}^{\dagger}\,\mathcal{U}\,\Omega\). Thus, certainly, the "conventional" Hermitian matrix \(\mathfrak{h}\) would be a non-sparse, user-unfriendly matrix in general. Hence, the non-Hermitian matrix \(H\) really seems to offer the most economical representation of the Hamiltonian. One could hardly find reasons for a tedious reconstruction of its partner(s) \(\mathfrak{h}\) of conventional textbooks.

## 4 Conclusions

In the dedicated literature, not too many quasi-Hermitian quantum models have the "exact and complete solvability" property of our present class of \(M-\)level bound-state systems using the real zig-zag-matrix Hamiltonians (5) with \(2M-1\) free parameters. Typically, the algebraically solvable models of such a type are based on the use of tridiagonal matrix forms of \(H\) (cf., e.g., a sample of such a class of quasi-Hermitian models in [15]). In general, given a realistic non-Hermitian \(H\), the metric \(\Theta\) assigned to the model is usually just approximate and not too flexible, corresponding usually just to a fixed choice of the set of parameters \(\kappa_{n}^{2}\) in (14). In comparison, our present model is rather exceptional in keeping the whole set of the metric-determining parameters \(\kappa_{n}^{2}\) freely variable.
Moreover, our restriction of the class of the Hamiltonians to the mere sparse zig-zag matrices of Appendix A proved fortunate: We discovered that the full sets of the eigenstates of \(H\) appeared to belong to the same (viz., ZZM) subclass of the highly sparse zig-zag matrices. One would even like to say "serendipitously fortunate" because the same comment appeared to apply also to the TZZM subclass and to the transposed Hamiltonian playing a key role in the construction of the _complete_ set of the eligible metrics \(\Theta=\Theta(H)\). In the context of physics one of the remarkable properties of the model is that its bound-state energy spectrum coincides, due to the ZZM sparsity of the Hamiltonian, with its main diagonal. As an input information about dynamics it can be, therefore, fixed in advance. This means that the remaining \(M-1\) freely variable off-diagonal matrix elements of \(H\) can be interpreted as playing an energy-complementing role of parameters responsible for the operator metric \(\Theta\), i.e., for the correct physical geometry of the Hilbert space. In this manner these parameters influence, directly and implicitly, the selection and form of the other possible observable features of the system [3]. A final complementary comment may also be added on the existence and structure of the exactly solvable quantum models of unitary systems occurring and widely used within the framework of the conventional Hermitian quantum mechanics in which the metric is kept trivial, \(\Theta_{conventional}=I\). Indeed, once the operators of the observables (including the Hamiltonians) are required, in the conventional textbook spirit, to be self-adjoint, the first nontrivial matrix form of an observable (or of the Hamiltonian) has to be real and symmetric, i.e., fully tridiagonal and hence, from the numerical-manipulation perspective, perceivably more complicated than our "maximally sparse" ZZM models (5).
2304.02277
Rethinking the Trigger-injecting Position in Graph Backdoor Attack
Backdoor attacks have been demonstrated as a security threat for machine learning models. Traditional backdoor attacks intend to inject backdoor functionality into the model such that the backdoored model will perform abnormally on inputs with predefined backdoor triggers and still retain state-of-the-art performance on the clean inputs. While there are already some works on backdoor attacks on Graph Neural Networks (GNNs), the backdoor trigger in the graph domain is mostly injected into random positions of the sample. There is no work analyzing and explaining the backdoor attack performance when injecting triggers into the most important or least important area in the sample, which we refer to as trigger-injecting strategies MIAS and LIAS, respectively. Our results show that, generally, LIAS performs better, and the differences between the LIAS and MIAS performance can be significant. Furthermore, we explain these two strategies' similar (better) attack performance through explanation techniques, which results in a further understanding of backdoor attacks in GNNs.
Jing Xu, Gorka Abad, Stjepan Picek
2023-04-05T07:50:05Z
http://arxiv.org/abs/2304.02277v2
# Rethinking the Trigger-injecting Position in Graph Backdoor Attack

###### Abstract

Backdoor attacks have been demonstrated as a security threat for machine learning models. Traditional backdoor attacks intend to inject backdoor functionality into the model such that the backdoored model will perform abnormally on inputs with predefined backdoor triggers and still retain state-of-the-art performance on the clean inputs. While there are already some works on backdoor attacks on Graph Neural Networks (GNNs), the backdoor trigger in the graph domain is mostly injected into random positions of the sample. There is no work analyzing and explaining the backdoor attack performance when injecting triggers into the most important or least important area in the sample, which we refer to as trigger-injecting strategies MIAS and LIAS, respectively. Our results show that, generally, LIAS performs better, and the differences between the LIAS and MIAS performance can be significant. Furthermore, we explain these two strategies' similar (better) attack performance through explanation techniques, which results in a further understanding of backdoor attacks in GNNs.

backdoor attack, trigger-injecting position, graph neural networks

## I Introduction

Graph Neural Networks (GNNs) have demonstrated their superior performance in a variety of applications, such as node classification [7], graph classification [3], image classification [33], and natural language processing [33]. However, GNNs are vulnerable to various adversarial attacks, including the backdoor attack. Specifically, a backdoor attack occurs when the adversary deliberately modifies a proportion of the training data by adding the trigger (e.g., a subgraph in a graph) to make the model misclassify the samples with the trigger as the target label(s). The backdoored GNN model aims to perform normally on benign testing samples.
However, if the same trigger used in the training phase is introduced onto a testing sample, the backdoored model exhibits a particular output behavior of the adversary's choosing, such as misclassification into the target label(s). Backdoor attacks have been demonstrated to perform malicious tasks on security-related graph learning services, such as converting the label of a fraud account to benign in a social network [11]. Hence, the backdoor attack is a serious threat to the practical applications of GNNs. Several works explored the backdoor attacks in GNNs [24, 26, 31]. In these works, one idea for the trigger-injecting position is to randomly select a subgraph, as there is no specific location information in a graph [23]. Another idea to inject the trigger into a graph is to select the subgraph which has high similarity with the trigger graph [24]. Moreover, based on the improvement of the explanation techniques in the graph domain, [26] proposed injecting the trigger into the most important or least important area of the sample. However, that work does not provide any experimental analysis to confirm the assumptions made. Also, there is no work so far on using explanation tools to explain the backdoor attack behavior in the graph domain. This work first raises a core question: _What is the attack performance when injecting a trigger into the most or least important area of the sample?_ To answer this question, we explore the impacts of the backdoor trigger-injecting position from the perspective of the most (MIAS) or least important area of the sample (LIAS). Although there is no location information in a graph, we can still locate the most (least) important area in a graph, like in an image, by using some explanation techniques [27]. As shown in the experiments, we demonstrate that the attack performance of LIAS is better, where the difference from MIAS can even be significant.
This observation inspires one further question: _Can we explain this difference?_ There are already some works on explaining backdoor attacks in the image domain through visualization techniques [4, 25]. For example, [4] plotted the average activations of the backdoored model's last convolutional layer over clean and backdoored images to explain their attack. [25] used the Grad-CAM [16] visualization method to explain the backdoor attack in federated learning. One example of explaining a backdoor attack in the image domain with Grad-CAM is shown in Fig. 1. Comparing the heatmaps of the clean and poisoned images on the backdoored model, we can clearly understand how the backdoored model recognizes the trigger pattern to achieve the backdoor attack. In contrast, applying visualization techniques to explain the backdoor attack behavior in the graph domain is difficult. First, the complexity of the visual representation of a graph is much larger than that of an image, especially for large graphs [6]. Second, visualizing the graph neural networks to explain the backdoor attack is not trivial, as it is a time-consuming or even impossible process [8]. Therefore, in this work, instead of using the visualization method, we explain the difference between the two trigger-injecting strategies by computing an evaluation metric. Specifically, we compute the similarity of the predicted mask of the representative features from the backdoored model and the target mask of the representative features from the clean model. In our experiments, we find that the successfully misclassified samples generally have high similarity while the unsuccessfully misclassified samples have a much lower similarity. However, we also find that in one specific case, the high similarity does not lead to a successful attack. We further study this phenomenon and find that the backdoored model trained by MIAS can recognize the original feature pattern in addition to the trigger pattern.
As there has been an increasing number of studies on trustworthy GNNs [22, 2, 30], this paper also contributes to the exploration of GNN robustness by investigating the effectiveness of backdoor attacks in GNN models using an explainability tool. Our work is the first to revisit the trigger-injecting position in graph backdoor attacks and provide a new perspective. Our key contributions are:

1. We investigate backdoor attacks in GNNs by injecting triggers into the most or least important area of the sample.
2. We design a novel explanation framework to analyze the causes of the difference between these two strategies.
3. We verify the difference with quantitative analysis (recall score), which helps us further understand the backdoor attack behavior in GNNs.
4. We explore the interaction between the explainability and robustness of GNNs through experiments on two datasets and two GNN models.

## II Background

### _Graph Neural Networks_

Recently, Graph Neural Networks (GNNs) have achieved significant success in processing non-Euclidean spatial data, which are common in many real-world scenarios [33]. Unlike traditional neural networks, e.g., Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), GNNs work on graph data. GNNs take a graph \(G=(V,E,X)\) as an input, where \(V,E,X\) denote nodes, edges, and node attributes, and learn a representation vector (embedding) for each node \(\mathbf{v}\in G\), \(z_{\mathbf{v}}\), or the entire graph, \(z_{G}\). In particular, in modern GNNs, the node representation is computed by recursive aggregation and transformation of feature representations of its neighbors. After \(k\) iterations of aggregation, a node's representation captures both structure and feature information within its \(k\)-hop network neighborhood.
Formally, the \(k\)-th layer of a GNN is: \[x_{\mathbf{v}}^{(k)}=AGGREGATION^{(k)}(\left\{z_{\mathbf{v}}^{(k-1)},\left\{z_{\mathbf{u}}^{(k-1)}|\mathbf{u}\in\mathcal{N}_{\mathbf{v}}\right\}\right\}), \tag{1}\] \[z_{\mathbf{v}}^{(k)}=TRANSFORMATION^{(k)}(x_{\mathbf{v}}^{(k)}), \tag{2}\] where \(z_{\mathbf{v}}^{(k)}\) is the representation of node \(\mathbf{v}\) computed in the \(k\)-th iteration. \(\mathcal{N}_{\mathbf{v}}\) are the 1-hop neighbors of node \(\mathbf{v}\), and \(AGGREGATION(\cdot)\) is an aggregation function that can vary for different GNN models. \(z_{\mathbf{v}}^{(0)}\) is initialized as the node feature. The \(TRANSFORMATION(\cdot)\) function consists of a learnable weight matrix and an activation function. For the node classification task, the node representation \(z_{\mathbf{v}}\) is used for prediction. In this paper, we investigate the node classification task. Moreover, we focus on two representative models of this family, which differ in one of the above two steps: aggregation and transformation. In the following, we briefly describe these models and their differences. **Graph Convolutional Networks (GCN) [9].** Let \(d_{\mathbf{v}}\) denote the degree of node \(\mathbf{v}\). The aggregation operation in GCN is then given as: \[x_{\mathbf{v}}^{(k)}\leftarrow\sum_{\mathbf{u}\in\mathcal{N}_{\mathbf{v}}\cup\{\mathbf{v}\}}\frac{1}{\sqrt{d_{\mathbf{v}}d_{\mathbf{u}}}}z_{\mathbf{u}}^{(k-1)}.\] GCN performs a non-linear transformation over the aggregated features to compute the node representation at layer \(k\): \[z_{\mathbf{v}}^{(k)}\gets ReLU(x_{\mathbf{v}}^{(k)}W^{(k)}).\] **Graph Attention Networks (GAT) [20].** In addition to the standard neighbor aggregation scheme mentioned above in equation 1 and equation 2, there are other non-standard neighbor aggregation schemes, e.g., weighted average via attention in GAT.
Fig. 1: An example of using Grad-CAM to explain a backdoor attack in the image domain. (a) clean image, (b) heatmap of the clean image for the true label on the backdoored model (predicted as the true label), (c) heatmap of the poisoned image for the target label on the backdoored model (predicted as the target label).

Specifically, given a shared attention mechanism \(a\), attention coefficients can be computed by: \[e_{\mathbf{v}\mathbf{u}}=a(Wz_{\mathbf{v}}^{(k-1)},Wz_{\mathbf{u}}^{(k-1)}) \tag{3}\] that indicate the importance of node \(\mathbf{u}\)'s features to node \(\mathbf{v}\). Then, the normalized coefficients can be computed by using the softmax function: \[\alpha_{\mathbf{v}\mathbf{u}}=softmax_{\mathbf{u}}(e_{\mathbf{v}\mathbf{u}}). \tag{4}\] Finally, the next-level feature representation of node \(\mathbf{v}\) is: \[z_{\mathbf{v}}^{(k)}=\sigma\left(\frac{1}{P}\sum_{p=1}^{P}\sum_{\mathbf{u}\in\mathcal{N}_{\mathbf{v}}}\alpha_{\mathbf{v}\mathbf{u}}^{p}W^{p}z_{\mathbf{u}}^{(k-1)}\right), \tag{5}\]
### _Explainability of GNNs_ Recently, several explainability techniques in GNNs have been proposed, such as XGNN [28], GNNExplainer [27], PGExplainer [12], and SubgraphX [29]. These methods are developed from different angles and provide different levels of explanations. GNNExplainer is the model-agnostic approach for providing explanations on any GNN-based model's predictions. Given a trained GNN model and its prediction(s), GNNExplainer returns an explanation in the form of a small subgraph of the input graph, with a small subset of node features that contribute most to the final model prediction(s). In this paper, we focus on the GNNExplainer method as it can explain predictions of any GNN on any graph-based machine learning task without requiring modification of the underlying GNN architecture or re-training. ## III Threat Model We consider a _gray-box_ threat model assuming the attacker can freely modify a small portion of the training dataset. Since the explanation masks in GNNExplainer are generated through gradients of the GNN model, the attacker also has knowledge of the gradient information of the target model on the chosen training dataset. We also assume the attacker performs a _dirty-label_ backdoor attack, where the poisoned samples' labels are changed to the target label. Although this kind of attack is weaker than _clean-label_ backdoor attacks [19], where the labels remain unaltered, dirty label attacks are the most common in the literature [24, 26, 31]. The attacker's goal is to inject a backdoor in the given pre-trained clean GNN model through training over the poisoned training dataset, which achieves misclassification under the presence of a trigger while maintaining clean high accuracy on the original task. This threat model is realistic in real-world settings. For example, if the training dataset is collected from public users, the adversary can provide trigger-embedded training data to implement the backdoor attack. 
## IV Methodology

### _General Framework_

As stated before, we aim to discover if and how the explainability techniques in GNNs help improve the performance of backdoor attacks. Here, we focus on utilizing the feature-trigger backdoor attack from [26] for the node classification task. The trigger used in the backdoor attacks in our paper is defined as: **Definition 1** (Trigger): _In our backdoor attacks, the trigger is a specific feature pattern that is created by modifying the value of a subset of a node's features._ Generally, two steps are conducted to implement backdoor attacks using explainability techniques: (1) We apply an explainability technique (i.e., GNNExplainer) on a pre-trained clean GNN model to implement backdoor attacks based on the two trigger-injecting strategies defined below. **Definition 2** (The Most/Least Representative Features): _Through applying the GNNExplainer on the pre-trained clean GNN model on the target node, we can obtain the original importance order of the node features. Based on the importance order information, we can locate the most or least representative features._ **Definition 3** (Most Important Area Strategy (MIAS)): _We select the most representative features of the target node and inject the feature trigger into the corresponding dimensions._ **Definition 4** (Least Important Area Strategy (LIAS)): _We select the least representative features of the target node and inject the feature trigger into the corresponding dimensions._ We then compare the attack performance based on these two strategies, including the attack success rate and clean accuracy drop. (2) Next, we try to explain the attack performance of these two strategies by again applying the explainability techniques on the backdoored model over the poisoned testing dataset. As a result, we can obtain the new importance order of the node features, which is used to compute the similarity with the original feature importance order.
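The two strategies of Definitions 3 and 4 can be sketched as follows (a hedged illustration: the helper, the constant trigger value, and the importance ranking below are our own stand-ins; in the paper the ranking comes from GNNExplainer):

```python
import numpy as np

def inject_trigger(x, feature_order, s, n, trigger_value=1.0):
    """Overwrite n feature dimensions of the node-feature vector x.
    feature_order lists feature indices from most to least important;
    MIAS takes the head of the ranking, LIAS takes the tail.
    trigger_value is an illustrative stand-in for the feature trigger."""
    x = np.array(x, dtype=float)
    if s == "MIAS":
        idx = feature_order[:n]      # most representative dimensions
    elif s == "LIAS":
        idx = feature_order[-n:]     # least representative dimensions
    else:
        raise ValueError("unknown strategy: " + s)
    x[idx] = trigger_value
    return x

x = np.zeros(6)                        # toy node-feature vector
order = np.array([4, 1, 5, 0, 2, 3])   # hypothetical importance ranking
print(inject_trigger(x, order, "MIAS", n=2))   # dimensions 4 and 1 overwritten
print(inject_trigger(x, order, "LIAS", n=2))   # dimensions 2 and 3 overwritten
```

The only difference between the two strategies is which end of the importance ranking supplies the trigger-injecting dimensions; everything else in the pipeline is shared.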
The proposed framework is presented in Fig. 2.

### _Explanation Design_

The detailed process of generating the poisoned training dataset and target masks is presented in Algorithm 1. \(EXP(\cdot)\) is the applied GNN explanation technique, i.e., GNNExplainer, and \(s\) is the trigger-injecting strategy, i.e., MIAS or LIAS. The algorithm first samples a subset from the original training dataset with a poisoning rate \(r\) (line \(2\)). For each sampled node, the algorithm will compute the corresponding feature order to determine the trigger-injecting location for MIAS and LIAS. Meanwhile, the labels of the poisoned training samples are changed to the target label. The trigger size \(n\) is the number of features in the feature trigger, which means \(n\) node features will be modified. The poisoned testing dataset is obtained by injecting a trigger (following the same strategy as in the poisoned training dataset) into the samples and changing their labels to the target label. Finally, based on the order of representative features, we can generate a target mask for each node in the poisoned testing dataset (line \(17\)). The target mask has the same shape as the node feature vector, and the \(n\) most (least) important features are masked in while other features are masked out. To evaluate whether the backdoored model can recognize the trigger pattern precisely, the number of features to be masked in is set to \(n\).
The definition of the target mask is as follows:

**Definition 5** (Target Mask): _The target mask is a boolean tensor that indicates which \(n\) features contribute more to the final prediction from the pre-trained clean model \(\theta\) for the target node compared to other features._

```
Input:  Pre-trained clean GNN model θ, Training set D_train, Testing set D_test,
        Trigger-injecting strategy s ∈ {MIAS, LIAS}, Target label y_t ∈ [0, C)
Output: Poisoned training dataset D̂^s_train, Poisoned testing dataset D̂^s_test,
        Target masks M^s_t
 1:  /* Sampling Training Dataset to Inject Trigger */
 2:  D̂^s_train ← sample(D_train, r, y ≠ y_t)
 3:  foreach {x, y} ∈ D̂^s_train do
 4:      /* Computing Order of Representative Features */
 5:      feature_order = EXP(θ, x, y)
 6:      x̂^s = Inject_Trigger(x, feature_order, s)
 7:      ŷ^s = y_t
 8:  end for
 9:  D̂^s_test ← {x ∈ D_test | y ≠ y_t}
10:  M^s_t ← ∅
11:  foreach {x, y} ∈ D̂^s_test do
12:      /* Computing Order of Representative Features */
13:      feature_order = EXP(θ, x, y)
14:      x̂^s = Inject_Trigger(x, feature_order, s)
15:      ŷ^s = y_t
16:      /* Generating Target Mask */
17:      m^s_i = Get_Mask(feature_order, s)
18:      M^s_t = M^s_t ∪ m^s_i
19:  end for
20:  return D̂^s_train, D̂^s_test, M^s_t
```
**Algorithm 1**: Generate Poisoned Training Dataset and Target Masks

Once the poisoned training dataset is generated, we obtain the backdoored models \(\hat{\theta}^{s}\) by retraining the clean model \(\theta\) with the backdoored training dataset.\({}^{1}\) The process of training the backdoored models and obtaining the predicted masks is shown in Algorithm 2.
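The retraining objective \(\hat{\theta}^{s}=\operatorname*{argmin}_{\theta}\big(\sum_{i}L(x_{i},y_{i};\theta)+\sum_{i}L(\hat{x}^{s}_{i},\hat{y}^{s}_{i};\theta)\big)\) amounts to ordinary training on the union of clean and poisoned samples. A minimal sketch with a stand-in linear softmax model trained by gradient descent in NumPy instead of a GNN; all names and hyperparameters are ours, for illustration only:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def retrain_backdoored(W, X_clean, y_clean, X_pois, y_pois, lr=0.5, epochs=200):
    """Minimize the summed cross-entropy over clean and poisoned samples,
    i.e. train on their concatenation (stand-in for Algorithm 2, line 3)."""
    X = np.vstack([X_clean, X_pois])
    y = np.concatenate([y_clean, y_pois])
    for _ in range(epochs):
        P = softmax(X @ W)                 # class probabilities
        P[np.arange(len(y)), y] -= 1.0     # dCE/dlogits = P - onehot(y)
        W -= lr * X.T @ P / len(y)         # full-batch gradient step
    return W
```

With a distinctive trigger dimension in the poisoned samples, the retrained model learns to route triggered inputs to the target class while the clean decision rule for untriggered patterns is largely preserved.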
To analyze the impact of injecting the trigger into the most/least important part of the node features on the attack performance, we compare the attack performance of \(\hat{\theta}^{MIAS}\) and \(\hat{\theta}^{LIAS}\), including the attack success rate and clean accuracy drop. Finally, for the poisoned testing dataset, which we use to calculate the attack success rate, we again utilize the GNNExplainer to obtain the new feature importance order for each node on the backdoored GNN model \(\hat{\theta}^{MIAS}\) or \(\hat{\theta}^{LIAS}\) (line \(7\)). The new feature importance order is used to generate the predicted mask.

Footnote 1: In this work, we combine the original training dataset and the poisoned training dataset as the backdoored training dataset.

The definition of the predicted mask is as follows:

**Definition 6** (Predicted Mask): _The predicted mask is a boolean tensor which indicates which \(n\) features contribute more to the final prediction from the backdoored GNN model \(\hat{\theta}\) for the target node compared to other features._

Combining the target masks obtained in Algorithm 1, we compute the similarity between the ordering of the new representative features and the old ones by calculating the recall score of the target mask and the predicted mask:

\[\begin{split}& RS^{s}_{i}=\frac{TP(M^{s}_{t,i},M^{s}_{p,i})}{TP(M^{s}_{t,i},M^{s}_{p,i})+FN(M^{s}_{t,i},M^{s}_{p,i})},\quad i\in\{1,\ldots,N\},\\ & M^{s}_{t,i}\ (M^{s}_{p,i})=[0,\cdots,1,\cdots,1,\cdots,0],\end{split} \tag{6}\]

where \(RS^{s}_{i}\) is the recall score of the \(i\)th poisoned testing sample under strategy \(s\), \(M^{s}_{t,i}\) and \(M^{s}_{p,i}\) are the target mask and the predicted mask of the \(i\)th poisoned testing sample, \(TP\) and \(FN\) are the numbers of true positives and false negatives between these two masks, respectively, and \(N\) is the size of the poisoned testing dataset.
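Because both masks select exactly \(n\) features, Eq. (6) reduces to the fraction of masked-in target features that the predicted mask also selects. A sketch (function name is ours):

```python
import numpy as np

def recall_score_masks(target_mask, predicted_mask):
    """Recall of the target mask w.r.t. the predicted mask, TP / (TP + FN),
    computed over boolean feature masks as in Eq. (6)."""
    target_mask = np.asarray(target_mask, dtype=bool)
    predicted_mask = np.asarray(predicted_mask, dtype=bool)
    tp = np.logical_and(target_mask, predicted_mask).sum()   # both masked in
    fn = np.logical_and(target_mask, ~predicted_mask).sum()  # missed by prediction
    return tp / (tp + fn)
```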
We assume that higher similarity indicates that the backdoored model can better recognize the trigger pattern, contributing to better attack performance.

```
Input:  Pre-trained clean GNN model θ, Training set D_train,
        Poisoned training dataset D̂^s_train, Poisoned testing dataset D̂^s_test,
        Trigger-injecting strategy s ∈ {MIAS, LIAS}
Output: Backdoored GNN model θ̂^s, Predicted masks M^s_p
 1:  /* Training Backdoored Models */
 2:  /* {x, y} ∈ D_train, {x̂^s, ŷ^s} ∈ D̂^s_train */
 3:  θ̂^s = argmin_θ ( Σ_i L(x_i, y_i; θ) + Σ_i L(x̂^s_i, ŷ^s_i; θ) )
 4:  M^s_p ← ∅
 5:  foreach {x̂^s, ŷ^s} ∈ D̂^s_test do
 6:      /* Getting Predicted Mask */
 7:      feature_order = EXP(θ̂^s, x̂^s, ŷ^s)
 8:      m^s_i = Get_Mask(feature_order, s)
 9:      M^s_p = M^s_p ∪ m^s_i
10:  end for
11:  return θ̂^s, M^s_p
```
**Algorithm 2**: Train Backdoored GNN Models and Generate Predicted Masks

## V Experimental Results

### _Experimental Setting_

We implemented the backdoor attack on the node classification task using the PyTorch framework. All experiments were run on a server with \(2\) Intel Xeon CPUs, \(1\) NVIDIA 1080 Ti GPU with \(32\) GB RAM, and Ubuntu \(20.04\) LTS OS. Each experiment was repeated \(10\) times to obtain the average result.

**Dataset.** For our experiments, we use two publicly available real-world datasets for the node classification task: Cora [17] and CiteSeer [17]. These two datasets are citation networks in which each publication is described by a binary-valued word vector indicating the absence/presence of the corresponding word in a collection of \(1,433\) and \(3,703\) unique words, respectively.
For each node classification dataset, we split \(20\%\) of the total nodes as the original training dataset (labeled), and the rest of the nodes are treated as the original testing dataset. To generate the backdoored training dataset, we sample \(10\%\) of the original training dataset to inject the feature trigger and relabel these nodes with the target label. The trigger size is set to \(5\%\) of the total number of node feature dimensions. We set these parameters as they provided the best results after a tuning phase.

**Models and training.** We use the popular GAT [20] and GCN [9] models, as these two are commonly used GNN models for the node classification task. We train the clean and backdoored GNN models with a learning rate of \(0.005\) and use Adam as the optimizer.

**Attack evaluation metrics.** To compare the attack performance of MIAS and LIAS, we utilize two commonly used backdoor attack evaluation metrics:

1. **Attack Success Rate** (ASR): measures the backdoor performance of the model on a fully poisoned dataset \(\hat{D}\). It is computed as \(ASR=\frac{\sum_{i=1}^{N}\mathbb{I}(\hat{\theta}(\hat{x}_{i})=y_{t})}{N}\), where \(\hat{\theta}\) is the poisoned model, \(\hat{x}_{i}\in\hat{D}\) is a poisoned input, \(y_{t}\) is the target class, and \(\mathbb{I}\) is an indicator function.
2. **Clean Accuracy Drop** (CAD): measures the effect of the backdoor attack on the original task. It is calculated by comparing the performance of the poisoned and clean models on a clean holdout testing set. The accuracy drop should generally be small to keep the attack stealthy.

### _Results and Analysis_

**Results.** The backdoor attack results on the two graph datasets based on two models and two trigger-injecting strategies are shown in Fig. 3. In particular, the ASR and CAD of the two GNN models on the two datasets are presented in Table I.
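The two metrics can be computed directly from model predictions. A sketch in which predictions are passed as label arrays rather than evaluated from a model, which is our simplification:

```python
import numpy as np

def attack_success_rate(pred_on_poisoned, target_label):
    """Fraction of poisoned inputs classified as the target class (ASR)."""
    pred_on_poisoned = np.asarray(pred_on_poisoned)
    return np.mean(pred_on_poisoned == target_label)

def clean_accuracy_drop(pred_clean_model, pred_backdoored_model, y_true):
    """Accuracy of the clean model minus accuracy of the backdoored model,
    both measured on a clean holdout testing set (CAD)."""
    y_true = np.asarray(y_true)
    acc_clean = np.mean(np.asarray(pred_clean_model) == y_true)
    acc_backdoor = np.mean(np.asarray(pred_backdoored_model) == y_true)
    return acc_clean - acc_backdoor
```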
We can observe that both strategies achieve a high attack success rate, i.e., more than \(97\%\), except for GCN on the Cora dataset with MIAS. In addition, in most cases, the ASR of LIAS is slightly higher (around \(1\%\)) than that of MIAS. However, for the GCN model on the Cora dataset, the ASR of LIAS is significantly higher than that of MIAS, by more than \(8\%\). We can also see that the CAD for all datasets and models is unnoticeable, and the difference between the two strategies in CAD is negligible.

**Analysis.** Next, we investigate why the backdoor attack performance of LIAS is somewhat higher, or significantly higher for the GCN model on the Cora dataset, than that of MIAS. As mentioned in Section IV, we evaluate the similarity between the ordering of the new representative features and the old ones by calculating the recall score of the target mask and the predicted mask. The histogram of recall scores over the poisoned testing dataset for all datasets and models is shown in Fig. 4. We can observe that most poisoned testing samples have a recall score of more than \(0.5\) for both MIAS and LIAS, which results in a high attack success rate for both strategies. To further investigate the slight advantage of LIAS over MIAS, we split the poisoned testing samples into two parts, those successfully misclassified into the target class and those that are not, and compute the recall scores for these two parts, as shown in Fig. 5. We notice that, generally, the successfully misclassified nodes have significantly higher recall scores than those not misclassified into the target class. This phenomenon is consistent with the assumption mentioned in Section IV, i.e., a higher similarity between the ordering of the new representative features and that of the original ones indicates that the backdoored model recognizes the trigger pattern better. When comparing the second column and the last column of Fig.
4(b), 4(c), and 4(d), we also see that LIAS has fewer nodes with a low recall score than MIAS, which we believe is the reason for the higher ASR of LIAS. In contrast, we surprisingly see that for the GCN model on the Cora dataset with MIAS, the unsuccessfully misclassified nodes also have recall scores as high as the successfully misclassified nodes.

Fig. 2: An illustration of the backdoor attack and explanation framework.

Fig. 3: Backdoor attack results of two trigger-injecting strategies.

We assume that the main reason behind this is that, under MIAS, the feature trigger is injected into the positions of the most representative features. Thus the backdoored model will recognize not only the trigger pattern but also the representative feature pattern for the original label. Therefore, for MIAS, it is possible that even the poisoned testing samples that are not successfully misclassified into the target class have a high recall score. We verify this hypothesis by extending the target masks and predicted masks to twice the feature trigger length, i.e., \(2*n\), and computing the recall scores again.\({}^{2}\) The histogram of the new recall scores of the GCN model on the Cora dataset is shown in Fig. 6. We also checked the predictions of the backdoored model on the unsuccessfully misclassified nodes. The output indicates that all these nodes are classified into their original classes. Comparing Fig. 4(a) and Fig. 6, we observe that the recall scores of the successfully misclassified nodes generally drop to half of those without extended masks. We believe this is because, for these nodes, the backdoored model recognizes the trigger location exactly, and when we extend the masks to twice the trigger length, only half of the masked-in features can be recalled. However, we can also see that for MIAS, the recall scores of the unsuccessfully classified nodes remain as high as those without the extended masks.
This is because the backdoored model recognizes the feature pattern for the original label (that is why these nodes are classified into the original class and the attack is not successful), so even if the masks are extended, the recall score remains high.

Footnote 2: Here, we select an extension rate of 2. To verify the hypothesis, the extension rate can be set to \(\gamma>1\), and the recall scores of the successfully misclassified nodes are expected to reduce to \(1/\gamma\) of those without extended masks.

## VI Related Work

### _Backdoor Attacks in GNNs_

Several works have conducted backdoor attacks in GNNs. Zhang et al. presented a subgraph-based backdoor attack in GNNs for the graph classification task [31]. Xi et al. proposed a subgraph-based backdoor attack in GNNs for both node classification and graph classification tasks [24]. Xu et al. explored the trigger-injecting position for the graph backdoor attack [26], which is the work most closely related to ours. However, in that paper, the authors only provided assumptions about the results, and no experimental analysis was given to confirm the assumptions. In this work, we give an empirical analysis of the attack results, which leads to a further understanding of the backdoor attack behavior in GNNs.

### _Explainability in GNNs_

GNNs have become increasingly popular since many real-world data can be naturally represented as graphs, such as social networks, chemical molecules, and financial data [5, 32]. Consequently, numerous approaches have been proposed to explain the predictions of GNNs. Generally, these methods can be categorized into two mainstream lines of research. One is the parametric explanation methods that are widely used nowadays. For instance, GNNExplainer [27] learns soft masks for edges and node features to explain the predictions via mask optimization. The soft masks are randomly initialized and treated as trainable variables.
Luo et al. [12] proposed PGExplainer, which collectively explains multiple instances with a probabilistic graph generative model. XGNN [28] uses a graph generator to generate class-wise graph patterns to explain GNNs for each class. Vu et al. proposed PGM-Explainer, a Bayesian network on the pairs of graph perturbations and prediction changes [21]. The other line is the non-parametric explanation methods, which do not involve any additional trainable models. They employ heuristics like gradient-like scores obtained by backpropagation as the feature contributions of a specific instance [1, 14, 15].

\begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{5}{c}{MIAS} \\ \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{GCN} & \multicolumn{2}{c}{GAT} \\ \cline{2-5} & ASR \(\pm\) SD & CAD \(\pm\) SD & ASR \(\pm\) SD & CAD \(\pm\) SD \\ \hline Cora & \(90.08\%\pm 0.29\%\) & \(0.32\%\pm 0.19\%\) & \(97.91\%\pm 0.12\%\) & \(0.34\%\pm 0.24\%\) \\ CiteSeer & \(97.70\%\pm 0.10\%\) & \(0.32\%\pm 0.17\%\) & \(98.54\%\pm 0.09\%\) & \(0.71\%\pm 0.20\%\) \\ \hline \hline \multicolumn{5}{c}{LIAS} \\ \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{GCN} & \multicolumn{2}{c}{GAT} \\ \cline{2-5} & ASR \(\pm\) SD & CAD \(\pm\) SD & ASR \(\pm\) SD & CAD \(\pm\) SD \\ \hline Cora & \(98.65\%\pm 0.06\%\) & \(0.27\%\pm 0.21\%\) & \(99.89\%\pm 0.03\%\) & \(0.27\%\pm 0.21\%\) \\ CiteSeer & \(98.96\%\pm 0.07\%\) & \(0.15\%\pm 0.18\%\) & \(99.88\%\pm 0.03\%\) & \(0.80\%\pm 0.17\%\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Backdoor attack performance of MIAS and LIAS (SD: standard deviation).

Fig. 4: Histogram of recall scores over the poisoned testing dataset.

### _Explainability for Backdoor Attacks_

With the thriving development of explainability techniques in machine learning, the attacker can use model explanations to gain knowledge about the model to perform adversarial attacks [13]. Kuppa et al.
[10] used counterfactual explanations to find the malware features that most heavily impact the classifier decision. They used this knowledge to craft adversarial training samples that efficiently poison the model. Severi et al. [18] used SHAP to craft backdoor triggers in malware detectors. Utilizing the explanation, they determined which features to poison, resulting in a success rate up to three times higher than that of a greedy algorithm that does not use explainable artificial intelligence (XAI). Xu et al. [26] injected backdoors into GNNs by leveraging XAI techniques. While there has been an increasing number of studies on utilizing explanation techniques to implement backdoor attacks in deep learning models, there has been no research on using explanation tools to clarify backdoor attack behavior in the graph domain.

## VII Conclusion and Future Work

This paper presents a comprehensive analysis and explanation of graph backdoor attacks with two trigger-injecting strategies: MIAS and LIAS. We investigate the node classification task and compare the attack performance of these two strategies. Our findings show that LIAS always achieves higher attack performance than MIAS. We further explain the difference with a quantitative analysis, which contributes to a further understanding of backdoor attack behavior in GNNs. Future work will include explaining the backdoor attack behavior of the two trigger-injecting strategies in the graph classification task. More precisely, we would compute the similarity between the new representative subgraph and the old one by calculating the recall score of the target mask and the predicted mask.
2303.09385
ALMA detection of CO rotational line emission in red supergiant stars of the massive young star cluster RSGC1 -- Determination of a new mass-loss rate prescription for red supergiants
[Abridged] Aim: We aim to derive a new mass-loss rate prescription for RSGs that is not afflicted with some uncertainties inherent in preceding studies. Methods: We have observed CO rotational line emission towards a sample of RSGs in the open cluster RSGC1 that are all of similar initial mass. The ALMA CO(2-1) line detections allow us to retrieve the gas mass-loss rates (Mdot_CO). In contrast to mass-loss rates derived from the analysis of dust spectral features (Mdot_SED), the data allow a direct determination of the wind velocity and no uncertain dust-to-gas correction factor is needed. Results: Five RSGs in RSGC1 have been detected in CO(2-1). The retrieved Mdot_CO values are systematically lower than Mdot_SED. Although only five RSGs in RSGC1 have been detected, the data allow us to propose a new mass-loss rate relation for M-type red supergiants that is dependent on luminosity and initial mass. The new mass-loss rate relation is based on the new Mdot_CO values for the RSGs in RSGC1 and on prior Mdot_SED values for RSGs in 4 clusters, including RSGC1. The new Mdot prescription yields a good prediction for the mass-loss rate of some well-known Galactic RSGs that are observed in multiple CO rotational lines, including alpha Ori, mu Cep and VX Sgr. However, there are indications that a stronger, potentially eruptive, mass-loss process, different from that captured by our new mass-loss rate prescription, is occurring during some fraction of the RSG lifetime. Implementing a lower mass-loss rate in evolution codes for massive stars has important consequences for the nature of their end-state. A reduction of the RSG mass-loss rate implies that quiescent RSG mass loss is not enough to strip a single star's hydrogen-rich envelope. Upon core-collapse such single stars would explode as RSGs.
Leen Decin, Anita M. S. Richards, Pablo Marchant, Hugues Sana
2023-03-16T15:14:15Z
http://arxiv.org/abs/2303.09385v2
ALMA detection of CO rotational line emission in red supergiant stars of the massive young star cluster RSGC1 -- Determination of a new mass-loss rate prescription for red supergiants

###### Abstract

Context: The fate of stars depends largely on the amount of mass lost during the end stages of evolution. For single stars with initial mass between \(\sim\)8-30 M\({}_{\odot}\), most mass is lost during the red supergiant (RSG) phase, when stellar winds deplete the H-rich envelope. However, the RSG mass-loss rate (\(\dot{M}\)) is poorly understood theoretically, and so stellar evolution models rely on empirically derived mass-loss rate prescriptions. It has been shown, however, that these empirical relations differ widely, with differences of up to 2 orders of magnitude.

Aims: We aim to derive a new mass-loss rate prescription for RSGs that is not afflicted with some uncertainties inherent in preceding studies.

Methods: We have observed CO rotational line emission towards a sample of RSGs in the open cluster RSGC1 that are all of similar initial mass. The ALMA CO(2-1) line detections allow us to retrieve the gas mass-loss rates (\(\dot{M}_{\rm CO}\)). In contrast to mass-loss rates derived from the analysis of dust spectral features (\(\dot{M}_{\rm SED}\)), the data allow a direct determination of the wind velocity and no uncertain dust-to-gas correction factor is needed.

Results: Five RSGs in RSGC1 have been detected in CO(2-1). The retrieved \(\dot{M}_{\rm CO}\) values are systematically lower than \(\dot{M}_{\rm SED}\). Although only five RSGs in RSGC1 have been detected, the data allow us to propose a new mass-loss rate relation for M-type red supergiants with effective temperatures between \(\sim\)3200 - 3800 K that is dependent on luminosity and initial mass. The new mass-loss rate relation is based on the new \(\dot{M}_{\rm CO}\) values for the RSGs in RSGC1 and on prior \(\dot{M}_{\rm SED}\) values for RSGs in 4 clusters, including RSGC1.
The new \(\dot{M}\)-prescription yields a good prediction for the mass-loss rate of some well-known Galactic RSGs that are observed in multiple CO rotational lines, including \(\alpha\) Ori, \(\mu\) Cep and VX Sgr. However, there are indications that a stronger, potentially eruptive, mass-loss process, different from that captured by our new mass-loss rate prescription, is occurring during some fraction of the RSG lifetime.

Conclusions: Implementing a lower mass-loss rate in evolution codes for massive stars has important consequences for the nature of their end-state. A reduction of the RSG mass-loss rate implies that quiescent RSG mass loss is not enough to strip a single star's hydrogen-rich envelope. Upon core-collapse such single stars would explode as RSGs. Mass-loss rates of order \(\sim\)5 times higher would be needed to strip the H-rich envelope and produce a Wolf-Rayet star while evolving back to the blue side of the Hertzsprung-Russell diagram. Future observations of a larger sample of RSGs in open clusters should allow a more stringent determination of the \(\dot{M}_{\rm CO}\)-luminosity relation.

## 1 Introduction

The evolution of massive stars up to the point of supernova (SN) remains poorly understood. The steepness of the initial mass function and their short lifetimes (\(\sim\)15 Myr) make such stars rare, whilst the brevity of their post main-sequence (MS) evolution makes the direct progenitors of SNe rarer still. The pre-SN mass-loss behaviour is the key property that determines the appearance of the SN, since it dictates the extent to which the envelope is stripped prior to explosion. It also determines the nature of the end-state, i.e. complete disruption, neutron star, black hole, or total implosion with no supernova (e.g. Heger et al., 2003). The most common of the core-collapse SNe are of type IIP, which are observed to have Red Supergiants (RSGs) as their direct progenitors (e.g. Smartt, 2009).
However, the range of initial masses of these SN progenitors inferred from pre-explosion photometry, 8\(\leq\)\(M\)/M\({}_{\odot}\)\(\leq\)17 (Smartt, 2009), is at odds with conventional theory, which predicts that the upper mass limit should be closer to \(\sim\)30 M\({}_{\odot}\); this is referred to as the 'red supergiant problem' (e.g. Ekstrom et al., 2012). The likely cause of this tension between observation and theory is our relatively poor knowledge of RSG mass-loss rates. Mass loss during the RSG phase can affect the progenitors of SNe in two ways. Firstly, increased mass loss can strip the star of a substantial fraction of the envelope, causing the star to evolve back to the blue before SN (Georgy, 2012), and possibly depleting the stellar envelope of hydrogen (hence changing
2308.14918
Multi-site Integrated Optical Addressing of Trapped Ions
One of the most effective ways to advance the performance of quantum computers and quantum sensors is to increase the number of qubits or quantum resources in the system. A major technical challenge that must be solved to realize this goal for trapped-ion systems is scaling the delivery of optical signals to many individual ions. In this paper we demonstrate an approach employing waveguides and multi-mode interferometer splitters to optically address multiple $^{171}\textrm{Yb}^+$ ions in a surface trap by delivering all wavelengths required for full qubit control. Measurements of hyperfine spectra and Rabi flopping were performed on the E2 clock transition, using integrated waveguides for delivering the light needed for Doppler cooling, state preparation, coherent operations, and detection. We describe the use of splitters to address multiple ions using a single optical input per wavelength and use them to demonstrate simultaneous Rabi flopping on two different transitions occurring at distinct trap sites. This work represents an important step towards the realization of scalable integrated photonics for atomic clocks and trapped-ion quantum information systems.
Joonhyuk Kwon, William J. Setzer, Michael Gehl, Nicholas Karl, Jay Van Der Wall, Ryan Law, Matthew G. Blain, Daniel Stick, Hayden J. McGuinness
2023-08-28T22:28:07Z
http://arxiv.org/abs/2308.14918v3
# Multi-site Integrated Optical Addressing of Trapped Ions

###### Abstract

One of the most effective ways to advance the performance of quantum computers and quantum sensors is to increase the number of qubits or quantum resources used by the system. A major technical challenge that must be solved to realize this goal for trapped-ion systems is scaling the delivery of optical signals to many individual ions. In this paper we demonstrate an approach employing waveguides and multi-mode interferometer splitters to optically address multiple \({}^{171}\)Yb\({}^{+}\) ions in a surface trap by delivering all wavelengths required for full qubit control. Measurements of hyperfine spectroscopy and Rabi flopping were performed on the E2 clock transition, using integrated waveguides for delivering the light needed for Doppler cooling, state preparation, coherent operations, and detection. We describe the use of splitters to address multiple ions using a single optical input per wavelength and use them to demonstrate simultaneous Rabi flopping on two different transitions occurring at distinct trap sites. This work represents an important step towards the realization of scalable integrated photonics for atomic clocks and trapped-ion quantum information systems.

## I Introduction

Since their initial realization, ions stored in RF Paul traps [1] have provided an effective platform for quantum information due to their stability [2], long coherence times [3], and high fidelities [4]. In addition to computing [5; 6], ion traps have also been used for atomic clocks [7; 8], quantum networks [9; 10], quantum sensors [11], and fundamental science [12; 13]. The development of surface traps [14; 15] furthered the potential for advancing these applications by enabling scaling to more ions. Using microfabrication capabilities originally developed by the semiconductor industry, surface ion traps were fabricated that could support many-ion quantum computers [16; 17].
These traps also provided a convenient platform for integrating other electronic and optical technologies, which is necessary for the individual addressing and readout of growing numbers of ions. These technologies include photonic waveguides and grating couplers [18; 19; 20; 21], detectors [22; 23; 24], and modulators [25], all of which combined can support quantum systems with many ions as well as applications which require low size, weight, and power (SWaP). While many of these proof-of-principle experiments focused on the operation of a single ion site, simultaneous independent control of multiple ions at several trap sites had until now not been realized. Here, we demonstrate a room-temperature ion trap and simultaneous optical addressing of three \({}^{171}\)Yb\({}^{+}\) ions in independent wells, using only light delivered through waveguides. The light addressing two of the three ions comes from a single split input, forming a "multi-ion ensemble". Multi-mode interferometer (MMI) splitters are employed to equally divide the light from the input waveguide into two separate waveguides, which are routed to output couplers individually addressing the trapped ions. This technique could be employed to deliver light to a much larger number of ions and support the scalability of future quantum devices. Ensembles of ions are particularly interesting for atomic clocks, where it has been shown that separately interrogated ensembles of ions can achieve sensitivities that scale as \((\alpha N)^{-m/2}\), where \(N\) is the number of ions in each of \(m\) ensembles, and \(\alpha\) is a protocol-dependent constant [26]. In this experiment we performed hyperfine spectroscopy and Rabi flopping on the E2 clock transition, consisting of Doppler cooling, state preparation, coherent operations, and detection. Only the photoionization laser beams were delivered via free space. This required multiple wavelengths ranging from UV (369 nm) to NIR (935 nm).
We demonstrated the bluest wavelength used for integrated optical addressing to date, overcoming challenges related to the lithography of the grating couplers and propagation losses of the waveguides. Additionally, we measured simultaneous Rabi flopping for two different transitions occurring at different sites, which is a key capability for quantum systems that require magnetic field calibration. We also investigated optical crosstalk between sites.

## II Experimental setup

Fig. 1b shows a false-colored scanning electron microscope (SEM) image of multiple sites in the microfabricated surface trap. Each site was optically addressed by light from four output couplers, allowing for the necessary state manipulation needed for a clock based on \({}^{171}\)Yb\({}^{+}\). An energy level diagram of the relevant levels is shown in Fig. 1**e**. Ions were trapped \(50\,\mu\)m above the surface with a \(200\,\mu\)m spacing between trap sites. Light was coupled into the waveguides through input gratings located far from the trapping region and routed to the output gratings. Single-layer aluminium oxide waveguides were used for the UV light at 369 nm and 435 nm, while silicon nitride waveguides were used for 760 nm and 935 nm. These materials were chosen for their low loss at these wavelengths and for ease of fabrication. The gratings and waveguides for 369 nm and 435 nm were separately tested and a total loss of \(\approx\)-20 dB was measured. This value was dominated by insertion loss but also included the propagation loss of 1.35 dB/cm at 369 nm and 0.9 dB/cm at 435 nm. Light from four output couplers converged at each of three trap sites along the trap axis, allowing for three ions to be trapped and manipulated using only light from the integrated photonics.
Figure 1: **Experimental setup.** **a,** Conceptual illustration of a surface trap with ions and integrated beams. **b,** SEM image of the trapping region of the micro-fabricated surface trap, consisting of multi-color waveguide output couplers (oval structures within RF rails), electrodes (green) and RF rails (blue). Single photon avalanche diodes (SPADs) are located at the center of trapping sites and are shielded by a grounded metal mesh to protect the ion from electric field perturbations when the SPADs are operational (see Methods). **c,** SEM image of an MMI splitter that routes light to waveguide output couplers (indicated in blue & dark blue). Similar splitters are also applied for other colors. **d,** SEM image of grating output couplers. **e,** \({}^{171}\)Yb\({}^{+}\) energy level diagram, indicating the three different integrated wavelengths used in the experiment.

Ions were trapped at room temperature with a vacuum pressure of \(\approx 1\times 10^{-11}\) Torr. A magnetic field of 5.5 G (\(1.4\hat{\mathbf{x}}+0.7\hat{\mathbf{y}}+5.3\hat{\mathbf{z}}\)) was applied to optimize both ion fluorescence and coupling to the quadrupole transition. This bias magnetic field also broke the degeneracy of the Zeeman sublevels of the excited states. A defining feature of these integrated photonics is that the waveguides for two of the three trap sites were fed via a common input. For each wavelength, an integrated splitter, as shown in Fig. 1**c**, was used to equally divide the input light into two waveguide channels which were routed to the output gratings of the two sites. The MMI splitters were measured to have low optical losses of \(<0.2\) dB per splitter. Integrated splitters allow for the control of numerous trap sites via a single input, providing a path for supporting many-ion ensembles with only a few optical inputs.
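Using the quoted numbers (\(\approx\)-20 dB total grating/waveguide loss, 1.35 dB/cm propagation at 369 nm, and \(<0.2\) dB excess loss per MMI splitter), one can budget the optical power delivered to each site of a binary splitter tree. A back-of-the-envelope sketch; the tree depth and path length are illustrative parameters of ours, not values from the paper:

```python
import math

def per_site_loss_db(base_loss_db, prop_db_per_cm, path_cm,
                     splitter_loss_db, n_sites):
    """Total optical loss (dB) from input to one site of a binary MMI
    splitter tree: fixed insertion loss, propagation loss, the ideal
    1/n power division, and the measured excess loss per splitter stage."""
    stages = math.ceil(math.log2(n_sites)) if n_sites > 1 else 0
    split_db = 10 * math.log10(n_sites)       # ideal 1/n power division
    excess_db = stages * splitter_loss_db     # measured MMI excess loss
    return base_loss_db + prop_db_per_cm * path_cm + split_db + excess_db

def db_to_fraction(loss_db):
    """Fraction of input power remaining after loss_db of loss."""
    return 10 ** (-loss_db / 10)
```

For the two-site split reported here (ignoring path length), `per_site_loss_db(20, 1.35, 0, 0.2, 2)` gives about 23.2 dB, i.e. roughly 0.5% of the input power per site, which illustrates why the low excess loss of the MMI splitters matters for scaling to larger trees.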
## III Multi-ion trapping and waveguide characterization

State detection is particularly challenging in surface traps using integrated waveguides because light emitted from output couplers can scatter into the imaging system and overwhelm the ion fluorescence. In a standard detection scheme [19; 27], ions fluoresce at the same wavelength as the detection beams, so wavelength filters cannot distinguish between the two. The light output from a grating is highly polarized and therefore polarization filtering is possible; however, it was found to be insufficiently discriminating in this experiment. In addition to a conventional imaging system with both a CCD camera (Andor Zyla) and photo-multiplier tube (PMT, Hamamatsu H10682-210) for collecting fluorescence normal to the trap surface, we implemented side-imaging to collect light along the \(y\)-axis and avoid the excessive scattering in the vertical direction. This reduced the scattered light collected by the objective and obviated the need for polarization filtering. As a standard 6-inch octagonal vacuum chamber was used, the distance from the ion to the objective placed at a side viewport was larger by a factor of 5 compared to conventional overhead imaging, and therefore was much less efficient at collecting photons due to the reduced numerical aperture. With the side-imaging system, the objective lens collected and focused light onto either a CCD camera (Andor Luca) or a 32-channel linear PMT array (Hamamatsu H11659-200) with a multi-pinhole filter for the three waveguide sites. The \(100\,\mu\)m pinholes spatially filtered the ion fluorescence from background scattering to improve the signal-to-noise ratio (SNR) on the PMTs. Ultimately this arrangement achieved SNRs between 5 and 10 while supporting detection times on the order of milliseconds. Fig. 2**a** shows fluorescence images of the ions illuminated by both free-space and waveguide beams.
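The factor-of-5 working distance mentioned above has a quadratic cost in photon collection: for a fixed objective aperture, the collected solid angle falls off as the square of the distance. A minimal sketch (the aperture radius and overhead distance are arbitrary placeholders, not values from the paper; only their ratio matters):

```python
def collection_fraction(aperture_radius_m, distance_m):
    """Fraction of the full 4*pi sr subtended by a circular aperture,
    in the small-angle approximation: Omega/(4*pi) ~ (r/d)^2 / 4."""
    return (aperture_radius_m / distance_m) ** 2 / 4.0

r = 25e-3                 # placeholder aperture radius (assumption)
d_overhead = 0.05         # placeholder overhead working distance (assumption)
d_side = 5 * d_overhead   # factor-of-5 increase stated in the text

ratio = collection_fraction(r, d_overhead) / collection_fraction(r, d_side)
print(ratio)  # side imaging collects ~25x fewer photons for the same aperture
```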
Each waveguide-controlled ion, labeled (i) to (iii), was separated by a distance of \(200\,\mu\)m and placed in a local potential minimum created by the surrounding electrodes. To trap multiple ions simultaneously, we loaded each ion sequentially. The first ion was loaded at the loading hole position (marked with a red circle in Fig. 2**a**) using free-space beams (Doppler and repump) with a single-site potential solution (see Methods). This loaded ion was then shuttled towards the far-right position (iii) by applying the appropriate voltage waveforms to the interior control electrodes. This shuttling process was carried out without the use of cooling beams between sites, but once the ion reached the target position it was cooled by light from the output gratings. We subsequently loaded new ions by applying an additional potential minimum and repeating this process until three ions were stored at locations (i) to (iii). While the Rabi flopping measurements in this paper used PMTs, we also fabricated SPADs at each ion-trapping site for future integrated detection. We shuttled and detected ions at the SPAD sites even while the SPADs were operating. The applied control and RF voltages produced an axial frequency of \(2\pi\times 1.02\) MHz and a radial frequency of \(2\pi\times 3.52\) MHz, corresponding to a radial trap depth of 71 meV. Positioning the DC electrodes inside the RF rails led to a significant increase in DC electrode efficacy, such that static voltages within \(\pm 1\) V were sufficient for achieving a 1 MHz axial frequency. The 369 nm and 935 nm beams were used for cooling, state preparation, and detection of the ions, while the 435 nm beam was used for performing coherent operations on the E2 transition. Waveguides for delivering 760 nm repump light were fabricated but never used. This wavelength can be used for depopulating the F state, which is helpful for increasing ion lifetimes but not strictly necessary.
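For scale, the 71 meV radial trap depth is enormous compared with the thermal energy of a Doppler-cooled ion; expressing the depth as a temperature makes this explicit. In this sketch, only the 71 meV figure comes from the text; the ~0.5 mK Doppler temperature is a typical order of magnitude assumed for illustration.

```python
# Express the quoted 71 meV radial trap depth as an equivalent temperature.
K_B_EV = 8.617333262e-5   # Boltzmann constant in eV/K

depth_ev = 71e-3                  # radial trap depth (from text)
depth_kelvin = depth_ev / K_B_EV  # ~820 K

doppler_temp_k = 0.5e-3           # ~0.5 mK, assumed typical Doppler temperature
print(depth_kelvin, depth_kelvin / doppler_temp_k)
# The depth exceeds the ion's thermal energy by roughly six orders of
# magnitude, which is why the "trap depth >> thermal energy" assumption
# invoked for the Rabi-flop model in the Methods holds easily.
```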
All waveguide output gratings were designed to emit beams that overlapped at the trapping site \(50\,\mu\)m above the surface. While the 369 nm and 435 nm gratings were designed to focus at the ion's location, the repump beams were intentionally unfocused in order to ensure they overlapped with the ion position. Fig. 2**b** shows the measured waveguide profiles for 369 nm and 435 nm in \(\hat{\mathbf{x}}\) and \(\hat{\mathbf{z}}\). The height of the ion (\(z\)) was controlled by adding an electric shim field in \(\hat{\mathbf{z}}\), with the resulting distances measured using the side-imaging camera. As shown in Fig. 2**c**, the two beam profiles have closely overlapping foci near the \(50\,\mu\)m ion height, and near the center of the integrated SPADs. The 369 nm integrated beam profile in Fig. 2**c** was measured using ion fluorescence well below saturation intensity, while ensuring that the signal was not cut off by the side-imaging pinholes. For the 435 nm quadrupole beam, the beam profile was characterized by measuring the Rabi frequency as a function of ion position. In both cases the profiles were fit to a Gaussian, showing full-width-half-maximum beam widths of \(5.26\,\mu\)m and \(5.25\,\mu\)m respectively. The in-situ measurements of the integrated beams agreed well with independent microscope measurements. Fig. 2**d** shows the theoretically simulated waveguide output coupler profiles for the 369 nm and 435 nm beams, where the beams overlap around \(50\,\mu\)m as designed. The \(x^{\prime}\)-axis in this figure is not the original \(x\)-axis but instead corresponds to the projection of the \(x\)-axis on a plane that is perpendicular to the trap surface and runs through the centers of the gratings.

## IV Simultaneous multi-ion/multi-state addressing

In these experiments, ensembles consisting of 2+1 trapped ions (2 ions with shared waveguide inputs and a single ion with independent control) were probed simultaneously.
Figure 2: **Multi-site optical addressing and integrated beam profiles**. **a,** Image of multiple trapped ions taken with a CCD camera from the side of the trap. The top image shows three waveguide-trapped ions, with the left two using light split from a single source. The fourth ion, highlighted with the dashed red circle, was trapped with free-space beams at the loading site. **b,** Measured integrated beam profiles for the 369 nm and 435 nm beams using ion fluorescence and Rabi frequency. A 3D contour plot shows normalized intensity along the x-z plane. The \(x\)-axis origin is at the center of the middle SPAD, consistent with the coordinates in part **a**. The beams intersected at \(50\,\mu\)m from the surface and near the center of the SPAD. **c,** Measured profiles along the trap axis of the 369 nm and 435 nm beams at the height corresponding to the optimized interaction strength (used in Fig. 3). **d,** Simulated waveguide profiles for Doppler and transition beams. The \(x^{\prime}\)-axis in this part is along the line between waveguide gratings. On the \(y\)-axis, 0 is the trap surface.

Each ion was first optically pumped to \(\left|2\mathrm{S}_{1/2},F=0\right\rangle\), and then a 435 nm laser resonant with the quadrupole transition was used to demonstrate Rabi flopping on the transition to \(\left|2\mathrm{D}_{3/2},F=2\right\rangle\). The ion positions for the split beams (blue traces) were set to maximize the interaction strength for each ion within each 435 nm beam. Ideally, this would correspond to the maximum intensity, but other factors like micromotion could slightly offset these optima. The two ions exhibited similar Rabi rates (\(\delta\Omega=0.064(4)\Omega\)), indicating a near 50/50 split for the MMI. Though not necessary in this case, the interaction strengths could have been fine-tuned by adjusting the position of each ion to equalize their Rabi rates. The orange data indicates the independently addressed ion. Each data set was fit with a single-ion, two-state transition model that accounted for the average motional quanta \(\bar{n}\) of the ion in the Lamb-Dicke regime. In practical realizations of quantum computers, quantum sensors, or atomic clocks, it may be beneficial to probe multiple transitions with different sensitivities in order to calibrate magnetic fields. To demonstrate this capability, we simultaneously probed two different transitions (one with each ensemble) as shown in Fig. 3**c**. This measurement was similar to Fig. 3**b**, but with a different frequency used for the ions addressed by the split waveguides. These ions (dark and light blue) transitioned from \(\left|2\text{S}_{1/2},F=0,m_{\text{F}}=0\right\rangle\) to \(\left|2\text{D}_{3/2},F=2,m_{\text{F}}=0\right\rangle\) (the \({}^{171}\)Yb\({}^{+}\) clock transition), while the single-waveguide ion transitioned from \(\left|2\text{S}_{1/2},F=0,m_{\text{F}}=0\right\rangle\) to \(\left|2\text{D}_{3/2},F=2,m_{\text{F}}=-1\right\rangle\). Due to their different coupling coefficients, they exhibited different Rabi rates. The \(\delta m=0\) transition frequency is insensitive to magnetic fields, so the two Rabi rates are naturally well-overlapped (the difference in Rabi rates is \(\delta\Omega=0.039(4)\Omega\)). This further confirmed the near 50/50 power split performance of the MMI splitter, as the Rabi rates became closer when the transition was not susceptible to spatial magnetic field gradients. The ions were not cooled to the motional ground state in this experiment (\(\bar{n}\approx 30\), based on previous Rabi fits), which leads to decay in the Rabi contrast as pulse time increases [29]. The higher \(\bar{n}\) observed may be due to less optimal Doppler cooling caused by the fixed polarization and \(k\)-vector of the light emitted from the output coupler relative to the applied magnetic field, based on comparisons to the lower \(\bar{n}\) measured with free-space beams.
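Because the carrier Rabi rate scales with the optical field amplitude, \(\Omega\propto\sqrt{I}\), the measured Rabi-rate imbalance can be converted into an estimate of the MMI power split. This sketch uses the \(\delta\Omega=0.064\,\Omega\) figure quoted above and assumes the two ions sit at equivalent points in their respective beams:

```python
# Estimate the MMI power-split ratio from the measured Rabi-rate imbalance.
# Since Omega ∝ sqrt(I), the intensity ratio is I1/I2 = (Omega1/Omega2)^2.
delta = 0.064   # fractional Rabi-rate difference, delta_Omega / Omega (from text)

# Model Omega1 = (1 + delta/2)*Omega and Omega2 = (1 - delta/2)*Omega.
ratio = ((1 + delta / 2) / (1 - delta / 2)) ** 2   # intensity ratio I1/I2
split_1 = ratio / (1 + ratio)                      # fraction of power to port 1

print(f"I1/I2 = {ratio:.3f}, split = {split_1*100:.1f}/{(1 - split_1)*100:.1f}")
# A ~53/47 split, consistent with the "near 50/50" conclusion in the text.
```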
The detection fidelity achieved in this experiment is lower than typical, primarily due to collection efficiency limitations of the side-imaging system and the higher \(\bar{n}\). Our chamber setup unavoidably increased the distance between the side-imaging objective lens and the trapped ion, making it challenging to use high numerical aperture objective lenses. Additionally, while greatly improved by using side-imaging, there remained some scattering from the waveguide output couplers that contributed to background noise contamination.

Figure 3: **Coherent qubit operations**. **a,** Schematic of split and single waveguide output grating couplers. Blue output grating couplers are routed from the splitter and share the same input, whereas orange is independent. **b,** Simultaneous Rabi flopping measurements of the three waveguide-trapped ions. The trace colors correspond to the similarly colored gratings in **a**. The average ion population of the D-state (\(m_{\text{F}}=-1\)) was measured as the 435 nm pulse time varies. The Rabi flops are fit with a single-ion decoherence Rabi oscillation formula in the Lamb-Dicke regime [28] (see Methods). **c,** Same as **b** but with the 435 nm light tuned to the (\(m_{\text{F}}=0\)) state for the two ion ensemble (light and dark blue), while the 435 nm light for the single ion (orange) remained tuned to the \(m_{\text{F}}=-1\) state. Each data point is the average of three trials of 200 measurements of state-detection for each trace. Error bars show the standard error of the mean.

## V Cross-talk characterization

An important consideration in any multi-ion system is optical crosstalk, as unintended light on neighboring ions introduces errors in the clock or qubit. Crosstalk errors for integrated addressing of a single ion in a chain have been measured as a function of displacement from the center of the beam profile [18], as well as in a multi-zone trap [30]. To measure crosstalk, light was applied at one site while Rabi transitions were measured at all sites. The ion(s) of a single ensemble were probed using the \(\left|2\text{S}_{1/2},F=0,m_{\text{F}}=0\right\rangle\) to \(\left|2\text{D}_{3/2},F=2,m_{\text{F}}=-1\right\rangle\) transition, in the same way as Fig. 3**b**, while the corresponding optical crosstalk on the ion(s) of the other ensemble was observed. Fig. 4**a** shows that addressing the single ion ensemble resulted in a Rabi frequency \(\Omega_{0}\) for the single ion and a crosstalk-induced Rabi frequency \(\Omega_{c}\) on the closest ion of the other ensemble that was \(\sim 0.05\Omega_{0}\). This corresponds to a relative crosstalk intensity of \(I_{c}/I_{0}=0.0026(1)\), where \(I_{0}\propto\Omega_{0}^{2}\) is the driven intensity and \(I_{c}\) is the crosstalk-induced intensity. It is notable that only one ion of the ensemble indicates optical crosstalk. As shown in Fig. 4**a** (dark blue), crosstalk between further ions was not observed (to within the noise floor). This proximity dependence likely means that the cause was not the input couplers or waveguide-to-waveguide coupling on chip (see Methods), but rather scattered light from the output coupler. Fig. 4**b** shows the data for the opposite case, where the two ion ensemble was addressed. The single ion measured a relative crosstalk intensity of \(I_{c}/I_{0}=0.0036(2)\). The two measurements of crosstalk have similar intensities, which suggests that the source of the crosstalk has little directional dependence and could be caused by diffuse scatter from the output couplers.

## VI Toward a Fully-Integrated System

Our trap was equipped with SPADs at each trap site for integrated single-site detection in future experiments. The SPADs integrated in this trap were the same as previously designed [22]; however, an aluminum mesh was added to shield the ion from the electric field of the SPAD (see Methods).
This change allowed for trapping directly above the SPAD while it was active. We operated the SPAD independently and achieved a count rate of over 7 kilocounts per second (kcps) from one quadrant of the SPAD, with dark counts measuring below 1 kcps. These results are better than the SPAD integration performance previously reported for \({}^{174}\)Yb\({}^{+}\) [22]. Unfortunately, scattered light originating from the waveguides and output couplers propagated through the various layers of the device, saturating the SPADs and prohibiting integrated state-readout. In the future, features such as light baffles could mitigate this scatter and enable fully integrated addressing and state-readout on the same device.

## VII Conclusions

In this paper, we demonstrated individual addressing of trapped ions at multiple sites using MMI waveguide splitters and fully integrated multi-color waveguides for cooling, state preparation, coherent operations and detection. The combination of integrated waveguide delivery and multi-ion control with single-channel optical inputs shows a path towards a scalable trapped-ion system. The number of ions in an ensemble could be greatly increased with more split channels in comparison to the present work. A near-term application of this work is a compact and portable clock with greater robustness due to the minimization of optomechanics. Current state-of-the-art ion clocks are typically room-sized and not transportable. Multiple ensembles can be used to improve the accuracy of an optical ion clock by concurrently interrogating the clock transition with different interrogation times for different ensembles. Zero dead time [31] can also be realized with this system. Additionally, performance can be further enhanced by probing clock systematics [32; 33] while simultaneously operating the clock. Combining this work with micro-sized lasers on chip [34] could enable future compact and transportable clocks.
Figure 4: **Optical Crosstalk**. **a,** Measured crosstalk between the single ion ensemble (orange) and two ion ensemble (light and dark blue). Rabi oscillations with the D-state (\(m_F=-1\)) were measured while addressing the single ion ensemble (orange). Crosstalk was evident in the slow transition rate of the nearer ion (light blue) in the two ion ensemble, and was more attenuated for the more distant ion. **b,** A similar measurement as **a**, but instead addressing the two ion ensemble (light and dark blue). Crosstalk was evident in the slow transition rate of the single ion ensemble (orange). **c,** Relative crosstalk intensities of transition light. Ratios were calculated from fits of Rabi frequencies and plotted for each trap site.

Another application is quantum computing, where it is expected that potentially millions of qubits [35] will be needed for a truly useful general-purpose quantum computer. It is difficult to imagine how this many ions will be optically addressed with free-space optics, hence the need for integrated delivery and possibly addressing multiple ions with single optical inputs.

**Acknowledgements.** We thank the members of Sandia's Microsystems and Engineering Sciences Application (MESA) facility for their fabrication expertise and for helpful comments on the manuscript. This work was supported by the Defense Advanced Research Projects Agency (DARPA). Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
**Author Contributions.** J.K., W.J.S., D.L.S. and H.J.M. conceived the experiment. J.K. and W.J.S. took the measurements and analyzed the data. M.G. designed the integrated optics while also performing waveguide tests and simulations with N.K. J.V.D.W. developed the multi-ion solution. R.L. set up the Doppler laser system. J.K., W.J.S. and H.J.M. wrote the original manuscript. All the authors reviewed and edited the manuscript. H.J.M. supervised the project.

## References

* Paul and Steinwedel [1953]W. Paul and H. Steinwedel, Zeitschrift für Naturforschung A **8**, 448 (1953).
* Wu _et al._ [2021]H. Wu, M. Mills, E. West, M. C. Heaven, and E. R. Hudson, Phys. Rev. A **104**, 063103 (2021).
* Wang _et al._ [2021]P. Wang, C.-Y. Luan, M. Qiao, M. Um, J. Zhang, Y. Wang, X. Yuan, M. Gu, J. Zhang, and K. Kim, Nat. Commun. **12**, 233 (2021).
* Clark _et al._ [2021]C. R. Clark, H. N. Tinkey, B. C. Sawyer, A. M. Meier, K. A. Burkhardt, C. M. Seck, C. M. Shappert, N. D. Guise, C. E. Volin, S. D. Fallek, et al., Phys. Rev. Lett. **127**, 130505 (2021).
* Srinivas _et al._ [2021]R. Srinivas, S. C. Burd, H. M. Knaack, R. T. Sutherland, A. Kwiatkowski, S. Glancy, E. Knill, D. J. Wineland, D. Leibfried, A. C. Wilson, et al., Nature **597**, 209 (2021).
* Leung _et al._ [2018]P. H. Leung, K. A. Landsman, C. Figgatt, N. M. Linke, C. Monroe, and K. R. Brown, Phys. Rev. Lett. **120**, 020501 (2018).
* Brewer _et al._ [2019]S. M. Brewer, J.-S. Chen, A. M. Hankin, E. R. Clements, C. W. Chou, D. J. Wineland, D. B. Hume, and D. R. Leibrandt, Phys. Rev. Lett. **123**, 033201 (2019).
* Burt _et al._ [2021]E. A. Burt, J. D. Prestage, R. L. Tjoelker, D. G. Enzer, D. Kuang, D. W. Murphy, D. E. Robison, J. M. Seubert, R. T. Wang, and T. A. Ely, Nature **595**, 43 (2021).
* Moehring _et al._ [2007]D. Moehring, P. Maunz, S. Olmschenk, K. Younge, D. Matsukevich, L.-M. Duan, and C. Monroe, Nature (2007).
* Nichol _et al._ [2022]B. Nichol, R. Srinivas, D. Nadlinger, P. Drmota, D. Main, G. Araneda, C.
Ballance, and D. Lucas, Nature **609** (2022).
* Marciniak _et al._ [2022]C. D. Marciniak, T. Feldker, I. Pogorelov, R. Kaubrugger, D. V. Vasilyev, R. van Bijnen, P. Schindler, P. Zoller, R. Blatt, and T. Monz, Nature **603**, 604 (2022).
* Zhang _et al._ [2018]X. Zhang, K. Zhang, Y. Shen, S. Zhang, J.-N. Zhang, M.-H. Yung, J. Casanova, J. S. Pedernales, L. Lamata, E. Solano, et al., Nat. Commun. **9**, 195 (2018).
* Pinkas _et al._ [2023]M. Pinkas, O. Katz, J. Wengrowicz, N. Akerman, and R. Ozeri, Nat. Phys. (2023).
* Chiaverini _et al._ [2005]J. Chiaverini, R. Blakestad, J. Britton, J. Jost, C. Langer, D. Leibfried, R. Ozeri, and D. Wineland, Quantum Info. Comput. **5**, 419 (2005).
* Blain _et al._ [2021]M. G. Blain, R. Haltli, P. Maunz, C. D. Nordquist, M. Revelle, and D. Stick, Quantum Sci. Technol. **6**, 034011 (2021).
* Pino _et al._ [2021]J. M. Pino, J. M. Dreiling, C. Figgatt, J. P. Gaebler, S. A. Moses, M. S. Allman, C. H. Baldwin, M. Foss-Feig, D. Hayes, K. Mayer, et al., Nature **592**, 209 (2021).
* Noel _et al._ [2022]C. Noel, P. Niroula, D. Zhu, A. Risinger, L. Egan, D. Biswas, M. Cetina, A. V. Gorshkov, M. J. Gullans, D. A. Huse, et al., Nat. Phys. **18**, 760 (2022).
* Mehta _et al._ [2016]K. Mehta, C. Bruzewicz, R. McConnell, R. J. Ram, J. M. Sage, and J. Chiaverini, Nat. Nanotechnol. **11**, 1066 (2016).
* Niffenegger _et al._ [2020]R. J. Niffenegger, J. Stuart, C. Sorace-Agaskar, D. Kharas, S. Bramhavar, C. D. Bruzewicz, W. Loh, R. T. Maxson, R. McConnell, D. Reens, et al., Nature **586**, 538 (2020).
* Ivory _et al._ [2021]M. Ivory, W. J. Setzer, N. Karl, H. McGuinness, C. DeRose, M. Blain, D. Stick, M. Gehl, and L. P. Parazzoli, Phys. Rev. X **11**, 041033 (2021).
* Vasquez _et al._ [2023]A. R. Vasquez, C. Mordini, C. Verniere, M. Stadler, M. Malinowski, C. Zhang, D. Kienzler, K. K. Mehta, and J. P. Home, Phys. Rev. Lett. **130**, 133201 (2023).
* Setzer _et al._ [2021]W. J. Setzer, M. Ivory, O. Slobodyan, J. W. Van Der Wall, L. P.
Parazzoli, D. Stick, M. Gehl, M. G. Blain, R. R. Kay, and H. J. McGuinness, Appl. Phys. Lett. **119**, 154002 (2021). * Reens _et al._ [2022]D. Reens, M. Collins, J. Ciampi, D. Kharas, B. F. Aull, K. Donlon, C. D. Bruzewicz, B. Felton, J. Stuart, R. J. Niffenegger, et al., Phys. Rev. Lett. **129**, 100502 (2022). * Todaro _et al._ [2021]S. L. Todaro, V. B. Verma, K. C. McCormick, D. T. C. Allcock, R. P. Mirin, D. J. Wineland, S. W. Nam, A. C. Wilson, D. Leibfried, and D. H. Slichter, Phys. Rev. Lett. **126**, 010501 (2021). * Hogle _et al._ [2023]C. W. Hogle, D. Dominguez, M. Dong, A. Leenheer, H. J. McGuinness, B. P. Ruzic, M. Eichenfield, and D. Stick, Npj Quantum Inf. **9**, 74 (2023). * Borregaard and Sorensen [2013]J. Borregaard and A. Sorensen, Phys. Rev. Lett. **111**, 090802 (2013). * Streed _et al._ [2011]E. W. Streed, B. G. Norton, A. Jechow, T. J. Weinhold, and D. Kielpinski, Phys. Rev. Lett. **106**, 010502 (2011). * Semennin _et al._ [2022]N. Semennin, A. Borisenko, I. Zalivako, I. Semerikov, M. Aksenov, K. Khabarova, and N. Kolachevsky, JETP Lett. **116**, 77 (2022). * Wineland _et al._ [1998]D. J. Wineland, C. Monroe, W. M. Itano, D. Leibfried, B. E. King, and D. M. Meekhof, J. Res. Natl. Inst. Stand. Technol. **103**, 259 (1998). * Mehta _et al._ [2020]K. Mehta, C. Zhang, M. Malinowski, T. Nguyen, M. Stadler, and J. Home, Nature **586** (2020). * Schioppo _et al._ [2017]M. Schioppo, R. C. Brown, W. F. McGrew, N. Hinkley, R. J. Fasano, K. Beloy, T. H. Yoon, G. Milani, D. Nicolodi, J. A. Sherman, et al., Nat. Photonics **11**, 48 (2017), 1607.06867. * Rosenband and Leibrandt [2013]T. Rosenband and D. Leibrandt (2013), arXiv:1303.6357 [quant-ph]. * Kim _et al._ [2022]M. Kim, W. McGrew, N. Nardelli, E. Clements, Y. Hassan, X. Zhang, J. Valencia, H. Leopardi, D. Hume, T. Fortier, et al., Nat. Phys. **19**, 1 (2022). * Elshaari _et al._ [2020]A. W. Elshaari, W. Pernice, K. Srinivasan, O. Benson, and V. Zwiller, Nat. Photonics **14**, 285 (2020). 
* Reiher _et al._ [2017]M. Reiher, N. Wiebe, K. M. Svore, D. Wecker, and M. Troyer, Proc. Natl. Acad. Sci. **114**, 7555 (2017).

## Methods

### Trap design and fabrication

This trap was designed and fabricated at Sandia National Laboratories. It shares features common to other traps with integrated photonics fabricated at Sandia [1; 2]. One of the features of this trap is the combination of integrated components for both optical addressing with waveguides and MMI splitters as well as state readout with SPADs. The SPADs could not be operated when light was delivered via integrated waveguides, as the scatter from those components overwhelmed the ion signal on the SPADs. We implemented a grounded metal mesh on the top of the SPAD to reduce unpredictable charging effects [1] and consequent shuttling problems. It serves a similar function to an optically transparent and electrically conductive coating (e.g. indium tin oxide). The mesh allowed the ion to be transported over the entire aperture of the SPAD while the SPAD was active. This was not possible in previous Sandia-fabricated SPAD traps [2]. The mesh was designed to be \(1\,\mu\)m thick with many square \(3\,\mu\)m \(\times\)\(3\,\mu\)m openings to allow ion fluorescence to pass through, as can be seen in Fig. 5. The mesh blocks roughly 45% of the ion fluorescence that would otherwise impinge upon the SPAD, and based on electrostatic simulations would reduce the electric field at the ion by 16 dB compared to not having a screen.

### Experimental Procedures

Ion trapping. The trap has several holes that penetrate the entire thickness of the trap and its substrate to support loading ions from ovens on the backside of the trap, to avoid electrode contamination. Ytterbium metal is placed in a stainless steel tube directly below the surface trap. When sufficient current flows through the tube, a fraction of the ytterbium metal vaporizes and atoms are sent towards the trap surface through the loading holes.
The atoms are ionized using a two-photon process with laser beams at 393 nm and 399 nm, at which point ytterbium ions are electrically confined by the RF and DC fields created by the trap electrodes. The trap potential well is positioned \(50\,\mu\)m above the chip's surface. Initially, ions were trapped using free-space Doppler (369 nm) and repump (935 nm) beams. The \({}^{171}\)Yb\({}^{+}\) ions in the \(\left|2\mathrm{S}_{1/2},F=1\right\rangle\) state undergo cooling transitions to the \(\left|2\mathrm{P}_{1/2},F=0\right\rangle\) state via the Doppler beam. Additional sidebands of 2.1 GHz and 14.7 GHz can be applied to the Doppler beam using electro-optic modulators (EOMs), which are necessary for optical pumping and Doppler cooling. During the optical pumping sequence, the ion is prepared in the \(\left|2\mathrm{S}_{1/2},F=0,m_{\mathrm{F}}=0\right\rangle\) state by applying only the extra 2.1 GHz frequency to the standard Doppler frequency. This preparation can be verified through a modified detection step that only detects atoms in the \(F=1\) state. Free-space 435 nm beams are coupled to the input grating couplers of each waveguide. About 2 mW (4 mW for the split ensemble) of optical power is applied to the output waveguide grating. The light propagates through roughly 5 mm of waveguide before being emitted through an output grating (Fig. 1**d**), which gives a \(\pi\)-time of \(\sim 150\,\mu\)s. The motional heating rate \(\dot{\bar{n}}\) of the axial trap mode was measured using free-space beams. Using sideband thermometry [3] on the \(\left|2\mathrm{S}_{1/2},F=0,m_{\mathrm{F}}=0\right\rangle\) to \(\left|2\mathrm{D}_{3/2},F=2,m_{\mathrm{F}}=-1\right\rangle\) transition, a heating rate of \(\dot{\bar{n}}=2.9\) quanta/ms was measured for an ion that was initially cooled to \(\bar{n}\sim 2\).
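With the quoted heating rate, the motional occupation grows appreciably over the timescales of the coherent operations described above. A quick estimate, using only the \(\bar{n}\sim 2\) starting point and 2.9 quanta/ms heating rate from the text and assuming simple linear heating:

```python
# Estimate the mean motional occupation after a delay of duration t,
# assuming linear heating: n(t) = n0 + ndot * t.
n0 = 2.0      # initial occupation after cooling (from text)
ndot = 2.9    # heating rate in quanta/ms (from text)

for t_ms in (0.15, 1.0, 10.0):   # e.g. one ~150 us pi-pulse, 1 ms, 10 ms
    print(f"t = {t_ms} ms -> n ~ {n0 + ndot * t_ms:.2f}")
# A single ~150 us pi-pulse adds well under one quantum, while
# multi-millisecond sequences add tens of quanta.
```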
Figure 5: **Chip layout of SPAD system.** The green metal mesh is located on top of the SPAD (purple area), which is surrounded by nearby electrodes as depicted in Fig. 1**a**.

Side-imaging system. One of the main difficulties in integrated waveguide light delivery schemes is detection, since scattered photons from the grating output couplers can easily overwhelm the ion signal on the PMT detectors. Since the wavelength of detection is the same as the cooling wavelength, it is not possible to use a frequency filter to filter out scattered photons. To address this, a side-imaging technique was developed. The fluorescence from an ion was collected with a 2-inch objective lens from the side of the chamber, and guided to either a CCD camera or linear PMT array. The path lengths to the camera and PMT were nearly equal, so that their focus was comparable and could be easily adjusted by re-focusing the objective lens. A 1D PMT was used for simultaneous independent detection of multiple ions. The ion separation in the image plane was comparable to the cell spacing of the PMT array, so that each ion was detected by separate consecutive PMT pixels. To exclude scattered light from the trap, a pinhole array with \(100\,\mu\)m diameter holes was placed in front of the PMT array.

### Real-time multi-ion shuttling/trapping

Generating multi-ion trapping solutions for multiple locations. The DC voltage solutions for most ion trap experiments are created by constraining the electric field to be zero along three axes and the derivative of the electric field to have a specific positive value along one axis, all at the desired trapping location. This defines a well with a specific motional frequency. Since this problem is under-constrained and there are many possible solutions, other requirements (e.g. voltage limits on the electrodes) and desirements (e.g. applying voltages to fewer electrodes) are applied.
Small changes to compensate the ion for deviations in the local electric field were computed separately and applied additively. While this system served well for many uses, the need to precisely position multiple ions with respect to fixed laser beams, as well as the need to shuttle newly loaded ions into position, required a more flexible solution. Initially, a strategy of pre-computing all combinations of positions for all wells was attempted. Since the number of lines (corresponding to positions) in the voltage solution is exponential in the number of ions, this quickly became infeasible. The resolution to this problem was to create these solutions in real time. For the real-time multi-ion solution, the constraints were specified and then combined to form a linear system that could be solved. An input is provided defining the desired positions of all ions, along with the compensation fields at those positions. The system looks up the appropriate constraint matrices, concatenates them, and solves the resulting system. The time it takes to do so depends on the number of ions but is typically on the order of microseconds, which is fast enough for manual adjustment of positions and shuttling.

### Theoretical approaches

Rabi flop in the Lamb-Dicke regime. To quantify the average number of motional quanta in the system, we fit the experimentally obtained Rabi flop data to a dephasing model. First, we check whether our system is in the Lamb-Dicke regime (\(\sqrt{\bar{n}_{k}}\eta_{k}\ll 1\)). Here the Lamb-Dicke parameter \(\eta_{k}\) is defined as \(\eta_{k}=\frac{2\pi}{\lambda}\cos\theta\sqrt{\hbar/(2m\omega_{k})}\), where \(\lambda=435\) nm is the wavelength, \(m\) is the mass of a ytterbium ion, \(\hbar\) is the reduced Planck constant, \(\theta\) is the beam angle, and \(\omega_{k}\) is the secular motional frequency. With the given parameters, we confirmed that \(\sqrt{\bar{n}_{k}}\eta_{k}\ll 1\) if \(\bar{n}<50\), which is valid in our experiment.
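The Lamb-Dicke check can be reproduced numerically. The sketch below uses the 435 nm wavelength and the \(2\pi\times 1.02\) MHz axial frequency quoted in the main text, takes \(\cos\theta=1\) as a worst case (the actual beam angle is not specified here), and approximates the ion mass as 171 u:

```python
import math

HBAR = 1.054571817e-34    # reduced Planck constant, J*s
AMU = 1.66053906660e-27   # atomic mass unit, kg

def lamb_dicke(wavelength_m, omega_rad_s, mass_kg, cos_theta=1.0):
    """eta = (2*pi/lambda) * cos(theta) * sqrt(hbar / (2*m*omega))."""
    return (2 * math.pi / wavelength_m) * cos_theta * math.sqrt(
        HBAR / (2 * mass_kg * omega_rad_s))

m_yb = 171 * AMU                  # approximate 171Yb+ mass (assumption)
omega_ax = 2 * math.pi * 1.02e6   # axial secular frequency (from text)
eta = lamb_dicke(435e-9, omega_ax, m_yb)

print(eta)                  # ~0.08 for the axial mode at worst-case angle
print(math.sqrt(50) * eta)  # ~0.55 < 1, consistent with the n-bar < 50 criterion
```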
Because the depths of ion traps are much larger than the thermal energy of the ion, we fit the Rabi carrier function in the Lamb-Dicke regime, with the total population of the excited state \(P(t)\) given by [4] \[P(t)=\frac{a}{2}\left(1-\frac{f_{1}(t)}{f_{2}(t)}\right) \tag{1}\] where \[f_{1}(t)=\mathrm{Re}\left[e^{i\Omega_{0}t}\prod_{k=1}^{N}e^{-i\Omega_{0}\eta_{k}^{2}t/2}\left(1-\frac{\bar{n}e^{i\Omega_{0}\eta_{k}^{2}t}}{\bar{n}+1}\right)\right] \tag{2}\] and \[f_{2}(t)=\prod_{k=1}^{N}\left((\bar{n}+1)-2\bar{n}\cos\left(\Omega_{0}\eta_{k}^{2}t\right)+\frac{\bar{n}^{2}}{\bar{n}+1}\right) \tag{3}\] for generalized \(N\) normal vibrational modes, characterized by \(N\) secular frequencies \(\omega_{k}\). Here \(a\) is the initial population of the ground state, \(\Omega_{0}\) is the Rabi frequency of the ion at rest, and \(\eta_{k}\) is the Lamb-Dicke parameter for the given ion in the \(k\)th mode.

Crosstalk - evanescent coupling calculation. We can estimate the evanescent coupling between waveguides by considering two parallel waveguides and their symmetric and anti-symmetric supermodes. The coupling length can be calculated from the difference in effective refractive index between the two coupled modes, whose refractive index is around \(n\sim 1.51\). For the parameters in our device, where the waveguides are separated by \(6.26\,\mu\)m over a distance of \(1.7\) mm, the evanescent coupling efficiency is on the order of \(10^{-25}\), which is negligible. We thus conclude that the observed crosstalk is not caused by evanescent coupling, but rather by random scattering from the chip surface.
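The fitting model of Eqs. (1)-(3) is straightforward to implement. Below is a minimal single-mode (\(N=1\)) sketch; the example values (a \(\pi\)-time of \(\sim 150\,\mu\)s, \(\bar{n}=30\), \(\eta\approx 0.08\)) are round numbers taken from figures quoted in the main text, not fit results:

```python
import cmath
import math

def rabi_carrier(t, omega0, eta, nbar, a=1.0):
    """Excited-state population P(t) = (a/2) * (1 - f1(t)/f2(t)) for a
    single thermal motional mode in the Lamb-Dicke regime (Eqs. 1-3)."""
    phi = omega0 * eta**2 * t
    f1 = (cmath.exp(1j * omega0 * t)
          * cmath.exp(-1j * phi / 2)
          * (1 - nbar * cmath.exp(1j * phi) / (nbar + 1))).real
    f2 = (nbar + 1) - 2 * nbar * math.cos(phi) + nbar**2 / (nbar + 1)
    return (a / 2) * (1 - f1 / f2)

omega0 = math.pi / 150e-6   # Rabi frequency corresponding to a ~150 us pi-time
p0 = rabi_carrier(0.0, omega0, eta=0.08, nbar=30)
p_pi = rabi_carrier(150e-6, omega0, eta=0.08, nbar=30)
print(p0, p_pi)  # P(0) = 0, and P at the pi-time stays below 1:
                 # thermal motion reduces the Rabi contrast, as in the data.
```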
2307.12534
Towards Generalizable Deepfake Detection by Primary Region Regularization
The existing deepfake detection methods have reached a bottleneck in generalizing to unseen forgeries and manipulation approaches. Based on the observation that the deepfake detectors exhibit a preference for overfitting the specific primary regions in input, this paper enhances the generalization capability from a novel regularization perspective. This can be simply achieved by augmenting the images through primary region removal, thereby preventing the detector from over-relying on data bias. Our method consists of two stages, namely the static localization for primary region maps, as well as the dynamic exploitation of primary region masks. The proposed method can be seamlessly integrated into different backbones without affecting their inference efficiency. We conduct extensive experiments over three widely used deepfake datasets - DFDC, DF-1.0, and Celeb-DF with five backbones. Our method demonstrates an average performance improvement of 6% across different backbones and performs competitively with several state-of-the-art baselines.
Harry Cheng, Yangyang Guo, Tianyi Wang, Liqiang Nie, Mohan Kankanhalli
2023-07-24T05:43:34Z
http://arxiv.org/abs/2307.12534v2
# Towards Generalizable Deepfake Detection by Primary Region Regularization

###### Abstract

The existing deepfake detection methods have reached a bottleneck in generalizing to unseen forgeries and manipulation approaches. Based on the observation that the deepfake detectors exhibit a preference for overfitting the specific primary regions in input, this paper enhances the generalization capability from a novel regularization perspective. This can be simply achieved by augmenting the images through primary region removal, thereby preventing the detector from over-relying on data bias. Our method consists of two stages, namely the static localization for primary region maps, as well as the dynamic exploitation of primary region masks. The proposed method can be seamlessly integrated into different backbones without affecting their inference efficiency. We conduct extensive experiments over three widely used deepfake datasets - DFDC, DF-1.0, and Celeb-DF with five backbones. Our method demonstrates an average performance improvement of 6% across different backbones and performs competitively with several state-of-the-art baselines.

Deepfake Detection, Primary Region Localization, Regularization.

## I Introduction

The significant advancement of realistic face synthesis models makes it easy to alter a person's portrait [1, 2, 3]. Through the utilization of deepfake techniques, attackers can easily make celebrity pornography products, malicious political speeches, and deceptive government announcements, triggering widespread public concerns. To mitigate the abuse of face forgeries, the development of effective detection methods becomes imperative. Following a real/fake binary classification paradigm, deepfake detectors [4, 5, 6, 7] have found considerable success when training and testing on the same datasets, i.e., evaluating under the within-dataset setting.
The majority of them identify the deepfakes via discriminating forgery traces, including the manipulated artifacts inside the faces [8, 9] and anomalous facial blending patterns [10, 11]. However, when shifted to unseen datasets or synthetic manipulations, the performance of these models often degrades significantly (refer to Figure 1a). A dominant reason is the inclination of detectors to overfit specific _primary regions_ where the loss function can be optimized most effectively [12]. This drives the detectors to perform _local_ learning while giving up searching for further regions that may be helpful to generalize to unseen data. As shown in Figure 1a, when trained and tested on the FaceSwap (FS) subset in FF++ [13], the model reaches a 99.82% AUC score based upon the primary regions that are mostly centered at the _nose_. However, when tested on the NeuralTextures (NT) subset, whose primary regions are _lips_, the model demonstrates significant limitations due to the overfitting of nose regions. A natural question motivates us - _Is it beneficial to guide the detector to explore beyond the primary regions?_ Existing studies on alleviating the overfitting problem generally follow two directions. The first is to leverage differences between real and fake images of the same identity (i.e., the subtraction operation) to guide the detectors to learn fine-grained forgeries [14, 15]. These techniques frequently outline the entire face based on the facial contour and skin tone rather than the exact manipulation areas [16]. The other direction is to modify the given faces by erasing random regions [12] or alternative facial attributes [17, 18]. The additional information brought by this strategy enables models to investigate more hidden features than usual [11]. However, methods in this scope either require random-sized mask generation [19] or the prior of facial attributes [18, 20], which can lead to unstable results. 
This paper addresses the lack-of-generalization problem with a novel view of preventing models from overfitting specific primary regions1. Specifically, we implement this idea via augmenting the images with the primary regions carefully removed (as shown in Figure 1b). The augmented data act as a regularizer and help models leverage more clues for detection. To this end, the key challenge lies in localizing primary regions. Inspired by the success of ensemble learning [21, 22], we propose to employ multiple representative pre-trained deepfake detectors to construct candidate region maps according to gradient signals. Based on the consensus of these models, we then design a novel approach that reduces bias by ensembling the candidate maps into a single one. Thereafter, we adopt a dynamic exploitation approach to refine the fused map and obtain a more accurate primary region mask to prevent overfitting. The final masks are overlaid on the original images to obtain the augmented images, which are used to train the model jointly with the original ones. Footnote 1: Considering that deepfake methods composite whole faces [3], we believe that there exist cues beyond primary regions that reflect the forgery algorithms. The pipeline automates the acquisition of the primary regions, enabling the training of a robust and generalizable deepfake detection model.

Fig. 1: Performance comparison of baselines and ours on NT and FS, and an overview of our proposed method. (a) When trained on NT and FS, the model's attention is placed on lips and noses, respectively. Despite the good performance of the baseline on individual datasets, generalization to other datasets is severely hampered (i.e., from FS to NT), where our approach significantly prevails. (b) Our approach augments the training samples to FS\({}_{\rho}\) via masking the primary regions. The FS\({}_{\rho}\) is used to train the detector jointly with the original images.
Our approach can easily be integrated into different backbones without architectural modifications or adaptations. We conduct extensive experiments on three widely exploited deepfake datasets - DFDC [23], DF-1.0 [24], and Celeb-DF [25]. The experimental results demonstrate that the five popular backbones incorporating our method achieve significant improvements in generalization performance, with an average gain of approximately 6% in AUC under the cross-dataset setting. In addition, our method demonstrates highly competitive results compared to some SOTA baselines. In summary, our contributions are three-fold:

* We tackle deepfake detection from a novel view of regularizing the overfitting of primary regions. With this guidance, our proposed data augmentation method allows models to explore more forgeries by seeing other non-local facial regions.
* We devise a novel region localization strategy to identify the primary regions. Besides enhancing detector generalization abilities, it potentially benefits tasks like forgery trace localization and segmentation.
* The experimental results show that the integration of our method greatly improves the generalizability of backbones, and the performance is comparable to a variety of SOTA competitors.

## II Related Work

### _Deepfake Generation_

Benefiting from the continuous development of portrait synthesis, deepfake [26] has recently emerged as a prevailing research problem. The well-studied autoencoders [27] serve as the leading architecture in this area [28, 29]. Specifically, typical approaches first train two models respectively with the reconstruction task and then swap their decoders to alter the identities of source faces. These approaches yield realistic faces but are limited to one-to-one face swapping. To achieve arbitrary face synthesis, Generative Adversarial Networks (GANs) [30] have grown in popularity due to their promising performance.
For instance, StyleGAN [31] modifies high-level facial attributes with a progressive growing approach and adaptive instance normalization. IPGAN [32] disentangles the identity and attributes of the source and target faces, respectively. These two are thereafter blended for face synthesis. Different from these methods, identity-relevant features have recently been introduced into deepfake generation. Kim _et al._[33] applied 3DMM [34] to produce controllable portraits. Xu _et al._[35] augmented local and global identity-relevant features by modeling the cross-scale semantic interaction to achieve identity-consistent face swapping.

### _Deepfake Detection_

Deepfake detection [36, 37] is generally cast as a binary classification task. Preliminary efforts often endeavor to detect specific manipulation traces [38]. Masi _et al._[39] proposed a two-branch network to extract optical and frequency artifacts separately. SSTNet [40] detects edited faces through spatial, steganalysis, and temporal features. These models have shown certain improvements on some datasets. Nonetheless, they often encounter inferior performance when applied to different data distributions or manipulation methods. Several cross-dataset detection approaches have been proposed to address this lack-of-generalization issue [41]. One manner is to introduce complementary modalities [9] to vision-only detectors. Zhou _et al._[42] leveraged speech content to detect mismatching mouth-related dynamics. RealForensics [7] exploits the visual and auditory correspondence in real videos to enhance detection performance. Nevertheless, these methods are often limited to certain datasets due to the requirement for additional modalities, e.g., emotion or audio. To partially alleviate this limitation, some approaches perform data augmentation on the original dataset. For instance, FakeLocator [14] calculates the difference between real and fake images to locate the manipulated facial properties.
Chen _et al._[20] specified the blending regions and facial attributes to enrich the deepfake dataset with more manipulation types. Wang _et al._[12] generated random-sized masks around the pixel with the highest probability of being manipulated. These augmentation strategies enable the models to capture more manipulation types and areas during training, thereby enhancing performance. However, these methods apply either predefined augmentation types or regions, which might trigger bias and lead to sub-optimal outcomes.

### _Deepfake Detection Benchmarks_

The research community has dedicated significant efforts to establishing robust benchmarks for deepfake detection. Early datasets were relatively small. For instance, DF-TIMIT [43] contains merely 620 videos synthesized from one manipulation approach. Subsequently, larger datasets like FF++ [13] incorporate a wide range of manipulation techniques beyond face swapping. DFDC [23] is synthesized from a pool of 960 individuals, resulting in a comprehensive compilation of 100,000 videos. On the other hand, the KoDF dataset [44] comprises over 200,000 videos generated by six distinct algorithms. Recent advancements in deepfake datasets have brought increasing sophistication [45]. For example, FakeAVCeleb [46] modifies audio and video, DGM4 [47] provides detailed forgery grounding annotations, while DF-Platter [48] involves modifications to multiple faces. These high-quality datasets encompass diverse data sources and various forgery methods. However, it is crucial to note that this diversity may inadvertently introduce biases between datasets due to unique artifacts from specific forgery generation methods or the use of particular data. Consequently, deepfake detection methods that solely focus on specific artifacts often struggle to generalize effectively across different datasets.

We implement our idea through the data augmentation technique.
Specifically, we augment the original dataset by carefully removing the primary regions. As these new images do not contain the regions that are deemed important by the model, this prevents the model from taking the shortcut and instead forces it to harness other essential cues for decision-making. In view of this, the augmented data serve as a regularization of the original objective, and the empirical risk is reformulated as: \[R_{\mathcal{S}}(\theta):=\hat{R}_{\mathcal{S}}(\theta)+\frac{1}{n}\sum_{i=1}^{n}\ell\left(\theta,\mathbf{I}_{i}*\overline{\mathbf{M}}_{i},y_{i}\right), \tag{5}\] where \(\overline{\mathbf{M}}_{i}=\mathbf{1}-\mathbf{M}_{i}\) is the complement mask with respect to \(\mathbf{M}_{i}\). Based upon this objective, we propose a **P**rimary **R**egion **L**ocalization and then **E**xploitation method, dubbed PRLE in this work. As displayed in Figure 2, our method consists of two stages: static localization and dynamic exploitation. The former obtains joint primary region representations with a novel offline fusion strategy. The latter refines the primary region masks with our proposed augmentation method during model training.

### _Static Localization of Primary Regions_

To achieve the regularization goal, the key is constructing attention maps that accurately locate the primary regions. Although widely applied as explanation tools, attention maps may exhibit bias due to their dependence on a specific backbone [65]. We thus apply multiple deepfake detectors with different architectures to obtain attention maps separately and fuse them into a comprehensive map to reduce this bias. As shown in Figure 2, we first construct a model zoo \(\mathcal{Z}=\{\mathrm{Z}_{1},\mathrm{Z}_{2},\ldots,\mathrm{Z}_{T}\}\), where \(T\) is the zoo size and \(\mathrm{Z}_{t}\in\mathcal{Z}\) is a pre-trained deepfake detector.
Thereafter, we obtain \(T\) maps based on Equation 3 and combine them into a map set \(\mathcal{A}=\{\mathbf{A}_{1},\mathbf{A}_{2},\ldots,\mathbf{A}_{T}\}\). To integrate these maps into a single one, an intuitive approach is to calculate the average: the pixels at the same position \(x_{i}\) in each map from \(\mathcal{A}\) are averaged as \(s_{i}\). A threshold \(\tau_{1}\) is then introduced to filter out noise points holding relatively small attention values, \[\hat{\mathbf{A}}\left(x_{i}\right)=\left\{\begin{array}{ll}s_{i},&\text{if }s_{i}>\tau_{1},\\ 0,&\text{otherwise}.\end{array}\right. \tag{6}\] Nevertheless, this operation may produce less satisfactory results. As depicted in Figure 3, an improper \(\tau_{1}\) can result in excessive noise (first row) or region loss (second row). To address this problem, we design a neighboring fusion method that expands the regions based on the average fused maps with a higher \(\tau_{1}\). In particular, the fused value of a point depends on both itself and its neighboring points, \[\mathrm{g}(x_{i})=\mathbb{1}\left\{\exists_{\mathbf{A}_{j},\mathbf{A}_{k}\in\mathcal{A}}\sum_{x_{a}\in\mathcal{N}}\frac{\left|\mathbf{A}_{j}(x_{a})-\mathbf{A}_{k}(x_{i})\right|}{\left|\mathcal{N}\right|}>\lambda\right\}, \tag{7}\] where \(\mathcal{N}\) is the neighboring set of \(x_{i}\), \(\mathbb{1}\) is the indicator function, and \(\lambda\) is a hyperparameter. Equation 7 indicates that if an existing neighboring point \(x_{a}\in\mathcal{N}\) receives more attention (a larger CAM value), we consider \(x_{i}\) to be of interest as well. The value of \(\hat{\mathbf{A}}(x_{i})\) is taken as the maximum among its adjacent points, \[\hat{\mathbf{A}}(x_{i})=\left\{\begin{array}{ll}\max_{\mathbf{A}_{j}\in\mathcal{A},x_{a}\in\mathcal{N}}\mathbf{A}_{j}(x_{a}),&\mathrm{g}(x_{i})=1,\\ 0,&\mathrm{g}(x_{i})=0.\end{array}\right. \tag{8}\]

Fig. 3: Heat maps (left) and data-point statistics (right) from the static methods. Left: randomly selected average fused maps with lower \(\tau_{1}\) (1st row) and higher \(\tau_{1}\) (2nd row), and those after applying the neighboring strategy (3rd row). Right: statistics of data points with attention values: \(x\)-axis, attention value; \(y\)-axis, number of data points (1e7).

Fig. 2: Overview of our PRLE method. We first apply multiple models to localize the candidate primary regions in parallel, where the model bias can be relatively reduced. The exploitation strategy then filters the unnecessary attention areas with a series of masks. Consequently, the augmented data can be effectively employed by existing methods to learn a better detector.

As shown in the third row of Figure 3, after employing the neighboring fusion, the fused maps display more explicit boundaries and maintain the integrity of the primary regions. The data-point statistics show that the neighboring method preserves more data points with high attention than the average one (high \(\tau_{1}\)) without adding too much noise. This is expected, since a high attention value reflects the judgment cues of the models, which are the fundamental indicators of the primary regions. The above process can be performed offline. Using it, we can identify the common primary regions that models tend to overfit. Next, we show how we leverage these attention maps for data augmentation.

### _Dynamic Exploitation of Attention Maps_

To align with the regularization procedure in Equation 5, we convert \(\hat{\mathbf{A}}\) into the binary mask \(\mathbf{M}_{b}\) using Equation 4. Although this removes noise and retains the main attention regions, the static stage is prone to localizing surplus regions (as seen in Section IV-E).
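The neighboring fusion of Eqs. (7)–(8) can be sketched in NumPy as follows. This is an illustrative implementation under assumptions (a square window of radius 1 as the neighborhood; all names are placeholders), not the paper's code:

```python
import numpy as np

def neighboring_fusion(maps, lam, radius=1):
    """Fuse T candidate attention maps (Eqs. 7-8): a pixel is kept when, for
    some pair of maps (A_j, A_k), the mean absolute difference between A_j on
    the pixel's neighborhood and A_k at the pixel exceeds lam; its fused value
    is then the maximum attention over the neighborhood across all maps."""
    maps = np.asarray(maps, dtype=float)          # shape (T, H, W)
    T, H, W = maps.shape
    fused = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            ys = slice(max(0, y - radius), min(H, y + radius + 1))
            xs = slice(max(0, x - radius), min(W, x + radius + 1))
            keep = any(                           # Eq. 7: exists (A_j, A_k)
                np.mean(np.abs(maps[j][ys, xs] - maps[k][y, x])) > lam
                for j in range(T) for k in range(T))
            if keep:                              # Eq. 8: max over neighbors
                fused[y, x] = maps[:, ys, xs].max()
    return fused
```

On a toy pair of maps with one bright patch, pixels adjacent to the patch inherit its maximum attention while distant pixels are zeroed, mirroring the region-expansion effect described above.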
While this can be alleviated by manually adjusting \(\lambda\) and \(\tau_{1}\), doing so requires a burdensome tuning process and results in fixed masks across training epochs, which may also cause overfitting. To address this, we propose a dynamic exploitation strategy, in which we adjust the size of these masks dynamically during training [66, 67]. As illustrated in Figure 4, we first arrange the values \(\hat{\mathbf{A}}(x_{i})\) in descending order, \[\mathcal{V}=\mathrm{Desc.}(\hat{\mathbf{A}})|_{\hat{\mathbf{A}}(x_{i})>0}, \tag{9}\] where \(\mathcal{V}\) is a set recording the positions of pixels according to their attention values. We keep the highest values, as they receive the most attention, \[\mathcal{V}_{\alpha}=\{x_{i}\in\mathcal{V}:0\leq i\leq\alpha|\mathcal{V}|\}, \tag{10}\] where \(\mathcal{V}_{\alpha}\) is the set of selected pixels corresponding to a specific \(\alpha\). Thereafter, we augment the images \(\mathbf{I}_{i}\) with \(\mathcal{V}_{\alpha}\), \[\hat{\mathbf{I}}_{i}=\mathbf{I}_{i}*(\mathbf{1}-(\mathcal{V}_{\alpha}\odot\mathbf{M}_{b})), \tag{11}\] where \(\odot\) can be formulated as \[(\mathcal{V}_{\alpha}\odot\mathbf{M}_{b})(x_{i})=\left\{\begin{array}{ll}\mathbf{M}_{b}(x_{i}),&x_{i}\in\mathcal{V}_{\alpha},\\ 0,&x_{i}\notin\mathcal{V}_{\alpha}.\end{array}\right. \tag{12}\] As illustrated in Figure 4, by setting \(\alpha\) randomly, we can dynamically augment the images and produce masks of varying sizes for each training epoch. This method helps diversify the augmented data and prevents the model from overfitting, as shown in our experiments.

### _Training Protocols_

The static localization of primary regions and their dynamic exploitation are sequentially utilized to produce the augmented images. Specifically, we treat the former stage as data preprocessing using pre-trained models, while the latter stage dynamically augments the training data throughout the training procedure using a random \(\alpha\).
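The sorting-and-selection step of Eqs. (9)–(12) can be sketched as follows; this is a simplified single-channel version with an assumed binarization threshold `tau` standing in for Equation 4, not the authors' implementation:

```python
import numpy as np

def dynamic_mask(image, fused_map, alpha, tau=0.0):
    """Keep the top-alpha fraction of nonzero attention pixels (Eqs. 9-10)
    and zero them out in the image (Eqs. 11-12)."""
    mask_b = (fused_map > tau).astype(float)              # binary mask M_b
    nz = np.flatnonzero(fused_map)                        # pixels with A(x) > 0
    order = nz[np.argsort(fused_map.ravel()[nz])[::-1]]   # V: descending values
    selected = order[: int(alpha * len(order))]           # V_alpha (Eq. 10)
    v_alpha = np.zeros(fused_map.size)
    v_alpha[selected] = 1.0
    v_alpha = v_alpha.reshape(fused_map.shape)
    return image * (1.0 - v_alpha * mask_b)               # Eq. 11
```

Drawing \(\alpha\) uniformly at random per epoch then yields masks of varying size from the same fused map, which is what diversifies the augmented data.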
**Notably, we do not double the training set as other augmentation approaches do.** Instead, for each training epoch we choose either the original image or its counterpart with the primary regions masked, as described in Algorithm 1. In this way, the training burden of the model increases only imperceptibly (please refer to Section IV-G for the efficiency analysis), and the random setting also prevents the models from overfitting. The training objective can be formalized as \[R_{\mathcal{S}}(\theta):=\frac{1}{n}\sum_{i=1}^{n}\left(q\,\ell\left(\theta,\mathbf{I}_{i},y_{i}\right)+(1-q)\,\gamma\,\ell\left(\theta,\hat{\mathbf{I}}_{i},y_{i}\right)\right), \tag{13}\] where \(q\in\{0,1\}\) indicates whether the input is augmented, and \(\gamma\) is a hyperparameter that controls the regularization term. Similar to other regularization methods such as mixup learning [68], in which training is performed on synthetic samples from the original training set, our approach approximates regularized loss minimization, making vanilla detectors more generalizable and robust [69, 70]. With the constraint of this regularization, detectors have to take into account both the primary regions and the regions beyond them that can be useful for generalization.
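Per sample, Equation 13 together with the random original/masked choice amounts to the following step. This is a framework-agnostic sketch; `model` and `loss_fn` are placeholder callables, not the paper's API:

```python
import random

def prle_step(loss_fn, model, image, masked_image, label, p=0.5, gamma=1.0):
    """With probability p train on the primary-region-masked counterpart
    (q = 0 in Eq. 13, weighted by gamma); otherwise use the original (q = 1)."""
    if random.random() <= p:
        return gamma * loss_fn(model(masked_image), label)
    return loss_fn(model(image), label)
```

Because exactly one of the two images is used per epoch, the training set is not doubled and the extra cost is only the mask lookup.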
```
0: The original image \(\mathbf{I}_{i}\), the attention map \(\hat{\mathbf{A}}_{i}\), the function \(\mathrm{Dynamic}\) as described in Section III-C, and the predictive function \(f_{\theta}:\mathbf{I}\rightarrow[0,1]\);
0: The predictive score \(\hat{y}_{i}\);
1: if \(\mathrm{Rand}(0,1)\leq p\) then
2:   \(\mathbf{I}^{*}\leftarrow\mathrm{Dynamic}(\mathbf{I}_{i},\hat{\mathbf{A}}_{i},\alpha)\);
3:   \(\triangleright\) \(\alpha\) is randomly chosen from (0, 1);
4: else
5:   \(\mathbf{I}^{*}\leftarrow\mathbf{I}_{i}\);
6: end if
7: \(\hat{y}_{i}\leftarrow f_{\theta}(\mathbf{I}^{*})\);
8: return \(\hat{y}_{i}\);
```
**Algorithm 1** Training with PRLE

Fig. 4: Illustration of our dynamic exploitation stage. The fused heat maps undergo 1) mask conversion, where they are transformed into a binary mask, and 2) a sorting procedure, in which the pixels of the attention maps are sorted and only an \(\alpha\) percentage is reserved. The outputs of these two are combined to make the refined mask images.

## IV Experiments

### _Datasets and Baselines_

**Training datasets.** Following the common setting of cross-dataset deepfake detection [6, 12, 17], we trained our model on the FF++ dataset [13]. It contains 1,000 original pristine videos collected from YouTube and five types of manipulation techniques, i.e., Deepfakes (DF), FaceSwap (FS), Face2Face (F2F), FaceShifter (Fsh), and NeuralTextures (NT), resulting in 6,000 videos in total. **Testing datasets.** Three widely used deepfake datasets are applied to evaluate the generalizability of our model. **1) Deepfake Detection Challenge (DFDC)** [23] is one of the largest public deepfake datasets by far, with 23,654 real videos and 104,500 fake videos synthesized using eight different facial manipulation methods. **2) DF-1.0** [24] includes videos with deliberate distortions and perturbations manually added to the clean deepfake videos. We followed the official split and performed deepfake detection on the 1,000 testing-set videos with mixed distortions.
**3) Celeb-DF** [25] is one of the most challenging deepfake detection datasets. We considered the official Celeb-DF testing set with 518 videos in the experiments.

### _Implementation_

We utilized the Dlib library4 to extract and align faces, which are then resized to 256 \(\times\) 256 for both the training and testing sets. These faces are employed for the image-level experiments, conducted on two RTX 3090 GPUs with a batch size of 64. To obtain the primary regions, we employed three reliable deepfake detection models with diverse architectures, namely Xception [13], EfficientNet [63], and VGG [64], where the latter two are pre-trained on ImageNet [75]. The values of the threshold \(\tau_{1}\) and \(\lambda\) used in Section III-B are set to 0.3 and 0.15, respectively. We selected the value of \(\alpha\) in Section III-C from the range of 0.0 to 1.0, and set \(p\) in Algorithm 1 and \(\gamma\) in Section III-D to 0.5 and 1.0, respectively. Footnote 4: [http://dlib.net/](http://dlib.net/).

### _Performance Comparison_

#### IV-C1 Comparison on testing datasets

We compared the generalization performance of several SOTA baselines, backbones, and our proposed method. All the models are trained on FF++ and evaluated on the three testing datasets. This cross-dataset setup is challenging, since neither the testing pristine/forged videos nor the manipulation techniques are seen in the training dataset. We utilized two metrics to quantify the models' performance: ACC (accuracy) and AUC (area under the receiver operating characteristic curve). The mean AUCs and ACCs over the testing datasets are also calculated to evaluate the overall performance. The experimental results are summarized in Table II. **Comparison with backbones.** We first validated the generalizability brought by PRLE via applying it to different backbones.
Pertaining to the five backbones, i.e., Xception, EfficientNet, VGG, F\({}^{3}\)-Net, and ResNet-50, the model zoo uses the former three to generate the primary region maps5. The latter two are applied to validate the applicability of the primary region masks, as they have not been utilized previously. Footnote 5: As PRLE is used in the training set only, this does not lead to label leaking or cheating in the testing stage. As can be observed, adopting our PRLE approach significantly improves the generalizability of the backbones. For example, the ACC of Xception on the Celeb-DF dataset increases by 9%, and the AUC of EfficientNet on DF-1.0 gains almost 15%, demonstrating that our PRLE drives the detectors to explore additional information beyond the primary regions for generalization to unseen distributions and manipulations. In addition to the three models involved in the primary maps generation, F\({}^{3}\)-Net and ResNet-50 also exhibit significant performance improvement over their respective backbones. Both models achieve an average AUC improvement of approximately 3%. In a nutshell, by simply adding our regularization term, classic detectors can be made more transferable.
TABLE II: Performance comparison between SOTA baselines, plain backbones, and backbones with our PRLE method added (ACC / AUC, %). All the models are trained on the FF++ dataset. The best performance is marked in bold. \({}^{\ddagger}\): the backbone is used to generate the primary region maps.

| Method | AUG | DFDC | DF-1.0 | Celeb-DF | AVG |
| --- | --- | --- | --- | --- | --- |
| MesoNet [71] | × | 50.02 / 50.16 | 50.05 / 50.21 | 36.73 / 50.01 | 45.60 / 50.13 |
| Capsule [72] | × | 51.30 / 56.16 | 59.29 / 61.46 | 61.96 / 59.93 | 57.52 / 59.18 |
| CViT [73] | × | 60.76 / 67.43 | 54.97 / 58.52 | 53.26 / 63.60 | 56.33 / 63.18 |
| FFD [16] | ✓ | 59.44 / 59.47 | 53.69 / 53.81 | 46.19 / 55.86 | 53.11 / 56.38 |
| MAT [61] | ✓ | 63.16 / 69.06 | 56.90 / 61.72 | 44.78 / 57.20 | 54.95 / 62.66 |
| SRM [4] | ✓ | 59.5 / 64.80 | 55.83 / 62.54 | 52.95 / 60.90 | 56.24 / 62.75 |
| RFM [12] | ✓ | 60.55 / 66.03 | 59.33 / 60.27 | 62.04 / 65.79 | 60.64 / 64.03 |
| RECCE [17] | ✓ | 59.30 / 62.82 | 56.02 / 60.41 | 68.49 / 69.80 | 61.27 / 64.34 |
| \({}^{\ddagger}\)Xception [13] | × | 59.93 / 64.17 | 48.04 / 55.01 | 56.12 / 56.75 | 54.70 / 58.64 |
| \({}^{\ddagger}\)EfficientNet [63] | × | 60.63 / 65.43 | 54.97 / 58.59 | 62.79 / 64.59 | 59.46 / 62.87 |
| \({}^{\ddagger}\)VGG [64] | × | 58.06 / 61.60 | 63.96 / 55.69 | 66.62 / 63.97 | 62.75 / 63.72 |
| F\({}^{3}\)-Net [5] | ✓ | 63.76 / 67.59 | 56.93 / 59.15 | 58.58 / 64.76 | 59.76 / 63.83 |
| ResNet-50 [74] | × | 59.84 / 64.34 | 58.15 / 62.78 | 56.68 / 60.32 | 58.22 / 62.48 |
| \({}^{\ddagger}\)Xception + PRLE | ✓ | 64.33 / 69.38 | 57.22 / 66.62 | 65.19 / 65.94 | 62.28 / 67.31 |
| \({}^{\ddagger}\)EfficientNet + PRLE | ✓ | **64.52** / **69.64** | 62.14 / **74.72** | **69.71** / **70.67** | **65.46** / **71.68** |
| \({}^{\ddagger}\)VGG + PRLE | ✓ | 56.87 / 62.54 | **64.92** / 68.73 | 67.87 / 68.73 | 62.22 / 66.68 |
| F\({}^{3}\)-Net + PRLE | ✓ | 63.50 / 69.47 | 63.32 / 74.30 | 68.46 / 68.17 | 65.09 / 70.65 |
| ResNet-50 + PRLE | ✓ | 61.50 / 66.02 | 57.43 / 70.52 | 58.94 / 61.26 | 59.29 / 65.93 |

**Comparison with SOTAs.** We further compared the generalization ability of our method with several SOTA deepfake detectors. Specifically, we categorized the compared methods into two groups based on whether they use additional features, e.g., facial details [16] or modalities [5], outside the backbone. From the overall performance in Table II, models using augmentation usually outperform those without augmentation. Nonetheless, the performance of the vanilla backbones is merely competitive with or lower than SOTA methods such as CViT and Capsule. With the help of our PRLE, these backbones can mostly surpass the other baselines by a large margin. For example, EfficientNet attains a 9% improvement in average AUC, and F\({}^{3}\)-Net gains 15% AUC on DF-1.0. Moreover, when compared with the baselines that also use Xception as the backbone (an apples-to-apples comparison), e.g., SRM and RFM, our method shows further performance improvements of 4% and 3%, respectively.

#### IV-C2 Comparison on FF++ subsets

Following previous studies [10, 17], we conducted a fine-grained cross-manipulation evaluation on the FF++ dataset. Specifically, we trained the models on a single manipulation technique and tested them on the remaining techniques. The results presented in Table III demonstrate that our method outperforms the five backbones on most occasions. With the assistance of PRLE, Xception trained on DF shows 6% and 5% improvements on F2F and Fsh, respectively. Additionally, ResNet-50 trained on FS exhibits 12% improvements on both DF and F2F.
These results effectively validate the efficacy of our method in this cross-manipulation scenario. We also evaluated these models on three testing datasets. As shown in Figure 5, it can be observed that when trained on DF and FS, all five backbones achieve a significant improvement in average AUC scores. This implies that even under resource-constrained conditions, PRLE can effectively enhance models' generalization capability. ### _Ablation Studies_ #### Iv-D1 Comparison on alternative region masks In contrast to the proposed PRLE method, which utilizes attention maps to locate primary regions, we employed two alternative approaches to generate masks: i) **Random Selection Approach**, where masks are derived from randomly selected rectangular regions within the image, and ii) **Landmark-based \begin{table} \begin{tabular}{l|c|c c c} \hline \hline Method & & DFDC & DF-1.0 & Celeb-DF \\ \hline Xception & + Primary Region & **69.38** & **66.62** & **65.94** \\ & + Random & 66.56(-2.82) & 62.38(-4.24) & 61.15(-4.79) \\ & + Landmark & 68.96(-0.42) & 59.15(-7.47) & 62.12(-3.82) \\ \hline \hline \multicolumn{5}{l}{EfficientNet + Primary Region} & **69.64** & **74.72** & **70.67** \\ & + Random & 67.85(-1.79) & 59.08(-1.54) & 65.98(-4.69) \\ & + Landmark & 68.22(-1.42) & 67.97(-6.75) & 66.76(-3.91) \\ \hline \hline \multicolumn{5}{l}{VGG} & **+ Primary Region** & **62.54** & **68.73** & **68.77** \\ & + Random & 59.36(-3.18) & 65.97(-2.76) & 64.93(-4.38) \\ & + Landmark & 61.40(-1.14) & 66.95(-1.78) & 65.41(-3.36) \\ \hline \hline \multicolumn{5}{l}{\({\rm F}^{3}\)-Net + Primary Region} & **69.47** & **74.30** & **68.17** \\ & + Random & 64.58(-4.89) & 71.29(-3.01) & 64.22(-3.95) \\ & + Landmark & 64.46(-5.01) & 71.30(-3.00) & 64.54(-3.63) \\ \hline \hline \multicolumn{5}{l}{ResNet-50 + Primary Region} & **66.62** & **70.52** & **61.26** \\ & + Random & 65.08(-0.94) & 63.37(-7.15) & 59.27(-1.99) \\ & + Landmark & 62.72(-3.30) & 63.71(-7.81) & 59.95(-1.31) \\ \hline \hline 
\end{tabular} \end{table} TABLE IV: AUC (%) comparison of backbones with different primary region masks. \begin{table} \begin{tabular}{l|c|c|c c c c c} \hline \hline Method & & & & & & & \\ \hline Backbone & +P. & Train & DF & FS & F2F & Fsh & NT & AVG \\ \hline Xception & \(\times\) & & 99.91 & 28.91 & 74.59 & 66.34 & 83.74 & 70.70 \\ Xception & \(\times\) & & 99.89 & **30.25** & **80.17** & **71.90** & **84.02** & **73.25** \\ EfficientNet & \(\times\) & & 99.95 & 36.66 & 69.69 & **69.10** & **81.25** & 71.33 \\ EfficientNet & \(\times\) & & 98.59 & **38.28** & **75.82** & 68.89 & **85.32** & **73.38** \\ VGG & \(\times\) & & 99.67 & 26.60 & 63.40 & **66.34** & **73.96** & 65.99 \\ VGG & \(\times\) & DF & 99.59 & **34.63** & **65.44** & 65.29 & 70.74 & **67.13** \\ F\({}^{3}\)-Net & \(\times\) & & 99.62 & 21.67 & 60.37 & 70.98 & 76.06 & 65.75 \\ F\({}^{3}\)-Net & \(\times\) & & 99.54 & **24.62** & **67.51** & **75.54** & **71.91** & **69.26** \\ ResNet-50 & \(\times\) & & 99.88 & 24.99 & 65.74 & 72.04 & 74.91 & 67.34 \\ ResNet-50 & \(\times\) & & 99.88 & **26.65** & **68.69** & **72.48** & **75.22** & **68.32** \\ \hline Xception & \(\times\) & & 61.29 & 99.87 & 66.04 & 52.55 & 55.82 & 67.11 \\ Xception & \(\times\) & & 62.98 & **99.89** & **63.00** & **52.84** & **62.31** & **69.46** \\ EfficientNet & \(\times\) & & 66.02 & 99.91 & 67.07 & 53.38 & **54.63** & 68.20 \\ EfficientNet & \(\times\) & & **64.66** & 99.17 & **72.90** & **56.85** & 53.72 & **69.87** \\ VGG & \(\times\) & & 54.35 & 96.44 & 57.91 & 56.75 & 57.24 & 62.43 \\ VGG & \(\times\) & F\({}^{3}\)-Net & & **59.37** & **39.82** & **62.03** & **57.74** & **52.68** & **65.96** \\ F\({}^{3}\)-Net & \(\times\) & & 37.49 & 99.43 & 48.3 & 46.00 & 39.15 & 58.18 \\ F\({}^{3}\)-Net & \(\times\) & & **43.09** & **99.81** & **72.53** & **47.19** & **43.74** & **61.27** \\ ResNet-50 & \(\times\) & & 45.63 & 98.66 & 60.47 & 49.39 & 52.71 & 61.61 \\ ResNet-50 & \(\times\) & & **57.91** & **99.34** & **72.64** & 
**56.82** & **62.31** & **69.90** \\ \hline \hline \multicolumn{5}{l}{Xception} & \(\times\) & & 82.92 & 50.73 & 99.62 & **55.51** & 70.67 & 71.89 \\ Xception & \(\times\) & & **89.94** & **51.91** & 99.36 & 47.80 & **71.73** & **72.15** \\ EfficientNet & \(\times\) & & 86.43 & 54.51 & 99.96 & 53.61 & 64.76 & 71.80 \\ EfficientNet & \(\times\) & & **88.33** & **56.93** & 95.01 & 50.43 & **70.59** & **73.16** \\ VGG & \(\times\) & F2F & **80.32** & 32.77 & 99.40 & 54.77 & 54.20 & 64.29 \\ VGG & \(\times\) & & 76.19 & 55.32 & 99.50 & 55.1 & **55.57** & **68.34** \\ F\({}^{3}\)-Net & \(\times\) & & 78.27 & 66.09 & 99.21 & **49.96** & 60.01 & 70.86 \\ F\({}^{3}\)-Net & \(\times\) & & **79.58** & **64.25** & **98.89** & 44.02 & **65.76** & **70.90** \\ ResNet-50 & \(\times\) & & **85.14** & 41.36 & **99.05** **Approach**, where facial regions indicated by landmarks, such as the mouth or nose, are randomly selected and masked. The model performance with these alternative masks is summarized in Table IV. From the table, it can be observed that our proposed primary region masks outperform both randomly generated and landmark-based masks by a significant margin. For instance, on DF-1.0, the utilization of alternative masks resulted in a maximum performance decrease of 15%. Besides, on Celeb-DF, the average decline in performance is approximately 2.5%. This discrepancy can be attributed to the fact that neither of the alternative masks explicitly captures the primary evidence used by models to distinguish between real and fake images, potentially rendering them ineffective in mitigating overfitting. Among these two masks, the landmark-based masks perform better since randomly generated masks may fail to cover any facial regions, thus not contributing to the detection of real and fake faces. 
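The random-selection baseline above admits a compact sketch. This is an illustrative reimplementation rather than the authors' code; the image size and rectangle-area fraction are our own assumptions.

```python
import numpy as np

def random_rect_mask(h=256, w=256, area_frac=0.1, rng=None):
    """Binary mask with one randomly placed rectangle covering roughly
    `area_frac` of the image (the random-selection baseline)."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros((h, w), dtype=np.uint8)
    rh = max(1, int(h * np.sqrt(area_frac)))  # rectangle height
    rw = max(1, int(w * np.sqrt(area_frac)))  # rectangle width
    top = rng.integers(0, h - rh + 1)
    left = rng.integers(0, w - rw + 1)
    mask[top:top + rh, left:left + rw] = 1
    return mask

def apply_mask(img, mask):
    """Zero out the masked region of an (h, w, c) image."""
    return img * (1 - mask)[..., None]
```

As the table indicates, such rectangles often miss the facial region entirely, which is one reason this baseline underperforms the attention-derived primary region masks.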
#### IV-D2 Comparison on strategies for primary region localization As described in Section III, the primary region maps are fused with the neighboring fusion strategy from the attention maps of three deepfake detectors, i.e., Xception, EfficientNet, and VGG. We therefore conducted experiments to verify the effectiveness of this fusion approach. **Masks from different fusion strategies.** We applied different fusion strategies to evaluate our PRLE. Specifically, we employed two average fusion methods with different \(\tau_{1}\), as depicted in Equation 6, as alternatives to the proposed neighboring fusion approach. The summarized results are reported in Table V. Notably, the models using average strategies perform worse than with our PRLE. For instance, Xception experiences a decrease of 4.5% on DFDC, while EfficientNet decreases by 4.7% on DF-1.0. An intuitive explanation can be derived from Figure 3, where the attention maps generated by average strategies may contain excessive noise or incomplete regions. These factors can hinder the effective enhancement of model generalization. It is also observed that a lower \(\tau_{1}\) yields better performance compared to a higher value. This can be attributed to the dynamic exploitation stage, where we refine the mask and reduce some noise from the initial stage. **Masks from different detectors.** We further explored the impact of the utilized detectors. Firstly, we employed EfficientNet as the backbone and trained it with masks from every single detector used in the proposed PRLE. As shown in Table VI, using individual masks degrades model performance. For instance, the performance decreases by around 7% when trained with the masks from VGG and tested on DF-1.0. We attribute this result to the static localization stage integrating information from different backbones, enriching the information of the mask and alleviating the bias on a single detector.
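The average-fusion baseline of Equation 6 can be sketched as below. The exact neighboring fusion strategy is defined in Section III and differs from this; the min-max normalization and the default threshold value here are assumptions of ours.

```python
import numpy as np

def average_fusion(att_maps, tau1=0.3):
    """Average-fusion baseline (in the spirit of Eq. 6): min-max
    normalize each detector's attention map, average them, then
    threshold at tau1 to obtain a binary mask."""
    norm = []
    for a in att_maps:
        a = (a - a.min()) / (a.max() - a.min() + 1e-8)
        norm.append(a)
    fused = np.mean(norm, axis=0)
    return (fused >= tau1).astype(np.uint8)
```

A higher `tau1` keeps only the strongest attention peaks and tends to produce incomplete regions, consistent with the drop observed in Table V for the high-threshold variant.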
Furthermore, we investigated the influence of the number of detectors on model generalization. Specifically, we progressively fuse attention maps from different detectors to obtain masks. As shown in Figure 6, we report the average AUC and ACC of Xception and EfficientNet with respect to different detectors. Generally, both models exhibit improvement as the number of detectors increases. For instance, the average AUCs of EfficientNet are 68.71%, 70.06%, 71.68%, 71.94%, and 72.25% for 1 to 5 detectors, respectively. However, this growth comes at the cost of increased computation, indicating the presence of a trade-off. Besides, this growth starts to decelerate when the number of detectors reaches three. Based on these findings, we have empirically employed three detectors, \begin{table} \begin{tabular}{l l|c c c} \hline \hline Method & & DFDC & DF-1.0 & Celeb-DF \\ \hline Xception & + Neighboring & **69.38** & **66.62** & **65.94** \\ & + Average (low \(\tau_{1}\)) & 67.07(-2.31) & 66.28(-0.34) & 64.46(-1.48) \\ & + Average (high \(\tau_{1}\)) & 64.85(-4.53) & 65.11(-1.51) & 63.57(-2.37) \\ \hline EfficientNet & + Neighboring & **69.64** & **74.72** & **70.67** \\ & + Average (low \(\tau_{1}\)) & 68.90(-0.74) & 72.27(-2.45) & 69.34(-1.33) \\ & + Average (high \(\tau_{1}\)) & 67.31(-2.63) & 69.94(-4.78) & 69.18(-1.49) \\ \hline VGG & + Neighboring & **62.54** & **68.73** & **68.77** \\ & + Average (low \(\tau_{1}\)) & 60.92(-1.62) & 65.56(-3.17) & 64.47(-4.30) \\ & + Average (high \(\tau_{1}\)) & 60.72(-1.82) & 65.51(-3.22) & 62.18(-6.59) \\ \hline F\({}^{3}\)-Net & + Neighboring & **69.47** & **74.30** & **68.17** \\ & + Average (low \(\tau_{1}\)) & 67.82(-1.65) & 70.04(-3.90) & 64.67(-3.50) \\ & + Average (high \(\tau_{1}\)) & 66.69(-2.78) & 70.21(-4.09) & 63.47(-3.80) \\ \hline ResNet-50 & + Neighboring & **66.02** & **70.52** & **61.26** \\ & + Average (low \(\tau_{1}\)) & 65.45(-0.57) & 66.23(-4.29) & 61.04(-0.22) \\ & + Average (high \(\tau_{1}\)) & 64.93(-1.09) &
62.99(-7.53) & 60.82(-0.44) \\ \hline \hline \end{tabular} \end{table} TABLE V: AUC (%) comparison of backbones when applying different fusion strategies. \begin{table} \begin{tabular}{l|c c c} \hline \hline Method & DFDC & DF-1.0 & Celeb-DF \\ \hline EfficientNet & **69.64** & **74.72** & **70.67** \\ _w/o_ exploitation & 68.86(-0.78) & 70.42(-4.30) & 65.45(-5.22) \\ \hline ResNet-50 & **66.02** & **70.52** & **61.26** \\ _w/o_ exploitation & 65.86(-0.16) & 63.32(-7.20) & 60.83(-0.43) \\ \hline \hline \end{tabular} \end{table} TABLE VII: AUC (%) comparison of EfficientNet and ResNet-50 backbones with the removal of the dynamic exploitation. Fig. 6: Performance (%) of (a) Xception and (b) EfficientNet when utilizing attention maps obtained from different detectors. \(x\)-axis: the sequential addition of attention maps from detectors Xception (+X.), EfficientNet (+E.), VGG (+V.), F\({}^{3}\)-Net (+F.), and ResNet-50 (+R.) into the fusion strategy. \begin{table} \begin{tabular}{l l|c|c c c} \hline \hline \multicolumn{2}{c|}{Mask} & \multicolumn{4}{c}{Dataset} \\ \hline X. & E. & V. & DFDC & DF-1.0 & Celeb-DF \\ \hline ✓ & ✓ & ✓ & **69.64** & **74.72** & **70.67** \\ \hline ✓ & \(\times\) & \(\times\) & 68.22(-1.42) & 70.83(-3.89) & 67.07(-3.60) \\ \(\times\) & ✓ & \(\times\) & 63.69(-5.95) & 72.28(-2.44) & 68.91(-1.76) \\ \(\times\) & \(\times\) & ✓ & 68.31(-1.33) & 67.57(-7.15) & 64.22(-6.45) \\ \hline \hline \end{tabular} \end{table} TABLE VI: AUC (%) comparison of EfficientNet when adopting masks from a single backbone, namely, Xception (X.), EfficientNet (E.), and VGG (V.). Training with the individual masks will result in a performance decrease. i.e., Xception, EfficientNet, and VGG, to strike a reasonable balance between performance and computational costs. #### IV-D3 Comparison on mask exploitation We validated the importance of our dynamic exploitation module, which augments images with a variable \(\alpha\).
Specifically, we discarded the dynamic exploitation and converted the fused attention maps to binary masks directly. The results are reported in Table VII. One can observe that dynamic exploitation plays a key role in improving generalizability, as it contributes a 7% improvement to ResNet-50 on DF-1.0. Besides, EfficientNet degrades around 3% across testing sets without exploitation. ### _Qualitative Studies_ #### IV-E1 Mask ratio \(\alpha\) We studied the impact of different \(\alpha\) values and visualized the corresponding masks in Figure 7. The visualization demonstrates that as \(\alpha\) increases, the masked regions expand from a central point toward the peripheral areas. For instance, cases 1 and 2 primarily focus on the **lips** region, while cases 3 and 4 highlight the **nose**. Cases 5 and 6 concentrate on the **cheek** and **eyes**. This variation in masked regions is attributed to the fact that different forgery methods tend to target specific regions. For example, cases 1 and 2 are from the NT subset of FF++, whose main modification involves mouth movements, resulting in detectors placing significant attention on the lips. We further investigated the performance change of Xception using different \(\alpha\) in Figure 8. Specifically, the model is trained only on raw images from a specific subset and tested on the corresponding masked images. We can see that the model performance drops with the increase of \(\alpha\) and then converges at around 50-70%. This suggests that the masking strategy gradually removes the key clues that the model depends on to determine authenticity. It is important to clarify that the primary regions describe the areas that detectors utilize to determine authenticity. However, these regions may not encompass the complete extent of all manipulated artifacts.
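The role of the mask ratio can be illustrated with a minimal sketch, assuming \(\alpha\) keeps the top-\(\alpha\) fraction of attention values in the mask; the paper's exact refinement in the dynamic exploitation stage may differ.

```python
import numpy as np

def alpha_mask(att, alpha=0.5):
    """Keep roughly the top `alpha` fraction of attention values as the
    mask, so a larger alpha grows the masked region outward from the
    attention peak (ties may slightly enlarge the mask)."""
    flat = np.sort(att.ravel())[::-1]          # descending values
    k = max(1, int(alpha * flat.size))         # number of pixels kept
    thresh = flat[k - 1]
    return (att >= thresh).astype(np.uint8)
```

With this convention, sweeping `alpha` from 0.1 to 1.0 reproduces the qualitative behavior of Figure 7: small ratios mask only the attention peak (e.g. the lips), large ratios spread toward peripheral areas.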
#### IV-E2 Heat maps To qualitatively evaluate our method, we employed two backbones, i.e., EfficientNet and F\({}^{3}\)-Net, to demonstrate several heat maps under the cross-dataset setting in Figure 9. We can observe that for both vanilla backbones, the attention regions are concentrated on the nose and upper lip, indicating that both are overfitted to the training set FF++. Compared with the backbones, PRLE focuses on broader face regions, such as the eyes and chin, showing that more cues beyond the primary regions are utilized for detection. Moreover, the last two columns illustrate forged images that the backbones cannot distinguish from real ones. The attention maps show that the backbones fail to leverage accurate clues and thus make wrong predictions. In contrast, PRLE predicts correctly by exploring more regions. Fig. 8: Performance (%) of Xception when tested on the masked images from different FF++ subsets. Fig. 7: Visualization of masks with \(\alpha\) ranging from 1.0 to 0.1. ### _User Study_ We performed a detailed user study with 86 participants in total. Subjects were asked to classify deepfake images with primary region masks, and we ensured that each identity appeared only once. Each participant took several minutes to examine 50 images, resulting in 4,300 human decisions. The results are presented in Figure 10a. Compared to the user evaluation on FF++ without primary region masks, subjects' performance decreased by 10%, indicating that the primary regions influence the real/fake judgment of images. In addition, we conducted user experiments with a fixed \(\alpha\). Figure 10b illustrates that as the masked area increases, the accuracy continues to drop. Combining the results from these human decisions, we conclude that the primary region masks play an important role in affecting the detection judgment. ### _Efficiency Analysis_ Table VIII illustrates the efficiency impact of our PRLE method.
We compared the computational complexity (Giga floating point operations, GFLOPs), model parameters, and video throughput (videos/s) during training and inference of the Xception and ResNet-50 backbones. It can be observed that PRLE introduces only negligible computational time to the original detectors. For instance, incorporating PRLE into Xception incurs less than a 3% additional time cost. This marginal decrease in speed primarily results from the refinement of masks in the dynamic exploitation stage. Considering the generalization improvement brought about by PRLE, this slight time overhead is deemed acceptable. Moreover, apart from a minor computational cost during data processing, PRLE does not affect the computational complexity and parameters of the backbone model, thus maintaining the same inference speed as the model without PRLE. Admittedly, there is a small one-time upfront cost associated with integrating attention maps into primary region masks during the static localization stage. However, this cost is relatively minor: in our experiments, this stage processes ten images per second. ## V Conclusion and Discussion Deepfake detectors tend to overfit to primary regions, limiting their generalization to unseen data or algorithms. In this work, we address this challenge from a novel regularization view and propose an effective data augmentation method. Our strategy of static localization followed by dynamic exploitation of primary regions enables models to explore more cues for authenticity detection. Due to its plug-and-play nature, our method is compatible with most existing deepfake detectors. Unlike conventional data augmentation approaches, our method does not enlarge the dataset size and thus does not burden training efficiency. We apply this method to several strong backbones and observe significant performance improvement in terms of generalization.
In addition to its effectiveness in generalizable deepfake detection, our data augmentation approach can also support other promising research directions, such as deepfake localization and segmentation.
2310.17083
A Central Limit Theorem for intransitive dice
Intransitive dice $D^{(1)}, \ldots, D^{(\ell)}$ are dice such that $D^{(1)}$ has advantage with respect to $D^{(2)}$, die $D^{(2)}$ has advantage with respect to $D^{(3)}$ and so on, up to $D^{(\ell)}$, which has advantage over $D^{(1)}$. In this twofold work, we present: first, (deterministic) results on the existence of general intransitive dice. Second and mainly, a central limit theorem for the vector of normalized victories of a die against the next one in the list when the faces of a die are i.i.d.\ random variables and all dice are independent, but different dice may have distinct distributions, as well as distinct numbers of faces. From this central limit theorem we derive a criterion to ensure that the asymptotic probability of observing intransitive dice is null, which applies in many cases, including all continuous distributions and many discrete ones.
Luis G. Coelho, Tertuliano Franco, Lael V. Lima, João P. C. de Paula, João V. A. Pimenta, Guilherme L. F. Silva, Daniel Ungaretti
2023-10-26T01:04:46Z
http://arxiv.org/abs/2310.17083v1
# A central limit theorem for intransitive dice ###### Abstract. Intransitive dice \(D^{(1)},\ldots,D^{(\ell)}\) are dice such that \(D^{(1)}\) has advantage with respect to \(D^{(2)}\), die \(D^{(2)}\) has advantage with respect to \(D^{(3)}\) and so on, up to \(D^{(\ell)}\), which has advantage over \(D^{(1)}\). In this twofold work, we present: first, (deterministic) results on the existence of general intransitive dice. Second and mainly, a central limit theorem for the vector of normalized victories of a die against the next one in the list when the faces of a die are i.i.d. random variables and all dice are independent, but different dice may have distinct distributions, as well as distinct numbers of faces. From this central limit theorem we derive a criterion to ensure that the asymptotic probability of observing intransitive dice is null, which applies in many cases, including all continuous distributions and many discrete ones. ###### Contents * 1 Introduction * 2 Statement of results * 2.1 Main results for deterministic models of dice * 2.2 Main results for random models of dice * 2.3 Organization of the remainder of the paper * 3 Examples * 4 On deterministic intransitive dice * 4.1 A bijection between dice and words * 4.2 Proof of Theorem 1 * 4.3 Proof of Proposition 8-(iii) * 4.4 On the number of intransitive words * 4.5 Some numerical aspects on the number of intransitive words * 5 Some generalities on the counting functions and Gaussian vectors * 5.1 Properties of the counting functions * 5.2 Gaussian vectors associated to the structured covariance matrix * 6 Proofs of Theorems 5 and 6 * 7 Proof of Theorem 4 * 7.1 From moments to combinatorics of graphs * 7.2 Estimating the contributions from each class of graphs * 7.3 Computing the leading contribution, and the conclusion of the proof of Theorem 4 ## 1.
Introduction Intransitivity is an inherent facet of nature; it is part of the equilibrium in evolutionary dynamics, where different relations between predators and prey create the balance for common existence. This phenomenon is noted, for instance, in the eighteenth-century Condorcet paradox, in which three candidates are intransitive in the sense that candidate \(A\) wins when running against \(B\), candidate \(B\) wins when running against candidate \(C\), and candidate \(C\) wins when running against candidate \(A\). It is worth mentioning that Condorcet's paradox is intrinsically related to the classical Arrow's Theorem. Intransitivity also manifests itself in sports leagues, network relations, interactions between different medications, and an ever-expanding array of scenarios. While being a fundamental mathematical concept, intransitivity can lead to intriguing outcomes even in simple models. Consider a basic dice game as an example: there are two players, each tosses a (possibly different) die, and the one with the highest outcome wins. Is it possible to construct three dice, \(A\), \(B\) and \(C\), for which \(A\) is better than \(B\) (in the sense that the player with die \(A\) has a higher chance of winning against the player with die \(B\)), \(B\) is better than \(C\) and \(C\) is better than \(A\)? What about constructing an intransitive chain of more than three dice? And dice with a very large number of faces? The answer to the existence of such dice is positive, and to the best of our knowledge, the intransitivity of dice was first addressed in Martin Gardner's column [5].
However, it is worth mentioning that the particular intransitive dice cited therein were previously introduced in the sixties by Bradley Efron: \[A=(0,0,4,4,4,4),\quad B=(3,3,3,3,3,3)\] \[C=(2,2,2,2,6,6),\quad D=(1,1,1,5,5,5).\] For those dice, the probabilities that \(A\) beats \(B\), \(B\) beats \(C\), \(C\) beats \(D\) and \(D\) beats \(A\) are all equal to \(2/3\). Intransitive dice are also natural examples of the _Steinhaus and Trybuła paradox_ (see [11]), consisting in the existence of independent random variables \(X\), \(Y\), and \(Z\) such that \(\mathbb{P}(X>Y)>1/2\), \(\mathbb{P}(Y>Z)>1/2\), and \(\mathbb{P}(Z>X)>1/2\) (see also [7]). The property of intransitivity can be found in various domains, such as Statistics (see [1]) and voting systems (see [6]). From a probabilistic point of view, there has been a recent rise of interest in intransitive dice phenomena. In part, this recent trend started with a discussion by Conrey, Gabbard, Grant, Liu and Morrison in [2], where the authors considered a model of random dice where the \(n\) faces of a given random die are given by uniformly choosing \(n\) entries among positive integers conditioned to sum to \(n(n+1)/2\), which they call _a balanced model_. For instance, for \(n=4\), the faces of a die are chosen by picking uniformly one of the multisets below: \[(1,1,4,4),\quad(1,2,3,4),\quad(1,3,3,3),\quad(2,2,2,4),\quad(2,2,3,3)\,.\] In [2], supported by computational evidence, two conjectures were posed for sets of three dice: first, that the asymptotic probability of ties is zero, and second, that the asymptotic probability of picking a set of intransitive dice is \(1/4\) (see also [9], which evaluates some exact probabilities for three and four dice). These two conjectures were proved later by the Polymath group (see [10]).
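The stated \(2/3\) probabilities for Efron's dice can be checked by direct enumeration of face pairs, since for fair dice the chance that one die rolls higher than another is just the fraction of winning pairs:

```python
from fractions import Fraction
from itertools import product

def p_beats(X, Y):
    """Probability that fair die X rolls strictly higher than fair die Y."""
    wins = sum(1 for x, y in product(X, Y) if x > y)
    return Fraction(wins, len(X) * len(Y))

A = (0, 0, 4, 4, 4, 4)
B = (3, 3, 3, 3, 3, 3)
C = (2, 2, 2, 2, 6, 6)
D = (1, 1, 1, 5, 5, 5)

# Each die beats the next in the cycle A -> B -> C -> D -> A with probability 2/3.
assert all(p_beats(X, Y) == Fraction(2, 3)
           for X, Y in [(A, B), (B, C), (C, D), (D, A)])
```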
In our present work, we consider a general scenario where we do not impose any constraints on the sum of the dice entries, and we study the intransitivity phenomenon from both deterministic and random perspectives, in particular allowing for an arbitrary number of dice, and also in an asymptotic regime where the number of faces of each die grows large in a proportional (but not necessarily equal) way. The first part of this paper is devoted to studying the existence of intransitive dice: in a deterministic setup that does not allow for ties among different dice faces, we are able to characterize completely when an ordered collection of intransitive dice exists, in terms of the size of the dice and the number of different entries used for the faces. In short, we essentially prove that there are always intransitive collections with an arbitrarily large number of faces, provided each die has at least 3 faces. Naturally, one is faced with the question of how many of these collections there are. We are able to show that the proportion of ordered collections of intransitive dice among all possible collections (not necessarily intransitive) decays with the number of faces of the dice, and we perform numerical experiments on the decay rate. For both the previously mentioned existence results and the decay of the proportion of intransitive dice, the key observation is a bijection between collections of dice, not necessarily intransitive, and words with an appropriate number of letters. We explore this connection to construct, from a given collection of intransitive dice, a new collection with a larger number of faces of each die or with a larger number of dice, while preserving the intransitivity. This construction is algorithmic, and as we mentioned, it is based on the connection between intransitive dice and words with a particular combinatorial property which may be of independent interest.
The second and main part of this paper deals with models of random dice, where the numbers on the faces of a given die are independent random variables, but the distributions of different dice may vary. Our main interest lies in determining the chance that a finite collection of random dice is intransitive, when the number of faces of each die grows large. To do so, we prove a central limit theorem for the vector of numbers of victories of the faces of a die against the faces of the next die in the list (whose entries are strongly correlated). The proof of this CLT is based on the moment method, where the crucial step consists of a careful estimate of the moments via an identification with a combinatorial problem in graph theory. This is much inspired by the now classical moment method used in the proof of Wigner's semicircle law in random matrix theory. The vector of victories is actually connected to intransitivity, which is simple to illustrate when there are no ties and each die has \(n\) faces: in this particular case, intransitivity of the list of dice is equivalent to the fact that each entry of the vector of victories is larger than \(n^{2}/2\). This can be properly generalized to account for possible ties among entries, and when combined with the CLT obtained, we are able to deduce a criterion to ensure that the probability of finding intransitive collections of dice decays to zero when the number of faces grows. Such a result is obtained under rather mild conditions on the distribution of the random variables determining the faces. _Grosso modo_, the two conditions are a bound from above on the number of ties and a bound from below on the variance of victories of a die against another one in the list, avoiding degeneracy in the central limit setting. These mild conditions cover many situations.
For instance, they include the scenarios where all dice have the same distribution (including all continuous distributions and many discrete ones), situations where the underlying distributions of the faces depend on scaling parameters, and also some cases where the dice have different distributions. We also provide a way of constructing asymptotically intransitive dice (not satisfying the previous conditions, of course), which is argued via a concentration inequality. We now move forward to the discussion of our main findings. ## 2. Statement of results We split the discussion of these major results into two subsections, first for deterministic dice and then for random dice. As we hope to convey with this text, simple models of dice display rather interesting and rich aspects worth investigating more deeply. However, many interesting phenomena may depend on somewhat subtle specific features of the model considered. Nevertheless, questions surrounding intransitive dice phenomena are rather simple to state. For the latter reason, we mostly introduce new terminology and notation along the text, reserving formal definitions solely for the more technical assumptions needed. For convenience, such definitions along the text are highlighted in bold. ### Main results for deterministic models of dice An \(n\)**-sided die** is a pair \((D,X)\), where \(D=(D_{1},\ldots,D_{n})\) is a real-valued vector where each \(D_{k}\) represents the number on the \(k\)-th face, and \(X\) is a random variable taking values on \([n]\coloneqq\{1,2,\ldots,n\}\) that represents the label of the face in the outcome of a toss. The number \(n\) is the number of faces, or simply the size, of the die \(D\). The die is said to roll the face \(k\) with probability \(\mathbb{P}(X=k)\), yielding the result \(D_{k}\). If this probability equals \(1/n\) for every \(k\), the die is **honest** or **fair**. Otherwise the die is **unfair** or **biased**.
If there is no ambiguity, the die will be denoted as \(D\), and in that case, it is useful to denote the random result of \(D\) in a roll by \(D_{X}\). Note that, in general, the entries of \(D\) need not be integer-valued, nor even positive. We reserve capital letters \(A\), \(B\), \(C\) etc. to represent dice, and lower indices \(A_{i}\), \(B_{i}\) etc. to represent the entries of the dice \(A\), \(B\). It is also useful to distinguish different dice with an upper index, writing for instance \(D^{(1)}\), \(D^{(2)}\) etc., and the corresponding entries by \(D^{(1)}_{i}\), \(D^{(2)}_{i}\) etc. A die \(A\) is said to be **better than** a die \(B\), denoted by \(A\triangleright B\), if the probability of \(A\) rolling a higher value than \(B\) is greater than the probability of \(B\) rolling a higher value than \(A\). Likewise, the die \(B\) is said to be **worse than** \(A\), denoted by \(B\triangleleft A\). In mathematical terms, one way to verify whether a fair die \(A\) is better than a fair die \(B\) is by counting against how many faces of \(B\) a given face of \(A\) wins, summing the result over all possible faces of \(A\), and comparing with the count we obtain when we do the same interchanging the roles of \(A\) and \(B\). In other words, \(A\triangleright B\) if, and only if, the inequality \[\sum_{A_{i}>B_{j}}1\ >\ \sum_{B_{j}>A_{i}}1\] is satisfied. With \(n_{A}\) and \(n_{B}\) being the number of faces of \(A\) and \(B\), respectively, there are in total \(n_{A}n_{B}\) pairs of faces from \(A\) and \(B\) to compare, and \(A\triangleright B\) if, and only if, \[\sum_{A_{i}>B_{j}}1\ >\ \frac{1}{2}n_{A}n_{B}-\frac{1}{2}\sum_{A_{i}=B_{j}}1\,. \tag{2.1}\] An ordered collection of dice \(\mathbf{D}=(D^{(1)},\ldots,D^{(\ell)})\) is said to be **intransitive** if \(D^{(1)}\triangleright\cdots\triangleright D^{(\ell)}\triangleright D^{(1)}\).
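The criterion (2.1) translates directly into a short computation; the following sketch (function names are ours) checks the better-than relation and the intransitivity of an ordered collection of fair dice:

```python
from itertools import product

def better_than(A, B):
    """A is better than B per (2.1): strict wins of A exceed
    half of the non-tied face pairs."""
    wins = sum(1 for a, b in product(A, B) if a > b)
    ties = sum(1 for a, b in product(A, B) if a == b)
    return wins > (len(A) * len(B) - ties) / 2

def is_intransitive(dice):
    """Check D1 > D2 > ... > Dl > D1 for an ordered collection."""
    l = len(dice)
    return all(better_than(dice[i], dice[(i + 1) % l]) for i in range(l))

# A classical 3x3 no-tie example using each number in [9] exactly once:
assert is_intransitive([(2, 4, 9), (1, 6, 8), (3, 5, 7)])
```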
Note that while \(\triangleright\) is an asymmetric relation, it is not necessarily transitive, so it does not define an order relation. When computing whether a given collection \(\mathbf{D}=(D^{(1)},\ldots,D^{(\ell)})\) of dice is intransitive, the ordering of the entries does matter, and it is possible that \(\mathbf{D}\) is not intransitive, but for some permutation \(\sigma\) of length \(\ell\) a reordering \((D^{(\sigma(1))},\ldots,D^{(\sigma(\ell))})\) is intransitive. We will either be interested in existence results for deterministic collections \(\mathbf{D}\), or in asymptotic results when the distributions of the entries of each die are rather arbitrary, so this ordering will not be relevant in any essential way. By a **no-tie collection** of dice we mean that no pair of faces, either from the same die or from different dice, shares the same number. Our first two results concern intransitive families of deterministic dice. The first result deals with the existence of intransitive, fair dice. **Theorem 1**.: _Consider dice whose face entries are positive integers. For every \(\ell\geq 3\) and \(n\geq 3\) there exists a no-tie collection of \(\ell\) honest \(n\)-sided dice which is intransitive. Furthermore, for any \(\ell\geq 3\) there does not exist a no-tie family of \(\ell\) honest \(2\)-sided dice which is intransitive._ The notion of a die \(A\) being better than \(B\) is not a relation on the specific numbers on their faces, but rather on the relative ordering of these numbers. For instance, the die \(A=(2,4,9,10,11)\) is better than the die \(B=(1,5,7,9,10)\). Now, increase, say, the first entry of \(A\) to obtain a new die \(\widetilde{A}=(x,4,9,10,11)\) with either choice \(x=3,4\). Then, whether we choose die \(A\) or \(\widetilde{A}\), the chance of winning against a roll of die \(B\) is the same. So, in terms of _chance of winning_ against \(B\), the dice \(A\) and \(\widetilde{A}\) are indistinguishable.
In that sense, when we talk about comparison of \(\ell\) no-tie dice with \(n\) faces each, it suffices to distribute the numbers in the set \([\ell n]\) among the faces of the dice, without repetition. In fact, the proof of the existence claim in Theorem 1 is inductive/constructive, and shows that such \(\ell\) honest \(n\)-sided dice can always be chosen with distinct entries in \([\ell n]\). For a given choice of positive integers \(n_{1},\ldots,n_{\ell}\), let \(\mathcal{D}(n_{1},\ldots,n_{\ell})\) be the set of collections of dice \(\mathbf{D}=(D^{(1)},\ldots,D^{(\ell)})\) for which \(D^{(j)}\) has exactly \(n_{j}\) faces, and where each number in \([n_{1}+\cdots+n_{\ell}]\) appears exactly once in the faces in \(\mathbf{D}\). In other words, the dice are filled with numbers in \([n_{1}+\cdots+n_{\ell}]\), without repetition. Observe that with this definition, in \(\mathcal{D}(n_{1},\ldots,n_{\ell})\) we do not distinguish between the ordering of faces in each die. Or, alternatively, dice in \(\mathcal{D}(n_{1},\ldots,n_{\ell})\) are always viewed in increasing order, so that for instance the dice \((1,2,3)\) and \((2,1,3)\) are the same and are always represented by \((1,2,3)\). But we do distinguish between orderings within a collection, so that the collections \(\mathbf{D}=((1,2,4),(3,5,6)),\widehat{\mathbf{D}}=((3,5,6),(1,2,4))\) are distinct elements of \(\mathcal{D}(3,3)\).
In other words, \[\mathcal{D}(n_{1},\ldots,n_{\ell})\;\coloneqq\;\Big{\{}\mathbf{D}=(D^{(1)},\ldots,D^{(\ell)}):D^{(j)}=(D^{(j)}_{1},\ldots,D^{(j)}_{n_{j}})\in\mathbb{Z}^{n_{j}},\] \[0<D^{(j)}_{1}<\cdots<D^{(j)}_{n_{j}}\text{ for }j=1,\ldots,\ell,D^{(j_{1})}_{i_{1}}\neq D^{(j_{2})}_{i_{2}}\text{ for }j_{1}\neq j_{2},\{D^{(j)}_{i}\}_{i,j}=[n_{1}+\cdots+n_{\ell}]\Big{\}}.\] We denote by \(\mathcal{D}_{\triangleright}(n_{1},\ldots,n_{\ell})\) the subset of \(\mathcal{D}(n_{1},\ldots,n_{\ell})\) that consists of intransitive dice, that is, \[\mathcal{D}_{\triangleright}(n_{1},\ldots,n_{\ell})\;\coloneqq\;\big{\{}\mathbf{D}\in\mathcal{D}(n_{1},\ldots,n_{\ell}):D^{(1)}\triangleright\cdots\triangleright D^{(\ell)}\triangleright D^{(1)}\big{\}},\] and additionally also set \[\mathcal{D}_{\ell}(n)\;\coloneqq\;\mathcal{D}(\underbrace{n,\ldots,n}_{\ell\text{ times}}),\quad\mathcal{D}_{\triangleright,\ell}(n)\;\coloneqq\;\mathcal{D}_{\triangleright}(\underbrace{n,\ldots,n}_{\ell\text{ times}})\,. \tag{2.2}\] Exploiting a connection, explained in Section 4.1, between no-tie dice with integer entries and words in a given alphabet, we will be able to estimate the size of \(\mathcal{D}_{\triangleright,\ell}(n)\). **Theorem 2**.: _For each \(\ell\geq 3\), there exists a constant \(L(\ell)\geq 0\) for which_ \[|\mathcal{D}_{\triangleright,\ell}(n)|\;=\;\mathrm{e}^{nL(\ell)+o(n)}\quad\text{ as }n\to\infty\,.\] For any \(n\geq 1\), a simple combinatorial argument shows that \(|\mathcal{D}_{\ell}(n)|=(\ell n)!/(n!)^{\ell}\), and by combining Theorem 2 with Stirling's approximation we see that \[\frac{|\mathcal{D}_{\triangleright,\ell}(n)|}{|\mathcal{D}_{\ell}(n)|}\ =\ \frac{ \mathrm{e}^{-n(\ell\log\ell-L(\ell))+o(n)}}{(2\pi n\ell)^{(\ell-1)/2}}\,.
\tag{2.3}\] Equipping \(\mathcal{D}_{\ell}(n)\) with the uniform distribution, one may view the quantity \(\frac{|\mathcal{D}_{\triangleright,\ell}(n)|}{|\mathcal{D}_{\ell}(n)|}\) as the probability of selecting an \(\ell\)-tuple of intransitive dice from this distribution. As a consequence of Theorem 6 to be seen in a moment, applied to random dice with uniform law on \([0,1]\), we can infer that \[\lim_{n\to\infty}\frac{|\mathcal{D}_{\triangleright,\ell}(n)|}{|\mathcal{D}_{\ell}(n)|}\ =\ 0\,.\] When \(\ell=3\), we will explain in Subsection 4.5 how to obtain the estimate \(2.445<L(3)\leq 3\log 3\), and also display some numerical experiments that indicate that \(L(3)=3\log 3\). It may be natural to expect that \(L(\ell)=\ell\log\ell\) for any \(\ell\geq 3\), but besides the case \(\ell=3\) we do not have numerical evidence to support this conjecture. We remark that the definitions of the sets \(\mathcal{D}_{\ell}(n)\) and \(\mathcal{D}_{\triangleright,\ell}(n)\) above do not account for possible permutations of dice when checking intransitivity, that is, the list of dice has a fixed order. We stick to this convention here and in the central limit theorem stated in the sequel. ### Main results for random models of dice When each \(D_{i}\) is a random variable, we say that the corresponding die \(D=(D_{1},\ldots,D_{n})\) is a **random die**. Whenever we say that the **law** of a die \(D\) is \(\mathcal{L}^{D}\), we mean that the entries \(D_{i}\) are all i.i.d. random variables with law \(\mathcal{L}^{D}\). We say that the dice in a collection \(\mathbf{D}=(D^{(1)},\ldots,D^{(\ell)})\) are **independent** if the random variables \(\{D_{i}^{(k)}\}\) are all mutually independent. We stress that for independent dice the laws \(\mathcal{L}^{(1)}\coloneqq\mathcal{L}^{D^{(1)}},\ldots,\mathcal{L}^{(\ell)}\coloneqq\mathcal{L}^{D^{(\ell)}}\) need not coincide, but entries within the same die are i.i.d. random variables.
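For very small parameters, both the count \(|\mathcal{D}_{\ell}(n)|=(\ell n)!/(n!)^{\ell}\) and the ratio above can be checked by brute force. A sketch for \(\ell=3\) (entries are pairwise distinct, so each match is decided by a strict majority of the face pairings):

```python
from itertools import combinations
from math import factorial

def beats(D1, D2):
    # entries are pairwise distinct, so no ties: strict majority of wins decides
    wins = sum(1 for a in D1 for b in D2 if a > b)
    return 2 * wins > len(D1) * len(D2)

def count_collections(n):
    """Enumerate D(n,n,n): ordered partitions of [3n] into three n-sets.
    Returns (total number of collections, number of intransitive ones)."""
    total = intransitive = 0
    universe = set(range(1, 3 * n + 1))
    for d1 in combinations(sorted(universe), n):
        rest = universe - set(d1)
        for d2 in combinations(sorted(rest), n):
            d3 = tuple(sorted(rest - set(d2)))
            total += 1
            if beats(d1, d2) and beats(d2, d3) and beats(d3, d1):
                intransitive += 1
    return total, intransitive

for n in (2, 3):
    total, intr = count_collections(n)
    assert total == factorial(3 * n) // factorial(n) ** 3  # (3n)!/(n!)^3

assert count_collections(2)[1] == 0   # no intransitive 2-sided triples (Theorem 1)
assert count_collections(3)[1] > 0    # intransitive 3-sided triples exist
```

For instance, \(|\mathcal{D}_{3}(2)|=90\) and \(|\mathcal{D}_{3}(3)|=1680\), while the intransitive subsets have sizes \(0\) and a positive number, respectively, in line with Theorem 1.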
Our main goal is to determine whether a sequence of random dice may be intransitive when they grow in size. Fix an integer \(\ell\geq 3\) and consider a sequence \(\{\mathbf{D}_{m}\}_{m}\) of collections \(\mathbf{D}_{m}=(D^{(1)},\ldots,D^{(\ell)})\) of random independent dice. Each die \(D^{(k)}=D^{(k)}(m)\) depends on the index \(m\) of the sequence \(\{\mathbf{D}_{m}\}_{m}\), but to lighten notation we mostly omit this dependence. We assume each die \(D^{(k)}=D^{(k)}(m)\) has \(n_{k}=n_{k}(m)\leq m\) faces, which may vary with \(m\), and we set \[f_{k}=f_{k}(m)\ \coloneqq\ \frac{n_{k}}{m}\quad\text{so that $D^{(k)}$ has size $n_{k}\ =\ f_{k}m$},\quad k=1,\ldots,\ell. \tag{2.4}\] The assumption \(n_{k}=n_{k}(m)\leq m\) is made solely for convenience as in this case \(f_{k}\leq 1\). We can always re-index the sequence to ensure this restriction. Although there are no further relations imposed between the sizes \(n_{k}\) and \(m\), it is instructive to think about \(m\) as essentially giving the size of the die with the largest number of faces. As we already mentioned before, we always assume that different entries of the same die \(D^{(k)}\) are independent random variables with the same law \(\mathcal{L}^{(k)}=\mathcal{L}^{D^{(k)}}\) which may now vary with \(m\), and we write \(\mathcal{L}_{m}^{(k)}\) when we want to emphasize this dependence. The main question we investigate is the probability of intransitivity, namely \[\mathbb{P}\big{(}D^{(1)}\triangleright D^{(2)}\triangleright\cdots\triangleright D^{(\ell)}\triangleright D^{(1)}\big{)} \tag{2.5}\] as the numbers of faces of our dice go to infinity, which we measure by sending \(m\to\infty\). The intransitivity event in (2.5) is the intersection of the events \(D^{(k)}\triangleright D^{(k+1)}\) for \(1\leq k\leq\ell\), where we adopt the convention, valid for the rest of the paper, that \(D^{(0)}=D^{(\ell)}\) and \(D^{(\ell+1)}=D^{(1)}\).
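Before developing the theory, the probability (2.5) can be estimated by brute-force simulation for small dice. A quick seeded sketch for three dice with i.i.d. Uniform\([0,1]\) faces (the numerical values are purely illustrative):

```python
import random

random.seed(0)  # fixed seed for reproducibility

def beats(D1, D2):
    # continuous entries are almost surely tie-free: strict majority decides
    wins = sum(1 for a in D1 for b in D2 if a > b)
    return 2 * wins > len(D1) * len(D2)

def estimate_intransitivity(n, trials=2000):
    """Monte Carlo estimate of (2.5) for three independent n-sided dice
    with i.i.d. Uniform[0,1] faces, in the fixed cyclic order."""
    hits = 0
    for _ in range(trials):
        d1, d2, d3 = ([random.random() for _ in range(n)] for _ in range(3))
        if beats(d1, d2) and beats(d2, d3) and beats(d3, d1):
            hits += 1
    return hits / trials

p_hat = estimate_intransitivity(5)
# the estimate is a small fraction, below the 1/8 one would get from three
# independent fair coin flips; Theorem 6 asserts it vanishes as the size grows
assert 0.0 <= p_hat <= 0.3
```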
Such events are intimately connected to the values of the random variables \[N_{k}\;\coloneqq\;\sum_{i=1}^{n_{k}}\sum_{j=1}^{n_{k+1}}\mathbb{1}_{D^{(k)}_{i} >D^{(k+1)}_{j}},\quad k=1,\ldots,\ell \tag{2.6}\] and \[E_{k}\;\coloneqq\;\sum_{i=1}^{n_{k}}\sum_{j=1}^{n_{k+1}}\mathbb{1}_{D^{(k)}_{ i}=D^{(k+1)}_{j}},\quad k=1,\ldots,\ell. \tag{2.7}\] From the inequality (2.1) we learn that \(D^{(k)}\triangleright D^{(k+1)}\) if, and only if, the inequality \[N_{k}\;>\;\frac{1}{2}n_{k}n_{k+1}-\frac{1}{2}E_{k}\] is satisfied, and therefore \[\mathbb{P}\left(D^{(1)}\triangleright\cdots\triangleright D^{(\ell)} \triangleright D^{(1)}\right)\;=\;\mathbb{P}\Big{(}N_{k}>\frac{1}{2}n_{k}n_{k+1 }-\frac{1}{2}E_{k}\,,\;k=1,\ldots,\ell\Big{)}, \tag{2.8}\] which will be at the core of our method to analyze (2.5), and shows the relevance of \(N_{k}\) and \(E_{k}\). One should view the \(N_{k}\) as the _relative strength_ of the die \(D^{(k)}\) against \(D^{(k+1)}\). Observe that for dice coming from a sequence \(\{\mathbf{D}_{m}\}_{m}\), the random variables \(N_{k}=N_{k}(m)\) and \(E_{k}=E_{k}(m)\) also depend on \(m\), and \(N_{k}\), \(N_{k+1}\), \(E_{k}\) and \(E_{k+1}\) are all pairwise strongly correlated. We will analyze (2.8) in the limit \(m\to\infty\) via a Central Limit Theorem (CLT) for the vector \((N_{1},\ldots,N_{\ell})\). For this CLT some probabilities associated to the underlying laws of the dice are of utmost importance. By \[\mathbf{p}_{k}\;=\;\mathbf{p}(\mathcal{L}^{(k)},\mathcal{L}^{(k+1)})\;\coloneqq \;\mathbb{P}\left(D^{(k)}_{1}>D^{(k+1)}_{1}\right)\;=\;\mathbb{E}\left( \mathbb{1}_{D^{(k)}_{1}>D^{(k+1)}_{1}}\right) \tag{2.9}\] we denote the probability that a given face of the \(k\)-th die beats a given face of the \((k+1)\)-th die. 
By \[\mathbf{q}_{k}\;=\;\mathbf{q}(\mathcal{L}^{(k)},\mathcal{L}^{(k+1)})\;\coloneqq \;\mathbb{P}\left(D^{(k)}_{1}>D^{(k+1)}_{1},D^{(k)}_{2}>D^{(k+1)}_{1}\right) \tag{2.10}\] we denote the probability that two given faces of the \(k\)-th die beat a given face of the \((k+1)\)-th die. By \[\mathbf{r}_{k}\;=\;\mathbf{r}(\mathcal{L}^{(k)},\mathcal{L}^{(k+1)})\;\coloneqq \;\mathbb{P}\left(D^{(k)}_{1}>D^{(k+1)}_{1},D^{(k)}_{1}>D^{(k+1)}_{2}\right) \tag{2.11}\] we denote the probability that a given face of the \(k\)-th die beats two given faces of the \((k+1)\)-th die. Finally, also set \[\mathbf{s}_{k}\;=\;\mathbf{s}(\mathcal{L}^{(k-1)},\mathcal{L}^{(k)},\mathcal{ L}^{(k+1)})\;\coloneqq\;\mathbb{P}\left(D^{(k-1)}_{1}>D^{(k)}_{1}>D^{(k+1)}_{1} \right), \tag{2.12}\] which is the probability that a given face of \(D^{(k-1)}\) beats a given face of \(D^{(k)}\) at the same time that the latter beats a given face of \(D^{(k+1)}\). As we will see in a moment, these quantities will play a role in understanding the covariance between different dice. We use cyclic notation for these quantities, so that \(\mathbf{p}_{\ell+1}\coloneqq\mathbf{p}_{1}\), \(\mathbf{q}_{\ell+1}\coloneqq\mathbf{q}_{1}\) and so forth. Our main tool to analyze the probability (2.5) is a CLT for the correlated random variables \(N_{1},\ldots,N_{\ell}\), so it is natural to introduce their normalized version \[\widetilde{N}_{k}\;\coloneqq\;\frac{N_{k}-\mathbb{E}(N_{k})}{\sqrt{\operatorname {Var}\left(N_{k}\right)}}\,. \tag{2.13}\] Let \[\sigma_{k}\;=\;\sigma_{k}(\mathbf{p}_{k},\mathbf{q}_{k},\mathbf{r}_{k},\mathbf{s}_ {k})\;\coloneqq\;\left[f_{k}f_{k+1}\left(f_{k}(\mathbf{q}_{k}-\mathbf{p}_{k}^{2 })+f_{k+1}(\mathbf{r}_{k}-\mathbf{p}_{k}^{2})\right)\,\right]^{1/2} \tag{2.14}\] and \[\gamma_{k}\;\coloneqq\;\frac{1}{\sigma_{k-1}\sigma_{k}}f_{k-1}f_{k}f_{k+1}( \mathbf{s}_{k}-\mathbf{p}_{k-1}\mathbf{p}_{k})\,. 
\tag{2.15}\] A straightforward calculation (see Lemma 14 below) shows that, as \(m\to\infty\), \[\begin{split}\mathbb{E}(N_{k})&\;=\;f_{k}f_{k+1}m^{ 2}\mathbf{p}_{k}\,,\\ \operatorname{Var}\left(N_{k}\right)&\;=\;\sigma_{k} ^{2}m^{3}+o(m^{3})\,,\quad\text{and}\\ \operatorname{Corr}\left(N_{k-1},N_{k}\right)&\;=\; \gamma_{k}+o(1)\,.\end{split} \tag{2.16}\] We stress that the values \(\sigma_{k}=\sigma_{k}(m)\) and \(\gamma_{k}=\gamma_{k}(m)\) depend explicitly on probabilities associated to the laws \(\mathcal{L}_{m}^{(k-1)},\mathcal{L}_{m}^{(k)}\) and \(\mathcal{L}_{m}^{(k+1)}\). Moreover, they are \(O(1)\) as \(m\to\infty\), and they do not depend on regularity features of these laws, such as existence of moments or tail behavior. Since we are considering a sequence \(\{\mathbf{D}_{m}\}_{m}\) of collections of independent dice, all the quantities we just introduced depend on \(m\), and when needed to stress such dependence we write \(\mathbf{p}_{k}=\mathbf{p}_{k}(m),\sigma_{k}=\sigma_{k}(m),\gamma_{k}=\gamma_{ k}(m)\) etc. Our main working assumptions are the following. **Assumption 3**.: _Fix \(\ell\geq 3\). We assume that the sequence \(\{\mathbf{D}_{m}\}_{m}\) is a collection of \(\ell\) independent random dice, each with number of faces \(n_{k}=f_{k}m\) as in (2.4), and satisfying the following conditions:_ 1. _For_ \(k=1,\ldots,\ell\)_, the relative sizes_ \(f_{k}=f_{k}(m)\) _satisfy_ \[f_{k}(m)\;\to\;f_{k}(\infty)\in(0,1]\,,\quad\text{as $m\to\infty$}.\] 2. 
_For_ \(k=1,\ldots,\ell\)_, the rate of growth of the mean and variance of_ \(N_{k}\)_, and covariance between_ \(N_{k-1}\) _and_ \(N_{k}\) _satisfy_ \[\mathbf{p}_{k}(m) \;\to\;\mathbf{p}_{k}(\infty)\in(0,1],\] \[\sigma_{k}(m) \;\to\;\sigma_{k}(\infty)\in(0,\infty)\,,\] \[\gamma_{k}(m) \;\to\;\gamma_{k}(\infty)\in[-1,1]\,,\] _as_ \(m\to\infty\)_._ We insist that the values \(\mathbf{p}_{k}(m)\) and \(\sigma_{k}=\sigma_{k}(m)\) depend only on probabilities associated to the underlying laws rather than on qualitative features of them. In particular, Assumption 3-(i) is solely a non-degeneracy condition, which ensures that the numbers of faces of the dice are all growing, with the same speed \(m\) but possibly different rates. With (2.9) in mind, condition (ii) on \(\mathbf{p}_{k}\) essentially says that the limiting laws do not degenerate to a deterministic situation in which intransitivity trivially fails. Also, as we said earlier, under Assumption 3-(i) the values \(\sigma_{k}=\sigma_{k}(m)\) are bounded functions of \(m\). Thus, with (2.16) in mind, the second condition in (ii) says that the variance in the relative strength of consecutive dice is growing at true speed \(m^{3}\) and not slower. The quantities \(\gamma_{k}(m)\) are correlation coefficients, so they are always bounded, and the third convergence condition in (ii) can always be achieved by passing to a subsequence of the original sequence of dice \(\{\mathbf{D}_{m}\}_{m}\).
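To make these quantities concrete, consider the special case in which all faces are i.i.d. with one common continuous law and all relative sizes \(f_{k}=1\). Then \(\mathbf{p}\), \(\mathbf{q}\), \(\mathbf{r}\), \(\mathbf{s}\) are ordering probabilities of two or three i.i.d. continuous variables, computable exactly by counting rankings. The sketch below evaluates (2.14)-(2.15) in this case; the resulting values (in particular \(\gamma=-1/2\)) are our own computation for this special case, not quoted from the text, and they make the matrix of the form (2.17) with \(\ell=3\) singular, consistent with the degeneracy exploited later in the proof of Theorem 6:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def ordering_prob(n, event):
    """P(event) for n i.i.d. continuous variables: every ranking of the
    values is equally likely, and ties have probability zero."""
    hits = sum(1 for vals in permutations(range(n)) if event(vals))
    return Fraction(hits, factorial(n))

# (2.9)-(2.12) when all faces share one continuous law:
p = ordering_prob(2, lambda v: v[0] > v[1])                  # 1/2
q = ordering_prob(3, lambda v: v[0] > v[2] and v[1] > v[2])  # 1/3
r = ordering_prob(3, lambda v: v[0] > v[1] and v[0] > v[2])  # 1/3
s = ordering_prob(3, lambda v: v[0] > v[1] > v[2])           # 1/6
assert (p, q, r, s) == (Fraction(1, 2), Fraction(1, 3), Fraction(1, 3), Fraction(1, 6))

# (2.14)-(2.15) with all relative sizes f_k = 1:
sigma_sq = (q - p**2) + (r - p**2)   # = 1/6
gamma = (s - p * p) / sigma_sq       # = -1/2
assert sigma_sq == Fraction(1, 6) and gamma == Fraction(-1, 2)

# the 3x3 matrix with 1 on the diagonal and gamma off the diagonal is singular
g = gamma
det = 1 * (1 - g * g) - g * (g - g * g) + g * (g * g - g)
assert det == 0
```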
**Theorem 4**.: _Fix \(\ell\geq 3\) and for each \(m\) let \(\mathbf{D}_{m}=(D^{(1)}(m),\ldots,D^{(\ell)}(m))\) be a collection of random independent dice, for which \(\{\mathbf{D}_{m}\}_{m}\) satisfies Assumption 3, and let \((\widetilde{N}_{1}(m),\ldots,\widetilde{N}_{\ell}(m))\) be the corresponding variables from (2.13)._ _Then, as \(m\to\infty\), the random vector \((\widetilde{N}_{1}(m),\cdots,\widetilde{N}_{\ell}(m))\) converges in distribution to a centered Gaussian vector \((X_{1},\ldots,X_{\ell})\) whose covariance matrix is given by_ \[\Sigma\;=\;\left(\begin{array}{ccccc}1&\gamma_{2}(\infty)&0&\cdots&0&\gamma_{ 1}(\infty)\\ \gamma_{2}(\infty)&1&\gamma_{3}(\infty)&\cdots&0&0\\ 0&\gamma_{3}(\infty)&1&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&1&\gamma_{\ell}(\infty)\\ \gamma_{1}(\infty)&0&0&\cdots&\gamma_{\ell}(\infty)&1\end{array}\right), \tag{2.17}\] _where the coefficients \(\gamma_{k}(\infty)\) are the ones in Assumption 3-(ii)._ As said, Theorem 4 will be central in our understanding of intransitivity as \(m\to\infty\). In general, the very definition of \(\mathbf{p}\) in (2.9) would say that \[1\;=\;\mathbb{P}\big{(}D_{1}^{(k)}>D_{1}^{(k+1)}\big{)}+\mathbb{P }\big{(}D_{1}^{(k)}<D_{1}^{(k+1)}\big{)}+\mathbb{P}\big{(}D_{1}^{(k)}=D_{1}^{(k +1)}\big{)} \tag{2.18}\] \[\;=\;\mathbf{p}(\mathcal{L}^{(k)},\mathcal{L}^{(k+1)})+\mathbf{p }(\mathcal{L}^{(k+1)},\mathcal{L}^{(k)})+\mathbb{P}\big{(}D_{1}^{(k)}=D_{1}^{ (k+1)}\big{)}\,. \tag{2.19}\] In order for the die \(D^{(k)}\) to be sufficiently stronger than the die \(D^{(k+1)}\), we would expect that \(\mathbf{p}(\mathcal{L}^{(k)},\mathcal{L}^{(k+1)})>\mathbf{p}(\mathcal{L}^{(k+ 1)},\mathcal{L}^{(k)})\), and in such a case we would expect \(D^{(k)}\triangleright D^{(k+1)}\) with high probability. 
Likewise, if \(\mathbf{p}(\mathcal{L}^{(k+1)},\mathcal{L}^{(k)})>\mathbf{p}(\mathcal{L}^{(k)},\mathcal{L}^{(k+1)})\) then we would instead expect \(D^{(k+1)}\triangleright D^{(k)}\) with high probability. Hence, intransitivity becomes a nontrivial question precisely when \(\mathbf{p}(\mathcal{L}^{(k)},\mathcal{L}^{(k+1)})\approx\mathbf{p}(\mathcal{L}^{(k+1)},\mathcal{L}^{(k)})\) asymptotically as \(m\to\infty\), in which case the equality above becomes \[\mathbf{p}(\mathcal{L}^{(k+1)},\mathcal{L}^{(k)})+\frac{1}{2}\mathbb{P}\big{(}D_{1}^{(k)}=D_{1}^{(k+1)}\big{)}\;\approx\;\mathbf{p}_{k}+\frac{1}{2}\mathbb{P}\big{(}D_{1}^{(k)}=D_{1}^{(k+1)}\big{)}\;\approx\;\frac{1}{2}\,,\] and our next result gives a rate of decay for this approximation under which we can use the CLT to estimate the probability of intransitivity in the large-dice limit \(m\to\infty\). **Theorem 5**.: _Fix \(\ell\geq 3\) and for each \(m\) let \(\mathbf{D}_{m}=(D^{(1)}(m),\ldots,D^{(\ell)}(m))\) be a collection of random independent dice, for which \(\{\mathbf{D}_{m}\}_{m}\) satisfies Assumption 3, and let \((X_{1},\ldots,X_{\ell})\) be a Gaussian vector with covariance matrix (2.17). Suppose that there exists \(\delta>0\) and a function \(r(m)\) with \(\lim_{m\to\infty}r(m)=+\infty\), for which_ \[\frac{1}{2}-\mathbf{p}_{k}-\frac{1}{2}\mathbb{P}(D_{1}^{(k)}=D_{1}^{(k+1)})\;\geq\;-\frac{\delta}{m^{1/2}r(m)}\,,\quad k=1,\ldots,\ell, \tag{2.20}\] _for every \(m\) sufficiently large, and in addition_ \[\lim_{m\to\infty}\mathbb{P}\left(D_{1}^{(k)}(m)=D_{1}^{(k+1)}(m)\right)\;=\;0\quad\text{for }k=1,\ldots,\ell. \tag{2.21}\] _Then_ \[\limsup_{m\to\infty}\mathbb{P}\left(D^{(1)}\triangleright\cdots\triangleright D^{(\ell)}\triangleright D^{(1)}\right)\;\leq\;\mathbb{P}\left(X_{j}\geq 0,j=1,\ldots,\ell\right).
\tag{2.22}\] In the particular case when all the laws coincide, \(\mathcal{L}^{(1)}=\cdots=\mathcal{L}^{(\ell)}\), identity (2.18) yields that \[\frac{1}{2}-\mathbf{p}_{k}-\frac{1}{2}\mathbb{P}\left(D_{1}^{(k)}=D_{1}^{(k+1)}\right)=0,\quad k=1,\ldots,\ell,\] and (2.20) always holds true. Note that (2.21) says that there are no ties between different dice in the asymptotic limit. As for the \(N_{k}\), the mean and variance of the \(E_{k}\)'s are given in terms of probabilities associated to the underlying laws. For arbitrary underlying laws of the entries of the dice, they satisfy the rough bound \[\mathbb{E}(E_{k})=O(m^{2})\quad\text{and}\quad\operatorname{Var}\left(E_{k}\right)=O(m^{3})\quad\text{as }m\to\infty, \tag{2.23}\] see Lemma 15 below. These quantities have the same order as the corresponding quantities for \(N_{k}\) (compare (2.16) with (2.23)). Lemma 16 below shows that (2.21) implies \[\mathbb{E}(E_{k})=o(m^{2}),\quad\operatorname{Var}\left(E_{k}\right)=o(m^{3})\quad\text{as }m\to\infty,\ \text{ for }k=1,\dots,\ell. \tag{2.24}\] Thus, condition (2.21) in Theorem 5 may also be interpreted as saying that whenever the \(E_{k}\)'s grow strictly slower than the \(N_{k}\)'s, either in their mean or in their variance, then the intransitive dice problem can be bounded from above by the Gaussian probability (2.22). Exploiting this limit, we obtain our next result. **Theorem 6**.: _Let \(\{\mathbf{D}_{m}\}_{m}\) be a sequence of random independent dice satisfying the conditions of Theorem 5. Then_ \[\lim_{m\to\infty}\mathbb{P}\left(D^{(1)}\triangleright\dots\triangleright D^{(\ell)}\triangleright D^{(1)}\right)\;=\;0\,.\] In words, under the assumptions of Theorem 6 above, the proportion of intransitive random independent dice becomes negligible as the number of faces grows. The proof of Theorem 4 is the most involved proof in this paper, and is based on the moment method.
We look at the moment generating functions of the \(\widetilde{N}_{k}\)'s, which always exist because these are normalized sums of Bernoulli random variables, hence bounded. The proof of Theorem 5 is based on Theorem 4. We look at the probability in (2.22), and with the help of Chebyshev's inequality we condition on the event of no ties, reducing the right-hand side of (2.8) to a probability that involves only the \(N_{k}\)'s plus an additional term which is small by virtue of the variance control (2.24). The right-hand side then naturally arises when taking the large \(m\) limit. By virtue of a particular structure of the coefficients \(\gamma_{k}(\infty)\) in the covariance matrix (2.17), we are able to show that the probability on the right-hand side of (2.22) vanishes, and Theorem 6 follows. ### Organization of the remainder of the paper The remainder of the paper is structured as follows. In Section 3 we discuss examples of random dice, in particular when our core Assumption 3 and the variance control (2.24) are satisfied, allowing us to apply our main results. We also provide a sequence of random independent dice that does not satisfy these conditions and for which intransitivity survives in the limit. In Section 4 we discuss intransitivity in deterministic contexts, and in particular we explore a connection between intransitive dice and combinatorics of words in order to construct intransitive dice. We then turn to the context of random dice. In Section 5 we briefly discuss the counting functions \(N_{k}\) and \(E_{k}\) from (2.6)-(2.7), which correspond to victories and ties, respectively, and which play a central role in the connection between our CLT and intransitivity. In our CLT, Gaussian vectors with a covariance matrix of a particular structure appear (see (2.17)), and in Section 5 we also collect several properties of them in a form suitable for our needs.
In Section 6, we assume Theorem 4, which is our central limit theorem, and we use it to prove Theorems 5 and 6, which are tests of asymptotic intransitivity. Finally, in Section 7, we prove Theorem 4. ### Acknowledgments T.F. acknowledges support by the National Council for Scientific and Technological Development (CNPq) via a Universal Grant (number 406001/2021-9) and a Bolsa de Produtividade (number 311894/2021-6). G.S. acknowledges support by Sao Paulo Research Foundation (FAPESP) under Grants # 2019/16062-1 and # 2020/02506-2, and by Brazilian National Council for Scientific and Technological Development (CNPq) under Grant # 315256/2020-6. J.P. acknowledges support by Sao Paulo Research Foundation (FAPESP) under Grant # 2023/02674-0. J.Pa. acknowledges support by Brazilian National Council for Scientific and Technological Development (CNPq) under Grant # 118536/2023-0. L.C. acknowledges support by Sao Paulo Research Foundation (FAPESP) under Grant # 2023/02240-0. L.L. acknowledges support by Sao Paulo Research Foundation (FAPESP) under Grant # 2023/02397-7. Part of this work was carried out during the undergraduate research program "Jornadas de Pesquisa em Matematica do ICMC 2023" held at the Instituto de Ciencias Matematicas e de Computacao (ICMC) - Universidade de Sao Paulo (USP), and which was partially supported by the Centro de Ciencias Matematicas Aplicadas a Industria (CeMEAI - CEPID) under FAPESP Grant # 2013/07375-0. Research carried out using the computational resources of the Center for Mathematical Sciences Applied to Industry (CeMEAI) funded by FAPESP (grant 2013/07375-0). We thank the hospitality of ICMC-USP during the program. ## 3. Examples In this section we apply Theorem 6 to describe examples of random dice for which the probability of observing intransitivity is asymptotically null, and we also illustrate some cases of asymptotically intransitive random dice.
We start by pointing out that condition (2.20) is satisfied, by symmetry, whenever all the dice have the same law, which is the case in what follows, except in the last example. The first example has already been commented on below Theorem 4: assuming that each die has the same number of faces, and those faces are i.i.d. random variables with the same continuous (but not necessarily absolutely continuous) law \(\mathcal{L}\), the probability that the random dice \((D^{(1)},\ldots,D^{(\ell)})\) are intransitive goes to zero as \(m\to\infty\). Theorem 6 straightforwardly extends this to a more general situation, as we explain in the next paragraph. If every die has the same continuous law \(\mathcal{L}\), there are no ties, so (2.21) holds trivially. Moreover, \(\mathbf{p}_{k}(m),\sigma_{k}(m)\) and \(\gamma_{k}(m)\) depend neither on \(m\) nor on \(k\), hence it is trivial to check Assumption 3-(ii). Assuming that the number of faces of the \(k\)th die is given by \(n_{k}=f_{k}m\), where \(f_{k}\), \(k=1,\ldots,\ell\), are positive constants, we verify Assumption 3-(i). Putting these conditions together, Theorem 6 yields that the sequence of dice constructed in this way has asymptotically null probability of being intransitive. That is, under a continuous law, intransitivity is not achievable regardless of the numbers of faces of the dice, provided these numbers are proportional to the scaling parameter \(m\). Let us now see a discrete example. Assume that all \(\ell\) dice have the same law \(\mathcal{L}_{m}\), the law of a geometric random variable with parameter \(p\). Since \[\mathbb{P}\left(D_{1}^{(k)}(m)=D_{1}^{(k+1)}(m)\right)\;=\;\sum_{i=1}^{\infty}(1-p)^{2(i-1)}p^{2}\;=\;\frac{p}{2-p}\,,\] in order to ensure condition (2.21) on ties, it is necessary to impose that \(p=p(m)\to 0\) as \(m\to\infty\).
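The closed form \(p/(2-p)\) for the tie probability, together with the one-face win probability \((1-p)/(2-p)\) of one geometric face against another, can be cross-checked numerically by truncating the geometric series. A small sketch:

```python
# Check of p/(2-p) (ties) and (1-p)/(2-p) (one face beats another) for
# two i.i.d. geometric faces with P(X = i) = (1-p)^(i-1) * p, i = 1, 2, ...
def geom_pmf(p, i):
    return (1 - p) ** (i - 1) * p

def geom_sf(p, i):
    """P(X > i) for the geometric law above."""
    return (1 - p) ** i

for p in (0.5, 0.1, 0.01):
    tie = sum(geom_pmf(p, i) ** 2 for i in range(1, 2000))
    win = sum(geom_pmf(p, i) * geom_sf(p, i) for i in range(1, 2000))
    assert abs(tie - p / (2 - p)) < 1e-9
    assert abs(win - (1 - p) / (2 - p)) < 1e-9
# as p -> 0 the tie probability p/(2-p) vanishes, as required by (2.21)
```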
A long but elementary calculation, using \(\mathbb{P}(D_{1}^{(k)}=i)=(1-p)^{i-1}p\) for \(i\geq 1\), yields that \[\mathbf{p}_{k}(m)\;=\;\frac{1-p}{2-p}\,,\qquad\mathbf{q}_{k}(m)\;=\;\frac{(1-p)^{2}}{3-3p+p^{2}}\,,\] \[\mathbf{r}_{k}(m)\;=\;1-\frac{2}{2-p}+\frac{1}{3-3p+p^{2}}\,,\qquad\mathbf{s}_{k}(m)\;=\;\frac{(1-p)^{3}}{(2-p)(3-3p+p^{2})}\,.\] In particular, as \(p\to 0\) these quantities converge to \(1/2\), \(1/3\), \(1/3\) and \(1/6\), respectively. Recalling the formulas (2.14) and (2.15) for \(\sigma_{k}\) and \(\gamma_{k}\), respectively, gives us that, as \(m\to\infty\), \[\mathbf{p}_{k}(m)\;\longrightarrow\;\frac{1}{2}\in(0,1]\,,\] \[\sigma_{k}(m)\;\longrightarrow\;\sigma_{k}(\infty)=\sqrt{\frac{f_{k}f_{k+1}(f_{k}+f_{k+1})}{12}}\in(0,\infty)\,,\] \[\gamma_{k}(m)=\frac{f_{k-1}f_{k}f_{k+1}(\mathbf{s}_{k}-\mathbf{p}_{k-1}\mathbf{p}_{k})}{\sigma_{k-1}(m)\sigma_{k}(m)}\;\longrightarrow\;-\sqrt{\frac{f_{k-1}f_{k+1}}{(f_{k-1}+f_{k})(f_{k}+f_{k+1})}}\in[-1,1]\,,\] so all assumptions of Theorem 6 have been checked, and hence the probability of observing a sequence of intransitive dice is asymptotically null. A similar example may be constructed by choosing the Poisson distribution with parameter \(\lambda=\lambda(m)\to\infty\) as \(m\to\infty\). The previous examples may give the impression that asymptotic intransitivity with positive probability is never attainable when using dice with i.i.d. faces. This is not true, and the idea to construct such a sequence of random dice is simple: starting from a list of deterministic dice that is intransitive, we choose the distribution of the faces of each random die according to that list, and a concentration inequality then ensures that the sequence of random dice constructed in this way is asymptotically intransitive. This is explained in the next proposition. **Proposition 7**.: _Let \(A^{(k)}=(a_{1}^{(k)},\ldots,a_{m}^{(k)})\) for \(k\in[\ell]\) be a set of \(\ell\) deterministic honest dice with \(m\) faces that is known to be intransitive: \(A^{(k)}\triangleright A^{(k+1)}\) for every \(k\)._
Consider random dice \((B^{(k)}:k\in[\ell])\), each with \(n\) faces, where the faces of die \(B^{(k)}\) are independently chosen with law_ \[B^{(k)}_{j}\;\sim\;\mathrm{U}\{a_{i}^{(k)}:i\in[m]\}\,,\] _that is, uniformly over the faces of die \(A^{(k)}\). Then, there is a constant \(c>0\), depending only on the set of dice \(A^{(k)}\), such that_ \[\mathbb{P}\big{(}B^{(1)}\triangleright B^{(2)}\triangleright\cdots\triangleright B^{(\ell)}\triangleright B^{(1)}\big{)}\;=\;1+o(e^{-cn})\,,\quad\text{as $n\to\infty$}. \tag{3.1}\] Proof.: The deterministic dice \(A^{(k)}\), \(k\in[\ell]\), form an intransitive cycle. This can be translated into the following collection of inequalities: \[\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}>a_{j}^{(k+1)}\end{subarray}}\;1\;>\;\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}<a_{j}^{(k+1)}\end{subarray}}\;1\,,\quad\text{for every $k\in[\ell]$}. \tag{3.2}\] Let \(N_{k,i}\) be the random variable that counts the number of appearances of face \(a_{i}^{(k)}\) in die \(B^{(k)}\). As discussed in Section 4, the quantity \(N_{k,i}\) represents the weight of face \(a_{i}^{(k)}\) in die \(B^{(k)}\). Hence, we can write the event \(B^{(k)}\triangleright B^{(k+1)}\) as a function of \(N_{k,i}\) and \(N_{k+1,j}\): \[\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}>a_{j}^{(k+1)}\end{subarray}}N_{k,i}N_{k+1,j}\;>\;\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}<a_{j}^{(k+1)}\end{subarray}}N_{k,i}N_{k+1,j}\,,\quad\text{for every $k\in[\ell]$}. \tag{3.3}\] It is clear that \(N_{k,i}\) has a binomial distribution with parameters \(n\) and \(\frac{1}{m}\), and by Hoeffding's inequality \[\mathbb{P}\Big{(}\Big{|}N_{k,i}-\frac{n}{m}\Big{|}>\varepsilon n\Big{)}\;\leq\;2e^{-2\varepsilon^{2}n}\,. \tag{3.4}\] Define the event \(G:=\bigcap_{k,i}\big{\{}\big{|}N_{k,i}-\frac{n}{m}\big{|}\leq\varepsilon n\big{\}}\).
By the union bound, \[\mathbb{P}(G^{c})\;=\;\mathbb{P}\Big{(}\bigcup_{k,i}\big{\{}|N_{k,i}-\frac{n}{m}\big{|}>\varepsilon n\big{\}}\Big{)}\;\leq\;2\ell m\,e^{-2\varepsilon^{2}n}\,.\] Notice that on the event \(G\) we have that \[n^{2}\Big{(}\frac{1}{m}-\varepsilon\Big{)}^{2}\;<\;N_{k,i}N_{k+1,j}\;<\;n^{2}\Big{(}\frac{1}{m}+\varepsilon\Big{)}^{2}. \tag{3.5}\] From (3.2) it is clear that \[\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}>a_{j}^{(k+1)}\end{subarray}}1\;-\;\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}<a_{j}^{(k+1)}\end{subarray}}1\;\geq\;1\,,\quad\text{for every $k\in[\ell]$}.\] By continuity, one can choose \(\varepsilon>0\) such that \[\Big{(}\frac{1}{m}-\varepsilon\Big{)}^{2}\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}>a_{j}^{(k+1)}\end{subarray}}1\quad-\quad\Big{(}\frac{1}{m}+\varepsilon\Big{)}^{2}\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}<a_{j}^{(k+1)}\end{subarray}}1\;>\;\frac{1}{2m^{2}}\;>\;0\,,\quad\text{for every $k\in[\ell]$}.\] Apply the upper estimate of (3.5) for pairs \(i,j\) with \(a_{i}^{(k)}<a_{j}^{(k+1)}\) and the lower estimate for pairs with \(a_{i}^{(k)}>a_{j}^{(k+1)}\). Then, on the event \(G\) we have \[\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}<a_{j}^{(k+1)}\end{subarray}}N_{k,i}N_{k+1,j} < n^{2}\Big{(}\frac{1}{m}+\varepsilon\Big{)}^{2}\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}<a_{j}^{(k+1)}\end{subarray}}1\;<\;\;n^{2}\Big{(}\frac{1}{m}-\varepsilon\Big{)}^{2}\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}>a_{j}^{(k+1)}\end{subarray}}1\;<\;\;\sum_{\begin{subarray}{c}i,j:\\ a_{i}^{(k)}>a_{j}^{(k+1)}\end{subarray}}N_{k,i}N_{k+1,j}\,,\] implying that on \(G\) we have \(B^{(k)}\triangleright B^{(k+1)}\) for every \(k\in[\ell]\). Hence \(\mathbb{P}\big{(}B^{(1)}\triangleright B^{(2)}\triangleright\cdots\triangleright B^{(\ell)}\triangleright B^{(1)}\big{)}\geq\mathbb{P}(G)\geq 1-2\ell m\,e^{-2\varepsilon^{2}n}\), which gives (3.1) for any choice of \(c<2\varepsilon^{2}\). As an application of the above, take for instance Efron's dice, which are given by \[A=(0,0,4,4,4,4),\quad B=(3,3,3,3,3,3)\] \[C=(2,2,2,2,6,6),\quad D=(1,1,1,5,5,5),\] as mentioned in the Introduction.
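The intransitive cycle \(A\triangleright B\triangleright C\triangleright D\triangleright A\) of these four dice can be checked directly; no ties occur between different dice, so a strict majority of the 36 face pairings decides each match:

```python
from itertools import product

def wins(D1, D2):
    """Number of face pairings won by D1 against D2."""
    return sum(1 for a, b in product(D1, D2) if a > b)

A = (0, 0, 4, 4, 4, 4)
B = (3, 3, 3, 3, 3, 3)
C = (2, 2, 2, 2, 6, 6)
D = (1, 1, 1, 5, 5, 5)

# each step of the cycle is won 24 times out of 36, i.e. with probability 2/3
for X, Y in [(A, B), (B, C), (C, D), (D, A)]:
    assert wins(X, Y) == 24 and 2 * wins(X, Y) > 36
```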
In this case, the laws associated to each die would be: \[\mathcal{L}_{A}=\frac{1}{3}\delta_{0}+\frac{2}{3}\delta_{4}\,,\quad\mathcal{L}_{B}=\delta_{3}\,,\quad\mathcal{L}_{C}=\frac{2}{3}\delta_{2}+\frac{1}{3}\delta_{6}\,,\quad\text{ and }\quad\mathcal{L}_{D}=\frac{1}{2}\delta_{1}+\frac{1}{2}\delta_{5}\,.\] Of course, since the corresponding sequence of dice with the above laws is asymptotically intransitive, it cannot fulfill the assumptions of Theorem 6. Note that there are no ties in Efron's dice example, so condition (2.21) is trivially satisfied. The assumptions of Theorem 6 are not satisfied because condition (2.20) fails: in each match the face-to-face win probability is \(2/3\), bounded away from \(1/2\), so by the law of large numbers the proportion of victories of each die against the next concentrates strictly above one half, and the cycle of victories becomes deterministic as the number of faces in each die increases. ## 4. On deterministic intransitive dice The main goal of this section is to prove Theorems 1 and 2. We keep using the notation introduced in Section 2.1, but in this section every die considered is deterministic, that is, the entries \(D_{i}^{(j)}\) of the die \(D^{(j)}\) in a collection \(\mathbf{D}=(D^{(1)},\ldots,D^{(\ell)})\) are always prescribed deterministic numbers rather than nontrivial random variables. As explained after Theorem 1, when investigating the existence of _no-tie_ intransitive dice, only the relative ordering of the faces of the dice matters, not their particular values. Thus, in this section we also restrict to dice whose face entries are positive integers, always pairwise distinct. ### A bijection between dice and words We look at the set of dice labels \(\mathcal{A}=\{D^{(1)},\ldots,D^{(\ell)}\}\) as an alphabet, and now explain how to map dice to words. Let \(\mathcal{W}(n_{1},\ldots,n_{\ell})\) be the set of strings (or words) with \(n_{1}+\cdots+n_{\ell}\) letters in the alphabet \(\mathcal{A}\), such that each letter \(D^{(k)}\) appears exactly \(n_{k}\) times.
There is a natural bijection1\(\mathcal{D}(n_{1},\ldots,n_{\ell})\stackrel{{\pi}}{{\to}}\mathcal{W}(n_{1},\ldots,n_{\ell})\): a collection of dice \(\mathbf{D}\) is mapped to the word \(\mathbf{W}=W_{1}\cdots W_{n}\), where \(n=n_{1}+\cdots+n_{\ell}\), determined uniquely by the rule that the letter \(W_{i}\) is equal to \(D^{(k)}\) if the number \(n-i+1\) appears in a face of the die \(D^{(k)}\). This bijection for \(\mathcal{D}(4,4,4)\) is represented in Figure 1. Footnote 1: The idea of translating dice into strings was inspired by a video on the YouTube channel Polylog: “We designed special dice using math, but there’s a catch”, available at [https://youtu.be/~64UT8yiking](https://youtu.be/~64UT8yiking). Recall that \(\mathcal{D}_{\triangleright}(n_{1},\ldots,n_{\ell})\) is the subset of \(\mathcal{D}(n_{1},\ldots,n_{\ell})\) that consists of intransitive collections of dice, and denote by \(\mathcal{W}_{\triangleright}(n_{1},\ldots,n_{\ell})\) the corresponding image of \(\mathcal{D}_{\triangleright}(n_{1},\ldots,n_{\ell})\) by the bijection \(\pi\). Given the word representation \(\mathbf{W}\) of a collection of dice \(\mathbf{D}\), it is also possible to compare which one of two dice \(D^{(i)}\) and \(D^{(j)}\) in \(\mathbf{D}\) is stronger: one sums how many letters \(D^{(i)}\) are to the right of every letter \(D^{(j)}\). The result is how many possible victories \(D^{(j)}\) has over \(D^{(i)}\), and if this result is larger than half the total number of combinations \(n_{j}n_{i}\), then \(D^{(j)}\triangleright D^{(i)}\). In particular, repeating this process over consecutive letters in a given word \(\mathbf{W}\), it is possible to determine whether it belongs to \(\mathcal{W}_{\triangleright}(n_{1},\dots,n_{\ell})\). To illustrate this process in the dice from Figure 1, introduce some auxiliary labels in the letters \(A\) from \(\mathbf{W}\) as \[\mathbf{W}=ABCCABBCAABC=A_{1}BCCA_{2}BBCA_{3}A_{4}BC.
\tag{4.1}\]

There are \(4\) \(B\)'s to the right of \(A_{1}\), \(3\) \(B\)'s to the right of \(A_{2}\), and \(1\) \(B\) to the right of each of \(A_{3}\) and \(A_{4}\). Thus, the number of victories of the die \(A\) over \(B\) is \(4+3+1+1=9\). By symmetry, the number of victories of \(B\) over \(A\) is \(16-9=7\), and in this case \(A\triangleright B\). Also, to compare which of two given dice \(D^{(i)}\) and \(D^{(j)}\) of a collection \(\mathbf{D}\) is better, it suffices to know the sub-word of \(\pi(\mathbf{D})\) obtained when we remove all letters different from \(D^{(i)}\) and \(D^{(j)}\). For instance, in the example just explained we could have compared the dice \(A\) and \(B\) by looking solely at the sub-word \(ABABBAAB\) obtained when we remove the \(C\)'s from \(\mathbf{W}\) in (4.1).

It is convenient to introduce the quantities \[N_{i,j}(\mathbf{D})\coloneqq\sum_{\begin{subarray}{c}k_{1},k_{2}\\ D_{k_{1}}^{(i)}>D_{k_{2}}^{(j)}\end{subarray}}1,\quad\text{and its induced version on }\mathbf{W}\text{, namely }N_{i,j}(\mathbf{W})\coloneqq N_{i,j}(\pi^{-1}(\mathbf{W})). \tag{4.2}\]

In general, for \(\mathbf{W}\in\mathcal{W}(n_{1},\dots,n_{\ell})\) the numbers \(N_{i,j}(\mathbf{W})\) satisfy \[N_{i,j}(\mathbf{W})+N_{j,i}(\mathbf{W})=n_{i}n_{j}, \tag{4.3}\] and the statement \(D^{(i)}\triangleright D^{(j)}\) is equivalent to saying that \[N_{i,j}(\mathbf{W})\;>\;\frac{n_{i}n_{j}}{2}. \tag{4.4}\] Furthermore, if \(\mathbf{W}\) is any sub-word of \(\widetilde{\mathbf{W}}\) obtained without removing the two given letters \(D^{(i)}\) and \(D^{(j)}\), then \[N_{i,j}(\mathbf{W})\;=\;N_{i,j}(\widetilde{\mathbf{W}})\,, \tag{4.5}\] which follows from the interpretation of \(N_{i,j}(\cdot)\) as the number of \(D^{(i)}\)'s to the left of \(D^{(j)}\)'s in the given word. 
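These counts are easy to compute directly from a word. The sketch below (our own illustration; the helper name is not from the paper) evaluates \(N_{i,j}\) for the word in (4.1) and recovers the comparison \(A\triangleright B\) through (4.3) and (4.4):

```python
# N_{i,j}(W): number of times letter i appears to the left of letter j in W.
# Since leftmost letters correspond to the largest face values, this is the
# number of victories of die i over die j.
def N(word, i, j):
    total = 0
    for p, a in enumerate(word):
        if a == i:
            total += sum(1 for b in word[p + 1:] if b == j)
    return total

W = "ABCCABBCAABC"        # the word from (4.1)
n_A = W.count("A")        # 4
n_B = W.count("B")        # 4

N_AB = N(W, "A", "B")     # victories of A over B
N_BA = N(W, "B", "A")     # victories of B over A

print(N_AB, N_BA)                 # 9 7, matching the text
assert N_AB + N_BA == n_A * n_B   # relation (4.3)
assert N_AB > n_A * n_B / 2       # relation (4.4): A > B
```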
### Proof of Theorem 1

We now focus on dice with the same number of faces \(n\), that is, we fix \(\ell\) and look at \(\mathcal{D}_{\ell}(n)\) and \(\mathcal{D}_{\triangleright,\ell}(n)\) from (2.2), and their corresponding images \[\mathcal{W}_{\ell}(n)\coloneqq\pi(\mathcal{D}_{\ell}(n))\quad\text{and}\quad \mathcal{W}_{\triangleright,\ell}(n)\coloneqq\pi(\mathcal{D}_{\triangleright, \ell}(n))\,.\] The proof of Theorem 1 is based on the next result.

**Proposition 8**.: _The following properties hold._

1. _For_ \(\ell\geq 3\)_, the set_ \(\mathcal{W}_{\triangleright,\ell}(2)\) _is empty._
2. _The sets_ \(\mathcal{W}_{\triangleright,3}(3)\) _and_ \(\mathcal{W}_{\triangleright,3}(4)\) _are both non-empty._
3. _If the set_ \(\mathcal{W}_{\triangleright,\ell}(n)\) _is non-empty, then both sets_ \(\mathcal{W}_{\triangleright,\ell}(n+2)\) _and_ \(\mathcal{W}_{\triangleright,\ell+1}(n)\) _are also non-empty._

Proof.: To prove (i), let \(\mathbf{W}\in\mathcal{W}_{\ell}(2)\) be a word for which \(D^{(1)}\triangleright D^{(2)}\triangleright\dots\triangleright D^{(\ell)}\). In this case, we have that \[N_{j,k}(\mathbf{W})+N_{k,j}(\mathbf{W})=4\quad\text{for any }j\neq k,\] so \(N_{j,k}(\mathbf{W})\geq 3\) whenever \(D^{(j)}\triangleright D^{(k)}\). We learned the following: any sub-word of \(\mathbf{W}\) in two different letters \(D^{(j)}\) and \(D^{(k)}\) for which \(D^{(j)}\triangleright D^{(k)}\) has to be one of the following two words \[D^{(j)}D^{(j)}D^{(k)}D^{(k)}\quad\text{or}\quad D^{(j)}D^{(k)}D^{(j)}D^{(k)}. \tag{4.6}\] Thus, in \(\mathbf{W}\) there is always a \(D^{(1)}\) to the left of the two occurrences of \(D^{(2)}\), there is always a \(D^{(2)}\) to the left of the two occurrences of \(D^{(3)}\), and so on. Consequently, there is always a \(D^{(1)}\) to the left of the two occurrences of \(D^{(k)}\), for any \(k\geq 2\). 
Hence, the sub-word of \(\mathbf{W}\) in \(D^{(1)}\) and \(D^{(\ell)}\) cannot be of the form (4.6) with \(j=\ell\) and \(k=1\), so the relation \(D^{(\ell)}\triangleright D^{(1)}\) is not verified in \(\mathbf{W}\). For (ii), examples of words in \(\mathcal{W}_{\triangleright,3}(3)\) and \(\mathcal{W}_{\triangleright,3}(4)\), and their corresponding dice, are displayed in Figure 2. The proof of part (iii) is postponed to Section 4.3.

With Proposition 8 at hand, we are ready to prove Theorem 1.

Proof of Theorem 1.: From the definition of \(\mathcal{D}_{\triangleright,\ell}(n)\) it immediately follows that Theorem 1 is equivalent to the following two claims:

1. For any \(\ell\geq 3\), the set \(\mathcal{D}_{\triangleright,\ell}(2)\) is empty.
2. For any \(n\geq 3\) and \(\ell\geq 3\), the set \(\mathcal{D}_{\triangleright,\ell}(n)\) is non-empty.

Claim (1) above follows from the definition of the bijection \(\pi\) and Proposition 8-(i). In turn, claim (2) follows by applying Proposition 8-(iii) recursively, having in mind that \(\mathcal{D}_{\triangleright,3}(3)\) and \(\mathcal{D}_{\triangleright,3}(4)\) are both non-empty by Proposition 8-(ii).

Figure 2. On top, a collection of three \(3\)-sided dice corresponding to the word \(\mathbf{W}=ABCCABBCA\in\mathcal{W}_{\triangleright,3}(3)\). On bottom, a collection of three \(4\)-sided dice corresponding to the word \(\mathbf{W}=ABCBCCAABABC\in\mathcal{W}_{\triangleright,3}(4)\).

### Proof of Proposition 8-(iii)

To prove Proposition 8 we need some additional notions and lemmas. For what comes next, we recall that the numbers \(N_{i,j}(\mathbf{W})\) were introduced in (4.2). 
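The cyclic relation defining \(\mathcal{W}_{\triangleright,3}(n)\) can be checked mechanically through (4.4). The following sketch (an illustrative helper, not from the paper) confirms that the two words of Figure 2 are intransitive, while the word from (4.1) is not, since the comparison between \(C\) and \(A\) there is a tie:

```python
def N(word, i, j):
    # number of letters i to the left of each letter j in word
    return sum(1 for p, a in enumerate(word) if a == i
                 for b in word[p + 1:] if b == j)

def is_intransitive(word, letters):
    # checks D1 > D2 > ... > Dl > D1 in the sense of (4.4)
    l = len(letters)
    for k in range(l):
        i, j = letters[k], letters[(k + 1) % l]
        if not N(word, i, j) > word.count(i) * word.count(j) / 2:
            return False
    return True

assert is_intransitive("ABCCABBCA", "ABC")         # Figure 2, top
assert is_intransitive("ABCBCCAABABC", "ABC")      # Figure 2, bottom
assert not is_intransitive("ABCCABBCAABC", "ABC")  # word (4.1): C vs A ties 8-8
```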
The **dual word** \(\mathbf{W}^{*}\in\mathcal{W}(n_{1},\ldots,n_{\ell})\) is obtained by reversing the order of the letters in a given word \(\mathbf{W}\in\mathcal{W}(n_{1},\ldots,n_{\ell})\); in the example (4.1) the result is \[\mathbf{W}^{*}\;=\;CBAACBBACCBA\,.\]

**Lemma 9**.: _Let \(\mathbf{W}\in\mathcal{W}_{\ell}(n)\) and \(\mathbf{W}^{*}\) its dual word. Then_ \[N_{i,j}(\mathbf{W})\;=\;N_{j,i}(\mathbf{W}^{*})\,.\]

Proof.: The number \(N_{i,j}(\mathbf{W})\) counts the number of times the letter \(D^{(i)}\) appears to the left of each \(D^{(j)}\) in \(\mathbf{W}\). The dual word \(\mathbf{W}^{*}\) is obtained from \(\mathbf{W}\) by reading the letters of \(\mathbf{W}\) backwards, and the lemma follows from this interpretation.

We say a collection of dice \(\mathbf{D}\in\mathcal{D}(n_{1},\ldots,n_{\ell})\), or its corresponding word \(\mathbf{W}=\pi(\mathbf{D})\), is **neutral** if any given die in \(\mathbf{D}\) beats any other given die in \(\mathbf{D}\) the same number of times. In terms of \(N_{i,j}(\mathbf{W})\) this is equivalent to verifying that \[N_{i,j}(\mathbf{W})\;=\;N_{j,i}(\mathbf{W})\quad\text{for every }i\neq j\,. \tag{4.7}\] From this relation and (4.3) it follows that if \(\mathbf{W}\in\mathcal{W}_{\ell}(n)\) is neutral, then \(n\) must be even.

We also talk about the concatenation of two words \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) into a new word \(\mathbf{W}=\mathbf{W}_{1}\mathbf{W}_{2}\); in the example (4.1), for instance, we can write \(\mathbf{W}=\mathbf{W}_{1}\mathbf{W}_{2}\) with \(\mathbf{W}_{1}=ABCCA\) and \(\mathbf{W}_{2}=BBCAABC\). When dealing with concatenations, the involved words need not be in the same letters, nor need they have the same size. 
However, when \(\mathbf{W}_{1}\in\mathcal{W}_{\ell}(n_{1})\) and \(\mathbf{W}_{2}\in\mathcal{W}_{\ell}(n_{2})\), then obviously \(\mathbf{W}_{1}\mathbf{W}_{2}\in\mathcal{W}_{\ell}(n_{1}+n_{2})\) and the identity \[N_{i,j}(\mathbf{W}_{1}\mathbf{W}_{2})\;=\;N_{i,j}(\mathbf{W}_{1})+n_{1}n_{2} +N_{i,j}(\mathbf{W}_{2}) \tag{4.8}\] holds true.

**Lemma 10**.: _Given any word \(\mathbf{W}\in\mathcal{W}_{\ell}(n)\), the concatenation \(\widetilde{\mathbf{W}}=\mathbf{W}\mathbf{W}^{*}\in\mathcal{W}_{\ell}(2n)\) is neutral._

Proof.: By (4.8) and Lemma 9, for any \(i\neq j\) we obtain \[N_{i,j}(\mathbf{W}\mathbf{W}^{*})\;=\;N_{i,j}(\mathbf{W})+n^{2}+N_{i,j}(\mathbf{W}^{*})\;=\;N_{j,i}(\mathbf{W}^{*})+n^{2}+N_{j,i}(\mathbf{W})\;=\;N_{j,i}(\mathbf{W}\mathbf{W}^{*})\,,\] which is precisely (4.7).

We are ready to start applying recursive arguments that preserve intransitivity, starting with adding a new letter to a word known to be intransitive.

**Lemma 11**.: _If \(\mathcal{W}_{\triangleright,\ell}(n)\) is non-empty, then \(\mathcal{W}_{\triangleright,\ell+1}(n)\) is non-empty._

Proof.: From a given \(\mathbf{W}\in\mathcal{W}_{\ell}(n)\), create a new word \(\widetilde{\mathbf{W}}\in\mathcal{W}_{\ell+1}(n)\) obtained by replacing every occurrence of \(D^{(\ell)}\) by \(D^{(\ell)}D^{(\ell+1)}\). In the example \(\mathbf{W}\in\mathcal{W}_{3}(4)\) from (4.1), the new word \(\widetilde{\mathbf{W}}\in\mathcal{W}_{4}(4)\) is \[\widetilde{\mathbf{W}}\;=\;ABCDCDABBCDAABCD.\] In virtue of (4.5), the relation \(D^{(k)}\triangleright D^{(k+1)}\) for \(k=1,\ldots,\ell-1\) is preserved when going from a word \(\mathbf{W}\in\mathcal{W}_{\triangleright,\ell}(n)\) to the corresponding word \(\widetilde{\mathbf{W}}\in\mathcal{W}_{\ell+1}(n)\). 
Still by construction, the relations \[N_{\ell,\ell+1}(\widetilde{\mathbf{W}})\;>\;N_{\ell+1,\ell}(\widetilde{\mathbf{W}})\quad\text{and}\quad N_{\ell,1}(\mathbf{W})\;=\;N_{\ell+1,1}(\widetilde{\mathbf{W}})\] are of straightforward verification, since each letter \(D^{(\ell+1)}\) appears immediately to the right of a letter \(D^{(\ell)}\) in \(\widetilde{\mathbf{W}}\). When \(\mathbf{W}\in\mathcal{W}_{\triangleright,\ell}(n)\), the inequality above shows that \(D^{(\ell)}\triangleright D^{(\ell+1)}\) in \(\widetilde{\mathbf{W}}\), and the equality above shows that the relation \(D^{(\ell)}\triangleright D^{(1)}\) in \(\mathbf{W}\) transfers to the relation \(D^{(\ell+1)}\triangleright D^{(1)}\) in \(\widetilde{\mathbf{W}}\).

Adding new faces whilst preserving intransitivity is a little bit more involved, and will be based on the next two lemmas.

**Lemma 12**.: _Fix \(k\geq 1\), and suppose that \(\mathbf{I}\in\mathcal{W}_{\ell}(2k)\) is a neutral word. Let \(\mathbf{W}\in\mathcal{W}_{\ell}(n)\). Then \(\mathbf{W}\in\mathcal{W}_{\triangleright,\ell}(n)\) if, and only if, \(\mathbf{I}\mathbf{W}\in\mathcal{W}_{\triangleright,\ell}(n+2k)\)._

Proof.: From (4.8) we learn that for any \(i\neq j\), \[N_{i,j}(\mathbf{I}\mathbf{W})\;=\;N_{i,j}(\mathbf{I})+2kn+N_{i,j}(\mathbf{W})\,.\] Using the symmetry (4.7) for the neutral word \(\mathbf{I}\), we obtain \[N_{j,j+1}(\mathbf{I}\mathbf{W})-N_{j+1,j}(\mathbf{I}\mathbf{W})\;=\;N_{j,j+1}(\mathbf{W})-N_{j+1,j}(\mathbf{W})\,,\quad j=1,\ldots,\ell-1,\] and likewise for the pair \((\ell,1)\). The result then follows from (4.4).

**Lemma 13**.: _If \(\mathcal{W}_{\triangleright,\ell}(n)\) is non-empty, then \(\mathcal{W}_{\triangleright,\ell}(n+2)\) is non-empty._

Proof.: By Lemma 10, the word \(\mathbf{I}=\mathbf{S}\mathbf{S}^{*}\in\mathcal{W}_{\ell}(2)\) constructed from the choice \[\mathbf{S}\;=\;D^{(1)}D^{(2)}\cdots D^{(\ell)}\] is neutral. The result now follows from Lemma 12.

We finally complete the proof of Proposition 8. 
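Both constructions can be exercised concretely. The sketch below (our own illustration) applies the letter-splitting step of Lemma 11 and the neutral-prefix padding of Lemmas 12 and 13 to the word \(ABCCABBCA\in\mathcal{W}_{\triangleright,3}(3)\) from Figure 2:

```python
def N(word, i, j):
    # number of letters i to the left of each letter j in word
    return sum(1 for p, a in enumerate(word) if a == i
                 for b in word[p + 1:] if b == j)

def is_intransitive(word, letters):
    l = len(letters)
    return all(N(word, letters[k], letters[(k + 1) % l])
               > word.count(letters[k]) * word.count(letters[(k + 1) % l]) / 2
               for k in range(l))

W = "ABCCABBCA"                  # a word in W_{>,3}(3)

# Lemma 11: replace every occurrence of the last letter C by CD,
# producing a word on four letters with a fourth intransitive die.
W_plus_letter = W.replace("C", "CD")
assert is_intransitive(W_plus_letter, "ABCD")

# Lemma 13: prepend the neutral word I = S S* with S = ABC (Lemma 10),
# which adds two faces to each die while keeping intransitivity (Lemma 12).
I = "ABC" + "ABC"[::-1]          # "ABCCBA", neutral by Lemma 10
assert is_intransitive(I + W, "ABC")
```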
Proof of Proposition 8-(iii).: Proposition 8-(iii) is now simply a combination of Lemmas 11 and 13.

### On the number of intransitive words

The proof of Theorem 2 is now a consequence of some of the results already established.

Proof of Theorem 2.: For given integers \(n_{1},n_{2}\), let \(\mathbf{W}_{j}\in\mathcal{W}_{\triangleright,\ell}(n_{j})\), \(j=1,2\), so that by (4.4), \[N_{k,k+1}(\mathbf{W}_{j})>\frac{(n_{j})^{2}}{2}\,,\quad k=1,\ldots,\ell-1, \quad\text{and}\quad N_{\ell,1}(\mathbf{W}_{j})>\frac{(n_{j})^{2}}{2},\quad j =1,2.\] Using these inequalities and (4.8), it follows that \(\mathbf{W}_{1}\mathbf{W}_{2}\in\mathcal{W}_{\ell}(n_{1}+n_{2})\) satisfies \[N_{k,k+1}(\mathbf{W}_{1}\mathbf{W}_{2})>\frac{(n_{1}+n_{2})^{2}}{2},\;k=1, \ldots,\ell-1,\quad\text{and}\quad N_{\ell,1}(\mathbf{W}_{1}\mathbf{W}_{2})> \frac{(n_{1}+n_{2})^{2}}{2}.\] Using again (4.4) we conclude that \(\mathbf{W}_{1}\mathbf{W}_{2}\in\mathcal{W}_{\triangleright,\ell}(n_{1}+n_{2})\). Hence, \[\left|\mathcal{W}_{\triangleright,\ell}(n_{1}+n_{2})\right|\;\geq\;\left| \mathcal{W}_{\triangleright,\ell}(n_{1})\right|\left|\mathcal{W}_{\triangleright, \ell}(n_{2})\right|,\] so the sequence \((\log|\mathcal{W}_{\triangleright,\ell}(n)|)_{n}\) is superadditive, and thus by Fekete's Lemma, \[\lim_{n\to\infty}\frac{\log|\mathcal{W}_{\triangleright,\ell}(n)|}{n}\;=\;\sup_ {n}\frac{\log|\mathcal{W}_{\triangleright,\ell}(n)|}{n}\;=\;L(\ell)\] for some constant \(L(\ell)>0\). Since \(\mathcal{W}_{\triangleright,\ell}(n)=\pi(\mathcal{D}_{\triangleright,\ell}(n))\) and \(\pi\) is a bijection, the result follows.

### Some numerical aspects on the number of intransitive words

Through a numerical study, we are able to estimate the number \(L(3)\) from Theorem 2 as follows. 
A simple algorithm computes \(|\mathcal{D}_{\triangleright,3}(n)|\) in a straightforward way: we iterate through every word in the set \(\mathcal{W}_{3}(n)\) and check whether each word is intransitive, a task that can be accomplished in \(\Theta(n)\) operations. A drawback of this approach is that the number of words that need checking grows exponentially with respect to \(n\). In fact, by Stirling's approximation, we have that \(|\mathcal{D}_{3}(n)|=\Theta(27^{n}/n)\), resulting in a total time complexity of \(\Theta(27^{n})\). We optimize this algorithm by partitioning the set \(\mathcal{W}_{3}(n)\) into words with the same "prefixes" (that is, the same sequence of letters in the first \(n\) positions of the word). We then avoid prefixes which are already known not to yield intransitive words, thus performing early exits while checking for intransitivity. The algorithm was implemented in C++ and executed on the Euler cluster maintained by the Center for Mathematical Sciences Applied to Industry (CeMEAI). Using this method, we were able to compute \(|\mathcal{D}_{\triangleright,3}(n)|\) for \(n\leq 11\) (see Table 1).

Using Table 1, we can estimate \(L(3)\). By Fekete's lemma, we have that \[L(3)=\sup_{n}\frac{\log|\mathcal{D}_{\triangleright,3}(n)|}{n}\geq\frac{\log|\mathcal{D}_{\triangleright,3}(11)|}{11}>2.445.\] On the other hand, as \(\mathcal{W}_{\triangleright,\ell}(n)\subset\mathcal{W}_{\ell}(n)\), we obtain \[L(\ell)\leq\lim_{n}\frac{\log|\mathcal{W}_{\ell}(n)|}{n}=\ell\log\ell.\] In particular, the estimate \(2.445<L(3)\leq 3\log 3\approx 3.296\) is valid.

The algorithm just described yields exact values of \(|\mathcal{D}_{\triangleright,3}(n)|\), producing the values shown in Table 1. However, its running time grows quickly with \(n\), and even computing \(|\mathcal{D}_{\triangleright,3}(n)|\) for, say, \(n=12\) is already out of reach. 
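A brute-force version of the count just described fits in a few lines of Python (a simplified sketch of the approach, without the prefix pruning of the C++ implementation); it confirms the two claims of Theorem 1 for small parameters:

```python
from itertools import permutations

def N(word, i, j):
    return sum(1 for p, a in enumerate(word) if a == i
                 for b in word[p + 1:] if b == j)

def is_intransitive(word, letters="ABC"):
    l = len(letters)
    return all(N(word, letters[k], letters[(k + 1) % l])
               > word.count(letters[k]) * word.count(letters[(k + 1) % l]) / 2
               for k in range(l))

def count_intransitive(n):
    # |D_{>,3}(n)|: iterate over all distinct words in W_3(n)
    base = "A" * n + "B" * n + "C" * n
    return sum(1 for w in set(permutations(base)) if is_intransitive(w))

print(count_intransitive(2))  # 0: no intransitive triple of 2-sided dice
print(count_intransitive(3))  # positive, e.g. ABCCABBCA is counted
```

This naive enumeration becomes impractical quickly, as discussed above, since the number of distinct words grows like \(27^{n}/n\).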
With this issue in mind, we also performed a stochastic simulation to estimate \(|\mathcal{D}_{\triangleright,3}(n)|/|\mathcal{D}_{3}(n)|\) for a sample of values of \(n\) up to \(500\), an arbitrary cutoff below which fluctuations in the estimation should still be controlled. We sample uniformly from \(\mathcal{D}_{3}(n)\), and then estimate \(\Delta L_{3}(n)=-\log\big{(}|\mathcal{D}_{\triangleright,3}(n)|/|\mathcal{D}_{3}(n)|\big{)}/n\). The results are displayed in Figure 3, showing good agreement with the values computed in Table 1. Observe how the values seem to tend to \(0\). Since \(\lim_{n}\Delta L_{3}(n)=3\log 3-L(3)\), we conjecture that \(L(3)=3\log 3\), so that the fraction \(|\mathcal{D}_{\triangleright,3}(n)|/|\mathcal{D}_{3}(n)|\) should decay sub-exponentially with \(n\). Compare it with (2.3). All the algorithms and data presented here are publicly available in our repository on GitHub2.

Footnote 2: [https://github.com/NonTransitiveDices/NonTransitiveDices.git](https://github.com/NonTransitiveDices/NonTransitiveDices.git)

## 5. Some generalities on the counting functions and Gaussian vectors

The goal here is to prove Theorem 4. In this section \(\mathbf{D}_{m}=(D^{(1)},\ldots,D^{(\ell)})\) will always denote a collection of \(\ell\geq 3\) independent random dice, where each face of \(D^{(i)}=D^{(i)}(m)\) has law \(\mathcal{L}^{(i)}=\mathcal{L}_{m}^{(i)}\), and such that the sequence \(\{\mathbf{D}_{m}\}_{m}\) satisfies Assumption 3.

### Properties of the counting functions

Recall the counting variables \(N_{j}\), their normalized versions \(\widetilde{N}_{j}\), and \(E_{j}\), which were defined in (2.6), (2.13) and (2.7), respectively, and the quantities \(\mathbf{p}_{k},\mathbf{q}_{k},\mathbf{r}_{k}\) and \(\mathbf{s}_{k}\), which depend only on the collection of laws \(\mathcal{L}^{(1)},\ldots,\mathcal{L}^{(\ell)}\) and were introduced in (2.9)-(2.12). Note that \(\mathbf{q}_{k}\) and \(\mathbf{r}_{k}\) are not necessarily equal because each die has its own law. 
As said before, these quantities also depend on \(m\) but we will not write down this dependence explicitly. The next lemma establishes (2.16).

Figure 3. \(\Delta L_{3}(n)\) for various values of \(n\). The blue data points were calculated using Table 1, and the red data points were generated through a stochastic simulation. The vertical axis is represented in a logarithmic scale.

**Lemma 14**.: _We have_ \[\mathbb{E}N_{k}\;=\;n_{k}n_{k+1}\mathbf{p}_{k}\,,\] \[\operatorname{Var}\left(N_{k}\right)\;=\;n_{k}n_{k+1}\big{[}n_{k }(\mathbf{q}_{k}-\mathbf{p}_{k}^{2})+n_{k+1}(\mathbf{r}_{k}-\mathbf{p}_{k}^{ 2})+\mathbf{p}_{k}^{2}+\mathbf{p}_{k}-\mathbf{q}_{k}-\mathbf{r}_{k}\big{]}\,, \quad\text{ and}\] \[\operatorname{Cov}\left(N_{k-1},N_{k}\right)=n_{k-1}n_{k}n_{k+1} \left(\mathbf{s}_{k}-\mathbf{p}_{k-1}\mathbf{p}_{k}\right).\] _Consequently, under Assumption 3 we have that as \(m\to\infty\),_ \[\mathbb{E}N_{k}\;=\;f_{k}(\infty)f_{k+1}(\infty)\mathbf{p}_{k}(\infty)m^{2}+o (m^{2})\,,\] \[\operatorname{Var}\left(N_{k}\right)\ =\ \sigma_{k}(\infty)^{2}m^{3}+O(m^{2})\,,\] \[\operatorname{Corr}\left(N_{k-1},N_{k}\right)\ =\ \gamma_{k}(\infty)+O(1/m).\]

Proof.: The calculation of \(\mathbb{E}N_{k}\) is immediate from the definition. For the variance, we begin by noticing that \[\mathbb{E}\left[N_{k}^{2}\right]\ =\ \sum_{i_{1}=1}^{n_{k}}\sum_{j_{1}=1}^{n_{k+1}} \sum_{i_{2}=1}^{n_{k}}\sum_{j_{2}=1}^{n_{k+1}}\mathbb{P}\left(\{D_{i_{1}}^{(k) }>D_{j_{1}}^{(k+1)}\}\cap\{D_{i_{2}}^{(k)}>D_{j_{2}}^{(k+1)}\}\right).\] The probability of each such intersection is always in \(\{\mathbf{p}_{k}^{2},\mathbf{p}_{k},\mathbf{r}_{k},\mathbf{q}_{k}\}\), depending on whether the indices \(i_{1}\) and \(i_{2}\) or \(j_{1}\) and \(j_{2}\) coincide. 
Decomposing into all possibilities, we have \[\operatorname{\mathbb{E}}\left[N_{k}^{2}\right] =\sum_{\begin{subarray}{c}i_{2}\neq i_{1}\\ j_{2}\neq j_{1}\end{subarray}}\mathbf{p}_{k}^{2}+\sum_{\begin{subarray}{c}i_{ 2}=i_{1}\\ j_{2}\neq j_{1}\end{subarray}}\mathbf{r}_{k}+\sum_{\begin{subarray}{c}j_{2}=j_{ 1}\\ i_{2}\neq i_{1}\end{subarray}}\mathbf{q}_{k}+\sum_{\begin{subarray}{c}i_{2}=i _{1}\\ j_{2}=j_{1}\end{subarray}}\mathbf{p}_{k}\] \[=n_{k}(n_{k}-1)n_{k+1}(n_{k+1}-1)\mathbf{p}_{k}^{2}+n_{k}n_{k+1}( n_{k+1}-1)\mathbf{r}_{k}\] \[\qquad+n_{k}(n_{k}-1)n_{k+1}\mathbf{q}_{k}+n_{k}n_{k+1}\mathbf{p} _{k}.\] Hence, the variance of \(N_{k}\) is given by \[\operatorname{Var}\left(N_{k}\right) =\ \operatorname{\mathbb{E}}\left[N_{k}^{2}\right]- \operatorname{\mathbb{E}}\left[N_{k}\right]^{2}\ =\ \operatorname{\mathbb{E}}\left[N_{k}^{2}\right]-(n_{k}n_{k+1} \mathbf{p}_{k})^{2}\] \[=\ n_{k}n_{k+1}\big{[}n_{k}(\mathbf{q}_{k}-\mathbf{p}_{k}^{2})+n _{k+1}(\mathbf{r}_{k}-\mathbf{p}_{k}^{2})+\mathbf{p}_{k}^{2}+\mathbf{p}_{k}- \mathbf{q}_{k}-\mathbf{r}_{k}\big{]}.\] The calculation of \(\operatorname{Cov}\left(N_{k-1},N_{k}\right)\) is similar, and the asymptotic expressions follow immediately.

For the similar result for the \(E_{k}\)'s, we need to introduce certain quantities analogous to \(\mathbf{p}_{k},\mathbf{q}_{k},\mathbf{s}_{k}\). 
For \(E_{k}\) as in (2.7), let us introduce \[\mathbf{p}_{k}^{=}\ \coloneqq\ \mathbb{P}\left(D_{1}^{(k)}=D_{1}^{(k+1)} \right)\ =\ \mathbb{E}\left(\mathbb{1}_{D_{1}^{(k)}=D_{1}^{(k+1)}}\right), \tag{5.1}\] which is the probability that a given face of the \(k\)-th die coincides with a given face of the \((k+1)\)-th die; \[\mathbf{q}_{k}^{=}\ \coloneqq\ \mathbb{P}\left(D_{1}^{(k)}=D_{1}^{(k+1)},D_{2}^{( k)}=D_{1}^{(k+1)}\right), \tag{5.2}\] which is the probability that two given faces of the \(k\)-th die coincide with a given face of the \((k+1)\)-th die; and \[\mathbf{s}_{k}^{=}\ \coloneqq\ \mathbb{P}\left(D_{1}^{(k-1)}=D_{1}^{(k)}=D_{1}^{( k+1)}\right), \tag{5.3}\] which is the probability that three given faces, one from each of the dice \(D^{(k-1)}\), \(D^{(k)}\) and \(D^{(k+1)}\), coincide.

The next result is the analogue of Lemma 14 for the variables \(E_{k}\)'s.

**Lemma 15**.: _The random variables \(E_{k}\), \(k=1,\ldots,\ell\), satisfy_ \[\mathbb{E}\left(E_{k}\right)=n_{k}n_{k+1}\mathbf{p}_{k}^{=}, \tag{5.4}\] \[\operatorname{Var}\left(E_{k}\right)=n_{k}n_{k+1}\big{[}(n_{k}+n_{k+1})(\mathbf{ q}_{k}^{=}-(\mathbf{p}_{k}^{=})^{2})+(\mathbf{p}_{k}^{=})^{2}+\mathbf{p}_{k}^{=}-2\mathbf{q}_{k}^{=} \big{]}. \tag{5.5}\] _In particular, the estimates (2.23) hold true._

Proof.: The proof of (5.4)-(5.5) follows the same steps used in the proof of Lemma 14, so we skip the details. The estimates (2.23) then follow, having in mind (2.4) and the fact that each of \(\mathbf{p}_{k}^{=},\mathbf{q}_{k}^{=},\mathbf{s}_{k}^{=}\) is a probability, and thus bounded as a function of \(m\).

Using the previous result, we are able to compare (2.21) with (2.23). 
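As a sanity check on these formulas, the variance expression in Lemma 14 can be verified by exact enumeration in a toy case. The sketch below (our own verification, using exact rational arithmetic) takes \(n_{k}=n_{k+1}=2\) and faces i.i.d. uniform on \(\{1,2,3\}\), a setup chosen purely for illustration:

```python
from fractions import Fraction
from itertools import product

vals = [1, 2, 3]                      # each face uniform on {1,2,3}

# Exact moments of N = #{(i,j): X_i > Y_j} over all 3^4 face assignments.
m1 = m2 = Fraction(0)
for x1, x2, y1, y2 in product(vals, repeat=4):
    n = sum(1 for x in (x1, x2) for y in (y1, y2) if x > y)
    m1 += Fraction(n, 81)
    m2 += Fraction(n * n, 81)
var_exact = m2 - m1 ** 2

# Ingredients p_k, q_k, r_k of Lemma 14 for this law.
third = Fraction(1, 3)
p = Fraction(sum(1 for x in vals for y in vals if x > y), 9)
q = sum(third * Fraction(sum(1 for x in vals if x > y), 3) ** 2 for y in vals)
r = sum(third * Fraction(sum(1 for y in vals if y < x), 3) ** 2 for x in vals)

nk = nk1 = 2
mean_formula = nk * nk1 * p
var_formula = nk * nk1 * (nk * (q - p ** 2) + nk1 * (r - p ** 2)
                          + p ** 2 + p - q - r)

assert m1 == mean_formula        # E N_k = n_k n_{k+1} p_k
assert var_exact == var_formula  # Lemma 14's variance formula
```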
**Lemma 16**.: _If condition (2.21) holds, then for every \(k=1,\ldots,\ell\), it is valid that_ \[\mathbb{E}(E_{k})=o(m^{2})\quad\text{and}\quad\operatorname{Var}\left(E_{k} \right)=o(m^{3})\quad\text{as}\quad m\to\infty.\]

Proof.: Condition (2.21) is the same as saying that \(\mathbf{p}_{k}^{=}\to 0\) for every \(k\). Thus, the claim on \(\mathbb{E}(E_{k})\) is immediate from (5.4). From (5.5) and the fact that \(\mathbf{p}_{k}^{=}\) and \(\mathbf{q}_{k}^{=}\) both remain bounded as \(m\to\infty\), we see that \[\operatorname{Var}\left(E_{k}\right)\;=\;m^{3}f_{k}(\infty)f_{k+1}(\infty)(f_ {k}(\infty)+f_{k+1}(\infty))(\mathbf{q}_{k}^{=}-(\mathbf{p}_{k}^{=})^{2})+O(m^{2})\,.\] A comparison of (5.1) and (5.2) shows that \(0\leq\mathbf{q}_{k}^{=}\leq\mathbf{p}_{k}^{=}\), so that (2.21) implies also that \(\mathbf{q}_{k}^{=}\to 0\), and the claim on \(\operatorname{Var}\left(E_{k}\right)\) follows.

Next, we turn our attention to the structure of the covariance matrix (2.17). It turns out that in the case of particular interest to our problem, the coefficients \(\gamma_{j}(\infty)\) admit a particularly interesting structure, as we now compute.

**Proposition 17**.: _Let \(\{\mathbf{D}_{m}\}_{m}\) be a sequence satisfying the conditions of Theorem 5. Then the coefficients \((\gamma_{k}(\infty))\) and \((f_{k}(\infty))\) from Assumption 3 are related by_ \[\gamma_{k}(\infty)\;=\;-\frac{f_{k-1}(\infty)f_{k}(\infty)f_{k+1}(\infty)}{ \sqrt{f_{k-1}(\infty)f_{k}(\infty)(f_{k-1}(\infty)+f_{k}(\infty))}\sqrt{f_{k} (\infty)f_{k+1}(\infty)(f_{k}(\infty)+f_{k+1}(\infty))}}\] _for \(k=1,\ldots,\ell\)._

Proof.: From (5.1), (5.2) and (5.3), it follows that \(0\leq\mathbf{s}_{k}^{=},\mathbf{q}_{k}^{=}\leq\mathbf{p}_{k}^{=}\). Condition (2.21) says that \(\mathbf{p}_{k}^{=}\to 0\), and in this case we therefore have \(\mathbf{s}_{k}^{=},\mathbf{q}_{k}^{=}\to 0\) as well. 
Next, the events defining \(\mathbf{p}_{k},\mathbf{r}_{k},\mathbf{q}_{k},\mathbf{s}_{k}\) amount to observing specific orderings of the faces involved. Since each face has the same distribution, any fixed ordering has the same probability. For instance, we have \[1\;=\;\mathbb{P}(D_{1}^{(k)}>D_{1}^{(k+1)})+\mathbb{P}(D_{1}^{(k)}<D_{1}^{(k+ 1)})+\mathbb{P}(D_{1}^{(k)}=D_{1}^{(k+1)})\;=\;2\mathbf{p}_{k}+\mathbf{p}_{k}^{=}\,,\] implying that \(\mathbf{p}_{k}\to\frac{1}{2}\) as \(m\to\infty\). In an analogous way we obtain that \(\mathbf{q}_{k}\to\frac{1}{3}\), \(\mathbf{r}_{k}\to\frac{1}{3}\) and \(\mathbf{s}_{k}\to\frac{1}{6}\). The formula for \(\gamma_{k}(\infty)\) then follows by plugging these limits into (2.14) and (2.15).

### Gaussian vectors associated to the structured covariance matrix

Under the conditions of Theorem 5, Proposition 17 ensures that the nontrivial entries \(\gamma_{k}(\infty)\) of the covariance matrix (2.17) have a particular structure, which ultimately yields that the probability in the right-hand side of (2.22) vanishes, and proving this last claim is the main goal of this subsection. To avoid cumbersome notation, for the calculations that come next we denote \[\mathfrak{f}_{k}\;\coloneqq\;f_{k}(\infty),\quad k=1,\ldots,\ell,\quad\mathfrak{ f}_{\ell+1}\coloneqq f_{1}(\infty)\,,\] so that \[\gamma_{k}(\infty)\;=\;-\frac{\mathfrak{f}_{k-1}\mathfrak{f}_{k}\mathfrak{f}_ {k+1}}{\sqrt{\mathfrak{f}_{k-1}\mathfrak{f}_{k}(\mathfrak{f}_{k-1}+\mathfrak{f} _{k})}\sqrt{\mathfrak{f}_{k}\mathfrak{f}_{k+1}(\mathfrak{f}_{k}+\mathfrak{f}_{ k+1})}}\,. \tag{5.6}\] To study the covariance matrix \(\Sigma\) from (2.17) with coefficients (5.6), we start by collecting some properties of these \(\gamma_{k}(\infty)\)'s.

**Proposition 18**.: _The coefficients \(\gamma_{k}=\gamma_{k}(\infty)\), \(k=1,\ldots,\ell\), in (5.6) satisfy the following properties._ 1. 
\(\gamma_{k}^{2}=\frac{\mathfrak{f}_{k-1}}{\mathfrak{f}_{k-1}+\mathfrak{f}_{k}} \cdot\frac{\mathfrak{f}_{k+1}}{\mathfrak{f}_{k}+\mathfrak{f}_{k+1}}\)_._ 2. \(\gamma_{k}\in(-1,0)\) _for every_ \(k\)_._ 3. _As functions of the_ \(\mathfrak{f}_{j}\)_'s, the coefficients_ \(\gamma_{k}=\gamma_{k}(\mathfrak{f}_{1},\ldots,\mathfrak{f}_{\ell})\) _are scale-invariant: for every_ \(k\in[\ell]\) _and_ \(r>0\) _we have_ \[\gamma_{k}(r\mathfrak{f}_{1},\ldots,r\mathfrak{f}_{\ell})=\gamma_{k}( \mathfrak{f}_{1},\ldots,\mathfrak{f}_{\ell}).\] 4. \(\prod_{k}\gamma_{k}=(-1)^{\ell}\prod_{k}\frac{\mathfrak{f}_{k}}{\mathfrak{f}_ {k}+\mathfrak{f}_{k+1}}\)_._ 5. \(|\prod_{k}\gamma_{k}|\leq 2^{-\ell}\)_, with equality being valid if, and only if,_ \(\mathfrak{f}_{1}=\cdots=\mathfrak{f}_{\ell}\)_._

Proof.: Items _(i)_, _(iii)_ and _(iv)_ are immediate from (5.6). It is obvious that \(\gamma_{k}<0\), so to prove _(ii)_ it suffices to show that \(\gamma_{k}^{2}<1\) which, in turn, by _(i)_ is equivalent to the inequality \[\mathfrak{f}_{k-1}\mathfrak{f}_{k+1}<(\mathfrak{f}_{k-1}+\mathfrak{f}_{k})( \mathfrak{f}_{k}+\mathfrak{f}_{k+1}),\quad\text{that is,}\quad 0<\mathfrak{f}_{k-1} \mathfrak{f}_{k}+\mathfrak{f}_{k}^{2}+\mathfrak{f}_{k}\mathfrak{f}_{k+1}.\] Since \(\mathfrak{f}_{k}>0\) for every \(k\), part _(ii)_ follows. Finally, for item _(v)_ we apply the inequality between the arithmetic and geometric means to obtain \[\frac{\mathfrak{f}_{k}+\mathfrak{f}_{k+1}}{2}\geq\sqrt{\mathfrak{f}_{k}\mathfrak{f}_{k+1}}\,,\quad\text{for every }k\in[\ell].\] Multiplying all the inequalities above, the result follows using item _(iv)_.

With the aforementioned properties of \(\gamma_{k}=\gamma_{k}(\infty)\) from (5.6) at hand, we now need to collect some important information on the associated covariance matrix \(\Sigma\) from (2.17). From a linear algebra perspective, this is an example of a periodic Jacobi matrix (see for instance [12, 8, 4]). However, we were not able to exploit this interpretation to obtain the results needed later. 
Instead, in our case, we use the additional structure (5.6) in a fundamental way to obtain the next results.

**Lemma 19**.: _Let \(\Sigma\) be as in (2.17) with coefficients \(\gamma_{k}=\gamma_{k}(\infty)\) as in (5.6). Then_ \[\det\Sigma\;=\;1+2(-1)^{\ell-1}\gamma_{1}\cdots\gamma_{\ell}+\sum_{m=1}^{\ell}(-1)^{m}\sum_{\begin{subarray}{c}i_{1}<\cdots<i_{m}\\ \text{cyclically non-consecutive}\end{subarray}}\gamma_{i_{1}}^{2}\cdots\gamma_{i_{m}}^{2}\,, \tag{5.7}\] _where the inner sum runs over the sets \(\{i_{1},\ldots,i_{m}\}\subset[\ell]\) containing no two cyclically consecutive indices._

Proof.: We expand \(\det\Sigma=\sum_{\sigma}\operatorname{sgn}(\sigma)\prod_{i\in[\ell]}\Sigma_{i,\sigma(i)}\) over the permutations \(\sigma\) of \([\ell]\). Since \(\Sigma_{i,j}=0\) unless \(j\in\{i-1,i,i+1\}\) (indices understood cyclically), a permutation contributes a non-zero product only if it maps every index to itself or to a cyclically adjacent index. Consider a cycle \((i_{1}\ i_{2}\ \cdots\ i_{o})\) of order \(o\geq 3\) in the decomposition of \(\sigma\), say with \(i_{2}=i_{1}+1\). Since successive indices must be consecutive, we have \(i_{j}=i_{j-1}+1\) for every \(j\). After \(i_{o}\) the cycle returns to \(i_{1}\), implying that it has length \(\ell\). The case \(i_{2}=i_{1}-1\) is analogous. Hence, apart from cycles of order \(1\) and cycles of the form \((i\ i+1)\), the only cycles whose product is non-zero are \((1\ 2\ \cdots\ \ell)\) and \((1\ \ell\ \ell-1\ \cdots\ 2)\). Both have the same product: \[\prod_{i\in[\ell]}\Sigma_{i,i+1}\ =\ \prod_{i\in[\ell]}\gamma_{i}\,.\] Since each of these two cycles has sign \((-1)^{\ell-1}\), together they account for the term \(2(-1)^{\ell-1}\gamma_{1}\cdots\gamma_{\ell}\) in (5.7). Finally, if \(\sigma\) is a permutation different from the identity and the two cycles of order \(\ell\), in order for its product to be non-zero one must have a cyclic decomposition \(\sigma=\tau_{1}\dots\tau_{t}\) with every \(\tau_{i}\) being a cycle of order \(1\) or \(2\). 
As cycles of order \(1\) contribute a factor \(1\) to the product, we can focus on the cycles of order \(2\). Suppose there are \(m\) cycles of order \(2\) and reorder if necessary so that they are given by \(\tau_{1},\dots,\tau_{m}\) with \(\tau_{j}=(i_{j}-1\ i_{j})\) and \(i_{1}<i_{2}<\dots<i_{m}\). Each \(\tau_{j}\) contributes a factor \(\Sigma_{i_{j}-1,i_{j}}\Sigma_{i_{j},i_{j}-1}=\gamma_{i_{j}}^{2}\), and such a permutation has sign \((-1)^{m}\). The formula in (5.7) follows, since the \(i_{j}\) are cyclically non-consecutive by construction.

Using Lemma 19 we are able to verify that \(\det\Sigma\) is always zero.

**Lemma 20**.: _Let \(\Sigma\) be as in (2.17) with coefficients \(\gamma_{k}=\gamma_{k}(\infty)\) as in (5.6). Then \(\det\Sigma=0\)._

Proof.: We have to prove that the right-hand side of (5.7) is zero. We will replace the expressions for \(\gamma_{k}=\gamma_{k}(\infty)\) given by Proposition 18-_(i)_, _(iv)_ and verify that the right-hand side of (5.7) vanishes. In order to make the computation more streamlined, it is convenient to reinterpret it as an estimate of probabilities, as we describe below.

Let us define \(a_{k}\coloneqq\frac{\mathfrak{f}_{k-1}}{\mathfrak{f}_{k-1}+\mathfrak{f}_{k}}\in(0,1)\), which satisfies \(1-a_{k+1}=\frac{\mathfrak{f}_{k+1}}{\mathfrak{f}_{k}+\mathfrak{f}_{k+1}}\). Consider a collection \((U_{j}:j\in[\ell])\) of i.i.d. uniform random variables in \((0,1)\) and set \[A_{k}\ \coloneqq\ \{U_{k}\leq a_{k}\}\,. \tag{5.8}\] The collection \((A_{k}:k\in[\ell])\) consists of mutually independent events such that \(\mathbb{P}(A_{k})=a_{k}\). Defining \(B_{k}\coloneqq A_{k}\cap A_{k+1}^{c}\), it holds that \[\mathbb{P}(B_{k})\ =\ a_{k}(1-a_{k+1})\ =\ \gamma_{k}^{2}\,. \tag{5.9}\]

Let us compute the probability of the event \(\cup_{k}B_{k}\) in two different ways. On the one hand, two cyclically consecutive events \(B_{k}\) and \(B_{k+1}\) are disjoint, while events \(B_{i_{1}},\ldots,B_{i_{m}}\) with cyclically non-consecutive indices are mutually independent. Hence, by the inclusion-exclusion principle, we have \[\mathbb{P}(\cup_{k}B_{k})\ =\ \sum_{m=1}^{\ell}(-1)^{m-1}\sum_{\begin{subarray}{c}i_{1}<\cdots<i_{m}\\ \text{cyclically non-consecutive}\end{subarray}}\gamma_{i_{1}}^{2}\cdots\gamma_{i_{m}}^{2}\,. \tag{5.10}\]

On the other hand, the event \(\cap_{k}B_{k}^{c}\) occurs precisely when the events \(A_{k}\) either all occur or all fail, so that \[\mathbb{P}(\cup_{k}B_{k})\;=\;1-\prod_{k}a_{k}-\prod_{k}(1-a_{k})\;=\;1-2\prod_{k}\frac{\mathfrak{f}_{k}}{\mathfrak{f}_{k}+\mathfrak{f}_{k+1}}\;=\;1-2(-1)^{ \ell}\prod_{k}\gamma_{k}\,. \tag{5.11}\] Equating (5.10) and (5.11) the result follows.

Lemma 20 ensures that zero is an eigenvalue of \(\Sigma\), and we now collect some information about the associated eigenspace.

**Proposition 21**.: _Let \(\Sigma\) be as in (2.17) with coefficients \(\gamma_{k}=\gamma_{k}(\infty)\) as in (5.6). Then \(0\) is an eigenvalue of \(\Sigma\), its eigenspace has dimension 1 and is generated by a vector \(x\in(0,\infty)^{\ell}\)._

Proof.: Let \(x=(x_{1},\ldots,x_{\ell})\) be a non-zero vector satisfying \(\Sigma x=0\). Then, for every \(k\in[\ell]\) we have \[\gamma_{k}x_{k-1}+x_{k}+\gamma_{k+1}x_{k+1}\;=\;0\,.\] ( \[L_{k}\] ) It is possible to solve the system of equations above explicitly, but the formulas obtained this way are cumbersome. Instead, we show that if some coordinate \(x_{j}\) is positive then \(x_{j+1}\) is positive as well. Since we can always choose the sign of one entry of \(x\), by the cyclic symmetry of the problem we then conclude that there is \(x\in(0,\infty)^{\ell}\) with \(\Sigma x=0\), as wanted. Therefore, assume without loss of generality that \(x_{\ell-1}\geq 0\). 
From \((L_{1})-\gamma_{1}(L_{\ell})\), we obtain \[-\gamma_{1}\gamma_{\ell}x_{\ell-1}+(1-\gamma_{1}^{2})x_{1}+\gamma_{2}x_{2}\;=\;0\,.\] Defining \(P_{0}\coloneqq 1\) and \(P_{1}\coloneqq 1-\gamma_{1}^{2}\), the equation above becomes \[-\gamma_{1}\gamma_{\ell}x_{\ell-1}+P_{1}x_{1}+\gamma_{2}P_{0}x_{2}\;=\;0\,.\] ( \[L_{1}^{\prime}\] ) Equation \((L_{1}^{\prime})\) relates \(x_{\ell-1}\) to \(x_{1}\) and \(x_{2}\). By successive applications of the same reasoning we can relate \(x_{\ell-1}\) to \(x_{k}\) and \(x_{k+1}\) for any \(k\). Indeed, suppose that \[(-1)^{k}\gamma_{1}\ldots\gamma_{k}\gamma_{\ell}x_{\ell-1}+P_{k}x_{k}+\gamma_{k+1}P_{k-1}x_{k+1}=0\] ( \[L_{k}^{\prime}\] ) holds for some already defined \(P_{k-1}\) and \(P_{k}\). Then, from \(P_{k}(L_{k+1})-\gamma_{k+1}(L_{k}^{\prime})\), we obtain \[\big{(}P_{k}\gamma_{k+1}x_{k}+P_{k}x_{k+1}+P_{k}\gamma_{k+2}x_{k+2}\big{)}\] \[\qquad-\big{(}\gamma_{k+1}(-1)^{k}\gamma_{1}\ldots\gamma_{k}\gamma_{\ell}x_{\ell-1}+\gamma_{k+1}P_{k}x_{k}+\gamma_{k+1}^{2}P_{k-1}x_{k+1}\big{)}\] \[=(-1)^{k+1}\gamma_{1}\ldots\gamma_{k+1}\gamma_{\ell}x_{\ell-1}+(P_{k}-\gamma_{k+1}^{2}P_{k-1})x_{k+1}+\gamma_{k+2}P_{k}x_{k+2}\;=\;0\,.\] Defining \(P_{k+1}\coloneqq P_{k}-\gamma_{k+1}^{2}P_{k-1}\) for \(k\leq\ell-2\), we conclude that \((L_{k+1}^{\prime})\) also holds. Since we know that \((L_{1}^{\prime})\) holds, it follows by induction that \((L_{\ell-1}^{\prime})\) holds, that is, \[0\;=\;(-1)^{\ell-1}\gamma_{1}\ldots\gamma_{\ell}x_{\ell-1}+P_{\ell-1}x_{\ell-1}+\gamma_{\ell}P_{\ell-2}x_{\ell}\,,\] or equivalently, \[\gamma_{\ell}P_{\ell-2}x_{\ell}\;=\;\big{(}(-1)^{\ell}\gamma_{1}\ldots\gamma_{\ell}-P_{\ell-1}\big{)}x_{\ell-1}\,.\] To finish the proof, we need to control the sign of the coefficients appearing above. Once again, the strategy of expressing relevant quantities using the independent events \(A_{k}\) plays a role. 
**Lemma 22**.: _Define_ \[P_{0}\coloneqq 1,\quad P_{1}\coloneqq 1-\gamma_{1}^{2},\quad P_{k}\coloneqq P _{k-1}-\gamma_{k}^{2}P_{k-2},\;2\leq k\leq\ell-1,\] _and_ \[P_{\ell}\coloneqq 2(-1)^{\ell}\gamma_{1}\cdots\gamma_{\ell}.\] _Then_ \[1\;=\;P_{0}\;>\;P_{1}\;>\;P_{2}\;>\;\cdots\;>\;P_{\ell-1}\;>P_{\ell}\;=\;2(-1 )^{\ell}\gamma_{1}\cdots\gamma_{\ell}\;>\;0\,. \tag{5.12}\] Proof.: Recall \(A_{k}\) as defined in (5.8) and \(B_{k}=A_{k}\cap A_{k+1}^{c}\). We simply notice that the sequence \(P_{k}\) in the statement can be alternatively described by the equation \[P_{k}\;=\;1-\mathbb{P}\Big{(}\bigcup_{j=1}^{k}B_{j}\Big{)},\quad k\leq\ell-1. \tag{5.13}\] Indeed, it is straightforward to check (5.13) for \(k=0,1\). Now, suppose that (5.13) holds for \(k-1\). Then \[1-\mathbb{P}\Big{(}\bigcup_{j=1}^{k}B_{j}\Big{)}\;=\;1-\mathbb{P}\Big{(} \bigcup_{j=1}^{k-1}B_{j}\Big{)}-\mathbb{P}(B_{k})+\mathbb{P}\Big{(}B_{k}\cap \bigcup_{j=1}^{k-1}B_{j}\Big{)}.\] Since \(B_{k}\cap B_{k-1}=\varnothing\), the intersection above is given by \[\mathbb{P}\Big{(}B_{k}\cap\bigcup_{j=1}^{k-1}B_{j}\Big{)}\;=\;\mathbb{P} \Big{(}B_{k}\cap\bigcup_{j=1}^{k-2}B_{j}\Big{)}\;=\;\mathbb{P}(B_{k})\mathbb{ P}\Big{(}\bigcup_{j=1}^{k-2}B_{j}\Big{)}\;=\;\gamma_{k}^{2}(1-P_{k-2})\,,\] where in the second identity we used that \(B_{k}=A_{k}\cap A_{k+1}^{c}\) and \(\cup_{j\leq k-2}B_{j}=\cup_{j\leq k-2}(A_{j}\cap A_{j+1}^{c})\) are mutually independent, because the \(A_{j}\)'s are. Putting together the equations above, we obtain \[1-\mathbb{P}\Big{(}\bigcup_{j=1}^{k}B_{j}\Big{)}\;=\;P_{k-1}-\gamma_{k}^{2}+ \gamma_{k}^{2}(1-P_{k-2})\;=\;P_{k-1}-\gamma_{k}^{2}P_{k-2}\;=\;P_{k}\,,\] completing the induction step. The inequalities in (5.12) are now evident from the fact that the sequence of events \(\cup_{j=1}^{k}B_{j}\) is strictly increasing in \(k\) (each event \(B_{k}\cap\bigcap_{j<k}A_{j}^{c}\) has positive probability) and that we already know \(P_{\ell}=\mathbb{P}(\cap B_{j}^{c})=2(-1)^{\ell}\gamma_{1}\ldots\gamma_{\ell}>0\), see (5.11). 
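Before completing the proof of Proposition 21, the conclusions established so far lend themselves to a quick numerical sanity check. The sketch below is an illustration, not code from the paper: consistently with (5.9) and (5.11), it assumes limiting coefficients satisfying \(\gamma_{k}^{2}=a_{k}(1-a_{k+1})\) with every \(\gamma_{k}\) negative, builds the cyclic matrix \(\Sigma\) of (2.17) for arbitrary positive frequencies \(\mathfrak{f}_{k}\), and verifies Lemma 20 (\(\det\Sigma=0\)) together with the sign-constant kernel vector asserted by Proposition 21.

```python
import numpy as np

def sigma_matrix(freqs):
    """Cyclic tridiagonal matrix of (2.17) with limiting coefficients:
    gamma_k^2 = a_k (1 - a_{k+1}) with a_k = f_{k-1} / (f_{k-1} + f_k),
    every gamma_k taken negative (assumption consistent with (5.11))."""
    f = np.asarray(freqs, dtype=float)
    l = len(f)
    a = np.array([f[k - 1] / (f[k - 1] + f[k]) for k in range(l)])
    gamma = np.array([-np.sqrt(a[k] * (1 - a[(k + 1) % l])) for k in range(l)])
    S = np.eye(l)
    for k in range(l):
        # gamma_k couples the cyclically adjacent coordinates k-1 and k
        S[k - 1, k] = S[k, k - 1] = gamma[k]
    return S

for freqs in ([1.0, 1.0, 1.0], [1.0, 2.0, 3.0], [0.5, 1.0, 2.0, 4.0, 8.0]):
    S = sigma_matrix(freqs)
    assert abs(np.linalg.det(S)) < 1e-10        # Lemma 20: det(Sigma) = 0
    w, V = np.linalg.eigh(S)
    v = V[:, np.argmin(np.abs(w))]              # eigenvector for the eigenvalue 0
    assert np.all(v > 0) or np.all(v < 0)       # Proposition 21: sign-constant kernel
```

The last assertion checks only the sign pattern; the normalization of the kernel vector returned by `eigh` is arbitrary.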
With Lemma 22, we can finish the proof by noticing that \[\gamma_{\ell}P_{\ell-2}\;<\;0\quad\text{and}\quad(-1)^{\ell}\gamma_{1}\ldots \gamma_{\ell}-P_{\ell-1}\;=\;\frac{1}{2}P_{\ell}-P_{\ell-1}\;<\;0\,,\] which imply that \(x_{\ell-1}\) and \(x_{\ell}\) have the same sign. The reasoning above actually shows that any eigenvector of zero with some positive entry is in \((0,\infty)^{\ell}\). Finally, we argue that the eigenspace of zero has dimension \(1\). The Spectral Theorem ensures that \(\Sigma\) has an orthonormal basis of eigenvectors. Now, suppose that \(v_{1},v_{2}\) are two orthogonal eigenvectors of zero. Replacing \(v_{j}\) by \(-v_{j}\) if needed, we can assume \(v_{j}\in(0,\infty)^{\ell}\) for \(j=1,2\). But then their inner product is positive, leading to a contradiction. The final result of this section is a consequence of the previous proposition, and it is the essential outcome that will be used later. **Theorem 23**.: _For \(\ell\geq 3\), suppose that \(X=(X_{1},\ldots,X_{\ell})\) is a centered Gaussian vector with covariance matrix \(\Sigma\) as in (2.17), whose coefficients \(\gamma_{k}=\gamma_{k}(\infty)\) are of the form (5.6). Then \(\mathbb{P}(X_{j}\geq 0,\;j=1,\ldots,\ell)=0\)._ Proof.: Recall that the support of a Gaussian vector \(Z\) is given by \(\mathbb{E}(Z)+\operatorname{Ker}(\operatorname{Cov}\,(Z))^{\perp}\). Thus, in our case the support of \(X\) is \(\operatorname{Ker}(\Sigma)^{\perp}\), and by Proposition 21, \(\operatorname{Ker}(\Sigma)\) is spanned by a vector \(v=(v_{1},\ldots,v_{\ell})\) with \(v_{j}>0\) for every \(j\). If \(y=(y_{1},\ldots,y_{\ell})\neq 0\) is such that \(y_{j}\geq 0\) for every \(j\), then we must have \(\langle y,v\rangle>0\), so \(y\notin\operatorname{Ker}(\Sigma)^{\perp}\). Thus, \[\operatorname{supp}(X)\cap\{y\in\mathbb{R}^{\ell}:y_{j}\geq 0,\ j=1,\ldots,\ell\}\;=\;\{0\}\,,\] and since \(X\) is a non-degenerate Gaussian on the subspace \(\operatorname{Ker}(\Sigma)^{\perp}\), which has dimension \(\ell-1\geq 1\), we conclude that \(\mathbb{P}(X_{j}\geq 0,\;j=1,\ldots,\ell)=\mathbb{P}(X=0)=0\).

## 6. Proofs of Theorems 5 and 6

Theorem 4 will be proved in the next section. 
In this section we assume its validity in order to prove Theorems 5 and 6. Proof of Theorem 5.: Recall that the mean and variance of the \(N_{k}\)'s were computed in Lemma 14, the quantities \(f_{k}=f_{k}(m)\) and \(\mathbf{p}_{k}=\mathbf{p}_{k}(m)\) are as in (2.4) and (2.9), and for \(k=1,\ldots,\ell\) denote \[v_{k}\;=\;v_{k}(m)\;\coloneqq\;\frac{1}{m^{3/2}}\operatorname{Var}\left(N_{k} \right)^{1/2}\;=\;\sigma_{k}(\infty)(1+o(1))\,,\quad m\to\infty\,,\] so that \(\widetilde{N}_{k}\) from (2.13) reads as \[\widetilde{N}_{k}\;=\;\frac{N_{k}-m^{2}f_{k}f_{k+1}\mathbf{p}_{k}}{m^{3/2}v_{ k}}\,,\quad k=1,\ldots,\ell.\] In an analogous way, and with Lemma 15 in mind, introduce the normalized version \(\widetilde{E}_{k}\) of \(E_{k}\) from (2.7), namely \[\widetilde{E}_{k}\;\coloneqq\;\frac{E_{k}-\mathbb{E}(E_{k})}{m^{3/2}v_{k}^{ =}}\,,\quad k=1,\ldots,\ell,\] with \[v_{k}^{=}\;=\;v_{k}^{=}(m)\;\coloneqq\;\frac{1}{m^{3/2}}\operatorname{Var} \left(E_{k}\right)^{1/2}\;=\;o(1)\,, \tag{6.1}\] where the last identity is valid thanks to Lemma 16. Finally, introduce the events \[A_{k}\;\coloneqq\;\left\{\widetilde{N}_{k}>\frac{f_{k}f_{k+1}m^{2}}{m^{3/2}v_{k}}\left(\frac{1}{2}-\mathbf{p}_{k}\right)-\frac{1}{2m^{3/2}v_{k}}E_{k}\right\}\;=\;\left\{\widetilde{N}_{k}>\frac{m^{1/2}f_{k}f_{k+1}}{v_{k}}\left(\frac{1}{2}-\mathbf{p}_{k}-\frac{1}{2}\mathbf{p}_{k}^{=}\right)-\frac{v_{k}^{=}}{2v_{k}}\widetilde{E}_{k}\right\}.\] These notations were introduced so that the identity (2.8) reads simply as \[\mathbb{P}(D^{(1)}\triangleright\cdots\triangleright D^{(\ell)}\triangleright D^{ (1)})\;=\;\mathbb{P}\left(A\right),\quad\text{where}\quad A\;\coloneqq\;\bigcap \limits_{k=1}^{\ell}A_{k}\,.\] If we were to set \(\widetilde{E}_{k}=0\), then the probability \(\mathbb{P}(A)\) would be already suited for a direct application of Theorem 4. 
However, in the general case that we are considering here, we need to estimate the possible contributions from the \(E_{k}\)'s in a more careful manner. To that end, let us fix \(\varepsilon>0\) and consider the events \[B_{k}(\varepsilon)\coloneqq\left\{\frac{v_{k}^{=}}{2v_{k}}|\widetilde{E}_{k} |>\varepsilon\right\},\quad k=1,\ldots,\ell,\quad B(\varepsilon)\coloneqq \bigcup\limits_{k=1}^{\ell}B_{k}(\varepsilon),\] \[C_{k}(\varepsilon)\coloneqq B_{k}(\varepsilon)^{c}=\left\{\frac{v_{k}^{=}}{2v_{ k}}|\widetilde{E}_{k}|\leq\varepsilon\right\},\quad k=1,\ldots,\ell,\quad C( \varepsilon)\coloneqq\bigcap\limits_{k=1}^{\ell}C_{k}(\varepsilon)=B( \varepsilon)^{c},\] and write \[\mathbb{P}(A)\;=\;\mathbb{P}(A\cap B(\varepsilon))+\mathbb{P}(A\cap C( \varepsilon))\,. \tag{6.2}\] Given any \(\varepsilon>0\), a simple union bound combined with Chebyshev's inequality gives \[\mathbb{P}(A\cap B(\varepsilon))\;\leq\;\mathbb{P}(B(\varepsilon))\;\leq\; \frac{1}{4\varepsilon^{2}}\sum\limits_{k=1}^{\ell}\left(\frac{v_{k}^{=}}{v_{k} }\right)^{2}. \tag{6.3}\] Thanks to (6.1), we thus conclude that \[\mathbb{P}(A\cap B(\varepsilon))\stackrel{{ m\to\infty}}{{\longrightarrow }}0\,,\quad\text{for any $\varepsilon>0$ fixed}. \tag{6.4}\] To handle the second term in the right-hand side of (6.2), for \(t\in\mathbb{R}\) we introduce yet another event \(D_{k}(t)\), namely \[D_{k}(t)\;\coloneqq\;\left\{\widetilde{N}_{k}>\frac{f_{k}f_{k+1}m^{1/2}}{v_{k }}\left(\frac{1}{2}-\mathbf{p}_{k}-\frac{1}{2}\mathbf{p}_{k}^{=}\right)-t \right\},\;k=1,\ldots,\ell,\quad D(t)\;\coloneqq\;\bigcap_{k=1}^{\ell}D_{k}(t )\,.\] From the definition of \(A_{k},D_{k}(\varepsilon)\) and \(C_{k}(\varepsilon)\), we obtain that \[A_{k}\cap C_{k}(\varepsilon)\;\subset\;D_{k}(\varepsilon)\cap C_{k}( \varepsilon),\quad k=1,\ldots,\ell. \tag{6.5}\] We now estimate the probability of the events on the right-hand side above. 
Conditioning, we compute \[\mathbb{P}(D(\varepsilon)\cap C(\varepsilon))\;=\;\mathbb{P}(D(\varepsilon) \mid C(\varepsilon))\mathbb{P}(C(\varepsilon))\;=\;\mathbb{P}(D(\varepsilon) )-\mathbb{P}(D(\varepsilon)\mid C(\varepsilon)^{c})\mathbb{P}(C(\varepsilon )^{c})\,,\] and using that \(C(\varepsilon)^{c}=B(\varepsilon)\) and (6.3), we obtain \[\mathbb{P}(D(\varepsilon)\cap C(\varepsilon))\;=\;\mathbb{P}(D(\varepsilon))+ o(1)\,,\quad\text{as $m\to\infty$, for any $\varepsilon>0$ fixed}.\] Finally, combining (6.2), (6.4), the inclusion (6.5) and this last estimate, we obtain that for any \(\varepsilon>0\) fixed, \[\mathbb{P}(A)\;\leq\;\mathbb{P}(D(\varepsilon))+o(1),\quad\text{as $m\to\infty$}.\] Thus, \[\limsup_{m\to\infty}\mathbb{P}(A)\;\leq\;\limsup_{m\to\infty}\mathbb{P}(D( \varepsilon)),\quad\text{for any $\varepsilon>0$}.\] But from condition (2.20) and Theorem 4, for any \(\varepsilon>0\), the inequality \[\limsup_{m\to\infty}\mathbb{P}(D(\varepsilon)) \;\leq\;\limsup_{m\to\infty}\mathbb{P}\left(\widetilde{N}_{k} \geq-\frac{\delta f_{k}f_{k+1}}{v_{k}r(m)}-\varepsilon,\;k=1,\ldots,\ell\right)\] \[\;\leq\;\mathbb{P}(X_{k}\geq-\varepsilon,\;k=1,\ldots,\ell)\] holds true, and the result follows. The proof of Theorem 6 is now a simple consequence of a combination of our results. Proof of Theorem 6.: Under the conditions of Theorem 6, we apply Theorem 23 to conclude that the right-hand side of (2.22) vanishes, and the proof is complete.

## 7. Proof of Theorem 4

We now move to the proof of the one remaining result, Theorem 4. Throughout this section, \(\{\mathbf{D}_{m}\}_{m}\) is a collection of \(\ell\) independent random dice, each with number of faces \(n_{k}=f_{k}m\) satisfying Assumption 3. Recall also that the random variables \(\widetilde{N}_{1}(m),\ldots,\widetilde{N}_{\ell}(m)\) were introduced in (2.6) and (2.13); they depend on the index \(m\) of the sequence, but we keep omitting this dependence and write \(\widetilde{N}_{k}=\widetilde{N}_{k}(m)\). 
Likewise, the associated quantities \(\mathbf{p}_{k}=\mathbf{p}_{k}(m),\mathbf{q}_{k}=\mathbf{q}_{k}(m),\mathbf{r}_ {k}=\mathbf{r}_{k}(m),\mathbf{s}_{k}=\mathbf{s}_{k}(m),\sigma_{k}=\sigma_{k}( m),\gamma_{k}=\gamma_{k}(m)\), \(k=1,\ldots,\ell\), were all defined by (2.9)-(2.15); we also omit their dependence on \(m\), and recall that they are instrumental in computing the leading terms in \(\mathbb{E}(N_{k}),\operatorname{Var}\left(N_{k}\right)\) and \(\operatorname{Corr}\left(N_{k-1},N_{k}\right)\) as in (2.16). Thanks to Assumption 3 and Lemma 14, we see that \[\widetilde{N}_{k}=\frac{N_{k}-n_{k}n_{k+1}\mathbf{p}_{k}}{m^{3/2}v_{k}}=\frac {N_{k}-m^{2}f_{k}f_{k+1}\mathbf{p}_{k}}{m^{3/2}v_{k}},\quad k=1,\ldots,\ell, \tag{7.1}\] with \[\mathbf{p}_{k}=\mathbf{p}_{k}(\infty)+o(1),\quad v_{k}\coloneqq\frac{1}{m^{3/2}} \operatorname{Var}\left(N_{k}\right)^{1/2}=\sigma_{k}(\infty)+o(1),\quad m\to \infty,\quad\sigma_{k}(\infty)>0.\] Our proof of the Central Limit Theorem will be based on the moment method, so for completeness we record here the moments of a general Gaussian random vector. For its statement, recall that \[n!!=\prod_{k=0}^{\lceil\frac{n}{2}-1\rceil}(n-2k)\] is the double factorial of a positive integer \(n\), which is given by the product of all the positive integers up to \(n\) that have the same parity as \(n\). **Proposition 24**.: _Let \(X=(X_{1},\cdots,X_{\ell})^{T}\sim\mathcal{N}_{\ell}(0,\mathbf{\Sigma})\) be a centered Gaussian vector of size \(\ell\) and covariance matrix \(\mathbf{\Sigma}\) with rank \(r\geq 1\). Fix a column vector \(\alpha=(\alpha_{1},\cdots,\alpha_{\ell})^{T}\in\mathbb{R}^{\ell}\) for which \(\alpha^{T}\mathbf{\Sigma}\alpha\neq 0\). 
Then_ \[\mathbb{E}\Big{[}\Big{(}\sum_{j=1}^{\ell}\alpha_{j}X_{j}\Big{)}^{s}\Big{]} \;=\;\begin{cases}0,&\text{ if $s$ is odd},\\ (\alpha^{T}\mathbf{\Sigma}\alpha)^{s/2}(s-1)!!,&\text{ if $s$ is even}.\end{cases} \tag{7.2}\] Proof.: The proof follows standard textbook arguments; we include it here for the sake of completeness. The matrix \(\mathbf{\Sigma}\) is positive semi-definite, so it admits a Cholesky decomposition of the form \[\mathbf{\Sigma}\;=\;\boldsymbol{L}\boldsymbol{L}^{T}\,,\] where \(\boldsymbol{L}\) is a real matrix of size \(\ell\times r\) and \(r\) is the rank of \(\mathbf{\Sigma}\). At the level of the random variable \(X\), it induces the identity \[X\;=\;\boldsymbol{L}Z\,,\] where \(Z\sim\mathcal{N}_{r}(0,\boldsymbol{I}_{r})\) is a normalized Gaussian vector of size \(r\). Now, set \[G\;=\;\frac{1}{(\alpha^{T}\mathbf{\Sigma}\alpha)^{1/2}}\alpha^{T}\boldsymbol{ L}Z\,.\] Observe that \(\alpha^{T}\mathbf{\Sigma}\alpha>0\), so \(G\) as above is well defined. In fact, \(G\) is a linear combination of independent centered scalar Gaussian random variables, so \(G\) is a centered Gaussian itself. Its variance is \[\mathbb{E}(G^{2})\;=\;\frac{1}{\alpha^{T}\mathbf{ \Sigma}\alpha}\alpha^{T}\boldsymbol{L}\mathbb{E}[ZZ^{T}]\boldsymbol{L}^{T} \alpha\;=\;1\,.\] Hence \(G\) is actually a standard Gaussian, so \[\mathbb{E}(G^{s})\;=\;\begin{cases}0,&\text{ if $s$ is odd},\\ (s-1)!!,&\text{ if $s$ is even}.\end{cases}\] The proof is now completed observing that the term inside the expectation on the left-hand side of (7.2) is \(\big{(}\alpha^{T}X\big{)}^{s}=\big{(}\alpha^{T}\boldsymbol{L}Z\big{)}^{s}=(\alpha^{T}\mathbf{\Sigma}\alpha)^{s/2}G^{s}\). 
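As a quick check of the double-factorial convention above, and of the closed form \((2s-1)!!=\frac{(2s)!}{s!2^{s}}\) that reappears in Section 7.3, a minimal Python sketch:

```python
import math

def double_factorial(n):
    """n!! as defined above: product of the positive integers up to n
    with the same parity as n."""
    return math.prod(range(n, 0, -2))

assert double_factorial(5) == 15 and double_factorial(6) == 48
# Even Gaussian moments are E[G^(2s)] = (2s-1)!! for a standard Gaussian G,
# and (2s-1)!! admits the closed form (2s)!/(s! 2^s) used in (7.18).
for s in range(1, 10):
    assert double_factorial(2 * s - 1) == math.factorial(2 * s) // (math.factorial(s) * 2 ** s)
```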
From the Cramér-Wold criterion, in order to prove Theorem 4 it suffices to show that for any \(\alpha=(\alpha_{1},\ldots,\alpha_{\ell})\in\mathbb{R}^{\ell}\) we have \[\sum_{k=1}^{\ell}\alpha_{k}\widetilde{N}_{k}\;\stackrel{{ d}}{{ \longrightarrow}}\;\sum_{k=1}^{\ell}\alpha_{k}X_{k}\quad\text{as $m\to\infty$},\] where \(X=(X_{1},\ldots,X_{\ell})^{T}\sim\mathcal{N}(0,\mathbf{\Sigma})\) with \(\mathbf{\Sigma}\) as in Theorem 4. To prove this, the method of moments will be used (see [3, Theorem 3.12, page 109]), since the normal distribution is uniquely determined by its moments. Thus, by Proposition 24, we need to show that for each \(s\in\mathbb{N}\), \[\mathbb{E}\Big{[}\Big{(}\sum_{k=1}^{\ell}\alpha_{k}\widetilde{N}_{k}\Big{)}^{s }\Big{]}\ \longrightarrow\ \begin{cases}0,&\text{ if $s$ is odd},\\ (\alpha^{T}\mathbf{\Sigma}\alpha)^{s/2}(s-1)!!,&\text{ if $s$ is even},\end{cases} \tag{7.3}\] as \(m\to\infty\). The overall strategy we take is the following. The sum inside the expectation can be seen as a weighted sum over all pairs of dice faces that are being compared. We identify each term in this weighted sum with a sum over graphs with appropriate properties. This is done in Section 7.1 below. Depending on certain properties of these graphs, they can either give an asymptotically negligible contribution, or contribute to the leading order. In fact, we will show that in the end only graphs with a very particular structure contribute to the leading order of the sum. The second step of the proof consists in pinpointing the negligible contributions, and also identifying the structure of the graphs that give the leading contribution. This part is done in Section 7.2. The last part of the proof then consists in counting exactly the graphs that give the leading contributions, and this will be done in Section 7.3, which completes the proof of Theorem 4. 
### From moments to combinatorics of graphs

We now show how to identify the terms in the sum on the left-hand side of (7.3) with a graph representation. Using Lemma 14 and the definition of \(N_{k}\) in (2.6), we write \[\alpha_{k}\widetilde{N}_{k}\ =\ \frac{\alpha_{k}}{\sigma_{k}m^{3/2}}(1+O(m^{-1/2}) )\cdot\sum_{i=1}^{n_{k}}\sum_{j=1}^{n_{k+1}}(\mathbb{1}_{D_{i}^{(k)}>D_{j}^{(k+ 1)}}-\mathbf{p}_{k}),\] and therefore \[\sum_{k=1}^{\ell}\alpha_{k}\widetilde{N}_{k}\ =\ m^{-3/2}(1+O(m^{-1/2}))\sum_{k=1} ^{\ell}\sum_{i=1}^{n_{k}}\sum_{j=1}^{n_{k+1}}\frac{\alpha_{k}}{\sigma_{k}}( \mathbb{1}_{D_{i}^{(k)}>D_{j}^{(k+1)}}-\mathbf{p}_{k})\,. \tag{7.4}\] Raising equation (7.4) to the power \(s\) can be seen combinatorially as choosing \(s\) indices \((k,i,j)\) from the triple sum above, multiplying their terms together and finally summing over all possible choices. We now introduce a graph representation of this procedure. Define \[\begin{split} V&\ \coloneqq\ \{(k,i):k\in[\ell],i\in[n_{k}]\}, \\ E&\ \coloneqq\ \{e=\big{(}(k,i),\,(k+1,j)\big{)}:k\in[ \ell],i\in[n_{k}],j\in[n_{k+1}]\}\,.\end{split} \tag{7.5}\] The graph \(\mathcal{G}=(V,E)\) has vertices representing all faces of all dice and edges \(e\) that represent the triples \((k,i,j)\) that appear in equation (7.4). Graph \(\mathcal{G}\) already has some structure inherited from the situation it encodes: it is clearly \(\ell\)-partite, with parts \(V_{k}\coloneqq\{(k,i):i\in[n_{k}]\}\), and edges exist only between \(V_{k}\) and \(V_{k+1}\). Any choice \(H=\{(k_{t},i_{t},j_{t}):t\in[s]\}\) of \(s\) indices can be seen as an ordered collection of \(s\) (possibly repeated) edges of \(\mathcal{G}\), and we refer to the set of all possible \(H\) as \(\mathcal{G}_{s}\). Any fixed \(H\in\mathcal{G}_{s}\) can be interpreted as a weighted subgraph of \(\mathcal{G}\): for each edge \(e\in\mathcal{G}\), we assign the weight \(w(e)=\#\{t\in[s]:(k_{t},i_{t},j_{t})=e\}\), i.e., its multiplicity. 
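For concreteness, the sets \(V\) and \(E\) of (7.5) can be generated directly for small parameters. The sketch below uses 0-indexed dice with cyclic successor \((k+1)\bmod\ell\) and hypothetical values of \(\ell\), \(m\) and the \(f_{k}\); the final assertion checks the edge count \(|E|=m^{2}\sum_{k}f_{k}f_{k+1}\) that appears again in (7.8):

```python
l, m = 3, 4
f = [2, 1, 3]                      # hypothetical face multipliers f_k
n = [fk * m for fk in f]           # n_k = f_k * m faces on die k

# Vertex set V: one vertex per face of each die; edge set E: all comparisons
# between die k and its cyclic successor k+1, as in (7.5).
V = [(k, i) for k in range(l) for i in range(n[k])]
E = [((k, i), ((k + 1) % l, j))
     for k in range(l) for i in range(n[k]) for j in range(n[(k + 1) % l])]

assert len(V) == sum(n)
assert len(E) == m ** 2 * sum(f[k] * f[(k + 1) % l] for k in range(l))  # as in (7.8)
```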
For a graph \(H\in\mathcal{G}_{s}\), introduce \(\varphi(H)\) by \[\varphi(H)\ =\ \prod_{t\in[s]}\frac{\alpha_{k_{t}}}{\sigma_{k_{t}}}\big{(} \mathbb{1}_{D_{i_{t}}^{(k_{t})}>D_{j_{t}}^{(k_{t}+1)}}-\mathbf{p}_{k_{t}}\big{)}. \tag{7.6}\] When we raise (7.4) to the power \(s\), we re-index the resulting sum on the right-hand side by \(H\in\mathcal{G}_{s}\), and the factor \(\varphi(H)\) is precisely the term in this sum that corresponds to a given graph \(H\in\mathcal{G}_{s}\). Taking expectation, we thus obtain \[\mathbb{E}\Big{[}\Bigl{(}\sum_{k=1}^{\ell}\alpha_{k}\widetilde{N}_{k}\Bigr{)}^{s} \Big{]}\ =\ m^{-\frac{3s}{2}}\bigl{(}1+O(m^{-\frac{1}{2}})\bigr{)}\sum_{H\in \mathcal{G}_{s}}\mathbb{E}[\varphi(H)]\,. \tag{7.7}\] Equation (7.7) expresses the expectation we want to compute in terms of a weighted sum over graphs, and the next step is to identify which structure on these graphs leads to leading and negligible asymptotic contributions as \(m\to\infty\).

### Estimating the contributions from each class of graphs

The next step is to estimate the terms inside the sum in (7.7). The following claims emphasize some of the main properties that will play a role in our computations. **Claim 25**.: _The quantity \(\varphi(H)\) is uniformly bounded for all \(H\in\mathcal{G}_{s}\)._ Proof.: Since \(\sigma_{k}\) is bounded away from zero as \(m\) tends to infinity (see Assumption 3-(ii)) we have \[|\varphi(H)|\ \leq\ \Bigl{(}2\max_{k\in[\ell]}\frac{|\alpha_{k}|}{\sigma_{k}} \Bigr{)}^{s}\,.\qed\] Let \(H\in\mathcal{G}_{s}\). We say that edges \(e_{0}\) and \(\tilde{e}\) in \(H\) are in the same connected component if there is a sequence of edges \((e_{j}\in H;j\in[t])\) such that \(e_{j-1}\) and \(e_{j}\) have a vertex in common for every \(j\in[t]\) and \(e_{t}=\tilde{e}\). This forms an equivalence relation and we partition the edges of \(H\) into connected components. This is helpful to take advantage of independence when evaluating expected values. 
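The partition of an edge collection into connected components, which the claims below rely on, can be sketched with a generic union-find routine (an illustration, not code from the paper):

```python
def components(edges):
    """Partition a list of edges (pairs of hashable vertices) into
    connected components, via union-find on the vertices."""
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in edges:
        parent[find(u)] = find(v)

    groups = {}
    for e in edges:
        groups.setdefault(find(e[0]), []).append(e)
    return list(groups.values())

# Two touching edges form one component; a disjoint edge forms another.
H = [((0, 0), (1, 0)), ((1, 0), (2, 1)), ((0, 5), (1, 7))]
assert sorted(len(c) for c in components(H)) == [1, 2]
```

Repeated edges are kept with multiplicity, matching the weighted-subgraph interpretation of \(H\in\mathcal{G}_{s}\).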
**Claim 26**.: _Suppose \(H\in\mathcal{G}_{s}\) has \(t\) connected components \(H_{1},\dots,H_{t}\). Then_ \[\mathbb{E}[\varphi(H)]=\prod_{i\in[t]}\mathbb{E}[\varphi(H_{i})].\] Proof.: It is immediate from the definitions, since for \(i\neq j\) the random variables \(\varphi(H_{i})\) and \(\varphi(H_{j})\) depend on disjoint sets of dice faces. Claims 25 and 26 allow us to disregard the contribution of some classes of graphs. In the next two claims, we take advantage of the factor \(m^{-\frac{3s}{2}}\) to conclude that the contribution of graphs with too few or too many connected components is negligible. **Claim 27**.: _There are at most \(K_{1}m^{(3s-1)/2}\) graphs in \(\mathcal{G}_{s}\) with fewer than \(s/2\) connected components, where \(K_{1}\) does not depend on \(m\)._ Proof.: We give an upper bound on the number of graphs in \(\mathcal{G}_{s}\) with \(t\) connected components. Define \(f_{\max}\coloneqq\max_{k\in[\ell]}f_{k}\). The total number of edges in \(\mathcal{G}\) is \[|E|\ =\ \sum_{k\in[\ell]}(f_{k}m)(f_{k+1}m)\ =\ m^{2}\sum_{k\in[\ell]}f_{k}f_{k+1} \ \leq\ (\ell f_{\max}^{2})m^{2}\,. \tag{7.8}\] To count the number of graphs in \(\mathcal{G}_{s}\), we begin by building such graphs \(H\in\mathcal{G}_{s}\) in a specific ordering. Let \(H_{j}\) with \(j\in[t]\) denote the \(t\) connected components of a given \(H\). First, we choose one edge \(e_{j}\) from \(E\) for each \(H_{j}\), without any restriction. For these initial choices, we have at most \(((\ell f_{\max}^{2})m^{2})^{t}\) possibilities. Since \(H\) has \(s\) edges, we still have to choose \(s-t\) edges. For the remaining choices \(e_{j}\) with \(j\in[s]\setminus[t]\) we will always choose \(e_{j}\) so that it has some vertex in common with some previously chosen \(e_{i}\) with \(i\in[j-1]\), to ensure that we do not create any new connected components. 
Hence, on the second round of choices, for choosing \(e_{j}\) we have at most \(2(j-1)\) options for the common vertex and at most \(2f_{\max}m\) options for the other vertex. Hence, we have at most \[((\ell f_{\max}^{2})m^{2})^{t}\,(2(s-1)2f_{\max}m)^{s-t}\ =\ Km^{t+s}\] possibilities, where \(K=K(\ell,s,t,f_{\max})\) is a positive constant. Finally, observe that any graph \(H\in\mathcal{G}_{s}\) with exactly \(t\) connected components can have its edges reordered to a graph \(H^{\prime}\in\mathcal{G}_{s}\) so that the edges of \(H^{\prime}\) were chosen according to the procedure above. It follows that the number of graphs in \(\mathcal{G}_{s}\) with \(t\) connected components is at most \(s!Km^{t+s}\). Therefore, there are at most \[s!K(m^{s+1}+m^{s+2}+\cdots+m^{s+t})\ \leq\ s!Ktm^{s+t}\ \leq\ \tilde{K}m^{s+\frac{s-1}{2}}\] graphs in \(\mathcal{G}_{s}\) with fewer than \(s/2\) connected components (since \(t<s/2\) and both \(t\) and \(s\) are integers, we have \(t\leq(s-1)/2\)). The positive constant \(\tilde{K}\) does not depend on \(m\), and the claim is proved. **Claim 28**.: _If \(H\in\mathcal{G}_{s}\) has more than \(s/2\) connected components, then \(\mathbb{E}\left[\varphi(H)\right]=0\)._ Proof.: As there are more than \(s/2\) connected components and only \(s\) edges, at least one of the components must be an isolated edge, say \(H_{1}\) is just the edge \((k,i)(k+1,j)\). Then, we have \[\mathbb{E}[\varphi(H_{1})]\ =\ \mathbb{E}\Big{[}\frac{\alpha_{k}}{\sigma_{k}}( \mathbb{1}_{D_{i}^{(k)}>D_{j}^{(k+1)}}-\mathbf{p}_{k})\Big{]}\ =\ \frac{\alpha_{k}}{\sigma_{k}}\mathbb{E}\big{[}\mathbb{1}_{D_{i}^{(k)}>D_{j}^{( k+1)}}-\mathbf{p}_{k}\big{]}\ =\ 0,\] where the expectation vanishes because of the definition of \(\mathbf{p}_{k}\) in (2.9), and the result follows by Claim 26. As a consequence of the claims above, we are able to pinpoint the leading order of the \(s\)-moment in (7.7) by focusing on a very specific class of graphs in \(\mathcal{G}_{s}\). 
We say that a connected component \(H_{j}\) of a graph \(H\in\mathcal{G}_{s}\) is a **cherry** if it is composed of two distinct edges, and we say that a graph \(H\in\mathcal{G}_{s}\) is a **cherry graph** if all its connected components are cherries. In particular, if \(H\in\mathcal{G}_{s}\) is a cherry graph then \(s\) must be even and \(H\) must have exactly \(s/2\) components. The vertex of degree \(2\) in a cherry will be called the joint and the other two will be called tips. Let us denote by \(\mathcal{C}_{s}\) the set of all graphs \(H\in\mathcal{G}_{2s}\) that are cherry graphs. In words, if \(H\in\mathcal{C}_{s}\) then it has \(s\) connected components of size two with no repeating edges. It turns out that the leading contribution to the right-hand side of (7.7) comes precisely from cherry graphs, as claimed by our next result. **Proposition 29**.: _For any positive integer \(s\), the estimates_ \[\mathbb{E}\Big{[}\Big{(}\sum_{k=1}^{\ell}\alpha_{k}\widetilde{N }_{k}\Big{)}^{2s+1}\Big{]} \ =\ O(m^{-1/2})\,, \tag{7.9}\] \[\mathbb{E}\Big{[}\Big{(}\sum_{k=1}^{\ell}\alpha_{k}\widetilde{N }_{k}\Big{)}^{2s}\Big{]} \ =\ m^{-3s}\sum_{H\in\mathcal{C}_{s}}\mathbb{E}[\varphi(H)]+O(m^{-1})\,. \tag{7.10}\] _hold true as \(m\to\infty\)._ Proof.: Let us estimate the \(s\)-moment via equation (7.7). From Claims 25 and 27 we conclude that, when estimating the sum in equation (7.7), the contribution of graphs in \(\mathcal{G}_{s}\) with fewer than \(s/2\) connected components is \(o(m^{3s/2})\). By Claim 28, the contribution of graphs with more than \(s/2\) connected components is precisely zero. 
Hence, equation (7.9) is immediate: for the \((2s+1)\)-moment we can write \[m^{-\frac{3}{2}(2s+1)}\sum_{H\in\mathcal{G}_{2s+1}}\mathbb{E}[\varphi(H)] \;=\;m^{-\frac{3}{2}(2s+1)}\sum_{1\leq t\leq s}\sum_{\begin{subarray} {c}H\text{ with }t\text{ connected}\\ \text{components}\end{subarray}}\mathbb{E}[\varphi(H)]\] \[\leq\;m^{-3s-3/2}\cdot Km^{3s+1}=Km^{-1/2}\,.\] When estimating the \(2s\)-moment, the same argument shows that we only have to worry about the contribution of graphs with exactly \(s\) connected components. By Claim 26, if \(H\) has some component with only one edge then \(\mathbb{E}[\varphi(H)]=0\). Consequently, for a non-zero contribution, each of the \(s\) components must have at least \(2\) edges. But then we already have \(2s\) edges in total, and we conclude that each component has exactly \(2\) edges. In principle, we can have multiple edges in such graphs. However, another simple counting argument shows that the number of such graphs containing at least one multiple edge is at most \(Km^{3s-1}\). This implies we can focus on the sum over \(H\in\mathcal{C}_{s}\), as claimed in (7.10). With Proposition 29 at hand, the remaining step is to estimate the sum over cherry graphs on the right-hand side of (7.10). We will see that this remaining sum is in fact \(\Theta(m^{3s})\), so the leading order is indeed given by it, and we will be able to compute its contribution precisely.

### Computing the leading contribution, and the conclusion of the proof of Theorem 4

What remains is to count all the cherry graphs \(H\in\mathcal{C}_{s}\) and compute \(\mathbb{E}[\varphi(H)]\). Since cherries are disjoint, we break any \(H\in\mathcal{C}_{s}\) into \(s\) cherries, which we denote \(H_{1},\ldots,H_{s}\). For a cherry \(H_{j}\) the value of \(\mathbb{E}[\varphi(H_{j})]\) will depend on which dice are used for its joint and tips. 
We say that cherry \(H_{j}\) has: **Type \((k,1)\):**: If its joint is on die \(D^{(k)}\), one tip is on \(D^{(k-1)}\) and the other is on \(D^{(k+1)}\). **Type \((k,2)\):**: If its joint is on die \(D^{(k)}\), and both tips are on \(D^{(k+1)}\). **Type \((k,3)\):**: If its joint is on die \(D^{(k)}\), and both tips are on \(D^{(k-1)}\). By the construction of the graph \(\mathcal{G}\) in (7.5), these are the only cherries that can occur as components of a graph \(H\in\mathcal{C}_{s}\). It is straightforward to compute \(\mathbb{E}[\varphi(H_{j})]\). **Proposition 30**.: _If cherry \(H_{j}\) has type \((k,t)\) then \(\mathbb{E}[\varphi(H_{j})]\) depends only on \((k,t)\). Denoting its value by \(\varphi_{k,t}\), we have_ \[\varphi_{k,t}\;\coloneqq\;\mathbb{E}[\varphi(H_{j})]\;=\;\begin{cases}\dfrac{ \alpha_{k-1}\alpha_{k}}{\sigma_{k-1}\sigma_{k}}(\mathbf{s}_{k}-\mathbf{p}_{k-1 }\mathbf{p}_{k})&\text{if }t=1;\\ \left(\dfrac{\alpha_{k}}{\sigma_{k}}\right)^{2}(\mathbf{r}_{k}-\mathbf{p}_{k} ^{2})&\text{if }t=2;\\ \left(\dfrac{\alpha_{k-1}}{\sigma_{k-1}}\right)^{2}(\mathbf{q}_{k}-\mathbf{p} _{k-1}^{2})&\text{if }t=3.\end{cases} \tag{7.11}\] Proof.: It is straightforward from the definition of \(\varphi(H_{j})\) given in (7.6) and the definitions of \(\mathbf{p}_{k},\mathbf{q}_{k},\mathbf{r}_{k},\mathbf{s}_{k}\) in (2.9)-(2.12). Since \(k\in[\ell]\), we have in total \(3\ell\) different types of cherries. Recall that the total number of cherries is \(s\). It is useful to classify \(H\in\mathcal{C}_{s}\) with respect to the number of occurrences of each type. We define \(M_{k,t}=M_{k,t}(H)\) as the number of cherries of type \((k,t)\) in the cherry graph \(H\); we encode these numbers in the matrix \(M=(M_{k,t})_{\ell\times 3}\), and say that \(H\) has type \(M\). Observe that \(M_{k,t}\in\mathbb{N}\) and \(\sum_{k,t}M_{k,t}=s\). 
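To make the classification concrete, for tiny hypothetical face counts one can enumerate all cherries of \(\mathcal{G}\) by brute force and check the per-type counts \(n_{k-1}n_{k}n_{k+1}\), \(n_{k}\binom{n_{k+1}}{2}\) and \(n_{k}\binom{n_{k-1}}{2}\) that are computed in the proof of Lemma 31 below (a 0-indexed sketch, not code from the paper):

```python
from itertools import combinations
from math import comb

l = 3
n = [2, 3, 4]                       # hypothetical face counts n_k

# All edges of G, as in (7.5), with cyclic successor (k+1) mod l.
E = [((k, i), ((k + 1) % l, j))
     for k in range(l) for i in range(n[k]) for j in range(n[(k + 1) % l])]

counts = {}
for e1, e2 in combinations(E, 2):
    shared = set(e1) & set(e2)
    if len(shared) != 1:
        continue                    # two edges form a cherry iff they share one vertex
    (kj, _), = shared               # the shared vertex is the joint; kj is its die
    tips = [v for v in e1 + e2 if v not in shared]
    tip_dice = sorted(t[0] for t in tips)
    if tip_dice == sorted([(kj + 1) % l, (kj - 1) % l]):
        t = 1                       # one tip on each neighbouring die
    elif tip_dice == [(kj + 1) % l] * 2:
        t = 2                       # both tips on the successor die
    else:
        t = 3                       # both tips on the predecessor die
    counts[(kj, t)] = counts.get((kj, t), 0) + 1

for k in range(l):
    assert counts[(k, 1)] == n[(k - 1) % l] * n[k] * n[(k + 1) % l]
    assert counts[(k, 2)] == n[k] * comb(n[(k + 1) % l], 2)
    assert counts[(k, 3)] == n[k] * comb(n[(k - 1) % l], 2)
```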
With this codification in mind, for a cherry graph \(H\) of type \(M\) we have \[\mathbb{E}[\varphi(H)]\;=\;\prod_{j=1}^{s}\mathbb{E}[\varphi(H_{j})]\;=\;\prod _{k,t}\varphi_{k,t}^{M_{k,t}}\,. \tag{7.12}\] Finally, to estimate the sum over all \(H\in\mathcal{C}_{s}\) we partition the cherry graphs according to the possible types. Let \(\mathcal{C}_{s}(M)\) denote the set of all cherry graphs \(H\in\mathcal{C}_{s}\) of type \(M\). We need estimates on the number of elements of \(\mathcal{C}_{s}(M)\) for each \(M\). **Lemma 31**.: _For each cherry type \((k,t)\), define_ \[c_{k,t}\;=\;\begin{cases}f_{k-1}f_{k}f_{k+1}&\text{if $t=1$};\\ \frac{1}{2}f_{k}f_{k+1}^{2}&\text{if $t=2$};\\ \frac{1}{2}f_{k}f_{k-1}^{2}&\text{if $t=3$}.\end{cases} \tag{7.13}\] _As \(m\) tends to infinity, the size \(|\mathcal{C}_{s}(M)|\) of \(\mathcal{C}_{s}(M)\) satisfies_ \[|\mathcal{C}_{s}(M)|\;=\;m^{3s}\Bigl{[}\prod_{k,t}\frac{c_{k,t}^{M_{k,t}}}{M_{ k,t}!}\Bigr{]}\left((2s)!\right)\left(1+O(1/m)\right). \tag{7.14}\] Since \(f_{k}=f_{k}(m)\) depends on \(m\), we also have that \(c_{k,t}=c_{k,t}(m)\) depends on \(m\), but by virtue of Assumption 3 each \(c_{k,t}\) has a nonzero limit as \(m\to\infty\). Proof.: We begin by counting the number of cherries of a given specific type, considering that its edges are _not ordered_. Recall that \(n_{k}=f_{k}m\) denotes the number of faces of the die \(D^{(k)}\), and let us define \[C_{k,t}(n_{1},\ldots,n_{\ell})\;\coloneqq\;\frac{1}{2}|\{H\in\mathcal{G}_{2}: H\text{ is a cherry of type $(k,t)$}\}|\,, \tag{7.15}\] where the factor \(\frac{1}{2}\) is precisely to disregard the order of edges in a cherry. 
By a simple counting argument, we have that \[C_{k,t}(n_{1},\ldots,n_{\ell})\;=\;\begin{cases}n_{k-1}n_{k}n_{k+1}&\text{if $t= 1$};\\ n_{k}\binom{n_{k+1}}{2}&\text{if $t=2$};\\ n_{k}\binom{n_{k-1}}{2}&\text{if $t=3$}.\end{cases} \tag{7.16}\] Therefore, the estimate \[C_{k,t}(n_{1},\ldots,n_{\ell})=c_{k,t}m^{3}+O(m^{2}),\quad\text{as $m\to\infty$},\] is valid, where \(c_{k,t}\) are the values in (7.13). Now, let us fix a type \(M=(M_{k,t})\). First, we compute in how many ways we can choose an unordered collection of \(s\) cherries with exactly \(M_{k,t}\) occurrences of each type \((k,t)\). Given \(M\), we will choose its cherries one by one following the sequence of types \(\{(k_{j},t_{j}):j\in[s]\}\) in lexicographic order. The first cherry, with type \((k_{1},t_{1})\), is chosen among all \(C_{k_{1},t_{1}}(n_{1},\ldots,n_{\ell})\) cherries of that type in \(\mathcal{G}\). When choosing the following cherries, we have to successively remove the vertices that appear in the previous cherries, to ensure disjointness. Hence, when choosing the vertices of cherry \((k_{j},t_{j})\) we have \(C_{k_{j},t_{j}}(n_{1}^{(j)},\ldots,n_{\ell}^{(j)})\) options, where \(n_{i}^{(j)}\) is the number of faces of die \(D^{(i)}\) that do not appear in the \(j-1\) previously chosen cherries. It is clear that \((n_{i}^{(j)})\) will depend on the sequence \(\{(k_{j},t_{j})\}\). However, for our estimates it is enough to notice that since we only choose \(s\) cherries, we have \(n_{i}^{(j)}=f_{i}m+O(1)\). Finally, the above procedure chooses the \(s\) cherries following the ordering \(\{(k_{j},t_{j})\}\). 
Hence, the number of choices of an unordered collection of \(s\) cherries is given by \[\frac{\prod_{j\in[s]}|C_{k_{j},t_{j}}(n_{1}^{(j)},\ldots,n_{\ell}^{(j)})|}{ \prod_{k,t}M_{k,t}!}\;=\;\frac{\prod_{j\in[s]}\big{(}c_{k_{j},t_{j}}m^{3}+O(m^ {2})\big{)}}{\prod_{k,t}M_{k,t}!}\;=\;\Bigl{[}\prod_{k,t}\frac{c_{k,t}^{M_{k,t }}}{M_{k,t}!}\Bigr{]}m^{3s}(1+O(1/m))\,.\] To conclude the argument, just notice that when summing over \(H\in\mathcal{C}_{s}\) we are actually summing over each fixed unordered collection and considering all possible orderings of the \(2s\) edges that compose \(H\). The estimate in equation (7.14) follows. Now, we proceed with the estimate in equation (7.10). Breaking the sum on the right-hand side with respect to the type \(M\) of the cherry graphs \(H\in\mathcal{C}_{s}\) and using equation (7.12), we have that \[\mathbb{E}\Bigl{[}\Bigl{(}\sum_{k=1}^{\ell}\alpha_{k}\widetilde {N}_{k}\Bigr{)}^{2s}\Bigr{]} =\;m^{-3s}\sum_{H\in\mathcal{C}_{s}}\mathbb{E}[\varphi(H)]+O(m^{-1})\] \[=\;m^{-3s}\sum_{M}\sum_{H\in\mathcal{C}_{s}(M)}\prod_{k,t}\varphi _{k,t}^{M_{k,t}}+O(m^{-1})\] \[=\;\sum_{M}(2s)!\cdot\Bigl{[}\prod_{k,t}\frac{(c_{k,t}\varphi_{k, t})^{M_{k,t}}}{M_{k,t}!}\Bigr{]}+O(m^{-1})\,. \tag{7.17}\] To obtain a more meaningful expression, we recognize the sum over \(M\) as the expansion of an \(s\)-th power. Indeed, recall that \(M=(M_{k,t})\) is such that the \(M_{k,t}\in\mathbb{N}\) sum to \(s\). Hence, we can write \[\sum_{M}(2s)!\cdot\Bigl{[}\prod_{k,t}\frac{(c_{k,t}\varphi_{k,t}) ^{M_{k,t}}}{M_{k,t}!}\Bigr{]} =\;\frac{(2s)!}{s!}\;\sum_{(M_{k,t});\sum M_{k,t}=s}\;\frac{s!}{ \prod\limits_{k,t}M_{k,t}!}\prod_{k,t}(c_{k,t}\varphi_{k,t})^{M_{k,t}}\] \[=\;\frac{(2s)!}{s!}\Bigl{(}\sum_{k,t}c_{k,t}\varphi_{k,t}\Bigr{)} ^{s}\] \[=\;(2s-1)!!\Bigl{(}\sum_{k,t}2c_{k,t}\varphi_{k,t}\Bigr{)}^{s}, \tag{7.18}\] where we used that \((2s-1)!!=\frac{(2s)!}{s!2^{s}}\). Using equation (7.17), the next step is to identify the limit of this \(2s\)-moment.
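The collapse of the sum over types in (7.18) is just the multinomial theorem together with \((2s-1)!!=\frac{(2s)!}{s!2^{s}}\). As a hedged numerical sanity check (our own notation, with generic positive weights standing in for the products \(c_{k,t}\varphi_{k,t}\)):

```python
import math
from itertools import product

def lhs(weights, s):
    """Sum over all M = (M_1, ..., M_r) with M_i >= 0 and sum(M) = s of
    (2s)! * prod_i weights[i]**M_i / M_i!, as on the left of (7.18)."""
    total = 0.0
    for M in product(range(s + 1), repeat=len(weights)):
        if sum(M) != s:
            continue
        term = float(math.factorial(2 * s))
        for w, Mi in zip(weights, M):
            term *= w ** Mi / math.factorial(Mi)
        total += term
    return total

def rhs(weights, s):
    """(2s-1)!! * (2 * sum(weights))**s, as on the right of (7.18)."""
    double_factorial = math.factorial(2 * s) // (math.factorial(s) * 2 ** s)
    return double_factorial * (2.0 * sum(weights)) ** s
```

For \(s=1\) and two weights \(a,b\) both sides reduce to \(2(a+b)\), and the agreement persists for larger \(s\).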
From equations (7.13) and (7.11) we have that \[2c_{k,t}\varphi_{k,t}\;=\;\begin{cases}2f_{k-1}f_{k}f_{k+1}\frac{\alpha_{k-1} \alpha_{k}}{\sigma_{k-1}\sigma_{k}}\cdot(\mathbf{s}_{k}-\mathbf{p}_{k-1} \mathbf{p}_{k})&\text{if $t=1$};\\ f_{k}f_{k+1}^{2}\bigl{(}\frac{\alpha_{k}}{\sigma_{k}}\bigr{)}^{2}\cdot(\mathbf{r }_{k}-\mathbf{p}_{k}^{2})&\text{if $t=2$};\\ f_{k}f_{k-1}^{2}\bigl{(}\frac{\alpha_{k-1}}{\sigma_{k-1}}\bigr{)}^{2}\cdot( \mathbf{q}_{k}-\mathbf{p}_{k-1}^{2})&\text{if $t=3$},\end{cases} \tag{7.19}\] and we can recognize the sum over \((k,t)\in[\ell]\times[3]\) as a quadratic form in the vector \(\alpha=(\alpha_{1},\dots,\alpha_{\ell})^{T}\). The coefficient of \(\alpha_{k}^{2}\) is given by \[\frac{1}{\sigma_{k}^{2}}\Big{[}f_{k}f_{k+1}^{2}(\mathbf{r}_{k}-\mathbf{p}_{k}^{ 2})+f_{k}^{2}f_{k+1}(\mathbf{q}_{k+1}-\mathbf{p}_{k}^{2})\Big{]}\ =\ 1\,,\] recalling the definition of \(\sigma_{k}\) in (2.14). The coefficient of \(\alpha_{k-1}\alpha_{k}\) is precisely the value \(\gamma_{k}=\gamma_{k}(m)\) given by (2.15) \[\gamma_{k}\ =\ \frac{1}{\sigma_{k-1}\sigma_{k}}f_{k-1}f_{k}f_{k+1}(\mathbf{ s}_{k}-\mathbf{p}_{k-1}\mathbf{p}_{k})\,.\] Writing \(\alpha=(\alpha_{1},\dots,\alpha_{\ell})^{T}\), and defining \[\Sigma(m)\,\coloneqq\,\left(\begin{array}{cccccc}1&\gamma_{2}(m)&0&\cdots& 0&\gamma_{1}(m)\\ \gamma_{2}(m)&1&\gamma_{3}(m)&\cdots&0&0\\ 0&\gamma_{3}(m)&1&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&1&\gamma_{\ell}(m)\\ \gamma_{1}(m)&0&0&\cdots&\gamma_{\ell}(m)&1\end{array}\right),\] with \(\gamma_{k}(m)\) as in (2.15), we just unraveled the identity \[\sum_{k,t}2c_{k,t}\varphi_{k,t}=\alpha^{T}\Sigma(m)\alpha.\] By Assumption 3, we learn that \(\Sigma(m)\) converges to the matrix \(\Sigma\) in (2.17). This last convergence thus shows (7.3), and concludes the proof of Theorem 4.
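As a closing sanity check on the algebra above, the passage from the sum over cherry types to the quadratic form \(\alpha^{T}\Sigma(m)\alpha\) can be verified mechanically. The sketch below (our own helper names) builds the tridiagonal-with-corners matrix displayed above from given off-diagonal entries \(\gamma_{1},\dots,\gamma_{\ell}\) and checks that its quadratic form equals \(\sum_{k}\alpha_{k}^{2}+2\sum_{k}\gamma_{k}\alpha_{k-1}\alpha_{k}\) with cyclic indexing.

```python
import numpy as np

def sigma_matrix(gamma):
    """Build Sigma with ones on the diagonal; gamma[k] (0-based, standing for
    gamma_{k+1} in the text) couples the cyclically adjacent components
    alpha_{k-1} and alpha_k.  gamma[0] lands in the two corner entries."""
    ell = len(gamma)
    S = np.eye(ell)
    for k in range(ell):
        S[k - 1, k] = gamma[k]   # k - 1 wraps to ell - 1 when k == 0
        S[k, k - 1] = gamma[k]
    return S

rng = np.random.default_rng(0)
ell = 6
gamma = rng.uniform(-0.3, 0.3, size=ell)
alpha = rng.normal(size=ell)
S = sigma_matrix(gamma)
quad = sum(alpha[k] ** 2 for k in range(ell)) \
     + sum(2.0 * gamma[k] * alpha[k - 1] * alpha[k] for k in range(ell))
assert np.isclose(alpha @ S @ alpha, quad)
```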
2305.13053
On the weak Harnack inequality for unbounded non-negative super-solutions of degenerate double-phase parabolic equations
In the case $q> p\dfrac{n+2}{n}$, we give a proof of the weak Harnack inequality for non-negative super-solutions of degenerate double-phase parabolic equations under the additional assumption that $u\in L^{s}_{loc}(\Omega_{T})$ with some $s >p\dfrac{n+2}{n}$.
Mariia Savchenko, Igor Skrypnik, Yevgeniia Yevgenieva
2023-05-22T14:08:33Z
http://arxiv.org/abs/2305.13053v2
On the weak Harnack inequality for unbounded non-negative super-solutions of degenerate double-phase parabolic equations ###### Abstract In the case \(q>p\dfrac{n+2}{n}\), we give a proof of the weak Harnack inequality for non-negative super-solutions of degenerate double-phase parabolic equations under the additional assumption that \(u\in L^{s}_{loc}(\Omega_{T})\) with some \(s>p\dfrac{n+2}{n}\). **Keywords:** weak Harnack inequality, unbounded super-solutions, double-phase parabolic equations **MSC (2010)**: 35B40, 35B45, 35B65. ## 1 Introduction and main results In this paper we are concerned with double-phase parabolic equations. Let \(\Omega\) be a domain in \(\mathbb{R}^{n}\), \(T>0\), \(\Omega_{T}:=\Omega\times(0,T)\); we study unbounded super-solutions to the equation \[u_{t}-\mathrm{div}\mathbb{A}(x,t,\nabla u)=0,\quad(x,t)\in\Omega_{T}. \tag{1.1}\] Throughout the paper we suppose that the functions \(\mathbb{A}:\Omega_{T}\times\mathbb{R}^{n}\to\mathbb{R}^{n}\) are such that \(\mathbb{A}(\cdot,\cdot,\xi)\) are Lebesgue measurable for all \(\xi\in\mathbb{R}^{n}\), and \(\mathbb{A}(x,t,\cdot)\) are continuous for almost all \((x,t)\in\Omega_{T}\). We also assume that the following structure conditions are satisfied \[\begin{split}\mathbb{A}(x,t,\xi)\xi&\geqslant K_{ 1}\left(|\xi|^{p}+a(x,t)|\xi|^{q}\right):=K_{1}\,\varphi(x,t,|\xi|),\quad 2 <p<q,\\ |\mathbb{A}(x,t,\xi)|&\leqslant K_{2}\big{(}|\xi|^{p -1}+a(x,t)|\xi|^{q-1}\big{)}=K_{2}\,\dfrac{\varphi(x,t,|\xi|)}{|\xi|},\end{split} \tag{1.2}\] where \(K_{1}\), \(K_{2}\) are positive constants and the function \(a:\Omega_{T}\to\mathbb{R}_{+}\), \(a(x,t)\geqslant 0\), satisfies the following condition: * for any cylinder \(Q_{r,r^{2}}(x_{0},t_{0}):=B_{r}(x_{0})\times(t_{0},t_{0}+r^{2})\subset Q _{8r,(16r)^{2}}(x_{0},t_{0})\subset\Omega_{T}\) there holds \[\underset{Q_{r,r^{2}}(x_{0},t_{0})}{\text{osc}}a(x,t)\leqslant Ar^{\alpha},\] with some \(A>0\) and some \(\alpha\in(0,1]\).
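For orientation, the model example behind (1.2) is \(\mathbb{A}(x,t,\xi)=\big{(}|\xi|^{p-2}+a(x,t)|\xi|^{q-2}\big{)}\xi\), for which both structure conditions hold with \(K_{1}=K_{2}=1\). A hedged numerical sketch of this check (the code and its names are ours, not part of the paper):

```python
import numpy as np

def model_A(xi, a, p, q):
    """Prototype double-phase operator A(xi) = (|xi|^{p-2} + a |xi|^{q-2}) xi."""
    r = np.linalg.norm(xi)
    if r == 0.0:
        return np.zeros_like(xi)
    return (r ** (p - 2) + a * r ** (q - 2)) * xi

rng = np.random.default_rng(1)
p, q = 3.0, 4.5  # 2 < p < q
for _ in range(200):
    xi = rng.normal(size=3)
    a = rng.uniform(0.0, 2.0)          # a(x, t) >= 0
    r = np.linalg.norm(xi)
    phi = r ** p + a * r ** q          # phi(x, t, |xi|)
    Axi = model_A(xi, a, p, q)
    assert np.isclose(Axi @ xi, phi)                   # first line of (1.2), K1 = 1
    assert np.isclose(np.linalg.norm(Axi), phi / r)    # second line of (1.2), K2 = 1
```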
It is known that for integrands with \((p,q)\)-growth, it is crucial that the gap between \(p\) and \(q\) is not too large. Otherwise, in the case \(q>\dfrac{np}{n-p}\), \(p<n\), there exist unbounded minimizers (we refer the reader to [1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 18, 19, 20, 21, 25, 27, 28, 29, 30, 31, 33, 34] for results, references, historical notes and an extensive survey of regularity issues). It was Ok [24] who proved the boundedness of minimizers of elliptic functionals of double-phase type in the case \(q>\dfrac{np}{n-p}\) under some additional assumption. More precisely, under the condition \(\underset{B_{r}(x_{0})}{\text{osc}}a(x)\leqslant Ar^{\alpha}\), the minimizer is bounded by a constant depending on \(||u||_{L^{s}}\) with \(s>\dfrac{np}{n-p}\), provided that \(\alpha\geqslant q-p\) and \(s\geqslant\dfrac{(q-p)n}{\alpha+p-q}\). This condition, for example, makes it possible to improve the regularity results [4, 5, 6, 9, 10] for unbounded minimizers, with a constant depending on \(||u||_{L^{s}}\). The weak Harnack inequality for unbounded super-solutions of the corresponding elliptic equations with generalized Orlicz growth under a similar condition was proved in [7]. This result was generalized in [26] to unbounded functions from the corresponding De Giorgi classes \(DG_{\varphi}^{-}(\Omega)\). The parabolic theory for quasi-linear parabolic equations differs substantially from the elliptic case. This becomes clear by looking at the Barenblatt solution of the parabolic \(p\)-Laplace equation. DiBenedetto developed an innovative intrinsic scaling method (see [13] and the references to the original papers therein) and proved the Hölder continuity of weak solutions to (1.1) for \(p=q\neq 2\). The intrinsic Harnack inequality for the parabolic \(p\)-Laplace evolution equation was proved in the breakthrough papers [14, 15]. The weak Harnack inequality for the parabolic \(p\)-Laplacian was obtained by Kuusi [23] by using the Krylov-Safonov covering argument.
A similar result was proved in [16] by using the local clustering lemma. As for parabolic equations with nonstandard growth, this question remains open. The local boundedness of solutions of parabolic equations is known under the condition \(q\leqslant p\dfrac{n+2}{n}\) (see, for example, [32]). The upper bound for the number \(q\) stems from the parabolic embedding. The intrinsic Harnack inequality for bounded solutions to the corresponding singular parabolic equations with \((p,q)\)-growth was proved in [29]. The weak Harnack inequality for bounded super-solutions of the \(p(x)\)-Laplace evolution equation was obtained in [33]. In this paper, using De Giorgi's approach, we prove the weak Harnack inequality for unbounded non-negative super-solutions to equation (1.1) in the case \(q>p\dfrac{n+2}{n}\) under a condition similar to that of [24]. We will focus only on the case \(p>2\); we leave the case \(p<2<q\) for further research. To formulate our results, let us remind the reader of the definition of a weak super-solution to equation (1.1). We say that a function \(u\) is a weak super-solution to Eq. (1.1) if \(u\in V^{2,q}(\Omega_{T}):=C_{\text{loc}}(0,T;L^{2}_{\text{loc}}(\Omega))\cap L ^{q}_{\text{loc}}(0,T;W^{1,q}_{\text{loc}}(\Omega))\), and for any compact set \(E\subset\Omega\) and every subinterval \([t_{1},t_{2}]\subset(0,T]\) there holds \[\int\limits_{E}u\zeta\,dx\bigg{|}_{t_{1}}^{t_{2}}+\int\limits_{t_{1}}^{t_{2}} \int\limits_{E}\{-u\zeta_{\tau}+\mathbb{A}(x,\tau,\nabla u)\nabla\zeta\}\,dxd \tau\geqslant\,0 \tag{1.3}\] for any testing function \(\zeta\in W^{1,2}(0,T;L^{2}(E))\cap L^{q}(0,T;W^{1,q}_{0}(E))\), \(\zeta\geqslant 0\). Technically, it would be convenient to have a formulation of a weak super-solution that involves \(u_{t}\).
Let \(\rho(x)\in C_{0}^{\infty}(\mathbb{R}^{n})\), \(\rho(x)\geqslant 0\) in \(\mathbb{R}^{n}\), \(\rho(x)\equiv 0\) for \(|x|>1\) and \(\int\limits_{\mathbb{R}^{n}}\rho(x)\,dx=1\), and set \[\rho_{h}(x):=h^{-n}\rho\left(x/h\right),\quad u_{h}(x,t):=h^{-1}\int\limits_{t}^{ t+h}\int\limits_{\mathbb{R}^{n}}u(y,\tau)\rho_{h}(x-y)\,dyd\tau.\] Fix \(t\in(0,T)\) and let \(h>0\) be so small that \(0<t<t+h<T\). In (1.3) take \(t_{1}=t\), \(t_{2}=t+h\) and replace \(\zeta\) by \(\int\limits_{\mathbb{R}^{n}}\zeta(y,t)\rho_{h}(x-y)\,dy\). Dividing by \(h\), since the testing function does not depend on \(\tau\), we obtain \[\int\limits_{E\times\{t\}}\left(\frac{\partial u_{h}}{\partial t}\,\zeta+[ \mathbb{A}(x,t,\nabla u)]_{h}\nabla\zeta\right)dx\geqslant\,0, \tag{1.4}\] for all \(t\in(0,T-h)\) and for all \(\zeta\in W^{1,q}_{0}(E)\), \(\zeta\geqslant 0\). Our main result reads as follows. **Theorem 1.1**.: _Let \(u\) be a weak super-solution to equation (1.1), let conditions (1.2) and (A) be fulfilled. Assume additionally that \(u\in L^{s}(\Omega_{T})\) and_ \[s\geqslant p-2+\frac{(q-p)(n+p)}{\alpha+p-q}. \tag{1.5}\] _Then there exist positive constants \(C_{1}\), \(C_{2}\), \(C_{3}>0\) depending only on \(n\), \(p\), \(q\), \(K_{1}\), \(K_{2}\), \(A\) and \(d:=\big{(}\iint\limits_{\Omega_{T}}u^{s}dxdt\big{)}^{\frac{1}{s}}\) such that for a.a. \((x_{0},t_{0})\in\Omega_{T}\), either_ \[\mathcal{I}:=\fint\limits_{B_{\rho}(x_{0})}u(x,t_{0})dx\leqslant C_{1}\left\{ \rho+\rho\ \psi^{-1}_{Q_{12\rho,(12\rho)^{2}}(x_{0},t_{0})}\bigg{(}\frac{\rho^{2}}{T-t_{0} }\bigg{)}\right\}, \tag{1.6}\] _or_ \[\mathcal{I}\leqslant C_{1}\inf\limits_{B_{4\rho}(x_{0})}u(\cdot,t), \tag{1.7}\] _for any time levels_ \[t_{0}+C_{2}\theta\leqslant t\leqslant t_{0}+C_{3}\theta,\quad\theta:=\frac{ \rho^{2}}{\psi_{Q_{12\rho,(12\rho)^{2}}(x_{0},t_{0})}(\frac{\mathcal{I}}{ \rho})}, \tag{1.8}\] _provided that \(Q_{16\rho,(16\rho)^{2}}(x_{0},t_{0})\subset\Omega_{T}\). 
Here \(\fint\limits_{B_{\rho}(x_{0})}u(x,t_{0})dx:=|B_{\rho}(x_{0})|^{-1}\int\limits_ {B_{\rho}(x_{0})}u(x,t_{0})dx\), \(\psi_{Q}(v):=\frac{\varphi^{+}_{Q}(v)}{v^{2}}=v^{p-2}+a^{+}_{Q}v^{q-2}\), \(v>0\), \(a^{+}_{Q}:=\max\limits_{Q}a(x,t)\) and \(\psi^{-1}_{Q}(\cdot)\) is the inverse function of \(\psi_{Q}(\cdot)\)._ **Remark 1.1**.: _If inequality (1.6) is violated, i.e._ \[\mathcal{I}\geqslant C_{1}\left\{\rho+\rho\ \psi^{-1}_{Q_{12\rho,(12\rho)^{2}}(x_{ 0},t_{0})}\bigg{(}\frac{\rho^{2}}{T-t_{0}}\bigg{)}\right\}, \tag{1.9}\] _then the inclusion \(Q_{16\rho,C_{3}\theta}(x_{0},t_{0})\subset Q_{16\rho,(16\rho)^{2}}(x_{0},t_{0})\) holds, provided that \(C_{1}\) is large enough. We need this inclusion only in order to use the condition (A) in the cylinder \(Q_{12\rho,(12\rho)^{2}}(x_{0},t_{0})\). In the case when \(a(x,t)\) does not depend on \(t\), the first inequality in (1.6) is not required. In this case, it suffices to have the inclusion \(Q_{16\rho,C_{3}\theta}(x_{0},t_{0})\subset\Omega_{T}\), which holds by the second inequality in (1.9)._ The main difficulty arising in the proof of our main result is related to the so-called theorem on the expansion of positivity. Roughly speaking, having information on the measure of the "positivity set" of \(u\) over the ball \(B_{r}(\bar{x})\): \[|\{B_{r}(\bar{x}):u(\cdot,t)\geqslant\lambda\}|\geqslant\beta|B_{r}(\bar{x})|,\] with some \(r,\lambda>0\) and \(\beta\in(0,1)\), for some time level \(t\), we need to translate it into an expansion of the positivity set to the ball \(B_{2r}(\bar{x})\) at some time level \(\tau>t\). We divide the proof of this fact into two subcases: the first one is the so-called \(p\)-phase case, and the second one is the so-called \((p,q)\)-phase case (see (3.7) and (3.8) below).
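Since \(\psi_{Q}\) is strictly increasing on \((0,\infty)\) for \(p,q>2\), its inverse in (1.6) and the intrinsic waiting time \(\theta\) in (1.8) are straightforward to evaluate numerically. A hedged sketch follows (the helper names are ours; bisection is just one possible choice of inversion method):

```python
def psi(v, p, q, a_plus):
    """psi_Q(v) = v^{p-2} + a_Q^+ v^{q-2} for v > 0, with p, q > 2, a_Q^+ >= 0."""
    return v ** (p - 2) + a_plus * v ** (q - 2)

def psi_inverse(y, p, q, a_plus, tol=1e-12):
    """Invert psi_Q by bisection (psi_Q is strictly increasing for p, q > 2)."""
    lo, hi = 0.0, 1.0
    while psi(hi, p, q, a_plus) < y:   # bracket the root
        hi *= 2.0
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        if psi(mid, p, q, a_plus) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def waiting_time(avg, rho, p, q, a_plus):
    """Intrinsic time scale theta = rho^2 / psi_Q(I / rho) from (1.8)."""
    return rho ** 2 / psi(avg / rho, p, q, a_plus)
```

When \(a_{Q}^{+}=0\) this reduces to the familiar \(p\)-parabolic scale \(\theta=\rho^{p}/\mathcal{I}^{\,p-2}\).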
It seems that the most difficult case is when the \(p\)-phase condition holds in the large cylinder, and simultaneously the \((p,q)\)-phase condition holds in a small cylinder, which is generated by the local clustering lemma (see Lemma 2.1 below). In the proof, we do not use the classical covering argument of Krylov and Safonov [22], DiBenedetto and Trudinger [17] as was done in [23]; instead, we use the local clustering lemma due to DiBenedetto, Gianazza and Vespri [16]. Moreover, instead of \(\sup\limits_{Q_{2r,2\eta}(\bar{x},\bar{t})}u\) we are forced to use averages of \(u\) over the cylinder \(Q_{2r,2\eta}(\bar{x},\bar{t})\), and, in addition, we need to obtain estimates of \(\mathcal{I}\) by \(\iint\limits_{\Omega_{T}}u^{s}dxdt\). The rest of the paper contains the proof of Theorem 1.1. In Section 2 we collect some auxiliary propositions and required integral estimates of super-solutions. Expansion of positivity is proved in Section 3. Finally, in Section 4 we prove the weak Harnack inequality, Theorem 1.1. ## 2 Auxiliary material and integral estimates ### Local Clustering Lemma The following lemma, the local clustering lemma of [16], will be used in the sequel. **Lemma 2.1**.: _Let \(K_{r}(y)\) be a cube in \(\mathbb{R}^{n}\) of edge \(r\) centered at \(y\) and let \(u\in W^{1,1}(K_{r}(y))\) satisfy_ \[||(u-k)_{-}||_{W^{1,1}(K_{r}(y))}\leqslant\mathcal{K}\,k\,r^{n-1},\quad and \quad|\{K_{r}(y):u\geqslant k\}|\geqslant\beta|K_{r}(y)|, \tag{2.1}\] _with some \(\beta\in(0,1)\), \(k\in\mathbb{R}^{1}\) and \(\mathcal{K}>0\). Then for any \(\xi\in(0,1)\) and any \(\nu\in(0,1)\) there exist \(\bar{x}\in K_{r}(y)\) and \(\delta=\delta(n)\in(0,1)\) such that_ \[|\{K_{\bar{r}}(\bar{x}):u\geqslant\xi\,k\}|\geqslant(1-\nu)|K_{\bar{r}}(\bar{x})|, \ \ \bar{r}:=\delta\beta^{2}\frac{(1-\xi)\nu}{\mathcal{K}}\,r. \tag{2.2}\] ### De Giorgi-Poincaré Lemma The following lemma is the well-known De Giorgi-Poincaré lemma (see, for example, [16]).
**Lemma 2.2**.: _Let \(u\in W^{1,1}(B_{r}(y))\) for some \(r>0\). Let \(k\) and \(l\) be real numbers such that \(k<l\). Then there exists a constant \(\gamma>0\) depending only on \(n\) such that_ \[(l-k)|A_{k,r}||B_{r}(y)\setminus A_{l,r}|\leqslant\gamma r^{n+1}\int\limits_{ A_{l,r}\setminus A_{k,r}}|\nabla u|dx,\] _where \(A_{k,r}=\{x\in B_{r}(y):u(x)<k\}\)._ ### Local Energy Estimates We refer to the parameters \(n\), \(p\), \(q\), \(K_{1}\), \(K_{2}\), \(A\) and \(d\) as our structural data, and we write \(\gamma\) if it can be quantitatively determined a priori only in terms of the above quantities. The generic constant \(\gamma\) may change from line to line. **Lemma 2.3**.: _Let \(u\) be a weak non-negative super-solution to equation (1.1). Then for any \(Q_{r,\eta}(\bar{x},\bar{t})\subset Q_{r,r^{2}}(\bar{x},\bar{t})\subset Q_{8r,(8r)^{2}}( \bar{x},\bar{t})\subset\Omega_{T}\), any \(k>0\), any \(\sigma\in(0,1)\), any \(\zeta_{1}(x)\in C_{0}^{\infty}(B_{r}(\bar{x}))\), \(0\leqslant\zeta_{1}(x)\leqslant 1\), \(\zeta_{1}(x)=1\) in \(B_{r(1-\sigma)}(\bar{x})\), \(|\nabla\zeta_{1}(x)|\leqslant(\sigma r)^{-1}\), and any \(\zeta_{2}(t)\in C^{1}(\mathbb{R}_{+})\), \(0\leqslant\zeta_{2}(t)\leqslant 1\), \(\zeta_{2}(t)=1\) for \(t\leqslant\bar{t}+\eta(1-\sigma)\), \(\zeta_{2}(t)=0\) for \(t\geqslant\bar{t}+\eta\), \(|\frac{d}{dt}\zeta_{2}(t)|\leqslant(\sigma\eta)^{-1}\), the following inequalities hold_ \[\sup_{\bar{t}<t<\bar{t}+\eta}\int\limits_{B_{r}(\bar{x})}(u-k)_{- }^{2}\big{(}\zeta_{1}\zeta_{2}\big{)}^{q}dx+\\ +\gamma^{-1}\bigg{(}1+a_{Q_{r,\eta}(\bar{x},\bar{t})}^{-}\bigg{(} \frac{k}{r}\bigg{)}^{q-p}\bigg{)}\iint\limits_{Q_{r,\eta}(\bar{x},\bar{t}) }|\nabla(u-k)_{-}|^{p}\big{(}\zeta_{1}\zeta_{2}\big{)}^{q}dxdt\leqslant\\ \leqslant\gamma\sigma^{-q}\varphi_{Q_{r,\eta}(\bar{x},\bar{t}) }^{+}\left(\frac{k}{r}\right)\,\bigg{\{}1+\frac{k^{2}}{\eta\varphi_{Q_{r,\eta} (\bar{x},\bar{t})}^{+}(\frac{k}{r})}\bigg{\}}|A_{k,r,\eta}^{-}|, \tag{2.3}\]
\[\sup_{\bar{t}<t<\bar{t}+\eta}\int\limits_{B_{r}(\bar{x})}(u-k)_{- }^{2}\zeta_{1}^{q}dx\leqslant\int\limits_{B_{r}(\bar{x})\times\{\bar{t}\}}(u- k)_{-}^{2}\zeta_{1}^{q}dx+\gamma\,\eta\,\varphi_{Q_{r,\eta}(\bar{x},\bar{t}) }^{+}\left(\frac{k}{r}\right)\,|A_{k,r,\eta}^{-}|, \tag{2.4}\] _where \(A_{k,r,\eta}^{-}=Q_{r,\eta}(\bar{x},\bar{t})\cap\big{\{}u\leqslant k\big{\}}\), \(\varphi_{Q_{r,\eta}(\bar{x},\bar{t})}^{+}\big{(}\frac{k}{r}\big{)}=\big{(} \frac{k}{r}\big{)}^{p}+a_{Q_{r,\eta}(\bar{x},\bar{t})}^{+}\big{(}\frac{k}{r} \big{)}^{q}\), \(a_{Q_{r,\eta}(\bar{x},\bar{t})}^{-}=\min\limits_{Q_{r,\eta}(\bar{x},\bar{t})}a( x,t)\), \(a_{Q_{r,\eta}(\bar{x},\bar{t})}^{+}=\max\limits_{Q_{r,\eta}(\bar{x},\bar{t})}a(x,t)\)._ Proof.: Testing (1.4) by \((u_{h}-k)_{-}\big{(}\zeta_{1}\zeta_{2}\big{)}^{q}\), integrating over \((\bar{t},\bar{t}+\eta)\), letting \(h\to 0\), and using conditions (1.2), \((A)\) and the Young inequality, we arrive at \[\sup_{\bar{t}<t<\bar{t}+\eta}\int\limits_{B_{r}(\bar{x})}(u-k)_{- }^{2}\big{(}\zeta_{1}\zeta_{2}\big{)}^{q}dx+\gamma^{-1}\iint\limits_{Q_{r, \eta}(\bar{x},\bar{t})}|\nabla(u-k)_{-}|^{p}\big{(}\zeta_{1}\zeta_{2}\big{)}^{ q}dxdt+\\ +\gamma^{-1}\iint\limits_{Q_{r,\eta}(\bar{x},\bar{t})}a(x,t)| \nabla(u-k)_{-}|^{q}\big{(}\zeta_{1}\zeta_{2}\big{)}^{q}dxdt\leqslant\gamma \sigma^{-1}\frac{k^{2}}{\eta}|A_{k,r,\eta}^{-}|+\gamma\sigma^{-q}\iint\limits_ {A_{k,r,\eta}^{-}}\varphi\left(x,t,\frac{k}{r}\right)dxdt\leqslant\\ \leqslant\gamma\sigma^{-q}\left(\frac{k^{2}}{\eta}+\varphi_{Q_{r, \eta}(\bar{x},\bar{t})}^{+}\left(\frac{k}{r}\right)\right)|A_{k,r,\eta}^{-}|.\] Using the Young inequality we obtain \[\bigg{(}1+a_{Q_{r,\eta}(\bar{x},\bar{t})}^{-}\bigg{(}\frac{k}{r} \bigg{)}^{q-p}\bigg{)}\iint\limits_{Q_{r,\eta}(\bar{x},\bar{t})}|\nabla(u-k)_{- }|^{p}\big{(}\zeta_{1}\zeta_{2}\big{)}^{q}dxdt\leqslant\iint\limits_{Q_{r, \eta}(\bar{x},\bar{t})}|\nabla(u-k)_{-}|^{p}\big{(}\zeta_{1}\zeta_{2}\big{)}^{ q}dxdt\\
+\iint\limits_{Q_{r,\eta}(\bar{x},\bar{t})}a(x,t)|\nabla(u-k)_{- }|^{q}\big{(}\zeta_{1}\zeta_{2}\big{)}^{q}dxdt+a_{Q_{r,\eta}(\bar{x},\bar{t})} ^{+}\bigg{(}\frac{k}{r}\bigg{)}^{q}|A_{k,r,\eta}^{-}|,\] from which the required inequality (2.3) follows. Now testing (1.4) by \((u_{h}-k)_{-}\zeta_{1}^{q}\) and arguing in exactly the same way as before, we arrive at (2.4). This proves the lemma. **Lemma 2.4**.: _Let \(u\) be a weak non-negative super-solution to equation (1.1). Then for any \(Q_{r,\eta}(\bar{x},\bar{t})\subset Q_{r,r^{2}}(\bar{x},\bar{t})\subset Q_{8r,(8r)^ {2}}(\bar{x},\bar{t})\subset\Omega_{T}\), any \(\delta>0\) and any \(\varepsilon,\sigma\in(0,1)\), the following inequality holds_ \[\frac{1}{1-\varepsilon}\sup_{\bar{t}<t<\bar{t}+\eta}\int\limits_{B _{r}(\bar{x})}(u+\delta)^{1-\varepsilon}\big{(}\zeta_{1}\zeta_{2}\big{)}^{q}dx +\frac{\varepsilon}{\gamma}\iint\limits_{Q_{r,\eta}(\bar{x},\bar{t})}(u+\delta )^{-\varepsilon-1}|\nabla u|^{p}\big{(}\zeta_{1}\zeta_{2}\big{)}^{q}dxdt+\\ +\frac{\varepsilon}{\gamma}\iint\limits_{Q_{r,\eta}(\bar{x}, \bar{t})}a(x,t)(u+\delta)^{-\varepsilon-1}|\nabla u|^{q}\big{(}\zeta_{1}\zeta_ {2}\big{)}^{q}dxdt\leqslant\frac{1}{(1-\varepsilon)\sigma\eta}\iint\limits_{Q _{r,\eta}(\bar{x},\bar{t})}(u+\delta)^{1-\varepsilon}dxdt+\\ +\frac{\gamma\varepsilon^{1-p}}{(\sigma r)^{p}}\iint\limits_{Q_ {r,\eta}(\bar{x},\bar{t})}(u+\delta)^{p-\varepsilon-1}dxdt+\frac{\gamma \varepsilon^{1-q}}{(\sigma r)^{q}}a_{Q_{r,\eta}(\bar{x},\bar{t})}^{+}\iint \limits_{Q_{r,\eta}(\bar{x},\bar{t})}(u+\delta)^{q-\varepsilon-1}dxdt. \tag{2.5}\] Proof.: Testing (1.4) by \((u_{h}+\delta)^{-\varepsilon}\big{(}\zeta_{1}\zeta_{2}\big{)}^{q}\), integrating over \((\bar{t},\bar{t}+\eta)\), letting \(h\to 0\), and using conditions (1.2) and the Young inequality, we arrive at the required (2.5).
### De Giorgi Type Lemma Let \(u\in V^{2,m}(\Omega_{T})\), \(m>\frac{2n}{n+1}\), \(u\geqslant 0\), and suppose that the following inequality holds \[\sup_{\bar{t}<t<\bar{t}+\eta}\int\limits_{B_{r}(\bar{x})}(u-k)_{-}^{2} \big{(}\zeta_{1}\zeta_{2}\big{)}^{m}dx+\gamma^{-1}\iint\limits_{Q_{r,\eta}( \bar{x},\bar{t})}|\nabla(u-k)_{-}|^{m}\big{(}\zeta_{1}\zeta_{2}\big{)}^{m}dxdt \leqslant\\ \leqslant K\sigma^{-m}\left\{\frac{k^{2}}{\eta}+\left(\frac{k}{r }\right)^{m}\right\}|A_{k,r,\eta}^{-}|, \tag{2.6}\] for any \(k>0\), any cylinder \(Q_{r,\eta}(\bar{x},\bar{t})\subset Q_{8r,8\eta}(\bar{x},\bar{t})\subset\Omega_ {T}\) and any \(\sigma\in(0,1)\), and with some \(K>0\). Here \(\zeta_{1}\), \(\zeta_{2}\) and \(A_{k,r,\eta}^{-}\) were defined in Lemma 2.3. The following lemma is the standard De Giorgi-type lemma (cf. [16], Chapter 3). **Lemma 2.5**.: _Let (2.6) hold. Then there exists \(\nu\in(0,1)\) depending only on \(K\), \(n\), \(m\) and \(r\), \(\eta\) such that if_ \[\big{|}\big{\{}(x,t)\in Q_{r,\eta}(\bar{x},\bar{t}):u(x,t)\leqslant k\big{\}} \big{|}\leqslant\nu|Q_{r,\eta}(\bar{x},\bar{t})|, \tag{2.7}\] _then_ \[u(x,t)\geqslant\frac{k}{2},\quad(x,t)\in Q_{\frac{r}{2},\frac{\eta}{2}}(\bar {x},\bar{t}). \tag{2.8}\] The number \(\nu\) is chosen to satisfy \[\nu:=\frac{1}{\gamma}\frac{r^{m}}{\eta k^{m-2}}\bigg{(}1+\frac{\eta k^{m-2}}{ r^{m}}\bigg{)}^{-\frac{n+m}{m}}. \tag{2.9}\] ## 3 Expansion of Positivity Fix \((x_{0},t_{0})\in\Omega_{T}\) such that \(Q_{8\rho,(8\rho)^{2}}(x_{0},t_{0})\subset\Omega_{T}\) and let \(Q_{8r,(8r)^{2}}(\bar{x},\bar{t})\subset Q_{\rho,\rho^{2}}(x_{0},t_{0})\).
In what follows, we will assume that \(k>0\) satisfies the conditions \[C_{*}\rho\leqslant k,\quad k^{s}\leqslant\varepsilon_{0}\frac{\varphi_{Q_{6 \rho,(6\rho)^{2}}(x_{0},t_{0})}^{+}(\frac{k}{\rho})}{\rho^{n}k^{2}}= \varepsilon_{0}\bigg{(}\frac{k^{p-2}}{\rho^{n+p}}+a_{Q_{6\rho,(6\rho)^{2}}(x_{ 0},t_{0})}^{+}\frac{k^{q-2}}{\rho^{n+q}}\bigg{)}, \tag{3.1}\] where \(C_{*}>1\) and \(\varepsilon_{0}\in(0,1)\) depend only on the known data and will be chosen later. First, we will prove the following result. **Proposition 3.1**.: _Let \(u\) be a weak non-negative super-solution to equation (1.1), let \(k\) satisfy (3.1) and let also_ \[\big{|}\big{\{}B_{r}(\bar{x}):u(\cdot,\bar{t})>k\big{\}}\big{|}\geqslant\beta_{0 }|B_{r}(\bar{x})|, \tag{3.2}\] _with some \(\beta_{0}\in(0,1)\). Then there exist numbers \(C_{*}\), \(b_{1}\), \(b_{2}>0\) and \(\varepsilon_{0}\), \(\sigma_{0}\in(0,1)\) depending only on the data and \(\beta_{0}\) such that_ \[u(x,t)\geqslant\sigma_{0}k,\quad x\in B_{2r}(\bar{x}), \tag{3.3}\] _for all_ \[\bar{t}+\eta_{1}:=\bar{t}+b_{1}\frac{(\sigma_{0}k)^{2}}{\varphi_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}\big{(}\frac{\sigma_{0}k}{r}\big{)}}\leqslant t \leqslant\bar{t}+b_{2}\frac{(\sigma_{0}k)^{2}}{\varphi_{Q_{6r,(6r)^ {2}}(\bar{x},\bar{t})}^{+}\big{(}\frac{\sigma_{0}k}{r}\big{)}}:=\bar{t}+\eta _{2}.
\tag{3.4}\] **Lemma 3.1**.: _Let the conditions of Proposition 3.1 hold. Then there exist \(\varepsilon,\delta\in(0,1)\), depending only on the data and \(\beta_{0}\), such that_ \[\big{|}\big{\{}B_{r}(\bar{x}):u(\cdot,t)\geqslant\varepsilon k\big{\}}\big{|} \geqslant\frac{\beta_{0}}{4}|B_{r}(\bar{x})|, \tag{3.5}\] _for any \(\bar{t}<t\leqslant\bar{t}+\frac{\delta k^{2}}{\varphi_{Q_{6r,(6r)^{2 }}(\bar{x},\bar{t})}^{+}\big{(}\frac{k}{r}\big{)}}\)._ Proof.: Use inequality (2.4), which yields for any \(\bar{t}<t<\bar{t}+\frac{\delta k^{2}}{\varphi_{Q_{6r,(6r)^{2}}(\bar{ x},\bar{t})}^{+}\big{(}\frac{k}{r}\big{)}}\) \[\big{|}\big{\{}B_{r}(\bar{x}):u(\cdot,t)\leqslant\varepsilon k\big{\}}\big{|} \leqslant n\sigma|B_{r}(\bar{x})|+\frac{1}{(1-\varepsilon)^{2}}\big{\{}1- \beta_{0}+\gamma\sigma^{-q}\delta\big{\}}|B_{r}(\bar{x})|.\] Choosing \[\sigma=\frac{\beta_{0}}{8n},\quad\frac{\beta_{0}}{8}\leqslant\varepsilon=1- \frac{(1-\frac{3}{4}\beta_{0})^{\frac{1}{2}}}{(1-\frac{1}{2}\beta_{0})^{\frac{ 1}{2}}}\leqslant\frac{\beta_{0}}{2},\quad\delta=\frac{\beta_{0}^{q+1}}{32(8n) ^{q}\gamma}, \tag{3.6}\] we arrive at the required (3.5), which proves the lemma. In the proof of Proposition 3.1 we will distinguish between two different cases. The first one is the so-called case of \(p\)-phase \[a_{Q_{6\rho,(6\rho)^{2}}(x_{0},t_{0})}^{+}\frac{k^{q-2}}{\rho^{q}}\leqslant \frac{k^{p-2}}{\rho^{p}}, \tag{3.7}\] and the second is the case of \((p,q)\)-phase \[a_{Q_{6\rho,(6\rho)^{2}}(x_{0},t_{0})}^{+}\frac{k^{q-2}}{\rho^{q}}\geqslant \frac{k^{p-2}}{\rho^{p}}. \tag{3.8}\] In turn, we divide case (3.7) into subcases. Fix \(j_{*}>1\), which will be chosen later depending only on the data and \(\beta_{0}\), and set \(\tau_{*}=\left(\frac{2^{j_{*}}}{\varepsilon}\right)^{q-2}\).
We will assume that either \((i)\) \[a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}\left(\frac{\varepsilon k}{r2^ {j_{*}}e^{\tau_{*}}}\right)^{q-p}\leqslant 1,\] or \((ii)\) \[a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}\left(\frac{\varepsilon k}{r2 ^{j_{*}}e^{\tau_{*}}}\right)^{q-p}\geqslant 1.\] ### Proof of Proposition 3.1 in the case (3.7) and (\(i\)) For \(\tau\geqslant\bar{\tau}_{*}:=\tau_{*}+j_{*}\log 2+\log\frac{1}{\varepsilon}\) we have \[a^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}\left(\frac{k}{re^{\tau}}\right)^{q-p} \leqslant a^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}\bigg{(}\frac{\varepsilon k }{r2^{j_{*}}e^{\tau_{*}}}\bigg{)}^{q-p}\leqslant 1,\] therefore, \(\varphi^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}\bigg{(}\frac{k}{re^{\tau}} \bigg{)}\leqslant 2\bigg{(}\frac{k}{re^{\tau}}\bigg{)}^{p}\) for \(\tau\geqslant\bar{\tau}_{*}\) and inequality (3.5) with \(k\) replaced by \(e^{-\tau}k\), \(\tau\geqslant\bar{\tau}_{*}\) yields \[\left|\left\{B_{r}(\bar{x}):u\left(\cdot,\bar{t}+\frac{\delta}{2}r^{p}\bigg{(} \frac{e^{\tau}}{k}\bigg{)}^{p-2}\right)\geqslant\varepsilon e^{-\tau}k \right\}\right|\geqslant\frac{\beta_{0}}{4}|B_{r}(\bar{x})|,\quad\text{for all}\quad\tau\geqslant\bar{\tau}_{*}.\] Following [16], we consider the function \[w(y,\tau):=\frac{e^{\tau}}{k}u\left(\bar{x}+ry,\bar{t}+\frac{\delta}{2}r^{p} \bigg{(}\frac{e^{\tau}}{k}\bigg{)}^{p-2}\right),\quad\tau\geqslant\bar{\tau} _{*}.\] The previous inequality translates into \(w\) as \(\big{|}\big{\{}B_{1}(0):w(\cdot,\tau)\geqslant\varepsilon\big{\}}\big{|} \geqslant\frac{\beta_{0}}{4}|B_{1}(0)|\), which implies \[\big{|}\big{\{}B_{4}(0):w(\cdot,\tau)\geqslant\varepsilon\big{\}}\big{|} \geqslant\frac{\beta_{0}}{4^{n+1}}|B_{4}(0)|,\quad\text{for all}\quad\tau \geqslant\bar{\tau}_{*}.
\tag{3.9}\] Since \(w\geqslant 0\), formal differentiation, which can be justified in a standard way, gives \[w_{\tau}=w+\frac{\delta}{2}(p-2)r^{p}\bigg{(}\frac{e^{\tau}}{k}\bigg{)}^{p-1} u_{t}\geqslant div\,\bar{\mathbb{A}}(y,\tau,\nabla w), \tag{3.10}\] where \(\bar{\mathbb{A}}\) satisfies the inequalities \[\begin{split}\bar{\mathbb{A}}(y,\tau,\nabla w)\nabla w\geqslant K _{1}\frac{\delta}{2}(p-2)\left(|\nabla w|^{p}+\bar{a}(y,\tau)\bigg{(}\frac{k}{ e^{\tau}r}\bigg{)}^{q-p}|\nabla w|^{q}\right),\\ |\bar{\mathbb{A}}(y,\tau,\nabla w)|\leqslant K_{2}\frac{\delta}{ 2}(p-2)\left(|\nabla w|^{p-1}+\bar{a}(y,\tau)\bigg{(}\frac{k}{e^{\tau}r}\bigg{)} ^{q-p}|\nabla w|^{q-1}\right),\end{split} \tag{3.11}\] and \(\bar{a}(y,\tau):=a\big{(}\bar{x}+ry,\bar{t}+\frac{\delta}{2}r^{p}\big{(}\frac{ e^{\tau}}{k}\big{)}^{p-2}\big{)}\). **Lemma 3.2**.: _For any \(\nu\in(0,1)\) there exists \(j_{*}\), depending only on the data and \(\nu\), such that_ \[\Big{|}\Big{\{}Q^{*}:w\leqslant\frac{\varepsilon}{2^{j_{*}}}\Big{\}}\Big{|} \leqslant\nu|Q^{*}|, \tag{3.12}\] \(Q^{*}=B_{4}(0)\times(\bar{\tau}_{*}+\frac{1}{2}\big{(}\frac{2^{j_{*}}}{ \varepsilon}\big{)}^{p-2},\bar{\tau}_{*}+\big{(}\frac{2^{j_{*}}}{\varepsilon} \big{)}^{p-2})\)_._ Proof.: Using Lemma 2.2 with \(k=k_{j}:=\frac{\varepsilon}{2^{j}}\) and \(l=k_{j-1}\), \(1\leqslant j\leqslant j_{*}\), due to (3.9) we obtain \[k_{j}|A_{k_{j},4}(\tau)|\leqslant\gamma(\beta_{0})\int\limits_{A_{k_{j-1},4}( \tau)\setminus A_{k_{j},4}(\tau)}|\nabla w|dx,\quad\tau\geqslant\bar{\tau}_{ *},\] where \(A_{k_{j},4}(\tau):=B_{4}(0)\cap\{w(\cdot,\tau)<k_{j}\}\). 
Integrating this inequality with respect to \(\tau\), \(\tau\in(\bar{\tau}_{*}+\frac{1}{2}k_{j_{*}}^{2-p},\bar{\tau}_{*}+k_{j_{*}}^{2-p})\), and using the Hölder inequality we have \[k_{j}^{\frac{p}{p-1}}|A_{j}|^{\frac{p}{p-1}}\leqslant\gamma(\beta_{0})\left( \iint\limits_{A_{j-1}}|\nabla w|^{p}dy\,d\tau\right)^{\frac{1}{p-1}}|A_{j-1} \setminus A_{j}|, \tag{3.13}\] where \(A_{j}:=\int\limits_{\bar{\tau}_{*}+\frac{1}{2}k_{j_{*}}^{2-p}}^{\bar{\tau}_{* }+k_{j_{*}}^{2-p}}A_{k_{j},4}(\tau)\,d\tau\). To estimate the first factor, similarly to Lemma 2.3 with \(|\frac{d}{d\tau}\zeta_{2}|\leqslant\gamma k_{j_{*}}^{p-2}\), by the structure inequalities (3.11) we obtain \[\sup\limits_{\bar{\tau}_{*}+\frac{1}{2}k_{j_{*}}^{2-p}<\tau<\bar{ \tau}_{*}+k_{j_{*}}^{2-p}}\int\limits_{B_{4}(0)}(w-k_{j-1})_{-}^{2}\,dx+\iint \limits_{A_{j-1}}|\nabla w|^{p}dy\,d\tau\leqslant\\ \leqslant\gamma\big{(}k_{j-1}^{2}k_{j_{*}}^{p-2}+k_{j-1}^{p} \big{)}|Q_{1}^{*}\cap\{w<k_{j-1}\}|+\gamma k_{j-1}^{q}\iint\limits_{Q_{1}^{*} \cap\{w<k_{j-1}\}}\bar{a}(y,\tau)\bigg{(}\frac{k}{e^{\tau}r}\bigg{)}^{q-p}\, dyd\tau\leqslant\\ \leqslant\gamma k_{j}^{p}|Q^{*}\cap\{w<k_{j-1}\}|\left\{1+\, \bigg{(}\frac{k_{j}k}{e^{\bar{\tau}_{*}}r}\bigg{)}^{q-p}\bar{a}_{Q_{6}^{*}}^{ +}\right\},\] where \(Q_{6}^{*}:=B_{6}(0)\times(\bar{\tau}_{*}+\frac{1}{4}k_{j_{*}}^{2-p},\bar{\tau }_{*}+2k_{j_{*}}^{2-p})\), \(\bar{a}_{Q_{6}^{*}}^{+}=\max\limits_{Q_{6}^{*}}\bar{a}(y,\tau)\). To estimate the last term on the right-hand side of this inequality, we note that by condition \((i)\) \[\bigg{(}\frac{k_{j}k}{e^{\bar{\tau}_{*}}r}\bigg{)}^{q-p}\bar{a}_{Q_{6}^{*}}^{ +}\leqslant 1,\] and hence \[\sup\limits_{\bar{\tau}_{*}+\frac{1}{2}k_{j_{*}}^{2-p}<\tau<\bar{\tau}_{*}+k_ {j_{*}}^{2-p}}\int\limits_{B_{4}(0)}(w-k_{j-1})_{-}^{2}\,dx+\iint\limits_{A_{ j-1}}|\nabla w|^{p}dy\,d\tau\leqslant\gamma k_{j}^{p}|\bar{Q}^{*}\cap\{w<k_{j-1}\}|.
\tag{3.14}\] Combining estimates (3.13) and (3.14) we obtain \[|A_{j}|^{\frac{p}{p-1}}\leqslant\gamma(\beta_{0})|Q^{*}|^{\frac{1}{p-1}}|A_{j -1}\setminus A_{j}|.\] Summing up the last inequalities over \(j\), \(1\leqslant j\leqslant j_{*}\), we conclude that \[{j_{*}}^{\frac{p-1}{p}}|A_{j_{*}}|\leqslant\gamma(\beta_{0})|Q^{*}|.\] Choosing \(j_{*}\) by the condition \[j_{*}^{-\frac{p-1}{p}}\gamma(\beta_{0})\leqslant\nu,\] we obtain inequality (3.12), which proves Lemma 3.2. We now use Lemma 2.5: similarly to (3.14), inequality (2.6) holds with \(u\) replaced by \(w\), \(m=p\), \(k=k_{j_{*}}\) and \(\eta=\gamma k_{j_{*}}^{2-p}\), and we obtain \[w(y,\tau)\geqslant k_{j_{*}+1},\quad y\in B_{2}(0),\] for all \(\bar{\tau}_{*}+\frac{5}{8}k_{j_{*}}^{2-p}\leqslant\tau\leqslant\bar{\tau}_{*}+ \frac{3}{4}k_{j_{*}}^{2-p}\). For \(u\), this inequality translates into \[u(x,t)\geqslant\frac{\varepsilon ke^{-\bar{\tau}_{*}-\frac{3}{4}(\frac{2^{j_{* }}}{\varepsilon})^{p-2}}}{2^{j_{*}+1}}=\frac{\varepsilon^{2}ke^{-\tau_{*}- \frac{3}{4}(\frac{2^{j_{*}}}{\varepsilon})^{p-2}}}{2^{2j_{*}+1}},\quad x\in B_{ 2r}(\bar{x}),\] for all \(\bar{t}+\frac{\delta}{2}r^{p}\bigg{(}\frac{2^{j_{*}}e^{\tau_{*}+\frac{5}{8}( \frac{2^{j_{*}}}{\varepsilon})^{p-2}}}{\varepsilon k}\bigg{)}^{p-2}\leqslant t \leqslant\bar{t}+\frac{\delta}{2}r^{p}\bigg{(}\frac{2^{j_{*}}e^{\tau_{*}+\frac {3}{4}(\frac{2^{j_{*}}}{\varepsilon})^{p-2}}}{\varepsilon k}\bigg{)}^{p-2}, \tau_{*}=\bigg{(}\frac{2^{j_{*}}}{\varepsilon}\bigg{)}^{q-2}\).
Choose \(\sigma_{0}=\frac{\varepsilon^{2}e^{-\tau_{*}-\frac{3}{4}(\frac{2^{j_{*}}}{ \varepsilon})^{p-2}}}{2^{2j_{*}+1}}\), by condition \((i)\) \[\frac{\delta}{2}r^{p}\bigg{(}\frac{2^{j_{*}}e^{\tau_{*}+\frac{5}{8}(\frac{2^{ j_{*}}}{\varepsilon})^{p-2}}}{\varepsilon k}\bigg{)}^{p-2}=\frac{\delta \varepsilon^{p-2}e^{-\frac{1}{8}(\frac{2^{j_{*}}}{\varepsilon})^{p-2}}}{2^{p+( p-2)j_{*}}}(\sigma_{0}k)^{2-p}r^{p}\leqslant\bar{b}_{1}\frac{(\sigma_{0}k)^{2}}{ \varphi_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}(\frac{\sigma_{0}k}{r})}\] and \[\frac{\delta}{2}r^{p}\bigg{(}\frac{2^{j_{*}}e^{\bar{\tau}_{*}+\frac{3}{4}( \frac{2^{j_{*}}}{\varepsilon})^{p-2}}}{\varepsilon k}\bigg{)}^{p-2}=\frac{ \delta\varepsilon^{p-2}}{2^{p+(p-2)j_{*}}}(\sigma_{0}k)^{2-p}r^{p}\geqslant \bar{b}_{2}\frac{(\sigma_{0}k)^{2}}{\varphi_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})} ^{+}(\frac{\sigma_{0}k}{r})},\] \(\bar{b}_{1}=\frac{\delta\varepsilon^{p-2}e^{-\frac{1}{8}(\frac{2^{j_{*}}}{ \varepsilon})^{p-2}}}{2^{(p-2)(j_{*}+1)+1}}\), \(\bar{b}_{2}=\frac{\delta\varepsilon^{p-2}}{2^{p+(p-2)j_{*}}}\), which proves Proposition 3.1 in the case (3.7) and \((i)\) with \(b_{1}=\bar{b}_{1}\) and \(b_{2}=\bar{b}_{2}=2\bar{b}_{1}e^{\frac{1}{8}(\frac{2^{j_{*}}}{\varepsilon})^{ p-2}}\). 
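As a side remark, the condition \(j_{*}^{-\frac{p-1}{p}}\gamma(\beta_{0})\leqslant\nu\) determining \(j_{*}\) in Lemma 3.2 admits the explicit choice \(j_{*}=\lceil(\gamma(\beta_{0})/\nu)^{\frac{p}{p-1}}\rceil\); a minimal numerical sketch (the helper name and the values of \(\gamma\), \(\nu\), \(p\) are illustrative, not taken from the text):

```python
import math

def choose_j_star(gamma: float, nu: float, p: float) -> int:
    # Smallest integer j_* with j_***(-(p-1)/p) * gamma <= nu,
    # i.e. j_* >= (gamma/nu)**(p/(p-1)).
    return math.ceil((gamma / nu) ** (p / (p - 1)))

# illustrative values for gamma(beta_0), nu and the exponent p > 2
gamma, nu, p = 50.0, 0.1, 2.5
j_star = choose_j_star(gamma, nu, p)
assert j_star ** (-(p - 1) / p) * gamma <= nu          # condition of Lemma 3.2
assert (j_star - 1) ** (-(p - 1) / p) * gamma > nu     # minimality of the choice
```

The point is only that \(j_{*}\) depends on the data and \(\nu\) alone, which is what the measure estimate (3.12) requires.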
### Proof of Proposition 3.1 in the case (3.7) and \((ii)\) Set \(l:=\frac{s-p+2}{s-q+2}>1\). By conditions \((ii)\), \((A)\), (3.1), the Young inequality, and the fact that \((\alpha+p-q)\frac{l}{l-1}-p-n=(\alpha+p-q)\frac{s-p+2}{q-p}-p-n\geqslant 0\), we obtain \[a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}\frac{k^{q-2}}{r^{q}}-a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{-}\frac{k^{q-2}}{r^{q}}\leqslant 6^{\alpha}Ar^{\alpha-q}k^{q-2}=6^{\alpha}Ar^{\alpha-q}k^{\frac{p-2}{l}+\frac{(p-2)(l-1)}{l}+q-p}\leqslant\] \[\leqslant\frac{k^{p-2}}{4r^{p}}+\gamma(l,A)r^{(\alpha+p-q)\frac{s-p+2}{q-p}-p}k^{s}\leqslant a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}\frac{k^{q-2}}{4r^{q}}+\] \[+\varepsilon_{0}\gamma(l,A)r^{(\alpha+p-q)\frac{s-p+2}{q-p}-p}\varphi_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}\left(\frac{k}{\rho}\right)\leqslant a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}\frac{k^{q-2}}{4r^{q}}+2\varepsilon_{0}\gamma(l,A)\frac{k^{p-2}}{\rho^{p}}\leqslant\] \[\leqslant a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}\frac{k^{q-2}}{4r^{q}}+2\varepsilon_{0}\gamma(l,A)\frac{k^{p-2}}{r^{p}}\leqslant\left(\frac{1}{4}+2\varepsilon_{0}\gamma(l,A)\right)a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}\frac{k^{q-2}}{r^{q}},\] and hence \[a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}\leqslant 2a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{-}, \tag{3.15}\] provided that \(\varepsilon_{0}\) is chosen to satisfy \[2\varepsilon_{0}\gamma(l,A)=\frac{1}{4}. \tag{3.16}\] Moreover, by condition \((ii)\) \[a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}\bigg{(}\frac{k}{re^{\tau}}\bigg{)}^{q-p}\geqslant 1\quad\text{for}\quad 0<\tau\leqslant\bar{\tau}_{*}.
\tag{3.17}\] So, inequality (3.5) with \(k\) replaced by \(e^{-\tau}k\), \(\tau\leqslant\bar{\tau}_{*}\) yields \[\left|\left\{B_{r}(\bar{x}):u\left(\cdot,\bar{t}+\frac{\delta}{2a^{+}_{Q_{6r,(6r )^{2}}(\bar{x},\bar{t})}}r^{q}\!\left(\frac{e^{\tau}}{k}\right)^{q-2}\right) \geqslant\varepsilon e^{-\tau}k\right\}\right|\geqslant\frac{\beta_{0}}{4}|B_ {r}(\bar{x})|,\quad\text{for all}\quad 0<\tau\leqslant\bar{\tau}_{*}.\] Consider the function \[w(y,\tau):=\frac{e^{\tau}}{k}u\left(\bar{x}+ry,\bar{t}+\frac{\delta}{2a^{+}_{Q_{6 r,(6r)^{2}}(\bar{x},\bar{t})}}r^{q}\!\left(\frac{e^{\tau}}{k}\right)^{q-2} \right),\quad 0<\tau\leqslant\bar{\tau}_{*}.\] The previous inequality translates into \(w\) as \(\big{|}\big{\{}B_{1}(0):w(\cdot,\tau)\geqslant\varepsilon\big{\}}\big{|} \geqslant\frac{\beta_{0}}{4}|B_{1}(0)|\), which implies \[\big{|}\big{\{}B_{4}(0):w(\cdot,\tau)\geqslant\varepsilon\big{\}}\big{|} \geqslant\frac{\beta_{0}}{4^{n+1}}|B_{4}(0)|,\quad\text{for all}\quad 0<\tau\leqslant\bar{ \tau}_{*}. 
\tag{3.18}\] By differentiation \[w_{\tau}=w+\frac{\delta(q-2)}{2a^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}}r^{q} \!\left(\frac{e^{\tau}}{k}\right)^{q-1}\!u_{t}\geqslant div\,\bar{\mathbb{A}} (y,\tau,\nabla w), \tag{3.19}\] where \(\bar{\mathbb{A}}\) satisfies the inequalities \[\begin{split}&\bar{\mathbb{A}}(y,\tau,\nabla w)\nabla w\geqslant K _{1}\frac{\delta}{2}(q-2)\left\{\frac{1}{a^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t })}}\!\left(\frac{e^{\tau}r}{k}\right)^{q-p}\!|\nabla w|^{p}+\frac{\bar{a}(y, \tau)}{a^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}}|\nabla w|^{q}\right\},\\ &|\bar{\mathbb{A}}(y,\tau,\nabla w)|\leqslant K_{2}\frac{\delta }{2}(q-2)\left\{\frac{1}{a^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}}\!\left( \frac{e^{\tau}r}{k}\right)^{q-p}\!|\nabla w|^{p-1}\!+\!\!\frac{\bar{a}(y,\tau)} {a^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}}|\nabla w|^{q-1}\right\},\end{split} \tag{3.20}\] and \(\bar{a}(y,\tau):=a\!\left(\bar{x}+ry,\bar{t}+\frac{\delta}{2a^{+}_{Q_{6r,(6r)^ {2}}(\bar{x},\bar{t})}}r^{q}\!\left(\frac{e^{\tau}}{k}\right)^{q-2}\right)\). **Lemma 3.3**.: _For any \(\nu\in(0,1)\) there exists \(j_{*}\), depending only on the data and \(\nu\) such that_ \[\Big{|}\Big{\{}Q^{*}:w\leqslant\frac{\varepsilon}{2^{j_{*}}}\Big{\}}\Big{|} \leqslant\nu|Q^{*}|, \tag{3.21}\] \(Q^{*}=B_{4}(0)\times(\frac{1}{2}\big{(}\frac{2^{j_{*}}}{\varepsilon}\big{)}^{ q-2},\frac{3}{4}\big{(}\frac{2^{j_{*}}}{\varepsilon}\big{)}^{q-2})\)_._ Proof.: Using Lemma 2.2 with \(k=k_{j}:=\frac{\varepsilon}{2^{j}}\) and \(l=k_{j-1}\), \(1\leqslant j\leqslant j_{*}\), due to (3.18) we obtain \[k_{j}|A_{k_{j},4}(\tau)|\leqslant\gamma(\beta_{0})\int\limits_{A_{k_{j-1},4}( \tau)\setminus A_{k_{j},4}(\tau)}|\nabla w|\;dx,\quad 0<\tau\leqslant\bar{\tau}_{*},\] where \(A_{k_{j},4}(\tau):=B_{4}(0)\cap\{u(\cdot,\tau)<k_{j}\}\). 
Integrating this inequality with respect to \(\tau\), \(\tau\in(\frac{1}{2}k_{j_{*}}^{2-q},\frac{3}{4}k_{j_{*}}^{2-q})\) and using the Hölder inequality we have \[k_{j}^{\frac{q}{q-1}}|A_{j}|^{\frac{q}{q-1}}\leqslant\gamma(\beta_{0})\left(\iint\limits_{A_{j-1}}|\nabla w|^{q}dy\,d\tau\right)^{\frac{1}{q-1}}|A_{j-1}\setminus A_{j}|, \tag{3.22}\] where \(A_{j}:=\int\limits_{\frac{1}{2}k_{j_{*}}^{2-q}}^{\frac{3}{4}k_{j_{*}}^{2-q}}A_{k_{j},4}(\tau)\,d\tau\). Similarly to Lemma 3.2 with \(|\frac{d}{d\tau}\zeta_{2}|\leqslant\gamma k_{j_{*}}^{q-2}\), by structure conditions (3.20) and estimate (3.15) we estimate the first factor on the right-hand side of (3.22) as follows \[\sup_{\frac{1}{2}k_{j_{*}}^{2-q}<\tau<\frac{3}{4}k_{j_{*}}^{2-q}}\int\limits_{B_{4}(0)}(w-k_{j-1})_{-}^{2}\,dx+\frac{1}{2}\iint\limits_{A_{j-1}}|\nabla w|^{q}dy\,d\tau\leqslant\\ \leqslant\sup_{\frac{1}{2}k_{j_{*}}^{2-q}<\tau<\frac{3}{4}k_{j_{*}}^{2-q}}\int\limits_{B_{4}(0)}(w-k_{j-1})_{-}^{2}\,dx+\iint\limits_{A_{j-1}}\frac{\bar{a}(y,\tau)}{a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}}|\nabla w|^{q}dy\,d\tau\leqslant\\ \leqslant\gamma k_{j-1}^{2}k_{j_{*}}^{q-2}|Q_{1}^{\ast}\cap\{w<k_{j-1}\}|+\gamma\frac{k_{j}^{p}}{a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}}\iint\limits_{Q_{1}^{\ast}\cap\{w<k_{j-1}\}}\left(\frac{e^{\tau}r}{k}\right)^{q-p}dyd\tau+\\ +\gamma k_{j}^{q}\iint\limits_{Q_{1}^{\ast}\cap\{w<k_{j-1}\}}\frac{\bar{a}(y,\tau)}{a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}}dyd\tau\leqslant\gamma k_{j}^{q}|Q_{6}^{\ast}\cap\{w<k_{j-1}\}|+\\ +\gamma\frac{k_{j}^{p}}{a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}}\bigg{(}\frac{e^{\frac{7}{8}k_{j_{*}}^{2-q}}r}{k}\bigg{)}^{q-p}|Q_{6}^{\ast}\cap\{w<k_{j-1}\}|, \tag{3.23}\] where \(Q_{6}^{\ast}:=B_{6}(0)\times(\frac{1}{4}k_{j_{*}}^{2-q},\frac{7}{8}k_{j_{*}}^{2-q})\).
By our choices and (3.17) \[\frac{k_{j}^{p}}{a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}}\bigg{(}\frac{e^{\frac{7}{8}k_{j_{*}}^{2-q}}r}{k}\bigg{)}^{q-p}\leqslant\frac{k_{j}^{q}}{a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}}\bigg{(}\frac{e^{\tau}r}{k}\bigg{)}^{q-p}\leqslant k_{j}^{q}.\] Therefore, inequality (3.23) yields \[\sup_{\frac{1}{2}k_{j_{*}}^{2-q}<\tau<\frac{3}{4}k_{j_{*}}^{2-q}}\int\limits_{B_{4}(0)}(w-k_{j-1})_{-}^{2}\,dx+\iint\limits_{A_{j-1}}|\nabla w|^{q}dy\,d\tau\leqslant\gamma k_{j}^{q}|Q^{\ast}\cap\{w<k_{j-1}\}|. \tag{3.24}\] Combining (3.22) and (3.24) we arrive at \[|A_{j}|^{\frac{q}{q-1}}\leqslant\gamma(\beta_{0})|Q^{\ast}|^{\frac{1}{q-1}}|A_{j-1}\setminus A_{j}|.\] Summing up these inequalities over \(j\), \(1\leqslant j\leqslant j_{*}\), and choosing \(j_{*}\) by the condition \(j_{*}^{-\frac{q-1}{q}}\gamma(\beta_{0})\leqslant\nu\), we arrive at the required (3.21), which proves the lemma. Using Lemma 2.5 and noting that, similarly to (3.24), inequality (2.6) holds with \(u\) replaced by \(w\), \(m=q\), \(k=k_{j_{*}}\) and \(\eta=\gamma k_{j_{*}}^{2-q}\), we obtain \[w(y,\tau)\geqslant k_{j_{*}+1},\quad y\in B_{2}(0),\] for all \(\frac{9}{16}k_{j_{*}}^{2-q}\leqslant\tau\leqslant\frac{5}{8}k_{j_{*}}^{2-q}\). This inequality for \(u\) translates into \[u(x,t)\geqslant\frac{\varepsilon ke^{-\frac{5}{8}(\frac{2^{j_{*}}}{\varepsilon})^{q-2}}}{2^{j_{*}+1}},\quad x\in B_{2r}(\bar{x}),\] for all \(\bar{t}+\frac{\delta}{2a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}}r^{q}k^{2-q}e^{\frac{9}{16}(q-2)(\frac{2^{j_{*}}}{\varepsilon})^{q-2}}\leqslant t\leqslant\bar{t}+\frac{\delta}{4a_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}^{+}}r^{q}k^{2-q}e^{\frac{5}{8}(q-2)(\frac{2^{j_{*}}}{\varepsilon})^{q-2}}\).
Choose \(\sigma_{0}=\frac{\varepsilon e^{-\frac{5}{8}(\frac{2^{j_{*}}}{\varepsilon})^{q-2}}}{2^{j_{*}+1}}\); therefore \[\frac{\delta}{2a^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}}r^{q}k^{2-q}e^{\frac{9}{16}(q-2)(\frac{2^{j_{*}}}{\varepsilon})^{q-2}}=\frac{\delta\varepsilon^{q-2}e^{-\frac{1}{16}(q-2)(\frac{2^{j_{*}}}{\varepsilon})^{q-2}}}{2^{q+j_{*}(q-2)}a^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}}r^{q}(\sigma_{0}k)^{2-q}\leqslant\bar{b}_{1}\frac{(\sigma_{0}k)^{2}}{\varphi^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}(\frac{\sigma_{0}k}{r})},\] and \[\frac{\delta}{2a^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}}r^{q}k^{2-q}e^{\frac{5}{8}(q-2)(\frac{2^{j_{*}}}{\varepsilon})^{q-2}}=\frac{\delta\varepsilon^{q-2}}{2^{q+j_{*}(q-2)}a^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}}r^{q}(\sigma_{0}k)^{2-q}\geqslant\bar{b}_{2}\frac{(\sigma_{0}k)^{2}}{\varphi^{+}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}(\frac{\sigma_{0}k}{r})},\] where \(\bar{b}_{1}=\frac{(1+A)\delta\varepsilon^{q-2}e^{-\frac{1}{16}(q-2)(\frac{2^{j_{*}}}{\varepsilon})^{q-2}}}{2^{(j_{*}+1)(q-2)}}\), \(\bar{b}_{2}=\frac{\delta\varepsilon^{q-2}}{2^{q+j_{*}(q-2)}}\). This proves Proposition 3.1 in the case (3.7) and \((ii)\).
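The exponent bookkeeping behind the Young-inequality steps above rests on two exact identities for \(l=\frac{s-p+2}{s-q+2}\): the splitting \(\frac{p-2}{l}+\frac{(p-2)(l-1)}{l}+q-p=q-2\) and the conjugate-exponent identity \(\frac{l}{l-1}=\frac{s-p+2}{q-p}\). A small sanity check with illustrative rational exponents (values are ours, chosen with \(p<q\) and \(s>q-2\)):

```python
from fractions import Fraction as F

# illustrative exponents with 2 < p < q and s > q - 2
p, q, s, alpha = F(9, 4), F(5, 2), F(4), F(1)

l = (s - p + 2) / (s - q + 2)   # l = (s-p+2)/(s-q+2)
assert l > 1                     # l > 1 since q > p

# Young-inequality exponent splitting used to write k^{q-2}:
assert (p - 2) / l + (p - 2) * (l - 1) / l + q - p == q - 2

# conjugate-exponent identity behind the power of the radius:
assert (alpha + p - q) * l / (l - 1) == (alpha + p - q) * (s - p + 2) / (q - p)
```

Both identities hold exactly (not just for these values), since \(l-1=\frac{q-p}{s-q+2}\), so \(\frac{l}{l-1}=\frac{s-p+2}{q-p}\).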
To complete the proof of Proposition 3.1, we note that in the case (3.8) \[a^{+}_{Q_{6\rho,(6\rho)^{2}}(x_{0},t_{0})}\frac{k^{q-2}}{\rho^{q}}-a^{-}_{Q_{6\rho,(6\rho)^{2}}(x_{0},t_{0})}\frac{k^{q-2}}{\rho^{q}}\leqslant 6^{\alpha}A\rho^{\alpha-q}k^{q-2}=6^{\alpha}A\rho^{\alpha-q}k^{\frac{p-2}{l}+\frac{(p-2)(l-1)}{l}+q-p}\leqslant\] \[\leqslant\frac{k^{p-2}}{4\rho^{p}}+\gamma(l,A)\rho^{(\alpha+p-q)\frac{s-p+2}{q-p}-p}k^{s}\leqslant a^{+}_{Q_{6\rho,(6\rho)^{2}}(x_{0},t_{0})}\frac{k^{q-2}}{4\rho^{q}}+\] \[+\varepsilon_{0}\gamma(l,A)\rho^{(\alpha+p-q)\frac{s-p+2}{q-p}-p}k^{-2}\varphi^{+}_{Q_{6\rho,(6\rho)^{2}}(x_{0},t_{0})}\left(\frac{k}{\rho}\right)\leqslant\left(\frac{1}{4}+2\varepsilon_{0}\gamma(l,A)\right)a^{+}_{Q_{6\rho,(6\rho)^{2}}(x_{0},t_{0})}\frac{k^{q-2}}{4\rho^{q}},\] where \(\varepsilon_{0}\) is chosen as in (3.16), and therefore \[a^{+}_{Q_{6\rho,(6\rho)^{2}}(x_{0},t_{0})}\leqslant 2a^{-}_{Q_{6\rho,(6\rho)^{2}}(x_{0},t_{0})}.\] Introduce the change of variables and the new unknown function \[w(y,\tau):=\frac{e^{\tau}}{k}u(\bar{x}+ry,\bar{t}+\frac{\delta}{2a^{-}_{Q_{6r,(6r)^{2}}(\bar{x},\bar{t})}}r^{q}\big{(}\frac{e^{\tau}}{k}\big{)}^{q-2}),\] which satisfies (3.19) and (3.20); Proposition 3.1 is then a consequence of Lemmas 2.5 and 3.3. The main result of this section reads as follows. **Theorem 3.1**.: _Let \(u\) be a weak non-negative super-solution to equation (1.1), let \(k\) satisfy_ \[0<k^{s}\leqslant\varepsilon_{0}\Big{(}\frac{k^{p-2}}{\rho^{n+p}}+a^{+}_{Q_{6\rho,(6\rho)^{2}}(y,\tau)}\frac{k^{q-2}}{\rho^{n+q}}\Big{)}, \tag{3.25}\] _and let also_ \[\big{|}\big{\{}B_{\rho}(y):u(\cdot,\tau)>k\big{\}}\big{|}\geqslant\beta|B_{\rho}(y)|, \tag{3.26}\] _with some \(\beta\in(0,1)\).
Then there exist numbers \(C\), \(B\), \(0<B_{1}\leqslant\frac{B_{2}}{2}\) and \(\sigma_{1}\in(0,1)\) depending only on the data such that either_ \[\beta^{B}k\leqslant C\rho, \tag{3.27}\] _or_ \[u(x,t)\geqslant\sigma_{1}\beta^{B}k,\quad x\in B_{2\rho}(y), \tag{3.28}\] _and for all_ \[\tau+B_{1}\frac{(\sigma_{1}\beta^{B}k)^{2}}{\varphi^{+}_{Q_{12\rho,(12\rho)^{2} }(y,\tau)}(\frac{\sigma_{1}\beta^{B}k}{\rho})}\leqslant t\leqslant\tau+B_{2} \frac{(\sigma_{1}\beta^{B}k)^{2}}{\varphi^{+}_{Q_{12\rho,(12\rho)^{2}}(y,\tau) }(\frac{\sigma_{1}\beta^{B}k}{\rho})}, \tag{3.29}\] _provided that \(Q_{16\rho,(16\rho)^{2}}(y,\tau)\subset\Omega_{T}\)._ Proof.: In what follows, we assume that \[\beta^{B}k\geqslant C\rho. \tag{3.30}\] Condition (3.25) and Lemma 3.1 (see (3.6)) yield \[\big{|}\big{\{}B_{\rho}(y):u(\cdot,t)>\frac{\beta}{8}k\big{\}}\big{|}\geqslant \frac{\beta}{4}|B_{\rho}(y)|, \tag{3.31}\] for all \(\tau<t\leqslant\tau+\frac{\delta k^{2}}{\varphi_{Q_{\theta_{\rho},(6\rho)^{2}} (y,\tau)}^{+}(\frac{k}{\rho})}\), \(\delta=\frac{\beta^{q+1}}{\gamma}\). Write down the energy estimates (2.3) with \(k\) replaced by \(\frac{\beta}{8}k\), for the pair of cylinders \(Q:=B_{\rho}(y)\times(\tau+\frac{\eta}{2},\tau+\eta)\), \(Q_{1}:=B_{2\rho}(y)\times(\tau,\tau+\eta)\), \(\eta=\frac{\delta k^{2}}{\varphi_{Q_{\theta_{\rho},(6\rho)^{2}}(y,\tau)}^{+}( \frac{k}{\rho})}\) and take \[\bigg{|}\frac{d}{dt}\zeta_{2}\bigg{|}\leqslant\gamma\frac{\varphi_{Q_{\theta _{\rho},(6\rho)^{2}}(y,\tau)}^{+}(\frac{k}{\rho})}{\delta k^{2}}\text{ and }|\nabla\zeta_{1}|\leqslant\frac{\gamma}{\rho}\text{. 
}\] By condition \((A)\) and the energy estimates (2.3), \[\iint\limits_{Q}\bigg{|}\nabla\left(u-\frac{\beta}{8}k\right)_{-}\bigg{|}^{p}dxdt\leqslant\frac{\gamma}{\beta^{q+1}}\bigg{(}\frac{\beta k}{\rho}\bigg{)}^{p}\frac{\bigg{(}1+a_{Q_{6\rho,(6\rho)^{2}}(y,\tau)}^{+}(\frac{\beta k}{8\rho})^{q-p}\bigg{)}}{\bigg{(}1+a_{Q_{6\rho,(6\rho)^{2}}(y,\tau)}^{+}(\frac{\beta k}{8\rho})^{q-p}\bigg{)}}|Q|\leqslant\frac{\gamma}{\beta^{q+1}}\bigg{(}\frac{\beta k}{\rho}\bigg{)}^{p}|Q|.\] From this and (3.31) it follows that there exists \(t_{1}\in(\tau+\frac{\eta}{2},\tau+\eta)\) such that \[\int\limits_{B_{\rho}(y)\times\{t_{1}\}}\bigg{|}\nabla\left(u-\frac{\beta}{8}k\right)_{-}\bigg{|}\,dx\leqslant\frac{\gamma}{\beta^{\frac{q+1}{p}}}\beta k\rho^{n-1}\text{ and }\left|\bigg{\{}B_{\rho}(y):u(\cdot,t_{1})>\frac{\beta}{8}k\bigg{\}}\right|\geqslant\frac{\beta}{4}|B_{\rho}(y)|. \tag{3.32}\] The local clustering Lemma 2.1 with \(\mathcal{K}=\frac{\gamma}{\beta^{\frac{q+1}{p}}}\), \(\alpha=\frac{\beta}{4}\), \(\nu=\frac{1}{2}\), \(\xi=\frac{1}{2}\) and \(k\) replaced by \(\frac{\beta}{8}k\) yields \[\bigg{|}\bigg{\{}B_{r}(\bar{x}):u(\cdot,t_{1})>\frac{\beta}{16}k\bigg{\}}\bigg{|}\geqslant\frac{1}{2}|B_{r}(\bar{x})|,\quad r=\epsilon_{0}\beta^{2+\frac{q+1}{p}}\rho \tag{3.33}\] with some \(\bar{x}\in B_{\rho}(y)\) and some \(\epsilon_{0}\in(0,1)\) depending only on the data. Proposition 3.1 with \(\beta_{0}=\frac{1}{2}\) and \(k\) replaced by \(\frac{\beta}{16}k\) implies \[u(x,t)\geqslant\sigma_{0}\beta k,\quad x\in B_{2r}(\bar{x}),\] for all \[t_{2}:=t_{1}+b_{1}\frac{(\sigma_{0}\beta k)^{2}}{\varphi_{Q_{6r,(6r)^{2}}(\bar{x},t_{1})}^{+}(\frac{\sigma_{0}\beta k}{r})}\leqslant t\leqslant t_{1}+b_{2}\frac{(\sigma_{0}\beta k)^{2}}{\varphi_{Q_{6r,(6r)^{2}}(\bar{x},t_{1})}^{+}(\frac{\sigma_{0}\beta k}{r})},\] with some \(\sigma_{0}\in(0,1)\) and \(b_{1}\), \(b_{2}>0\) depending only on the data.
From this by iteration we obtain \[u(x,t)\geqslant\sigma_{0}^{j}\beta k,\quad x\in B_{2jr}(\bar{x}), \tag{3.34}\] for all \[t_{j+1}:=t_{j}+b_{1}\frac{(\sigma_{0}^{j}\beta k)^{2}}{\varphi_{Q_{2j\theta_{r},(2j\theta_{r})^{2}}(\bar{x},t_{j})}^{+}(\frac{\sigma_{0}^{j}\beta k}{2^{r}})} \leqslant t\leqslant t_{j}+b_{2}\frac{(\sigma_{0}^{j}\beta k)^{2}}{\varphi_{Q_ {2j\theta_{r},(2j\theta_{r})^{2}}(\bar{x},t_{j})}^{+}(\frac{\sigma_{0}^{j} \beta k}{2^{r}})}. \tag{3.35}\] Choosing \(j\) by the condition \(2^{j}r=2\rho\), from (3.34) we obtain \[u(x,t)\geqslant\frac{k}{\gamma(\sigma_{0},\epsilon_{0})}\,\beta^{1+(2+\frac{q+1}{p })\log\frac{1}{\sigma_{0}}}=\sigma_{1}\beta^{B}k,\quad x\in B_{2\rho}(y),\] for all \(t\) satisfying (3.35). We have \[t_{j}+b_{2}\frac{(\sigma_{0}^{j}\beta k)^{2}}{\varphi_{Q_{2^{j} \theta_{\mathrm{cr}},(2^{j}\theta_{\mathrm{cr}})^{2}(\bar{x},t_{j})}}^{+}( \frac{\sigma_{0}^{j}\beta k}{2^{j}r})}\geqslant\tau+b_{2}\frac{(\sigma_{0}^{j} \beta k)^{2}}{\varphi_{Q_{2^{j}\theta_{\mathrm{cr}},(2^{j}\theta_{\mathrm{cr}}) ^{2}(\bar{x},t_{j})}}^{+}(\frac{\sigma_{1}\beta^{B}k}{2^{j}r})}\geqslant\tau+ b_{2}\frac{(\sigma_{1}\beta^{B}k)^{2}}{\varphi_{Q_{1_{2\rho},(12\rho)^{2}(y, \tau)}}^{+}(\frac{\sigma_{1}\beta^{B}k}{\rho})}=\\ =\tau+B_{2}\frac{(\sigma_{1}\beta^{B}k)^{2}}{\varphi_{Q_{1_{2\rho },(12\rho)^{2}(y,\tau)}}^{+}(\frac{\sigma_{1}\beta^{B}k}{\rho})}.\] In addition, by condition \((A)\) and (3.25) \[t_{j+1}-\tau-\frac{\beta^{q+1}k^{2}}{\gamma\varphi_{Q_{6\rho,(6 \rho)^{2}}(y,\tau)}^{+}(\frac{k}{\rho})}\leqslant b_{1}\sum_{i=0}^{j}\frac{( \sigma_{0}^{i}\beta k)^{2}}{\varphi_{Q_{1_{2\rho},(12\rho)^{2}(y,\tau)}}^{-}( \frac{\sigma_{0}^{i}\beta k}{2^{j}r})}\leqslant\\ \leqslant b_{1}\frac{(\sigma_{0}^{j}\beta k)^{2}}{\varphi_{Q_{1_{2 \rho},(12\rho)^{2}(y,\tau)}}^{-}(\frac{\sigma_{0}^{j}\beta k}{2^{j}r})}\sum_{ i=0}^{j}\left(\frac{2^{p}}{\sigma_{0}^{p-2}}\right)^{i-j}\leqslant\gamma b_{1} \frac{(\sigma_{0}^{j}\beta 
k)^{2}}{\varphi_{Q_{12\rho,(12\rho)^{2}}(y,\tau)}^{-}(\frac{\sigma_{0}^{j}\beta k}{2^{j}r})}\leqslant\\ \leqslant\gamma(A)b_{1}\frac{(\sigma_{1}\beta^{B}k)^{2}}{\varphi_{Q_{12\rho,(12\rho)^{2}}(y,\tau)}^{+}(\frac{\sigma_{1}\beta^{B}k}{\rho})}.\] So, by our choices of \(b_{1}\), \(b_{2}\) and by reducing \(\sigma_{0}\) if needed, we have \[t_{j+1}\leqslant\tau+\gamma b_{1}\frac{(\sigma_{1}\beta^{B}k)^{2}}{\varphi_{Q_{12\rho,(12\rho)^{2}}(y,\tau)}^{+}(\frac{\sigma_{1}\beta^{B}k}{\rho})}=\tau+B_{1}\frac{(\sigma_{1}\beta^{B}k)^{2}}{\varphi_{Q_{12\rho,(12\rho)^{2}}(y,\tau)}^{+}(\frac{\sigma_{1}\beta^{B}k}{\rho})}\leqslant\\ \leqslant\tau+\frac{B_{2}}{2}\frac{(\sigma_{1}\beta^{B}k)^{2}}{\varphi_{Q_{12\rho,(12\rho)^{2}}(y,\tau)}^{+}(\frac{\sigma_{1}\beta^{B}k}{\rho})}.\] This completes the proof of Theorem 3.1. ## 4 Weak Harnack Inequality, Proof of Theorem 1.1 Fix \(\xi_{0}\in(0,1)\) depending only on the data to be chosen later. In the proof of Theorem 1.1, we will distinguish between two alternative cases: either there exist a time level \(\bar{t}\in(t_{0},t_{0}+\frac{\mathcal{I}^{2}}{\varphi_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}^{+}(\frac{\mathcal{I}}{\rho})})\) and a number \(\lambda_{0}>1\) such that \[\big{|}\big{\{}B_{2\rho}(x_{0}):u(\cdot,\bar{t})\geqslant\lambda_{0}\mathcal{I}\big{\}}\big{|}\geqslant\lambda_{0}^{-\frac{\xi_{0}}{B}}|B_{2\rho}(x_{0})|, \tag{4.1}\] or this inequality is violated, i.e. for all \(t\in(t_{0},t_{0}+\frac{\mathcal{I}^{2}}{\varphi_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}^{+}(\frac{\mathcal{I}}{\rho})})\) and for any \(\lambda>1\) there holds \[\big{|}\big{\{}B_{2\rho}(x_{0}):u(\cdot,t)\geqslant\lambda\mathcal{I}\big{\}}\big{|}\leqslant\lambda^{-\frac{\xi_{0}}{B}}|B_{2\rho}(x_{0})|, \tag{4.2}\] where \(B>1\) is the number defined in Theorem 3.1 and \(\mathcal{I}=\underset{B_{\rho}(x_{0})}{\int}u(x,t_{0})dx\).
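As a side check, the tail estimate (4.2) is exactly strong enough to control the \(\varkappa\)-moment of \(u/\mathcal{I}\) with \(\varkappa=\frac{\xi_{0}}{2B}\), as used in (4.16) below; a quick numerical sanity check of the layer-cake computation (the value of \(\xi_{0}/B\) is illustrative):

```python
# Layer-cake bound under the tail condition (4.2):
# if |{u > lambda*I}|/|B| <= lambda**(-xi0/B) for lambda > 1, then the
# normalized kappa-moment of u/I with kappa = xi0/(2B) stays bounded.
xi0_over_B = 0.4            # illustrative value of xi0/B
kappa = xi0_over_B / 2      # kappa = xi0/(2B)

# kappa * int_0^1 lambda^{kappa-1} dlambda = 1 (trivial part),
# kappa * int_1^inf lambda^{kappa - xi0/B - 1} dlambda in closed form:
tail = kappa / (xi0_over_B - kappa)
moment_bound = 1 + tail
assert abs(tail - 1.0) < 1e-9   # with kappa = xi0/(2B) the tail integral is exactly 1
assert moment_bound <= 3        # the constant 3 appearing in (4.16)
```

The choice \(\varkappa=\frac{\xi_{0}}{2B}\) leaves a margin of \(\frac{\xi_{0}}{2B}\) in the exponent, so the improper integral converges with a constant independent of \(\lambda\).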
### Proof of Theorem 1.1 under Condition (4.1) **Lemma 4.1**.: _Let (4.1) hold then there exists positive number \(\gamma_{0}\) depending only on the data such that_ \[\big{(}\lambda_{0}^{\xi_{0}}\mathcal{I}\big{)}^{s}\leqslant\gamma_{0}d^{s}\bigg{\{} \frac{(\lambda_{0}^{\xi_{0}}\mathcal{I})^{p-2}}{\rho^{n+p}}+a_{Q_{\xi_{\rho},(6 \rho)^{2}}(x_{0},\bar{t})}^{(\lambda_{0}^{\xi_{0}}\mathcal{I})^{q-2}}\frac{ (\lambda_{0}^{\xi_{0}}\mathcal{I})^{q-2}}{\rho^{n+q}}\bigg{\}}. \tag{4.3}\] Proof.: Lemma 3.1 and conditions (3.6) yield \[\big{|}\big{\{}B_{2\rho}(x_{0}):u(\cdot,t)\geqslant\frac{1}{8}\lambda_{0}^{1- \frac{\xi_{0}}{B}}\mathcal{I}\big{\}}\big{|}\geqslant\frac{1}{2}\lambda_{0}^{ -\frac{\xi_{0}}{B}}|B_{2\rho}(x_{0})|,\] for all \(t\in(\bar{t},\bar{t}+\eta)\), \(\eta=\gamma^{-1}\frac{\lambda_{0}^{-\frac{(q+1)\xi_{0}}{B}}(\lambda_{0} \mathcal{I})^{2}}{\varphi_{Q_{\xi_{\rho},(6\rho)^{2}}(x_{0},\bar{t})}^{( \frac{\lambda_{0}\mathcal{I}}{\rho})}}\). From this \[\frac{1}{16}\lambda_{0}^{1-\frac{\xi_{0}}{B}-\frac{\xi_{0}}{B}}|B_{2\rho}(x_{0} )|^{\frac{1}{2}}\eta^{\frac{1}{2}}\mathcal{I}\leqslant\left(\underset{Q_{2\rho,\eta}(x_{0},\bar{t})}{\iint}u^{s}\right)^{\frac{1}{s}}\leqslant d. \tag{4.4}\] If \(a_{Q_{\xi_{\rho},(6\rho)^{2}}(x_{0},\bar{t})}^{+}\bigg{(}\frac{\lambda_{0} \mathcal{I}}{\rho}\bigg{)}^{q-p}\geqslant 1\), then (4.4) implies \[\lambda_{0}^{1-\frac{\xi_{0}(s+q+2)}{B(s-q+2)}}\,\mathcal{I}\leqslant\gamma \big{[}a_{Q_{6\rho,(6\rho)^{2}}(x_{0},\bar{t})}^{+}\big{]}^{\frac{1}{s-q+2}} \,\rho^{-\frac{n+q}{s-q+2}}d^{\frac{s}{s-q+2}},\] and if \(1-\frac{\xi_{0}(s+q+2)}{B(s-q+2)}\geqslant\xi_{0}\), i.e. 
\(\xi_{0}(1+\frac{s+q+2}{B(s-q+2)})\leqslant 1\), then \[\lambda_{0}^{\xi_{0}}\,\mathcal{I}\leqslant\gamma\big{[}a_{Q_{6\rho,(6\rho)^{ 2}}(x_{0},\bar{t})}^{+}\big{]}^{\frac{1}{s-q+2}}d^{\frac{s}{s-q+2}}\rho^{- \frac{n+q}{s-q+2}}.\] And if \(a_{Q_{6\rho,(6\rho)^{2}}(x_{0},\bar{t})}^{+}\bigg{(}\frac{\lambda_{0} \mathcal{I}}{\rho}\bigg{)}^{q-p}\leqslant 1\), then (4.4) yields \[\lambda_{0}^{\xi_{0}}\,\mathcal{I}\leqslant\lambda_{0}^{1-\frac{\xi_{0}(s+q+2 )}{B(s-p+2)}}\,\mathcal{I}\leqslant\gamma\rho^{-\frac{n+p}{s-p+2}}d^{\frac{s}{ s-p+2}},\] provided that \(1-\frac{\xi_{0}(s+q+2)}{B(s-p+2)}\geqslant\xi_{0}\), i.e. \(\xi_{0}(1+\frac{s+q+2}{B(s-p+2)})\leqslant 1\). This completes the proof of the lemma. We use Theorem 3.1 with \(k=\frac{\varepsilon_{0}\lambda_{0}^{\xi_{0}}\mathcal{I}}{\gamma_{0}d^{s}}\), \(\beta_{0}=\frac{1}{2}\lambda_{0}^{-\frac{\xi_{0}}{B}}\) and \(\tau=\bar{t}+\eta\), where \(\varepsilon_{0}\) is defined in (3.1) and \(\eta\) is defined in Lemma 4.1, then either \(\mathcal{I}\leqslant C\frac{\gamma_{0}}{\varepsilon_{0}}d^{s}\rho\), or \[u(x,t)\geqslant\frac{\varepsilon_{0}\sigma_{1}}{\gamma_{0}d^{s}}\,\mathcal{I}= \bar{\sigma}_{1}\,\mathcal{I},\quad x\in B_{4\rho}(x_{0}), \tag{4.5}\] for all \(\bar{t}+\eta+B_{1}\frac{(\bar{\sigma}_{1}\mathcal{I})^{2}}{\varphi_{Q_{12\rho,(1 2\rho)^{2}}(x_{0},\bar{t}+\eta)}^{+}\big{(}\frac{\bar{\sigma}_{1}\mathcal{I}}{ \rho}\big{)}}\leqslant t\leqslant\bar{t}+\eta+B_{2}\frac{(\bar{\sigma}_{1} \mathcal{I})^{2}}{\varphi_{Q_{12\rho,(12\rho)^{2}}(x_{0},\bar{t}+\eta)}^{+} \big{(}\frac{\bar{\sigma}_{1}\mathcal{I}}{\rho}\big{)}}\), which by (4.3) and condition \((A)\) yields that (4.5) holds for all time levels \[t_{0}+\bar{B}_{1}\frac{(\bar{\sigma}_{1}\mathcal{I})^{2}}{\varphi_{Q_{12\rho,(1 2\rho)^{2}}(x_{0},t_{0})}^{+}\big{(}\frac{\bar{\sigma}_{1}\mathcal{I}}{\rho} \big{)}}\leqslant t\leqslant t_{0}+\bar{B}_{2}\frac{(\bar{\sigma}_{1}\mathcal{I} )^{2}}{\varphi_{Q_{12\rho,(12\rho)^{2}}(x_{0},t_{0})}^{+}\big{(}\frac{\bar{ 
\sigma}_{1}\mathcal{I}}{\rho}\big{)}}.\] ### Proof of Theorem 1.1 Under Condition (4.2) In what follows, we will assume that \[\mathcal{I}\geqslant C_{1}\big{\{}\rho+\rho\ \psi_{Q_{1_{2\rho,(12\rho)^{2}}}(x_{0},t_{ 0})}^{-1}\bigg{(}\frac{\rho^{2}}{T-t_{0}}\bigg{)}\big{\}}, \tag{4.6}\] with some positive \(C_{1}>0\) to be chosen later. The following lemma is an upper bound of \(\mathcal{I}\), similar to that of Lemma 4.1. **Lemma 4.2**.: _Next inequality holds_ \[\mathcal{I}^{s}\leqslant\gamma d^{s}\bigg{(}\frac{\mathcal{I}^{p-2}}{\rho^{n+ p}}+a_{Q_{2_{\rho},(2\rho)^{2}}(x_{0},t_{0})}^{+}\frac{\mathcal{I}^{q-2}}{\rho^{n+ q}}\bigg{)}. \tag{4.7}\] Proof.: Test (1.4) by \(\zeta^{q}(x)\in C_{0}^{1}(B_{\frac{3}{2}\rho}(x_{0}),\)\(0\leqslant\zeta(x)\leqslant 1,\)\(\zeta(x)=1\) in \(B_{\rho}(x_{0}),\)\(|\nabla\zeta(x)|\leqslant\frac{2}{\rho}.\) Integrating over \((t_{0},t),\)\(t\in(t_{0},t_{0}+\frac{\eta}{2}),\)\(\eta=\frac{\mathcal{I}^{2}}{\varphi_{Q_{2_{\rho},(2\rho)^{2}}(x_{0},t_{0})}^{+} \big{(}\frac{\mathcal{I}}{\rho}\big{)}}\) and letting \(h\to 0\) we obtain \[\int\limits_{B_{\rho}(x_{0})}u(x,t_{0})dx\leqslant\int\limits_{B_{\frac{3}{2} \rho}(x_{0})}u(x,t)dx+\frac{\gamma}{\rho}\iint\limits_{Q_{\frac{3}{2}\rho, \frac{\eta}{2}}(x_{0},t_{0})}|\nabla u|^{p-1}dxdt+\frac{\gamma}{\rho}\iint \limits_{Q_{\frac{3}{2}\rho,\frac{\eta}{2}}(x_{0},t_{0})}a(x,t)|\nabla u|^{q- 1}dxdt.\] Integrating over \((t_{0},t_{0}+\frac{\eta}{2}),\) from the previous we have \[\mathcal{I}\leqslant\int\limits_{Q_{\frac{3}{2}\rho,\frac{\eta}{2}}(x_{0},t_{ 0})}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Lemma 2.4 and (4.9) yield \[\iint\limits_{Q_{\frac{3}{2}\rho,\frac{9}{2}}(x_{0},t_{0})}(u+ \delta\mathcal{I})^{-1-\varepsilon}|\nabla u|^{p}dxdt+\iint\limits_{Q_{\frac{3}{2 }\rho,\frac{9}{2}}(x_{0},t_{0})}a(x,t)(u+\delta\mathcal{I})^{-1-\varepsilon}| \nabla u|^{q}dxdt\leqslant\\ \leqslant\frac{\gamma(\varepsilon)}{\eta}\iint\limits_{Q_{2\rho, \eta}(x_{0},t_{0})}(u+\delta\mathcal{I})^{1-\varepsilon}dxdt+\frac{\gamma( \varepsilon)}{\rho^{p}}\iint\limits_{Q_{2\rho,\eta}(x_{0},t_{0})}(u+\delta \mathcal{I})^{-1-\varepsilon+p}dxdt+\\ +\frac{\gamma(\varepsilon)}{\rho^{q}}a^{+}_{Q_{2\rho,(2\rho)^{2} }(x_{0},t_{0})}\iint\limits_{Q_{2\rho,\eta}(x_{0},t_{0})}(u+\delta\mathcal{I}) ^{-1-\varepsilon+q}dxdt\leqslant\\ \leqslant\frac{\gamma(\varepsilon)}{\eta}(d^{s}+\delta^{s} \mathcal{I}^{s}|Q_{\rho,\eta}(x_{0},t_{0})|)^{\frac{1-\varepsilon}{s}}|Q_{ \rho,\eta}(x_{0},t_{0})|^{1-\frac{1-\varepsilon}{s}}+\\ +\frac{\gamma(\varepsilon)}{\rho^{p}}(d^{s}+\delta^{s} \mathcal{I}^{s}|Q_{\rho,\eta}(x_{0},t_{0})|)^{\frac{p-1-\varepsilon}{s}}|Q_{ \rho,\eta}(x_{0},t_{0})|^{1-\frac{p-1-\varepsilon}{s}}+\\ +\frac{\gamma(\varepsilon)}{\rho^{q}}a^{+}_{Q_{2\rho,(2\rho)^{2} }(x_{0},t_{0})}(d^{s}+\delta^{s}\mathcal{I}^{s}|Q_{\rho,\eta}(x_{0},t_{0})|)^{ \frac{q-1-\varepsilon}{s}}|Q_{\rho,\eta}(x_{0},t_{0})|^{1-\frac{q-1-\varepsilon }{s}}\leqslant\\ \leqslant\gamma(\varepsilon)\mathcal{I}^{1-\varepsilon}|Q_{\rho, \eta}(x_{0},t_{0})|\bigg{\{}\frac{1}{\eta}\left(\frac{1}{\mathcal{I}_{1}}+ \delta^{s}\right)^{\frac{1-\varepsilon}{s}}+\frac{\mathcal{I}^{p-2}}{\rho^{p} 
}\left(\frac{1}{\mathcal{I}_{1}}+\delta^{s}\right)^{\frac{p-1-\varepsilon}{s} }+\\ +\frac{\mathcal{I}^{q-2}}{\rho^{q}}a^{+}_{Q_{2\rho,(2\rho)^{2} }(x_{0},t_{0})}\left(\frac{1}{\mathcal{I}_{1}}+\delta^{s}\right)^{\frac{q-1- \varepsilon}{s}}\bigg{\}}\leqslant\\ \leqslant\gamma(\varepsilon)\mathcal{I}^{1-\varepsilon}|Q_{\rho, \eta}(x_{0},t_{0})|\left(\frac{1}{\eta}+\frac{\mathcal{I}^{p-2}}{\rho^{p}}+ \frac{\mathcal{I}^{q-2}}{\rho^{q}}a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})} \right)\leqslant\gamma(\varepsilon)\mathcal{I}^{1-\varepsilon}\rho^{n}. \tag{4.12}\] By the Holder inequality \[\iint\limits_{Q_{\frac{3}{2}\rho,\frac{9}{2}}(x_{0},t_{0})}(u+ \delta\mathcal{I})^{(1+\varepsilon)(p-1)}dxdt\leqslant\\ \leqslant\gamma(d^{s}+\delta^{s}\mathcal{I}^{s}|Q_{\rho,\eta}(x_{ 0},t_{0})|)^{\frac{(1+\varepsilon)(p-1)}{s}}|Q_{\rho,\eta}(x_{0},t_{0})|^{1- \frac{(1+\varepsilon)(p-1)}{s}}\leqslant\\ \leqslant\gamma\left(\frac{1}{\tilde{\gamma}}+\delta^{s}\right)^{ \frac{(1+\varepsilon)(p-1)}{s}}\mathcal{I}^{(1+\varepsilon)(p-1)}|Q_{\rho, \eta}(x_{0},t_{0})|\leqslant\gamma\left(\frac{1}{\tilde{\gamma}}+\delta^{s} \right)^{\frac{(1+\varepsilon)(p-1)}{s}}\mathcal{I}^{1+\varepsilon(p-1)}\rho^ {n+p}. \tag{4.13}\] Combining (4.11)-(4.13) we obtain \[\frac{1}{\rho^{1+n}}\iint\limits_{Q_{\frac{3}{2}\rho,\frac{9}{2}}(x_{0},t_{0}) }|\nabla u|^{p-1}dxdt\leqslant\gamma(\varepsilon)\left(\frac{1}{\tilde{\gamma} }+\delta^{s}\right)^{\frac{(1+\varepsilon)(p-1)}{sp}}\mathcal{I}. 
\tag{4.14}\] By the Holder inequality \[\iint\limits_{Q_{\frac{3}{2}\rho,\frac{9}{2}}(x_{0},t_{0})}a(x,t) |\nabla u|^{q-1}dxdt\leqslant\left(\iint\limits_{Q_{\frac{3}{2}\rho,\frac{9} {2}}(x_{0},t_{0})}a(x,t)(u+\delta\mathcal{I})^{-1-\varepsilon}|\nabla u|^{q} dxdt\right)^{\frac{q-1}{q}}\times\\ \times\left(\iint\limits_{Q_{\frac{3}{2}\rho,\frac{9}{2}}(x_{0},t_ {0})}a(x,t)(u+\delta\mathcal{I})^{(1+\varepsilon)(q-1)}dxdt\right)^{\frac{1}{q}}.\] The first integral on the right-hand side of this inequality was estimated in (4.12), while the second one can be estimated similarly to (4.13) \[\iint\limits_{Q_{\frac{3}{2}\rho,\frac{9}{2}}(x_{0},t_{0})}a(x,t)(u+ \delta\mathcal{I})^{(1+\varepsilon)(q-1)}dxdt\leqslant\\ \leqslant\gamma a^{+}_{Q_{2\rho,(2)\rho}(x_{0},t_{0})}(d^{s}+ \delta^{s}\mathcal{I}^{s}|Q_{\rho,\eta}(x_{0},t_{0})|)^{\frac{(1+\varepsilon)( q-1)}{s}}|Q_{\rho,\eta}(x_{0},t_{0})|^{1-\frac{(1+\varepsilon)(q-1)}{s}}\leqslant\\ \leqslant\gamma a^{+}_{Q_{2\rho,(2)\rho}(x_{0},t_{0})}\left(\frac {1}{\bar{\gamma}}+\delta^{s}\right)^{\frac{(1+\varepsilon)(q-1)}{s}}\mathcal{ I}^{(1+\varepsilon)(q-1)}|Q_{\rho,\eta}(x_{0},t_{0})|\leqslant\\ \leqslant\gamma\left(\frac{1}{\bar{\gamma}}+\delta^{s}\right)^{ \frac{(1+\varepsilon)(q-1)}{s}}\mathcal{I}^{1+\varepsilon(q-1)}\rho^{n+q} \tag{4.15}\] Collecting estimates (4.8), (4.10), (4.14) and (4.15) we arrive at \[\mathcal{I}\leqslant\frac{\gamma}{\bar{\gamma}^{\frac{1}{s}}}\mathcal{I}+ \gamma(\varepsilon)\left(\frac{1}{\bar{\gamma}}+\delta^{s}\right)^{\frac{(1+ \varepsilon)(q-1)}{sp}}\mathcal{I}+\gamma(\varepsilon)\left(\frac{1}{\bar{ \gamma}}+\delta^{s}\right)^{\frac{(1+\varepsilon)(q-1)}{sq}}\mathcal{I}.\] Choosing \(\varepsilon=\frac{1}{2}\) and \(\bar{\gamma}\), \(\delta\) by the condition \(\frac{\gamma}{\bar{\gamma}^{\frac{1}{s}}}+\gamma(\frac{1}{\bar{\gamma}}+ \delta^{s})^{\frac{3(p-1)}{2sp}}+\gamma(\frac{1}{\bar{\gamma}}+\delta^{s})^{ \frac{3(q-1)}{2sq}}\leqslant\frac{1}{2}\), we reach a contradiction to (4.9), which 
completes the proof of the lemma. Now we note that condition (4.2) yields \[\begin{split}&\int\limits_{B_{2\rho}(x_{0})}u(x,t)^{\varkappa}\,dx= \frac{\varkappa}{|B_{2\rho}(x_{0})|}\int\limits_{0}^{\infty}|\big{\{}u> \lambda\big{\}}|\lambda^{\varkappa-1}d\lambda=\\ &=\frac{\varkappa\mathcal{I}^{\varkappa}}{|B_{2\rho}(x_{0})|} \int\limits_{0}^{\infty}|\big{\{}B_{2\rho}(x_{0}):u>\lambda\mathcal{I}\big{\}} |\lambda^{\varkappa-1}d\lambda\leqslant\mathcal{I}^{\varkappa}+\varkappa \mathcal{I}^{\varkappa}\int\limits_{1}^{\infty}\lambda^{\varkappa-\frac{ \xi_{0}}{B}-1}d\lambda\leqslant 3\mathcal{I}^{\varkappa},\quad\varkappa=\frac{\xi_{0}}{2B}, \end{split} \tag{4.16}\] for all \(t\in\left(t_{0},t_{0}+\frac{\mathcal{I}^{2}}{\varphi^{+}_{Q_{2\rho,(2)\rho}(x_ {0},t_{0})}(\frac{\mathcal{I}}{\rho})}\right)\). The following lemma is the uniform upper bound for the super-solutions. **Lemma 4.3**.: _Fix \(l\) by the condition_ \[1<l:=\frac{s-p+2}{s-q+2}<\frac{n+p}{n}. \tag{4.17}\] _Then for all \(m\) in the range_ \[q-2<m<q-1+\frac{p-n(l-1)}{ln} \tag{4.18}\] _there holds_ \[\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{\frac{7}{2} \rho,\frac{3}{4}\eta}(x_{0},t_{0})}\left(\frac{u}{\mathcal{I}}+1\right)^{m-q+ p}dxdt+a^{+}_{Q_{2\rho,(2p)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}} \iint\limits_{Q_{\frac{7}{2}\rho,\frac{3}{4}\eta}(x_{0},t_{0})}\left(\frac{u }{\mathcal{I}}+1\right)^{m}dxdt\leqslant\gamma, \tag{4.19}\] _where \(\eta=\frac{\mathcal{I}^{2}}{\varphi^{+}_{Q_{2\rho,(2)\rho}(x_{0},t_{0})}( \frac{\mathcal{I}}{\rho})}\)._ Proof.: Fix \(\sigma\in(0,1)\), let \(\frac{15}{8}\rho<(1-\sigma)r<r<2\rho\) and let \(\zeta(x)\in C^{1}_{0}(B_{r}(x_{0}))\), \(0\leqslant\zeta(x)\leqslant 1\), \(\zeta(x)=1\) in \(B_{(1-\sigma)r}(x_{0})\) and \(|\nabla\zeta(x)|\leqslant\frac{1}{\sigma r}\). We use Lemma 2.4 with \(\varepsilon=1-\frac{\varkappa}{I}\), where \(\varkappa\) is the number defined in (4.16). 
By the Sobolev embedding theorem and by (4.16) we obtain \[\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{p-2+\varkappa\frac{n+p}{ln}}\zeta^{q}(x)dxdt\leqslant\gamma\bigg{(}\sup\limits_{t_{0}<t<t_{0}+\eta}\int\limits_{B_{r}(x_{0})}(u+\mathcal{I})^{\frac{\varkappa}{l}}dx\bigg{)}^{\frac{p}{n}}\times\] \[\times\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{-2+\frac{\varkappa}{l}}|\nabla(u\zeta^{\frac{q}{p}}(x))|^{p}dxdt\leqslant\gamma\rho^{p}\mathcal{I}^{\frac{\varkappa p}{ln}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{-2+\frac{\varkappa}{l}}|\nabla(u\zeta^{\frac{q}{p}}(x))|^{p}dxdt\leqslant\gamma\sigma^{-q}\rho^{p}\mathcal{I}^{\frac{\varkappa p}{ln}}\left\{\frac{1}{\rho^{p}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{p-2+\frac{\varkappa}{l}}dxdt+\frac{a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}}{\rho^{q}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{q-2+\frac{\varkappa}{l}}dxdt\right\},\] which yields \[\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{p-2+\varkappa\frac{n+p}{ln}}\zeta^{q}(x)dxdt\leqslant\gamma\sigma^{-q}\left\{\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\left(\frac{u}{\mathcal{I}}+1\right)^{p-2+\frac{\varkappa}{l}}dxdt+\right.\] \[\left.+a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\left(\frac{u}{\mathcal{I}}+1\right)^{q-2+\frac{\varkappa}{l}}dxdt\right\}. \tag{4.20}\] By condition \((A)\) we have \[a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{q-2+\varkappa\frac{n+p}{ln}}\zeta^{q}(x)dxdt\leqslant\] \[\leqslant a^{-}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{q-2+\varkappa\frac{n+p}{ln}}\zeta^{q}(x)dxdt+\gamma\rho^{\alpha}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{q-2+\varkappa\frac{n+p}{ln}}\zeta^{q}(x)dxdt. \tag{4.21}\] Let us estimate the terms on the right-hand side of (4.21). Similarly to (4.20) \[a^{-}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{q-2+\varkappa\frac{n+p}{ln}}dxdt\leqslant\\ \leqslant\gamma\rho^{q-p}\bigg{(}\sup\limits_{t_{0}<t<t_{0}+\eta}\int\limits_{B_{r}(x_{0})}(u+\mathcal{I})^{\frac{\varkappa}{l}}dx\bigg{)}^{\frac{p}{n}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}a(x,t)(u+\mathcal{I})^{-2+\frac{\varkappa}{l}}|\nabla(u\zeta(x))|^{q}dxdt\leqslant\\ \leqslant\gamma\rho^{q}\mathcal{I}^{\frac{\varkappa p}{ln}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}a(x,t)(u+\mathcal{I})^{-2+\frac{\varkappa}{l}}|\nabla(u\zeta(x))|^{q}dxdt\leqslant\\ \leqslant\gamma\sigma^{-q}\rho^{q}\mathcal{I}^{\frac{\varkappa p}{ln}}\left\{\frac{1}{\rho^{p}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{p-2+\frac{\varkappa}{l}}dxdt+\frac{a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}}{\rho^{q}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{q-2+\frac{\varkappa}{l}}dxdt\right\},\] which together with (4.21) yields \[a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{q-2+\varkappa\frac{n+p}{ln}}\zeta^{q}(x)dxdt\leqslant\\ \leqslant\gamma\sigma^{-q}\bigg{\{}\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{p-2+\frac{\varkappa}{l}}dxdt+a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{q-2+\frac{\varkappa}{l}}dxdt\bigg{\}}+\\ +\gamma\rho^{\alpha-q-n}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{\varkappa\frac{n+p}{ln}}(u+\mathcal{I})^{q-2}\zeta^{q}(x)dxdt.
\tag{4.22}\] To estimate the last term on the right-hand side of (4.22) we use the Young inequality \[\rho^{\alpha-q-n}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(} \frac{u}{\mathcal{I}}+1\Big{)}^{\varkappa\frac{n+p}{ln}}(u+\mathcal{I})^{q-2} \zeta^{q}(x)dxdt=\\ =\rho^{\alpha-q-n}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(} \frac{u}{\mathcal{I}}+1\Big{)}^{\varkappa\frac{n+p}{ln}}(u+\mathcal{I})^{ \frac{p-2}{l}+\frac{(p-2)(l-1)}{l}+q-p}\zeta^{q}(x)dxdt\leqslant\\ \leqslant\frac{\gamma}{\rho^{n+p}}\iint\limits_{Q_{r,\eta}(x_{0},t _{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{\varkappa\frac{n+p}{n}}(u+ \mathcal{I})^{p-2}\zeta^{q}(x)dxdt+\\ +\gamma\rho^{(\alpha+p-q)\frac{l}{l-1}-n-p}\iint\limits_{Q_{r, \eta}(x_{0},t_{0})}(u+\mathcal{I})^{p-2+\frac{(q-p)l}{l-1}}dxdt. \tag{4.23}\] The first integral on the right-hand side of (4.23) we estimate similarly to (4.20) \[\frac{1}{\rho^{n+p}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(} \frac{u}{\mathcal{I}}+1\Big{)}^{\varkappa\frac{n+p}{n}}(u+\mathcal{I})^{p-2} \zeta^{q}(x)dxdt=\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{r,\eta}( x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{p-2+\varkappa\frac{n+p}{n}} \zeta^{q}(x)dxdt\\ \leqslant\gamma\sigma^{-q}\bigg{\{}\frac{\mathcal{I}^{p-2}}{\rho ^{n+p}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1 \Big{)}^{p-2+\varkappa}dxdt+a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{ \mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(} \frac{u}{\mathcal{I}}+1\Big{)}^{q-2+\varkappa}dxdt\bigg{\}}. \tag{4.24}\] To estimate the last term on the right-hand side of (4.23) we use Lemma 4.2. By our choice of \(l\) \[\rho^{(\alpha+p-q)\frac{l}{l-1}-n-p}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}(u+\mathcal{I})^{p-2+\frac{(q-p)l}{l-1}}dxdt=\\ =\gamma\rho^{(\alpha+p-q)\frac{s-p+2}{q-p}-n-p}\iint\limits_{Q_{r, \eta}(x_{0},t_{0})}(u+\mathcal{I})^{s}dxdt\leqslant\gamma\big{(}d^{s}+ \mathcal{I}^{s}|Q_{r,\eta}(x_{0},t_{0})|\big{)}\leqslant\gamma d^{s}. 
\tag{4.25}\] So, collecting estimates (4.20), (4.22)-(4.25) we arrive at \[J_{\sigma}:=\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{(1- \sigma)r,\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{p-2+\varkappa \frac{n+p}{ln}}dxdt+\\ +a_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}^{+}\frac{\mathcal{I}^{q- 2}}{\rho^{n+q}}\iint\limits_{Q_{(1-\sigma)r,\eta}(x_{0},t_{0})}\Big{(}\frac{u} {\mathcal{I}}+1\Big{)}^{q-2+\varkappa\frac{n+p}{ln}}dxdt\leqslant\\ \leqslant\gamma\sigma^{-\gamma}\bigg{\{}\frac{\mathcal{I}^{p-2}}{ \rho^{n+p}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}} +1\Big{)}^{p-2+\varkappa}dxdt+a_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}^{+}\frac{ \mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(} \frac{u}{\mathcal{I}}+1\Big{)}^{q-2+\varkappa}dxdt+\\ +\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{r,\eta}(x_ {0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{p-2+\frac{\varkappa}{l}}dxdt +a_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}^{+}\frac{\mathcal{I}^{q-2}}{\rho^{n+q }}\iint\limits_{Q_{r,\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)} ^{q-2+\frac{\varkappa}{l}}dxdt+\gamma\bigg{\}},\] which by the Young inequality with any \(\epsilon\in(0,1)\) yields \[J_{\sigma}\leqslant\epsilon J_{0}+\gamma\sigma^{-\gamma}\epsilon^{-\gamma},\] from which by iteration we obtain \[\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{\frac{15}{6} \rho,\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{p-2+\varkappa \frac{n+p}{ln}}dxdt+\\ +a_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}^{+}\frac{\mathcal{I}^{q-2 }}{\rho^{n+q}}\iint\limits_{Q_{\frac{15}{8}\rho,\eta}(x_{0},t_{0})}\Big{(} \frac{u}{\mathcal{I}}+1\Big{)}^{q-2+\varkappa\frac{n+p}{ln}}dxdt\leqslant\gamma. \tag{4.26}\] To complete the proof of the lemma we need to obtain the reverse Holder inequality. 
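Here the passage "from which by iteration" rests on the standard interpolation iteration argument, which in one common formulation reads: if \(f\) is a nonnegative bounded function of the radius satisfying \[f(\rho_{1})\leqslant\epsilon f(\rho_{2})+A\,\epsilon^{-\beta}(\rho_{2}-\rho_{1})^{-\beta}\quad\text{for all}\quad r\leqslant\rho_{1}<\rho_{2}\leqslant R,\quad\epsilon\in(0,1),\] then \(f(r)\leqslant\gamma(\beta)A(R-r)^{-\beta}\); indeed, fixing \(\epsilon=\frac{1}{2}\) reduces this to the classical iteration lemma. Applying it with \(f\) the quantity \(J_{\sigma}\), viewed as a function of the radius \((1-\sigma)r\in[\frac{15}{8}\rho,2\rho]\), gives (4.26).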
Define the number \(\bar{\varkappa}\leqslant\varkappa\) by the condition \[(m-q+2)\bigg{(}\frac{ln}{n+p}\bigg{)}^{j+1}=\bar{\varkappa},\] in this setting \[\bar{\varkappa}\bigg{(}\frac{n+p}{ln}\bigg{)}^{i}=(m-q+2)\bigg{(} \frac{ln}{n+p}\bigg{)}^{j+1-i}<(1+\frac{p-n(l-1)}{ln})\bigg{(}\frac{ln}{n+p} \bigg{)}^{j+1-i}=\\ =\bigg{(}\frac{ln}{n+p}\bigg{)}^{j-i}\leqslant 1,\quad 1 \leqslant i\leqslant j.\] We use Lemma 2.4 with \(\varepsilon=\frac{\bar{\varkappa}}{l}\bigg{(}\frac{n+p}{ln}\bigg{)}^{i}\) for the pair of cylinders \(Q_{i}:=B_{i}\times(t_{0},t_{0}+\eta_{i})\) and \(Q_{i+1}\), \(B_{i}:=B_{\rho_{i}}(x_{0})\), \(\eta_{i}=\frac{\mathcal{I}^{2}}{\varphi_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})} ^{+}\big{(}\frac{\mathcal{I}}{\rho_{i}}\big{)}}\), \(\rho_{i}:=\frac{15}{8}\rho(1-\frac{1}{15}\frac{1-2^{-i+1}}{1-2^{-(j+1)}})\), \(i=1,...,j\). Choose \(\zeta_{1}(x)\in C_{0}^{1}(B_{i})\), \(\zeta_{1}(x)=1\) in \(B_{i+1}\), \(0\leqslant\zeta_{1}(x)\leqslant 1\), \(|\nabla\zeta_{1}(x)|\leqslant\gamma\frac{2^{i}}{\rho}\) and \(\zeta_{2}(t)\in C^{1}(\mathbb{R}_{+})\), \(0\leqslant\zeta_{2}(t)\leqslant 1\), \(\zeta_{2}(t)=1\) for \(t\leqslant t_{0}+\eta_{i+1}\), \(\zeta_{2}=0\) for \(t\geqslant t_{0}+\eta_{i}\), \(|\frac{d}{dt}\zeta_{2}(t)|\leqslant\gamma 2^{i}\frac{\varphi_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})} ^{+}\big{(}\frac{\mathcal{I}}{\rho}\big{)}}{\mathcal{I}^{2}}\). 
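Here \(j\) is a fixed integer depending only on the data; one admissible choice, since \(\frac{ln}{n+p}<1\) by (4.17), is \[j:=\min\Big\{k\in\mathbb{N}:\;(m-q+2)\Big(\frac{ln}{n+p}\Big)^{k+1}\leqslant\varkappa\Big\},\] which guarantees \(\bar{\varkappa}\leqslant\varkappa\).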
By the Sobolev embedding theorem, choosing \(y_{i}:=p-2+\bar{\varkappa}\bigg{(}\dfrac{n+p}{ln}\bigg{)}^{i}\), \(z_{i}:=q-2+\bar{\varkappa}\bigg{(}\dfrac{n+p}{ln}\bigg{)}^{i}\) we obtain \[\iint\limits_{Q_{i+1}}(u+\mathcal{I})^{y_{i+1}}dxdt \leqslant\gamma\bigg{(}\sup\limits_{t_{0}<t<t_{0}+\eta_{i}}\int \limits_{B_{i}}(u+\mathcal{I})^{\frac{\bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}} (\zeta_{1}(x)\zeta_{2}(t))^{q}dx\bigg{)}^{\frac{p}{n}}\times\] \[\quad\times\iint\limits_{Q_{i}}(u+\mathcal{I})^{-2+\frac{\bar{ \varkappa}}{l}(\frac{n+p}{ln})^{i}}|\big{(}\nabla u(\zeta_{1}(x)\zeta_{2}(t))^{ \frac{q}{p}}\big{)}|^{p}dxdt\leqslant\] \[\leqslant\gamma\bigg{(}\dfrac{n+p}{ln}\bigg{)}^{\gamma i}2^{ \gamma i}\bigg{\{}\frac{\varphi^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\left( \frac{\mathcal{I}}{\rho}\right)}{\mathcal{I}^{2}}\iint\limits_{Q_{i}}(u+ \mathcal{I})^{\frac{\bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}}dxdt+\] \[\quad+\rho^{-p}\iint\limits_{Q_{i}}(u+\mathcal{I})^{p-2+\frac{ \bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}}dxdt+\frac{a^{+}_{Q_{2\rho,(2\rho)^{2} }(x_{0},t_{0})}}{\rho^{q}}\iint\limits_{Q_{i}}(u+\mathcal{I})^{q-2+\frac{\bar {\varkappa}}{l}(\frac{n+p}{ln})^{i}}dxdt\bigg{\}}^{1+\frac{p}{n}}\leqslant\] \[\leqslant\gamma\bigg{(}\dfrac{n+p}{ln}\bigg{)}^{\gamma i}2^{ \gamma i}\bigg{\{}\rho^{-p}\iint\limits_{Q_{i}}(u+\mathcal{I})^{p-2+\frac{ \bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}}dxdt+\frac{a^{+}_{Q_{2\rho,(2\rho)^{2} }(x_{0},t_{0})}}{\rho^{q}}\iint\limits_{Q_{i}}(u+\mathcal{I})^{q-2+\frac{\bar {\varkappa}}{l}(\frac{n+p}{ln})^{i}}dxdt\bigg{\}}^{1+\frac{p}{n}},\] which by the Holder inequality and (4.26) yields \[\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{i+1}}\Big{(} \dfrac{u}{\mathcal{I}}+1\Big{)}^{y_{i+1}}\,dxdt\leqslant\gamma\bigg{(}\dfrac{n +p}{ln}\bigg{)}^{\gamma i}2^{\gamma i}\bigg{\{}\frac{\mathcal{I}^{p-2}}{\rho^ {n+p}}\iint\limits_{Q_{i}}\Big{(}\dfrac{u}{\mathcal{I}}+1\Big{)}^{p-2+\frac{ \bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}}\,dxdt+\] 
\[\quad\quad\quad\quad\quad\quad\quad+a^{+}_{Q_{2\rho,(2\rho)^{2} }(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{i}}\Big{(} \dfrac{u}{\mathcal{I}}+1\Big{)}^{q-2+\frac{\bar{\varkappa}}{l}(\frac{n+p}{ln}) ^{i}}\,dxdt\bigg{\}}^{1+\frac{p}{n}}\leqslant\] \[\leqslant\gamma\bigg{(}\dfrac{n+p}{ln}\bigg{)}^{\gamma i}2^{ \gamma i}\bigg{\{}\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\bigg{(}\iint\limits_{Q _{i}}\Big{(}\dfrac{u}{\mathcal{I}}+1\Big{)}^{p-2+\bar{\varkappa}(\frac{n+p}{ ln})^{i}}\,dxdt\bigg{)}^{\frac{1}{l}}\bigg{(}\iint\limits_{Q_{i}}\Big{(}\dfrac{u}{ \mathcal{I}}+1\Big{)}^{p-2}\,dxdt\bigg{)}^{1-\frac{1}{l}}+\] \[+a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q- 2}}{\rho^{n+q}}\bigg{(}\iint\limits_{Q_{i}}\Big{(}\dfrac{u}{\mathcal{I}}+1 \Big{)}^{q-2+\bar{\varkappa}(\frac{n+p}{ln})^{i}}\,dxdt\bigg{)}^{\frac{1}{l}} \bigg{(}\iint\limits_{Q_{i}}\Big{(}\dfrac{u}{\mathcal{I}}+1\Big{)}^{q-2}\,dxdt \bigg{)}^{1-\frac{1}{l}}\bigg{\}}^{1+\frac{p}{n}}\leqslant\] \[\leqslant\gamma\bigg{(}\dfrac{n+p}{ln}\bigg{)}^{\gamma i}2^{ \gamma i}\bigg{\{}\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{i}} \Big{(}\dfrac{u}{\mathcal{I}}+1\Big{)}^{y_{i}}\,dxdt+a^{+}_{Q_{2\rho,(2\rho) ^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{i}} \Big{(}\dfrac{u}{\mathcal{I}}+1\Big{)}^{z_{i}}\,dxdt\bigg{\}}^{\frac{n+p}{ln}}. 
\tag{4.27}\] Similarly, \[a^{-}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\iint\limits_{Q_{i+1}}(u+\mathcal{I})^{y_{i+1}}dxdt\leqslant\gamma\rho^{q-p}\bigg{(}\sup\limits_{t_{0}<t<t_{0}+\eta_{i}}\int\limits_{B_{i}}(u+\mathcal{I})^{\frac{\bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}}(\zeta_{1}(x)\zeta_{2}(t))^{q}dx\bigg{)}^{\frac{p}{n}}\times\\ \times\iint\limits_{Q_{i}}a(x,t)(u+\mathcal{I})^{-2+\frac{\bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}}|\big{(}\nabla u\zeta_{1}(x)\zeta_{2}(t)\big{)}|^{q}dxdt\leqslant\\ \leqslant\gamma\bigg{(}\frac{n+p}{ln}\bigg{)}^{\gamma i}2^{\gamma i}\bigg{\{}\frac{\varphi^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\big{(}\frac{\mathcal{I}}{\rho}\big{)}}{\mathcal{I}^{2}}\iint\limits_{Q_{i}}(u+\mathcal{I})^{\frac{\bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}}dxdt+\\ +\rho^{-p}\iint\limits_{Q_{i}}(u+\mathcal{I})^{p-2+\frac{\bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}}dxdt+\frac{a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}}{\rho^{q}}\iint\limits_{Q_{i}}(u+\mathcal{I})^{q-2+\frac{\bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}}dxdt\bigg{\}}^{1+\frac{p}{n}}\leqslant\\ \leqslant\gamma\bigg{(}\frac{n+p}{ln}\bigg{)}^{\gamma i}2^{\gamma i}\bigg{\{}\rho^{-p}\iint\limits_{Q_{i}}(u+\mathcal{I})^{p-2+\frac{\bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}}dxdt+\frac{a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}}{\rho^{q}}\iint\limits_{Q_{i}}(u+\mathcal{I})^{q-2+\frac{\bar{\varkappa}}{l}(\frac{n+p}{ln})^{i}}dxdt\bigg{\}}^{1+\frac{p}{n}},\] which by the Hölder inequality and (4.26) yields \[a^{-}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{i+1}}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{z_{i+1}}dxdt\leqslant\\ \leqslant\gamma\bigg{(}\frac{n+p}{ln}\bigg{)}^{\gamma i}2^{\gamma i}\bigg{\{}\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{i}}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{y_{i}}dxdt+a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{i}}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{z_{i}}\,dxdt\bigg{\}}^{\frac{n+p}{ln}}. \tag{4.28}\] Furthermore, \[a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{i+1}}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{z_{i+1}}dxdt\leqslant a^{-}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{i+1}}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{z_{i+1}}dxdt+\\ +\gamma\rho^{\alpha-n-q}\mathcal{I}^{q-2}\iint\limits_{Q_{i+1}}\big{(}\frac{u}{\mathcal{I}}+1\big{)}^{z_{i+1}}dxdt. \tag{4.29}\] To estimate the second term on the right-hand side of (4.29) we use the Hölder inequality; by our choice of \(l\) \[\rho^{\alpha-n-q}\mathcal{I}^{q-2}\iint\limits_{Q_{i+1}}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{z_{i+1}}dxdt=\rho^{\alpha-n-q}\iint\limits_{Q_{i+1}}\big{(}\frac{u}{\mathcal{I}}+1\big{)}^{\bar{\varkappa}(\frac{n+p}{ln})^{i+1}}(u+\mathcal{I})^{\frac{p-2}{l}+\frac{(p-2)(l-1)}{l}+q-p}dxdt\leqslant\\ \leqslant\gamma\rho^{\alpha-n-q}\bigg{(}\iint\limits_{Q_{i+1}}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{\bar{\varkappa}\frac{n+p}{n}(\frac{n+p}{ln})^{i}}(u+\mathcal{I})^{p-2}dxdt\bigg{)}^{\frac{1}{l}}\bigg{(}\iint\limits_{Q_{i+1}}(u+\mathcal{I})^{s}dxdt\bigg{)}^{\frac{q-p}{s-p+2}}\leqslant\\ \leqslant\gamma\rho^{\alpha+p-q-\frac{(n+p)(q-p)}{s-p+2}}\bigg{(}\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{i+1}}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{p-2+\bar{\varkappa}\frac{n+p}{n}(\frac{n+p}{ln})^{i}}dxdt\bigg{)}^{\frac{1}{l}}\big{(}d^{s}+\mathcal{I}^{s}|Q_{\rho,\eta}(x_{0},t_{0})|\big{)}^{\frac{q-p}{s-p+2}}\leqslant\\ \leqslant\gamma\bigg{(}\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{i+1}}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{p-2+\bar{\varkappa}\frac{n+p}{n}(\frac{n+p}{ln})^{i}}dxdt\bigg{)}^{\frac{1}{l}}.\] We estimate the integral on the right-hand side of this inequality similarly to (4.27): \[\left(\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{i+1}}\left(\frac{u}{\mathcal{I}}+1\right)^{p-2+\bar{\varkappa}\frac{n+p}{n}(\frac{n+p}{ln})^{i}}dxdt\right)^{\frac{1}{l}}\leqslant\\ \leqslant\gamma\bigg{(}\frac{n+p}{ln}\bigg{)}^{\gamma i}2^{\gamma i}\bigg{\{}\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{i}}\left(\frac{u}{\mathcal{I}}+1\right)^{y_{i}}dxdt+a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{i}}\left(\frac{u}{\mathcal{I}}+1\right)^{z_{i}}dxdt\bigg{\}}^{\frac{n+p}{ln}}.\] Collecting estimates (4.27)-(4.29) we arrive at \[J_{i+1}:=\left(\frac{\mathcal{I}^{p-2}}{\rho^{n+p}}\iint\limits_{Q_{i+1}}\left(\frac{u}{\mathcal{I}}+1\right)^{y_{i+1}}dxdt+a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q-2}}{\rho^{n+q}}\iint\limits_{Q_{i+1}}\left(\frac{u}{\mathcal{I}}+1\right)^{z_{i+1}}dxdt\right)^{(\frac{ln}{n+p})^{i+1}}\leqslant\\ \leqslant\gamma\bigg{(}\frac{n+p}{ln}\bigg{)}^{\gamma i(\frac{ln}{n+p})^{i+1}}2^{\gamma i(\frac{ln}{n+p})^{i+1}}J_{i},\quad i=1,2,...,j.\] From this, after a finite number of iterations, using (4.26), we obtain (4.19), which completes the proof of the lemma.
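For the reader's convenience we record the routine final computation. Since \(\frac{ln}{n+p}<1\), iterating the last inequality gives \[J_{j+1}\leqslant\gamma^{j}\bigg{(}\frac{n+p}{ln}\bigg{)}^{\gamma\sum_{i=1}^{j}i(\frac{ln}{n+p})^{i+1}}2^{\gamma\sum_{i=1}^{j}i(\frac{ln}{n+p})^{i+1}}J_{1}\leqslant\gamma(j)J_{1},\] while \(J_{1}\leqslant\gamma\) by (4.26), because \(y_{1}\leqslant p-2+\varkappa\frac{n+p}{ln}\), \(z_{1}\leqslant q-2+\varkappa\frac{n+p}{ln}\) and \(\frac{u}{\mathcal{I}}+1\geqslant 1\). Moreover, by the definition of \(\bar{\varkappa}\), \[y_{j+1}=p-2+\bar{\varkappa}\Big{(}\frac{n+p}{ln}\Big{)}^{j+1}=p-2+(m-q+2)=m-q+p,\qquad z_{j+1}=m,\] so \(J_{j+1}^{(\frac{n+p}{ln})^{j+1}}\) is precisely the left-hand side of (4.19).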
**Lemma 4.4**.: _For all \(\delta\in(0,\frac{5}{8})\) there holds_ \[\frac{1}{\rho}\iint\limits_{Q_{\frac{13}{8}\rho,\delta\eta}(x_{0},t_{0})}|\nabla u|^{p-1}dxdt+\frac{1}{\rho}\iint\limits_{Q_{\frac{13}{8}\rho,\delta\eta}(x_{0},t_{0})}a(x,t)|\nabla u|^{q-1}dxdt\leqslant\gamma\delta^{\frac{\varepsilon}{q(1+2\varepsilon)}}\mathcal{I}\rho^{n}, \tag{4.30}\] _where \(\eta=\frac{\mathcal{I}^{2}}{\varphi^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\big{(}\frac{\mathcal{I}}{\rho}\big{)}}\), \(\varepsilon=\frac{lp-n(l-1)}{4(q-1)ln}\) and \(l=\frac{s-p+2}{s-q+2}\)._

Proof.: By the Hölder inequality we have \[\frac{1}{\rho}\iint\limits_{Q_{\frac{13}{8}\rho,\delta\eta}(x_{0},t_{0})}|\nabla u|^{p-1}dxdt+\frac{1}{\rho}\iint\limits_{Q_{\frac{13}{8}\rho,\delta\eta}(x_{0},t_{0})}a(x,t)|\nabla u|^{q-1}dxdt\leqslant\\ \leqslant\gamma\frac{1}{\rho}\bigg{(}\iint\limits_{Q_{\frac{13}{8}\rho,\frac{5}{8}\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{-1-\varepsilon}|\nabla u|^{p}dxdt\bigg{)}^{\frac{p-1}{p}}\bigg{(}\iint\limits_{Q_{\frac{13}{8}\rho,\delta\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{(1+\varepsilon)(p-1)}dxdt\bigg{)}^{\frac{1}{p}}+\\ +\gamma\frac{1}{\rho}\bigg{(}\iint\limits_{Q_{\frac{13}{8}\rho,\frac{5}{8}\eta}(x_{0},t_{0})}a(x,t)\left(\frac{u}{\mathcal{I}}+1\right)^{-1-\varepsilon}|\nabla u|^{q}dxdt\bigg{)}^{\frac{q-1}{q}}\times\\ \times\bigg{(}\iint\limits_{Q_{\frac{13}{8}\rho,\delta\eta}(x_{0},t_{0})}a(x,t)\left(\frac{u}{\mathcal{I}}+1\right)^{(1+\varepsilon)(q-1)}dxdt\bigg{)}^{\frac{1}{q}}.\] By Lemma 4.3 \[\iint\limits_{Q_{\frac{13}{8}\rho,\delta\eta}(x_{0},t_{0})}\Big{(}\frac{u}{\mathcal{I}}+1\Big{)}^{(1+\varepsilon)(p-1)}dxdt\leqslant\gamma\delta^{\frac{\varepsilon}{1+2\varepsilon}}\rho^{n+p}\mathcal{I}^{2-p}\] and \[\iint\limits_{Q_{\frac{13}{8}\rho,\delta\eta}(x_{0},t_{0})}a(x,t)\left(\frac{u}{\mathcal{I}}+1\right)^{(1+\varepsilon)(q-1)}dxdt\leqslant\\ \leqslant a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\iint\limits_{Q_{\frac{13}{8}\rho,\delta\eta}(x_{0},t_{0})}\left(\frac{u}{\mathcal{I}}+1\right)^{(1+\varepsilon)(q-1)}dxdt\leqslant\gamma\delta^{\frac{\varepsilon}{1+2\varepsilon}}\rho^{n+q}\mathcal{I}^{2-q}.\] By Lemma 2.4 with the appropriate choice of \(\zeta_{1}(x)\), \(\zeta_{2}(t)\), \(|\nabla\zeta_{1}(x)|\leqslant\frac{8}{\rho}\), \(|\frac{d}{dt}\zeta_{2}(t)|\leqslant\frac{8}{\eta}\) we obtain \[\iint\limits_{Q_{\frac{13}{8}\rho,\frac{5}{8}\eta}(x_{0},t_{0})}\left(\frac{u}{\mathcal{I}}+1\right)^{-1-\varepsilon}|\nabla u|^{p}dxdt+\iint\limits_{Q_{\frac{13}{8}\rho,\frac{5}{8}\eta}(x_{0},t_{0})}a(x,t)\left(\frac{u}{\mathcal{I}}+1\right)^{-1-\varepsilon}|\nabla u|^{q}dxdt\leqslant\\ \leqslant\gamma\bigg{(}\frac{\mathcal{I}^{2}}{\eta}\iint\limits_{Q_{\frac{7}{4}\rho,\frac{3}{4}\eta}(x_{0},t_{0})}\left(\frac{u}{\mathcal{I}}+1\right)^{1-\varepsilon}dxdt+\frac{\mathcal{I}^{p}}{\rho^{p}}\iint\limits_{Q_{\frac{7}{4}\rho,\frac{3}{4}\eta}(x_{0},t_{0})}\left(\frac{u}{\mathcal{I}}+1\right)^{p-1-\varepsilon}dxdt+\\ +a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q}}{\rho^{q}}\iint\limits_{Q_{\frac{7}{4}\rho,\frac{3}{4}\eta}(x_{0},t_{0})}\left(\frac{u}{\mathcal{I}}+1\right)^{q-1-\varepsilon}dxdt\bigg{)}\leqslant\\ \leqslant\gamma\bigg{(}\frac{\mathcal{I}^{p}}{\rho^{p}}\iint\limits_{Q_{\frac{7}{4}\rho,\frac{3}{4}\eta}(x_{0},t_{0})}\left(\frac{u}{\mathcal{I}}+1\right)^{p-1-\varepsilon}dxdt+a^{+}_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}\frac{\mathcal{I}^{q}}{\rho^{q}}\iint\limits_{Q_{\frac{7}{4}\rho,\frac{3}{4}\eta}(x_{0},t_{0})}\left(\frac{u}{\mathcal{I}}+1\right)^{q-1-\varepsilon}dxdt\bigg{)},\] which by Lemma 4.3 yields \[\iint\limits_{Q_{\frac{13}{8}\rho,\frac{5}{8}\eta}(x_{0},t_{0})}\left(\frac{u}{\mathcal{I}}+1\right)^{-1-\varepsilon}|\nabla u|^{p}dxdt+\iint\limits_{Q_{\frac{13}{8}\rho,\frac{5}{8}\eta}(x_{0},t_{0})}a(x,t)\left(\frac{u}{\mathcal{I}}+1\right)^{-1-\varepsilon}|\nabla u|^{q}dxdt\leqslant\gamma\mathcal{I}^{2}\rho^{n}.\] Collecting the previous inequalities we arrive at the required (4.30), which completes the proof of the lemma.

**Lemma 4.5**.: _There exists \(\delta_{0}\in(0,\frac{5}{8})\) such that_ \[\inf\limits_{t_{0}<t<t_{0}+\delta_{0}\eta}\oint\limits_{B_{\frac{3}{2}\rho}(x_{0})}u(x,t)dx\geqslant\frac{\mathcal{I}}{2^{n+1}}, \tag{4.31}\] _where \(\eta=\frac{\mathcal{I}^{2}}{\varphi_{Q_{2\rho,(2\rho)^{2}}(x_{0},t_{0})}^{+}(\frac{\mathcal{I}}{\rho})}\)._

Proof.: Let \(\zeta(x)\in C_{0}^{1}(B_{\frac{3}{2}\rho}(x_{0}))\), \(\zeta(x)=1\) in \(B_{\rho}(x_{0})\), \(0\leqslant\zeta(x)\leqslant 1\), \(|\nabla\zeta(x)|\leqslant\frac{2}{\rho}\), and test (1.4) by \(\zeta^{q}(x)\); then by Lemma 4.4 we obtain for any \(\delta\in(0,\frac{5}{8})\) \[\int\limits_{B_{\rho}(x_{0})}u(x,t_{0})dx\leqslant\int\limits_{B_{\frac{3}{2}\rho}(x_{0})}u(x,t)dx+\frac{\gamma}{\rho}\iint\limits_{Q_{\frac{3}{2}\rho,\delta\eta}(x_{0},t_{0})}|\nabla u|^{p-1}dxdt+\\ +\frac{\gamma}{\rho}\iint\limits_{Q_{\frac{3}{2}\rho,\delta\eta}(x_{0},t_{0})}a(x,t)|\nabla u|^{q-1}dxdt\leqslant\int\limits_{B_{\frac{3}{2}\rho}(x_{0})}u(x,t)dx+\gamma\delta^{\frac{\varepsilon}{q(1+2\varepsilon)}}\mathcal{I}\rho^{n},\quad t_{0}<t<t_{0}+\delta\eta,\] from which the required (4.31) follows, provided that \(\delta\) is small enough.
To complete the proof of Theorem 1.1 we note that by the Hölder inequality and Lemma 4.3 \[\frac{1}{|Q_{\frac{3}{2}\rho,\frac{3}{4}\eta}(x_{0},t_{0})|}\iint\limits_{Q_{\frac{3}{2}\rho,\frac{3}{4}\eta}(x_{0},t_{0})}u\,dxdt\leqslant\gamma\mathcal{I},\] which together with Lemma 4.5 yields the required estimate and completes the proof of Theorem 1.1.